Financial services institutions operating in today’s regulatory landscape face a myriad of challenges in ensuring compliance and quality assurance in their operations. To tackle these hurdles, Aveni has been at the forefront of developing advanced AI models to provide the best results for its clients.
A model is an intelligent software system that learns from examples. It analyses data, discerning patterns and relationships within it. Much as we get better at recognising things the more we see them, these models improve with exposure to more data. This ability to learn from data enables AI to make predictions about new, unfamiliar inputs, including human language.
Low model accuracy in RegTech solutions can have severe ramifications for financial institutions and their compliance efforts. It may lead to misclassified customer interactions and overlooked compliance breaches, such as a failure to identify vulnerable customers.
That’s why Aveni leverages various innovative techniques and approaches to build models with high accuracy, including:
Semantic Similarity: Automating QA Processes
We have developed specialised Quality Assurance (QA) models that help our customers automate QA processes. These models examine QA forms with specific questions to ensure that essential criteria have been addressed during customer calls. For example, they can check whether agents have asked certain questions or whether customers have provided certain information.
One of our key techniques is semantic similarity. We use large language models to generate embeddings, which convert words, sentences, or paragraphs into numerical representations that capture their semantic meaning and context. This step ensures generalisation: we match the meaning of the text we are looking for, rather than fixed patterns or the exact wording of the training examples.
These numerical representations allow language models to perform a wide range of natural language processing (NLP) tasks that involve understanding human language. We also convert sentences from the customer calls, snippets of the conversation, into embeddings in the same way.
We compare these embeddings to find the sentences most similar to examples we have seen before. For example, to check whether the agent asked for the client's name, we gather a few ways the agent might phrase that question and create embeddings for those sentences. Now that the model knows what we are looking for, it can find matches in new call data. We also set a similarity threshold: if a sentence is less similar than this threshold, the model does not consider it for further evaluation.
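The threshold-based matching described above can be sketched in a few lines of Python. The vectors below are toy stand-ins for real LLM-generated embeddings, and the reference sentences and threshold value are purely illustrative:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of known agent phrasings.
reference_examples = {
    "Could I take your name, please?": [0.90, 0.10, 0.20],
    "May I ask who I am speaking with?": [0.85, 0.15, 0.25],
}

THRESHOLD = 0.8  # illustrative similarity cut-off

def matches_criterion(sentence_embedding):
    """True if the sentence is similar enough to any reference example."""
    return any(
        cosine_similarity(sentence_embedding, ref) >= THRESHOLD
        for ref in reference_examples.values()
    )
```

A sentence from a new call is embedded and passed to `matches_criterion`; anything below the threshold is discarded rather than risked as a false positive.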
This approach keeps our analysis precise and minimises noise. During a call there may be utterances that appear similar to what we are searching for but are not genuine matches; the similarity threshold filters these out, reducing false positives.
Human-in-the-Loop: Addressing Limitations and Enhancing Model Performance
While our semantic similarity approach is precise, it does have a limitation in terms of recall. Recall refers to the ability of the model to capture all relevant instances of a specific pattern or statement. Starting with a few examples during training may represent only a small fraction of the various ways people can express something. Consequently, our models might miss certain expressions not included in the initial training examples.
To overcome this limitation and continuously improve our models, we employ a valuable resource: human expertise. Aveni encourages user interactions and feedback, particularly from Chief Risk Officers (CROs) and Quality Assurance (QA) teams reviewing customer calls.
They use our platform to perform QA reviews and, at the same time and with no noticeable extra effort, supply our models with data. This is where the human-in-the-loop process comes into play: the data they provide serves as additional training examples, improving accuracy and filling previous gaps.
This user feedback allows us to retrain our models with more diverse examples, capturing a broader range of expressions and enhancing the model’s recall.
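A minimal sketch of this feedback loop is below. Exact string matching stands in for the real similarity model, and all sentences are invented for illustration; the point is that reviewer-confirmed phrasings join the example set used at the next retraining, widening recall:

```python
# Phrasings the model was initially trained to recognise (illustrative).
reference_phrasings = {"could i take your name, please?"}

def is_recognised(sentence):
    """Stand-in for the similarity check: match against known phrasings."""
    return sentence.lower() in reference_phrasings

def record_reviewer_feedback(sentence):
    """A QA reviewer confirms this sentence satisfies the criterion, so it
    becomes an additional example for the next retraining cycle."""
    reference_phrasings.add(sentence.lower())

# A phrasing absent from the initial examples is missed at first...
missed = "Mind telling me your name?"
assert not is_recognised(missed)

# ...until a human reviewer flags it, after which it is recognised.
record_reviewer_feedback(missed)
assert is_recognised(missed)
```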
By combining the power of large language models with user interactions, we create a synergistic approach that optimises our models based on real-world insights and human expertise.
Vulnerability Classification: Addressing Complex Challenges
Addressing more intricate issues like vulnerability classification poses unique challenges, because determining customer vulnerability is subjective and context-dependent. For example, a customer who mentions a family member's poor health may not be vulnerable, whereas one who explains that they live with and care for that family member may well be.
Addressing such complex problems requires cutting-edge tools. At Aveni, we have experimented with various approaches for vulnerability classification. Among these methods, the most successful one involves using a powerful language model called RoBERTa (Robustly Optimized BERT Pretraining Approach). It is an advanced version of BERT, a popular language model used in natural language processing tasks. RoBERTa is trained on more data and for longer, which helps it better understand the complexities of language.
This model has been trained on vast amounts of text, allowing it to learn intricate patterns and relationships within language. It is adept at predicting masked or hidden words in a sentence by analysing the surrounding context; unlike BERT, it drops the next-sentence prediction objective and relies on masked-word prediction alone.
We harness the capabilities of RoBERTa by fine-tuning it on our specific dataset. This process involves training the model on our data to adapt its understanding to the nuances of customer vulnerability, dissatisfaction, and complaints. By fine-tuning RoBERTa, we ensure that the model becomes highly specialised for the vulnerability classification task.
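As a rough sketch of what such a fine-tuning step might look like with the Hugging Face transformers library: the label names, hyperparameters, output path, and training data below are placeholders, not Aveni's actual configuration, and the heavyweight imports are deferred inside the function so the sketch can be read without the libraries installed.

```python
# Illustrative labels for a binary vulnerability classifier.
LABELS = ["not_vulnerable", "vulnerable"]
label2id = {label: i for i, label in enumerate(LABELS)}

def fine_tune(train_texts, train_labels):
    """Fine-tune roberta-base on labelled call sentences (sketch)."""
    # Deferred imports: torch and transformers are only needed at train time.
    import torch
    from transformers import (
        RobertaForSequenceClassification,
        RobertaTokenizerFast,
        Trainer,
        TrainingArguments,
    )

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=len(LABELS)
    )
    encodings = tokenizer(train_texts, truncation=True, padding=True)

    class VulnerabilityDataset(torch.utils.data.Dataset):
        def __init__(self, encodings, labels):
            self.encodings = encodings
            self.labels = labels

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    dataset = VulnerabilityDataset(encodings, [label2id[l] for l in train_labels])
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3),
        train_dataset=dataset,
    )
    trainer.train()
    return model
```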
Few-Shot Learning: Maximising Model Knowledge with Minimal Data
Few-shot learning involves leveraging large language models, such as GPT models, to accomplish classification tasks with just a handful of examples.
The model’s ability to draw on its existing knowledge and the provided examples to make accurate classifications is remarkable. With limited information, the model can effectively predict whether previously unseen pieces of text express dissatisfaction, for example.
We accumulate more labelled data by asking the model to classify additional pieces of text based on the prompt and a few examples. We can then use this new data to train specialised models tailored to the unique needs of our clients. This empowers us to move beyond relying solely on expensive pre-trained models and instead build tailored models that precisely address each client's specific requirements.
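One way to picture few-shot learning is prompt assembly: a handful of labelled examples are placed before the unseen text, and the model completes the label. The sketch below builds such a prompt; the example statements, label names, and prompt wording are invented for illustration, and the call to the LLM itself is omitted.

```python
# Hypothetical labelled examples for a dissatisfaction classifier.
FEW_SHOT_EXAMPLES = [
    ("The service has been brilliant, thank you.", "satisfied"),
    ("I've been waiting three weeks and nobody has called me back.",
     "dissatisfied"),
]

def build_prompt(new_text):
    """Assemble a few-shot prompt: labelled examples, then the unseen text."""
    lines = ["Classify each statement as 'satisfied' or 'dissatisfied'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Statement: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Statement: {new_text}")
    lines.append("Label:")  # the model completes this line
    return "\n".join(lines)

prompt = build_prompt("This is the third time my complaint has been ignored.")
```

The model's completion of the final `Label:` line becomes a new labelled example, feeding the training data for a smaller specialised model.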
We continuously refine our models with these and other advanced techniques. This approach ensures that Chief Risk Officers, advisers and financial institutions receive accurate and actionable results even in data-limited scenarios. It also makes AI for financial services more accessible.
From QA models and semantic similarity to vulnerability classification, we ensure high precision and recall in our AI models. Through these innovative approaches, Aveni continues to play a significant role in enhancing compliance and quality assurance in the financial services industry. Our model accuracy sets us apart from more generic NLP solutions in the market. It ensures customers get the best possible insights to drive better business decisions.
We provide a unique machine assessment solution, capitalising on our high model accuracy. This feature automatically evaluates every customer call and surfaces the insights generated. Customers enter their own QA questions on their assessment forms, and our machine learning models assess each call against them, producing comprehensive reports. Think of classifiers as algorithms that learn from examples and automatically categorise new data into predefined classes.
These classifiers quickly and accurately evaluate customer calls, providing valuable insights and organising them based on specific criteria, such as sentiment or compliance level. We distinguish ourselves as the only company offering such comprehensive and innovative capabilities.