Limitations of adopting ChatGPT and other large language models in the financial services industry

Written by Chisimdi Nzotta

There has recently been a lot of hype surrounding ChatGPT and other large language models (LLMs), and rightly so. Their extensive capabilities make them impactful for a wide range of use cases across various industries. According to ChatGPT itself, some of its most popular applications include customer support, content creation (it edited this post), medical advice, gaming and research. However, there are limitations to adopting ChatGPT in the financial services industry.

While it’s exciting to see the world discover and explore the power of generative AI and LLMs, it’s important to acknowledge that integrating this technology responsibly into business operations, particularly in the financial services industry, demands a considered approach.

 

Cutting through the hype, there are significant challenges to adopting LLMs that may seem immaterial but can be detrimental to a business.

Model error/inaccuracy 

Model errors and inaccuracies can be a concern when using AI language models like ChatGPT. For example, a user might ask ChatGPT to edit an email, but the model creates a response to the email instead. In this case, the user can easily change the prompt to help the model better understand the request and provide the correct response. ChatGPT has also provided factually inaccurate information when asked who a public figure is.

While this may not be much of an inconvenience in individual use, such errors can have greater consequences for businesses, especially when they go unrecognised. This is concerning in industries like finance, where relying on a model that can “hallucinate” and provide incorrect outputs can be detrimental to people’s lives.

ChatGPT is also capable of summarising conversations, making it a valuable tool with several use cases within the financial services sector. However, if you were to ask about something that wasn’t discussed in a cited conversation, for example, “what did the customer say about being unable to pay the next instalment due to inflation?”, ChatGPT may correctly respond that it wasn’t mentioned, or it may generate an incorrect answer. This highlights the need to have a robust strategy in place, like adopting human+, when adopting a model that may not always be accurate.
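One simple mitigation is to check whether a question’s key topics actually appear in the source conversation before trusting the model’s answer. Below is a minimal sketch of that idea in Python; `grounded_answer` and `call_llm` are hypothetical names for illustration, not part of any vendor’s API.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for whatever LLM client you actually use.
    raise NotImplementedError("Replace with your LLM client of choice.")


def grounded_answer(transcript: str, question: str, topics: list[str]) -> str:
    """Answer a question about a transcript, but only if its key topics
    are actually mentioned; otherwise route it to a human reviewer."""
    text = transcript.lower()
    missing = [t for t in topics if t.lower() not in text]
    if missing:
        # The question references topics absent from the conversation, so a
        # confident model answer here is likely to be a hallucination.
        return f"FLAGGED FOR REVIEW: transcript never mentions {missing}"
    return call_llm(f"Transcript:\n{transcript}\n\nQuestion: {question}")


transcript = "Customer asked to move the payment date to later in the month."
print(grounded_answer(transcript,
                      "What did the customer say about inflation?",
                      ["inflation"]))
```

A keyword check like this is crude, but it fails safe: anything it cannot verify goes to a person rather than straight to the customer.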


Weakness in offering specific/unique solutions

One of the limitations of ChatGPT is its weakness in offering specific and unique solutions. While ChatGPT is excellent at providing high-level answers due to its extensive training with numerous examples, it may not always be reliable in providing solutions for problems that are specific to your business. It may generate inaccurate information that sounds convincing, instead of confirming that it does not have the answer you are looking for.

This is a limitation of ChatGPT in financial services that can be detrimental because it can lead you to believe that the information provided by the model is accurate and to rely on the wrong input. It is essential to recognise this limitation and use ChatGPT as a tool to supplement human decision-making for tasks that need a more granular or tailored approach.
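One practical guard, sketched below on the assumption that you control the prompt, is to give the model explicit permission to abstain and then check for that sentinel before using its answer; `call_llm` is again a hypothetical stand-in for your LLM client.

```python
ABSTAIN = "NO_ANSWER"


def call_llm(prompt: str) -> str:
    # Stand-in for whatever LLM client you actually use.
    raise NotImplementedError("Replace with your LLM client of choice.")


def ask_with_abstention(question: str, context: str) -> str | None:
    """Ask a question against supplied context, treating an explicit
    abstention as 'no answer' rather than as a convincing guess."""
    prompt = (
        f"Answer using ONLY the context below. If the context does not "
        f"contain the answer, reply exactly '{ABSTAIN}'.\n\n"
        f"Context: {context}\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    return None if ABSTAIN in answer else answer
```

Prompting alone does not guarantee abstention, but it gives downstream code a signal to act on instead of a plausible-sounding fabrication.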

Self-referential responses

Large language models like ChatGPT can sometimes provide commentative or self-referential responses, where they refer to their own output or the process of generating it. For instance, instead of simply transcribing text, the model may say “Here is a transcript for…” and provide the transcription while also commenting on it. While this can be helpful in certain contexts, it can also lead to confusion if the output is being used to build a product or communicate information to a customer. It’s crucial to carefully consider the potential implications of commentative or self-referential responses when adopting this model.
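Teams often post-process model output before it reaches a product. The snippet below is a minimal sketch of that approach, using a few illustrative regular expressions to strip common preambles; a production system would need a far more robust pattern set and testing.

```python
import re

# Illustrative patterns only; real preambles vary widely.
PREAMBLE_PATTERNS = [
    r"^(sure|certainly|of course)[,!.]?\s*",
    r"^here('s| is) (a|the|your) \w+( for[^:]*)?:\s*",
    r"^as an ai( language)? model[^.]*\.\s*",
]


def strip_preamble(output: str) -> str:
    """Remove self-referential lead-ins so only the content remains."""
    cleaned = output.strip()
    for pattern in PREAMBLE_PATTERNS:
        cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
    return cleaned


print(strip_preamble("Sure! Here is the transcript for your call: Hello, ..."))
# -> "Hello, ..."
```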

Model bias

LLMs can sometimes encode bias, resulting in stereotypical and biased outputs that reflect the statistical data they were trained on. For example, a model may generate the output ‘she will be unable to pay bills’ for a client who belongs to a certain demographic group, even if this is not statistically accurate. Such outputs are unhelpful and can perpetuate biases that exist in the real world for social reasons. While language model developers try to eliminate these biases and put guardrails in the training process, they are not completely eliminated. Therefore, firms should be aware of the potential for bias and use these models with caution, especially in sensitive contexts such as decision-making processes.
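A simple way to surface this kind of bias is a counterfactual probe: run the same prompt with a demographic attribute swapped and flag any divergence for review. The sketch below illustrates the idea; `call_llm` is a hypothetical stand-in, and a serious test suite would cover many attribute pairs and use a proper similarity measure rather than exact string comparison.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for whatever LLM client you actually use.
    raise NotImplementedError("Replace with your LLM client of choice.")


def counterfactual_check(template: str, attr_a: str, attr_b: str) -> bool:
    """Return True if swapping a demographic attribute changes the output,
    which should trigger a manual bias review of the prompt."""
    answer_a = call_llm(template.format(attr=attr_a))
    answer_b = call_llm(template.format(attr=attr_b))
    return answer_a.strip().lower() != answer_b.strip().lower()


template = ("A {attr} client has missed one instalment. "
            "Will they be able to pay their bills next month?")
# if counterfactual_check(template, "female", "male"):
#     print("Divergent answers: flag this prompt for bias review.")
```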

 


In conclusion, while there are extensive advantages, it’s important to acknowledge and address these limitations of ChatGPT in financial services. If adopted correctly, language models like ChatGPT have the potential to significantly disrupt the way businesses operate across every sector and function in the coming years. However, there is a need to close the adoption gap and ensure that these models are used responsibly. One way to achieve this is by incorporating a human-in-the-loop or human+ model, where human input is used to guide and adjust the automated processes, ensuring accuracy and preventing errors or biases.
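As a rough illustration of human-in-the-loop, the sketch below routes low-confidence outputs to a review queue instead of sending them straight to the customer. The confidence score, threshold and queue are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass


@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed to come from your own scoring pipeline


REVIEW_THRESHOLD = 0.85
review_queue: list[ModelOutput] = []


def route(output: ModelOutput) -> str | None:
    """Auto-approve high-confidence outputs; queue the rest for a human."""
    if output.confidence >= REVIEW_THRESHOLD:
        return output.text
    review_queue.append(output)  # a human approves, adjusts or rejects later
    return None


print(route(ModelOutput("Payment plan confirmed for 1 June.", 0.95)))
print(route(ModelOutput("Customer cannot pay due to inflation.", 0.40)))
print(f"{len(review_queue)} output(s) awaiting human review")
```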

 

Another approach is to adopt a Generative Adversarial Network (GAN)-style setup, in which a smaller model is trained to recognise good practice and acts as an adversary or opposition model, checking the main model and preventing it from going off course. It’s crucial to take a risk-based approach to the adoption of this technology and incorporate human intervention and oversight to ensure responsible use.
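In outline, that checker pattern might look like the sketch below, where `main_model` and `checker_model` are hypothetical stand-ins and anything the checker keeps rejecting falls back to human handling.

```python
def main_model(prompt: str) -> str:
    # Stand-in for your primary LLM.
    raise NotImplementedError


def checker_model(prompt: str, output: str) -> bool:
    """Stand-in for a smaller model trained to recognise good practice,
    e.g. answers grounded in the prompt and within policy."""
    raise NotImplementedError


def guarded_generate(prompt: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        candidate = main_model(prompt)
        if checker_model(prompt, candidate):
            return candidate
    # If nothing passes the checker, escalate rather than ship
    # an unverified answer.
    return "ESCALATED: no output passed the checker; human review required."
```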
