Limitations of adopting ChatGPT and other large language models in the financial services industry


There has recently been a lot of hype surrounding ChatGPT and other large language models (LLMs), and rightly so. Their extensive capabilities make them impactful for a wide range of use cases across various industries. According to ChatGPT itself, some of its most popular applications include customer support, content creation (it edited this post), medical advice, gaming, and research. However, there are limitations to adopting ChatGPT in the financial services industry.


While it’s exciting to see the world discover and explore the power of generative AI and LLMs, it’s important to acknowledge that integrating this technology responsibly into business operations, particularly in the financial services industry, demands a considered approach.


Cutting through the hype, there are significant challenges to adopting LLMs that may seem immaterial but can be detrimental to a business.

Model error/inaccuracy 

Model errors and inaccuracies can be a concern when using AI language models like ChatGPT. For example, a user might ask ChatGPT to edit an email, but the model drafts a reply to the email instead. In this case the user can easily change the prompt to help the model better understand the request and provide the correct response. ChatGPT has also produced factually inaccurate information when asked who a public figure is.


While this may not be much of an inconvenience for individual users, such errors can have far greater consequences for businesses, especially when they go unrecognised. This is concerning in industries like finance, where relying on a model that can “hallucinate” and provide incorrect outputs can be detrimental to people’s lives.


ChatGPT is also capable of summarising conversations, making it a valuable tool with several use cases within the financial services sector. However, if you were to ask about something that wasn’t discussed in a cited conversation, for example, “what did the customer say about being unable to pay the next instalment due to inflation?”, ChatGPT may accurately respond that it wasn’t mentioned, or it may generate an incorrect answer. This highlights the need to have a robust strategy in place, like adopting human+, when adopting a model that may not always be accurate.
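One simple safeguard in this spirit is to check whether an answer about a transcript is actually supported by the transcript before trusting it. The sketch below is purely illustrative (the function name `is_grounded`, the word-overlap heuristic, and the thresholds are our own assumptions, not any ChatGPT feature): it flags answers whose content words barely appear in the source conversation.

```python
# Illustrative grounding check: flag answers about a conversation that are
# not supported by the conversation text. A real system would use a more
# robust method; this crude word-overlap heuristic just shows the idea.

def is_grounded(answer: str, transcript: str, min_overlap: int = 2) -> bool:
    """Require that enough of the answer's content words actually
    appear in the source conversation."""
    stopwords = {"the", "a", "an", "to", "of", "and", "that", "is", "was"}
    strip_chars = ".,?!:'\""
    answer_words = {w.strip(strip_chars).lower() for w in answer.split()} - stopwords
    transcript_words = {w.strip(strip_chars).lower() for w in transcript.split()}
    return len(answer_words & transcript_words) >= min_overlap

transcript = "Customer: I'd like to move my payment date to the 15th."
hallucinated = "The customer said inflation made the instalment unaffordable."
grounded = "The customer asked to move the payment date."

print(is_grounded(hallucinated, transcript))  # prints False
print(is_grounded(grounded, transcript))      # prints True
```

A check like this is deliberately conservative: it cannot prove an answer is correct, but it can cheaply surface answers that mention things the conversation never contained, which is exactly the failure mode described above.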


Weakness in offering specific/unique solutions

One of the limitations of ChatGPT is its weakness in offering specific and unique solutions. While ChatGPT is excellent at providing high-level answers due to its extensive training with numerous examples, it may not always be reliable in providing solutions for problems that are specific to your business. It may generate inaccurate information that sounds convincing, instead of confirming that it does not have the answer you are looking for.

This is a limitation of ChatGPT in financial services that can be detrimental because it can make you believe that the information provided by the model is accurate, leading you to rely on incorrect input. It is essential to recognise this limitation and use ChatGPT as a tool to supplement human decision-making for tasks that need a more granular or tailored approach.

Self-referential responses

Large Language models like ChatGPT can sometimes provide commentative or self-referential responses, where they refer to their own output or process of generating a response. For instance, instead of simply transcribing text, the model may say ‘Here is a transcript for…’ and provide a transcription while also commenting on it. While this can be helpful in certain contexts, it can also lead to confusion if the output is being used to build a product or communicate information to a customer. It’s crucial to carefully consider the potential implications of commentative or self-referential responses while adopting this model.

Model Bias

LLMs can sometimes encode bias, resulting in stereotypical and biased outputs that reflect the statistical data they were trained on. For example, a model may generate the output ‘she will be unable to pay bills’ for a client who belongs to a certain demographic group, even if this may not be statistically accurate. Such outputs may not be helpful and can perpetuate biases that exist in the real world due to social reasons. While language model developers try to eliminate these biases and put guard rails in the training process, they are not completely eliminated. Therefore, firms should be aware of the potential for bias and use these models with caution, especially in sensitive contexts such as decision-making processes.



In conclusion, while there are extensive advantages, it’s important to acknowledge and address these limitations of ChatGPT in financial services. If adopted correctly, language models like ChatGPT have the potential to significantly disrupt the way businesses operate across every sector and function in the coming years. However, there is a need to close the adoption gap and ensure that these models are used responsibly. One way to achieve this is through incorporating a human-in-the-loop or human+ model, where human input is used to guide and adjust the automated processes, ensuring accuracy and preventing errors or biases.
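A human-in-the-loop setup can be as simple as a gate between the model and the customer. The sketch below is a minimal, hypothetical example (the `ReviewQueue` class and the confidence threshold are our own illustrative assumptions): confident outputs pass through automatically, while uncertain ones are queued for a human to review.

```python
# Minimal human-in-the-loop gate: model outputs below a confidence
# threshold are routed to a human review queue instead of being sent
# straight to the customer. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, output: str, confidence: float, threshold: float = 0.9) -> str:
        """Auto-approve confident outputs; route the rest to a human."""
        if confidence >= threshold:
            return "auto_approved"
        self.pending.append(output)
        return "needs_human_review"

queue = ReviewQueue()
print(queue.submit("Summary of the call...", confidence=0.95))            # auto_approved
print(queue.submit("Client cannot pay next instalment", confidence=0.6))  # needs_human_review
print(len(queue.pending))                                                 # 1
```

The design choice here is that the human reviews only the uncertain cases, so oversight scales: the model handles routine volume while people focus their attention where errors are most likely.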


Another approach is to adopt a Generative Adversarial Network (GAN)-style setup, in which a smaller model is trained to recognise good practice and can act as an adversary or opposition model to check the main model and prevent it from going off course. It’s crucial to take a risk-based approach to the adoption of this technology and incorporate human intervention and oversight to ensure responsible use.
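The checker idea can be sketched in a few lines. In this hypothetical example, `main_model` and `checker` are stand-in functions (not real APIs): the checker vetoes self-referential framing like “Here is a transcript for...” before the output reaches a customer, and a simple repair step strips it.

```python
# Hypothetical adversary/checker pattern: a second, smaller check vets the
# main model's output and blocks or repairs anything off-course.
# `main_model` and `checker` are stand-ins, not real library calls.

def main_model(prompt: str) -> str:
    # Stand-in for the large generative model.
    return "Here is a transcript for the call: payment due on the 1st."

def checker(output: str) -> bool:
    # Stand-in adversary: reject self-referential framing that should
    # not reach a customer-facing product.
    banned_prefixes = ("here is", "as an ai")
    return not output.lower().startswith(banned_prefixes)

def generate_with_check(prompt: str, retries: int = 2) -> str:
    for _ in range(retries + 1):
        candidate = main_model(prompt)
        if checker(candidate):
            return candidate
        # Simple repair step: strip the offending framing.
        candidate = candidate.split(":", 1)[-1].strip()
        if checker(candidate):
            return candidate
    raise ValueError("No output passed the checker")

print(generate_with_check("Transcribe the call"))  # prints "payment due on the 1st."
```

In a production system the checker would itself be a trained model rather than a rule, but the control flow is the same: nothing ships until the opposition model signs off.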
