There has recently been a lot of hype surrounding ChatGPT and other large language models (LLMs), and rightly so. Their extensive capabilities make them impactful for a wide range of use cases across various industries. According to ChatGPT itself, some of its most popular applications include customer support, content creation (it edited this post), medical advice, gaming, and research. However, there are limitations to adopting ChatGPT in the financial services industry.
While it’s exciting to see the world discover and explore the power of generative AI and LLMs, it’s important to acknowledge that integrating this technology responsibly into business operations, particularly in the financial services industry, demands a considered approach.
Cutting through the hype, there are significant challenges in adopting LLMs that may seem immaterial but can be detrimental to a business.
Model error/inaccuracy
Model errors and inaccuracies are a real concern when using AI language models like ChatGPT. For example, a user might ask ChatGPT to edit an email, but the model drafts a reply to the email instead. In this case, the user can easily change the prompt to help the model better understand the request and provide the correct response. ChatGPT has also produced factually inaccurate information when asked who a public figure is.
While this may be little more than an inconvenience for an individual user, such errors can have far greater consequences for businesses, especially when they go unrecognised. This is particularly concerning in industries like finance, where relying on a model that can “hallucinate” and produce incorrect outputs can be detrimental to people’s lives.
ChatGPT is also capable of summarising conversations, making it a valuable tool with several use cases within the financial services sector. However, if you ask about something that wasn’t discussed in a given conversation, for example, “what did the customer say about being unable to pay the next instalment due to inflation?”, ChatGPT may respond accurately and indicate that it wasn’t mentioned, or it may fabricate an answer. This highlights the need for a robust strategy, like adopting human+, when using a model that may not always be accurate; one simple safeguard is to constrain the model to the transcript and escalate anything it cannot ground, as sketched below.
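To make this concrete, here is a minimal sketch, assuming the OpenAI Python client and a hypothetical model name, of how a summarisation assistant could be constrained to the supplied transcript and made to escalate anything it cannot ground. The prompt wording and escalation rule are illustrative assumptions, not a production hallucination check.

```python
# Minimal sketch: answer questions only from a supplied transcript and
# escalate anything the model cannot ground in it.
# Assumptions: OpenAI Python client installed, OPENAI_API_KEY set, model name illustrative.
from openai import OpenAI

client = OpenAI()

transcript = (
    "Agent: Good morning, how can I help?\n"
    "Customer: I'd like to confirm my next payment date.\n"
    "Agent: It's the 28th of this month."
)
question = ("What did the customer say about being unable to pay "
            "the next instalment due to inflation?")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": ("Answer strictly from the transcript provided. If the transcript "
                     "does not cover the question, reply exactly: NOT MENTIONED.")},
        {"role": "user",
         "content": f"Transcript:\n{transcript}\n\nQuestion: {question}"},
    ],
)
answer = response.choices[0].message.content.strip()

# Route anything the model could not ground to a human rather than a customer.
if answer == "NOT MENTIONED":
    print("Topic not found in transcript; escalate to human review.")
else:
    print("Answer:", answer)
```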
Weakness in offering specific/unique solutions
One of the limitations of ChatGPT is its weakness in offering specific and unique solutions. While ChatGPT is excellent at providing high-level answers due to its extensive training with numerous examples, it may not always be reliable in providing solutions for problems that are specific to your business. It may generate inaccurate information that sounds convincing, instead of confirming that it does not have the answer you are looking for.
This limitation of ChatGPT in financial services can be detrimental because it can make you believe that the information provided by the model is accurate, leading you to rely on incorrect input. It is essential to recognise this limitation and use ChatGPT as a tool to supplement human decision-making for tasks that need a more granular or tailored approach.
Self-referential responses
Large language models like ChatGPT can sometimes provide commentative or self-referential responses, where they refer to their own output or the process of generating it. For instance, instead of simply transcribing text, the model may say “Here is a transcript for…” and provide the transcription along with commentary on it. While this can be helpful in certain contexts, it can lead to confusion if the output is being used to build a product or communicate information to a customer. It’s crucial to consider the potential implications of commentative or self-referential responses when adopting this model; a lightweight post-processing step, like the one sketched below, can strip such preambles before the output is used.
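As a simple illustration, the snippet below sketches one way to strip such preambles before the text is passed downstream. The regular expression only covers a few common phrasings and would need extending for your own prompts.

```python
# Minimal sketch: strip common self-referential preambles such as
# "Here is a transcript for..." before the output is used downstream.
# The preamble patterns are assumptions; extend them for your own prompts.
import re

PREAMBLE = re.compile(
    r"^\s*(sure[,!]?\s*)?(here\s+is|here's)\s+(the|a|your)\s+\w+[^:\n]*:\s*",
    re.IGNORECASE,
)

def strip_commentary(model_output: str) -> str:
    """Remove a leading 'Here is the ...:' style preamble, if present."""
    return PREAMBLE.sub("", model_output, count=1).strip()

print(strip_commentary("Sure! Here is a transcript for the call:\nAgent: Hello..."))
# -> "Agent: Hello..."
```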
Model bias
LLMs can sometimes encode bias, resulting in stereotypical and biased outputs that reflect the statistical data they were trained on. For example, a model may generate the output “she will be unable to pay bills” for a client who belongs to a certain demographic group, even if this is not statistically accurate. Such outputs are unhelpful and can perpetuate biases that exist in the real world for social reasons. While language model developers try to eliminate these biases and put guardrails into the training process, they are not completely removed. Therefore, firms should be aware of the potential for bias and use these models with caution, especially in sensitive contexts such as decision-making processes. A simple counterfactual probe, sketched below, can help surface such differences before a model is put in front of customers.
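One lightweight way to check for this is a counterfactual probe: run the same prompt with only a demographic descriptor swapped and compare the outputs. The sketch below assumes the OpenAI Python client; the prompt template, descriptor pairs and simple equality check are illustrative, and real bias testing needs far larger test sets and proper fairness metrics.

```python
# Minimal sketch of a counterfactual bias probe: ask the same question with
# only a demographic descriptor swapped and flag any divergence.
# Assumptions: OpenAI Python client, illustrative model name, toy prompt template.
from openai import OpenAI

client = OpenAI()

TEMPLATE = ("A {descriptor} customer has missed one repayment. "
            "Will they be able to pay their bills next month? "
            "Answer yes or no with one reason.")

def ask(descriptor: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": TEMPLATE.format(descriptor=descriptor)}],
        temperature=0,  # reduce random variation so differences reflect the swapped descriptor
    )
    return resp.choices[0].message.content.strip()

for pair in [("male", "female"), ("young", "elderly")]:
    answers = {descriptor: ask(descriptor) for descriptor in pair}
    if len(set(answers.values())) > 1:
        print(f"Possible bias between {pair}: {answers}")
```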
In conclusion, while there are extensive advantages, it’s important to acknowledge and address these limitations of ChatGPT in financial services. If adopted correctly, language models like ChatGPT have the potential to significantly disrupt the way businesses operate across every sector and function in the coming years. However, there is a need to close the adoption gap and ensure that these models are used responsibly. One way to achieve this is by incorporating a human-in-the-loop or human+ model, where human input is used to guide and adjust the automated processes, ensuring accuracy and preventing errors or biases; a minimal version of such a review gate is sketched below.
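As a minimal illustration of that review gate, the sketch below routes every model draft through an escalation check before it can reach a customer. The high-risk terms and queue structures are illustrative assumptions; in practice the escalation rules would come from compliance policy.

```python
# Minimal human-in-the-loop sketch: every model draft passes through a review
# gate before reaching a customer. Escalation rules and queues are illustrative.
HIGH_RISK_TERMS = {"arrears", "default", "vulnerable", "complaint"}

def needs_human_review(draft: str) -> bool:
    """Escalate drafts that touch high-risk topics or admit uncertainty."""
    text = draft.lower()
    return any(term in text for term in HIGH_RISK_TERMS) or "i am not sure" in text

def dispatch(draft: str, review_queue: list, outbox: list) -> None:
    """Send safe drafts automatically; queue risky ones for a human reviewer."""
    (review_queue if needs_human_review(draft) else outbox).append(draft)

review_queue, outbox = [], []
dispatch("Your next instalment is due on the 28th.", review_queue, outbox)
dispatch("We have noted your account is in arrears.", review_queue, outbox)
print(len(outbox), "sent automatically;", len(review_queue), "awaiting human review")
```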
Another approach is to adopt a Generative Adversarial Network (GAN)-style setup, in which a smaller model is trained to recognise good practice and acts as an adversary or opposition model, checking the main model and preventing it from going off course; a loose sketch of this checker idea follows. It’s crucial to take a risk-based approach to the adoption of this technology and incorporate human intervention and oversight to ensure responsible use.
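To close, here is a loose sketch of that checker idea: a second, cheaper model reviews the main model’s draft against the source material before anything is released. The model names and PASS/FAIL protocol are assumptions, and this is a runtime check rather than a true GAN training loop.

```python
# Loose sketch of a checker model: a smaller model reviews the main model's
# draft against the source material before release.
# Assumptions: OpenAI Python client, illustrative model names and PASS/FAIL protocol.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical "main" model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def check(source: str, draft: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical smaller "adversary" model
        messages=[{"role": "user",
                   "content": (f"Source:\n{source}\n\nDraft:\n{draft}\n\n"
                               "Reply PASS if every claim in the draft is supported "
                               "by the source, otherwise reply FAIL.")}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("PASS")

source = "Customer asked to move their payment date from the 1st to the 15th."
draft = generate(f"Summarise this note for the account record:\n{source}")
print(draft if check(source, draft) else "Draft rejected; escalate to a human reviewer.")
```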