Large language models (LLMs) are a rapidly evolving field, with new models being released and existing ones improved all the time.
In this podcast episode, host and Aveni CEO Joseph Twigg is joined by Head of Aveni Labs Dr Lexi Birch and Aveni NLP Engineer Dr Ben Trevett to discuss the latest developments in LLMs.
They delve into the rapid advancement of this technology and what it could mean for the future of our industry.
The GPT Effect
OpenAI’s GPT family of LLMs has been at the forefront of recent advancements, with the release of GPT-3 in 2020, GPT-3.5 in 2022, and GPT-4 roughly a year later. Since their emergence, we’ve seen the world sit up and pay attention to generative AI in a way it hadn’t before.
If GPT-3 was a game-changer, GPT-4 has taken things to a new level and woken the world up. On benchmarks such as the bar exam, GPT-3.5 scores around the 10th percentile of human test-takers, while GPT-4 scores around the 90th percentile.
This huge improvement in performance in such a short space of time has every eye watching OpenAI, wondering what’s coming next.
Scaling Up: GPT-4’s Massive Parameters
The number of parameters in an LLM is a key factor driving its performance. Parameters are the variables the model learns during training, so the more parameters a model has, the more complex the patterns it can learn from its data.
To give you some perspective, GPT-3 has 175 billion parameters, while GPT-4 is reported to have around 1.7 trillion (OpenAI has not confirmed the figure). This massive parameter count is a big part of why GPT-4 outperforms GPT-3 and many other LLMs on the market.
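As a rough illustration of what a parameter count actually refers to, here is a minimal sketch using the Hugging Face transformers library, with the small, openly available GPT-2 model standing in for GPT-3 and GPT-4, whose weights aren’t public:

```python
# Minimal sketch: counting the parameters of a small open model.
# GPT-2 is used as a stand-in because GPT-3/GPT-4 weights are not public.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # the smallest GPT-2 variant

total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params:,}")  # roughly 124 million
```

The same one-liner works for any model whose weights you can load; the frontier models simply have thousands of times more of these learned variables.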
The Significance of Data Size vs. Model Parameters
It’s important to remember that the amount of data an LLM is trained on is just as important as the number of parameters it has. Beyond a certain scale, adding more data matters more than simply adding parameters, because larger datasets allow the model to learn a wider range of patterns and relationships.
With large training datasets and the parameters to learn from them, LLMs can now process and generate content in different formats, such as text, images, and speech, through multimodal capabilities. GPT-4 is the first GPT model to incorporate multimodal capabilities, and future releases are expected to take this even further.
We’re already seeing examples of GPT-4’s very human-sounding “spoken” responses. Are we on the brink of replicating human experience with AI interactions?
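To make the multimodal point concrete, here is a minimal sketch of sending an image alongside a text question to a multimodal GPT model via the OpenAI Python SDK. The model name and image URL are illustrative placeholders, so check the current API documentation before relying on the details:

```python
# Illustrative sketch: asking a multimodal GPT model about an image.
# The model name and image URL below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarise what this chart shows."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```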
Smaller Large Language Models Making Waves
While the spotlight often shines on the largest language models, smaller models are also causing a stir. Models like ChatGPT are incredibly exciting, but their use is currently limited to the few companies with the resources to train and run them, largely because of the costs involved in inference and engineering.
Achieving transparency and traceability, and meeting the stringent standards a business needs for the technology to be reliable, can be a costly endeavour, especially when deploying it at scale. This is a common challenge with models like GPT.
More financially viable alternatives, such as the LLaMA (Large Language Model Meta AI) family of models, can still deliver impressive performance despite their smaller size. They also give companies the flexibility to fine-tune the models for specific applications, making them a more accessible and versatile option.
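As an example of what that fine-tuning might look like in practice, here is a hedged sketch using the Hugging Face transformers, datasets, and peft libraries with LoRA, a parameter-efficient fine-tuning method. The model name, dataset file, and hyperparameters are purely illustrative assumptions, not a recommended configuration:

```python
# Illustrative sketch: adapting a smaller open model to a domain-specific task
# with LoRA. Model name, data file, and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumes you have access to the weights
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of all of the base weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         lora_dropout=0.05, task_type="CAUSAL_LM"))

# Hypothetical domain-specific dataset: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="my_domain_data.jsonl")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned-model",
                           per_device_train_batch_size=4, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the small adapter matrices are trained, this kind of adaptation can run on far more modest hardware than training a frontier model from scratch.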
The Dark Side of LLMs
Despite all of the hype, LLMs like GPT are not perfect. Hallucinations are still a problem, and they can be detrimental wherever sensitive information is involved. Companies can’t rely on technology that produces misleading or incorrect information, particularly in financial support and advice. It’s a sure-fire way to lose trust.
But all is not lost. There are techniques being used to minimise these issues, notably Retrieval-Augmented Generation (RAG). With RAG, the language model is backed by a reliable source of information that it can draw on to generate answers, reducing the chance of hallucinations.
With RAG, LLMs can also provide citations, showing where the model got its information and linking back to the source. This gives users more confidence in the answers it generates.
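Here is a minimal sketch of the RAG idea, assuming a small sentence-transformers embedding model; the document names, their contents, and the financial examples are purely illustrative. The most relevant trusted document is retrieved for the query and handed to the LLM along with the question, so the answer can cite its source:

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve a trusted
# document for the query, then pass it to the LLM together with the question
# so the answer can cite its source. Documents and model are illustrative.
from sentence_transformers import SentenceTransformer, util

documents = {
    "pension_policy.pdf": "Clients may withdraw up to 25% of their pension tax-free from age 55.",
    "isa_guide.pdf": "The annual ISA allowance for the 2023/24 tax year is 20,000 GBP.",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_ids = list(documents)
doc_embeddings = embedder.encode([documents[d] for d in doc_ids], convert_to_tensor=True)

query = "How much can a client pay into an ISA this year?"
query_embedding = embedder.encode(query, convert_to_tensor=True)

# Pick the document most similar to the query.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = doc_ids[int(scores.argmax())]

# The retrieved text grounds the model's answer, and the source document
# is returned alongside it as a citation.
prompt = (
    f"Answer the question using only the source below, and cite it.\n"
    f"Source ({best}): {documents[best]}\n"
    f"Question: {query}"
)
print(prompt)  # this prompt would then be sent to the LLM of your choice
```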
The Future Landscape of Large Language Models
Smaller, domain-specific LLMs are becoming more accessible and versatile, promising a range of practical applications across industries.
There will be an ongoing process of enhancing these locally operated language models. We’ve already seen significant progress from LLaMA 1, released in February 2023, to LLaMA 2, which followed just a few months later.
Updating and fine-tuning domain-specific models, seamlessly integrating them into specific contexts, and adapting them to your own datasets, clients, and problems will become much more impactful. These models will be able to retain information about specific customers or clients, allowing continuously tailored performance for individual sectors or clients.
Conclusion
The field of LLMs is rapidly evolving, with new developments emerging all the time. The integration of multimodal capabilities, the proliferation of smaller models, and the ongoing pursuit of ethical AI are just a few of the areas shaping this transformative field.
The future promises even greater advancements, ushering in a new era of possibilities for language models.
Watch the full podcast episode here.