Hallucination occurs when an AI model generates information that appears plausible but is factually incorrect or entirely fabricated. In financial services, hallucinations pose significant risks because they can lead to unsuitable advice, regulatory breaches or misleading client communications. Techniques such as Retrieval-Augmented Generation (RAG), strict prompting, output validation and human review are used to minimise hallucination risks and ensure AI-generated content is grounded in verified information.
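
To make these safeguards concrete, the sketch below shows, under simplifying assumptions, how retrieval, strict prompting, output validation and escalation to human review might fit together. It is illustrative only: the document store, the `retrieve`, `build_prompt` and `validate` helpers, and the `generate` callable (a stand-in for a real model call) are all hypothetical, and the keyword-overlap retrieval and numeric-figure check are deliberately crude placeholders for production-grade components.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

# Tiny in-memory store standing in for a verified document repository.
KNOWLEDGE_BASE = [
    Document("fund_factsheet_2024", "The fund's ongoing charges figure is 0.45% per annum."),
    Document("kiid_2024", "The fund is classified as risk category 5 on a scale of 1 to 7."),
]

def retrieve(question: str, top_k: int = 2) -> list[Document]:
    """Rank documents by simple keyword overlap with the question (illustrative only)."""
    q_terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[Document]) -> str:
    """Strict prompt: answer only from the supplied sources, or refuse."""
    sources = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return (
        "Answer the question using ONLY the sources below. "
        "If the answer is not in the sources, reply exactly 'INSUFFICIENT INFORMATION'.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

def validate(answer: str, context: list[Document]) -> bool:
    """Crude grounding check: every numeric figure in the answer must appear in a source."""
    figures = re.findall(r"\d+(?:\.\d+)?%?", answer)
    source_text = " ".join(d.text for d in context)
    return all(f in source_text for f in figures)

def answer_with_guardrails(question: str, generate) -> str:
    """Retrieve context, prompt strictly, validate the draft, and escalate if unverified."""
    context = retrieve(question)
    draft = generate(build_prompt(question, context))
    if draft.strip() == "INSUFFICIENT INFORMATION" or not validate(draft, context):
        return "Escalated for human review: response could not be verified against sources."
    return draft
```

In this arrangement the model is never asked to answer from memory: it sees only retrieved, verified text, and any draft whose figures cannot be traced back to those sources is routed to a human reviewer rather than released to the client.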