Bias in AI systems occurs when models produce systematically skewed outputs that unfairly favour or disadvantage certain groups or perspectives. In financial services, bias can manifest in various ways, such as inconsistent treatment of client demographics, skewed risk assessments or unbalanced recommendations. Bias can arise from training data, model design or deployment context. Firms must test for bias, implement mitigation strategies and monitor AI systems to ensure fair treatment and regulatory compliance, particularly in relation to Consumer Duty and equality obligations.
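One common bias test compares outcome rates across groups. A minimal sketch, assuming binary approve/decline decisions and a single protected attribute (the group labels, data and the 0.8 "four-fifths" threshold here are illustrative, not a prescribed regulatory test):

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions for two client groups
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)            # group_a: 0.75, group_b: 0.25
ratio = disparate_impact_ratio(rates)        # 0.25 / 0.75 ≈ 0.33
flagged = ratio < 0.8                        # below the illustrative threshold, so flag for review
```

A check like this is only a starting point: a low ratio signals skewed outcomes, not their cause, so flagged results still need investigation of the training data, model design and deployment context described above.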