Outcome Testing Teams (OTTs) are teams within financial services organisations that assess and analyse the real-world impacts of Artificial Intelligence (AI). They aim to provide accountability for potential harms and to evaluate how effectively policies and procedures meet regulatory requirements.
OTTs do this by:
- Conducting audits that evaluate model performance in deployment
- Developing testing plans that assess for issues such as unfair bias (a minimal example follows this list)
- Monitoring systems after launch to detect emerging problems (see the monitoring sketch at the end of this section)
- Suggesting interventions to address weaknesses
- Working closely with developers and business teams
- Including practitioners from diverse disciplines, such as social scientists and ethicists
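
To make the bias-testing item concrete, the sketch below shows one kind of check an outcome testing plan might include: comparing positive-outcome rates across groups in a log of deployed decisions. The column names, the illustrative data, and the 0.8 ("four-fifths rule") screening threshold are assumptions for illustration, not a methodology the source prescribes.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate of each group divided by the rate of the
    most-favoured group. Values well below 1.0 flag potential unfair bias."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical deployment log of loan decisions (illustrative data only).
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   1,   0,   0,   0,   1],
})

ratios = disparate_impact_ratio(decisions, "applicant_group", "approved")
# An assumed screening threshold (the "four-fifths rule"): ratio below 0.8.
flagged = ratios[ratios < 0.8]
print(ratios)
print("Groups flagged for review:", list(flagged.index))
```

A check like this is only a starting point; an OTT would typically pair the numbers with qualitative review of how the flagged decisions were reached.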
Outcome Testing represents a shift toward holistic AI safety, which centres on societal well-being and due care. It complements technical testing by adding ethical, legal and social considerations.
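
As one way the post-launch monitoring item is often operationalised, the sketch below tracks drift between the score distribution a model was validated on and the distribution seen in production, using a population stability index (PSI). The bin count, the synthetic data and the 0.2 alert threshold are illustrative assumptions rather than a standard from the source.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions. Higher values indicate the recent
    population has drifted away from the baseline used at validation time."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    rec_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Guard against log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    rec_pct = np.clip(rec_pct, 1e-6, None)
    return float(np.sum((rec_pct - base_pct) * np.log(rec_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)  # scores seen at validation time
recent_scores = rng.beta(3, 4, size=5_000)    # scores seen in production this month

psi = population_stability_index(baseline_scores, recent_scores)
# A commonly used (assumed) rule of thumb: PSI above 0.2 warrants investigation.
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> stable")
```

In practice such alerts would feed into the interventions an OTT suggests, alongside the ethical, legal and social review described above.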