
What the industry told us about AI agent oversight and why it matters now

Last month, Aveni convened the inaugural meeting of the Agent Assurance Expert Council: a group of senior practitioners from across financial services, brought together to start building a shared understanding of what responsible oversight of agentic AI actually looks like in practice.

The timing wasn’t accidental. The industry is moving fast, and the governance conversation hasn’t kept pace.

What came out of the room

The session was held under the Chatham House Rule, so we won't attribute views. But we can share the themes that came through clearly, because they matter for anyone working in or around AI in financial services right now.

The overriding message: this isn’t an evolution of existing oversight models, it’s an overhaul. The shift from AI-assisted processes to autonomous, agent-based systems creates new questions about accountability that the traditional three lines of defence weren’t designed to answer. When a decision is made continuously, at scale, by a machine — who’s responsible? Where does oversight sit? And how do you build governance that’s fast enough to keep up?

There was strong consensus that the “human-in-the-loop” model, often cited as the answer, needs far more careful design than it typically gets. A human placed at the wrong point in the process isn’t a safeguard; they’re simply overwhelmed. The group’s discussion wasn’t about whether a human is involved, but where, when, and with what information. Targeted, risk-based intervention is the goal. And for some decisions, such as sensitive customer situations, potential data breaches, and actions with irreversible consequences, human oversight must be a non-negotiable hard stop.
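To make that concrete, here is a minimal sketch of what risk-based routing with hard stops might look like. The decision fields, flags, and threshold below are illustrative assumptions, not drawn from any particular firm's framework or from the Council's discussion.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    AUTO_APPROVE = auto()   # low risk: the agent proceeds and the decision is logged
    HUMAN_REVIEW = auto()   # elevated risk: queued for a reviewer with full context
    HARD_STOP = auto()      # non-negotiable: the agent halts until a human acts


# Hypothetical record an agent would emit before taking an action.
@dataclass
class AgentDecision:
    action: str                         # e.g. "issue_refund", "update_address"
    risk_score: float                   # 0.0 to 1.0, from an upstream risk model
    involves_vulnerable_customer: bool
    potential_data_breach: bool
    irreversible: bool


# Categories where human oversight is a hard stop, regardless of risk score.
HARD_STOP_FLAGS = ("involves_vulnerable_customer", "potential_data_breach", "irreversible")

REVIEW_THRESHOLD = 0.4  # illustrative risk-appetite threshold, not a real benchmark


def route_decision(decision: AgentDecision) -> Route:
    """Decide where, when, and with what information a human intervenes."""
    # Non-negotiable hard stops are checked first.
    if any(getattr(decision, flag) for flag in HARD_STOP_FLAGS):
        return Route.HARD_STOP
    # Targeted, risk-based intervention: only escalate above the appetite threshold.
    if decision.risk_score >= REVIEW_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.AUTO_APPROVE
```

The point of the sketch is the ordering: hard stops are evaluated before any score-based triage, so the human is placed at the points that matter rather than in front of every decision.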

The group also identified the importance of embedding an organisation’s risk appetite directly into agent behaviour from the start, rather than retrofitting governance after deployment. Getting compliance and risk functions involved early isn’t just good practice; it’s the difference between a system that works and one that creates liability at scale.
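Again purely as a sketch, embedding risk appetite from the start could mean constructing the agent from a declarative policy that risk and compliance have signed off before go-live, rather than wrapping checks around it afterwards. The policy fields, limits, and action names below are hypothetical.

```python
# Illustrative only: a firm's risk appetite expressed as data the agent is built with.
RISK_APPETITE = {
    "max_autonomous_payment_gbp": 250,  # above this, escalate to a human
    "allowed_actions": {"answer_query", "update_contact_details", "issue_refund"},
    "prohibited_actions": {"close_account", "amend_credit_limit"},
}


class GovernedAgent:
    def __init__(self, risk_appetite: dict):
        # Risk and compliance review this configuration before deployment.
        self.appetite = risk_appetite

    def can_act(self, action: str, amount_gbp: float = 0.0) -> bool:
        """Check a proposed action against the embedded risk appetite."""
        if action in self.appetite["prohibited_actions"]:
            return False
        if action not in self.appetite["allowed_actions"]:
            return False
        return amount_gbp <= self.appetite["max_autonomous_payment_gbp"]


agent = GovernedAgent(RISK_APPETITE)
print(agent.can_act("issue_refund", amount_gbp=120))  # True: within appetite
print(agent.can_act("issue_refund", amount_gbp=900))  # False: needs escalation
print(agent.can_act("close_account"))                 # False: prohibited outright
```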

One framing that resonated: this is a “back to basics” moment. First principles of control, accountability and trust, applied to a fundamentally different operating model.

What’s happening outside the room

The Council met at a moment when this conversation is accelerating across the industry.

In the UK, the FCA has explicitly named agentic AI as a live policy question for the first time in a formal priorities document, signalling that the existing frameworks — SM&CR, Consumer Duty — may not be sufficient for systems that autonomously initiate actions on a customer’s behalf. Guidance on audit trails and human-in-the-loop protocols is expected from the FCA later in 2026. The Mills Review, launched earlier this year, is asking harder questions still: who controls the primary customer relationship by 2030 — incumbent firms, Big Tech, or consumers’ own AI agents — and what would that mean for regulation, competition, and accountability?

Internationally, Singapore is further ahead. The Monetary Authority of Singapore (MAS) recently concluded phase two of Project MindForge, publishing an AI Risk Management Toolkit developed collaboratively with a consortium of 24 banks, insurers, capital markets firms and other industry partners. It covers traditional AI, generative AI, and emerging agentic AI technologies, and it’s built around practical implementation, not just principles. MAS has been trying to move the market from principle-setting to implementation, recognising that many firms already have AI policies on paper, but that generative and agentic AI create newer operational risks around oversight, accountability, model behaviour, and lifecycle controls. It’s worth watching.

Meanwhile, McKinsey’s 2026 AI Trust Maturity Survey found that only around a third of organisations report maturity levels of three or higher in strategy, governance, and agentic AI governance, suggesting that while technical capabilities are advancing, oversight structures are struggling to keep pace.

Where this goes next

The Agent Assurance Expert Council will continue meeting to shape emerging best practice, share cross-industry insight, and define what assurance needs to look like for agentic systems. The problems being discussed don’t have established answers yet, which is exactly why getting the right people in the room to work through them matters.

If you’re working on these questions and want to be part of the conversation, get in touch.
