Aveni has announced the formation of its Agent Assurance Expert Council (AAEC), a new collaborative body designed to address one of financial services’ most urgent emerging challenges: how to govern and assure the next generation of autonomous AI agents.
The council held its inaugural meeting in Edinburgh and plans to rotate future gatherings between London and Scotland. It brings together senior leaders from across financial services, advice, risk and compliance to develop practical frameworks for overseeing AI-driven systems as they become increasingly embedded in day-to-day operations.
The launch comes at a critical juncture for the industry. As firms move beyond AI tools that merely support human decision-making towards fully autonomous agents capable of complex interactions and independent action, traditional models of assurance are struggling to keep pace.
These agentic systems introduce continuous, machine-led decision-making at scale, raising serious questions for boards, regulators and compliance leaders around oversight, accountability and customer outcomes.
The urgency behind the AAEC is underscored by a striking gap in industry readiness. Research shows that 99% of companies plan to put AI agents into production, yet only 11% have actually done so. At the same time, just 2% of companies report having adequate AI guardrails in place, while 95% have already experienced at least one AI-related incident. The gap between ambition and appropriate oversight is rapidly becoming one of the most consequential risks facing regulated financial institutions.
Aveni RegTech adviser Kent Mackenzie said, “AI agents represent a step change in how decisions are made within financial services. Assurance models and frameworks built for human-led processes are no longer sufficient. The Agent Assurance Expert Council is about bringing the industry together to define how we maintain control, transparency and trust as these systems scale. Collaboration will be essential to ensuring we meet regulatory expectations while continuing to innovate responsibly.”
The AAEC has been established as a collaborative forum to explore how assurance frameworks must evolve in response to agentic AI adoption. With participation from senior practitioners across the industry, the council will focus on practical governance approaches, including emerging concepts such as machine-led assurance and the future of the lines of defence model — a traditional risk management structure now being tested by the speed and scale at which AI agents operate.
Aveni is well placed to spearhead the initiative. Through its participation in the FCA’s inaugural Supercharged Sandbox, Aveni demonstrated how end-to-end assurance — spanning pre-deployment stress testing and post-production monitoring — can unlock the safe deployment of agentic AI in regulated environments.
The company’s work focused on evidence-led assurance, using simulated real-world interactions to validate AI agent behaviour against safe conduct standards before deployment, while also providing continuous monitoring once live.
The AAEC initiative reflects growing recognition across the sector that no single firm can solve the governance challenge alone. Industry-wide collaboration will be critical to developing consistent, scalable approaches to monitoring, validating and evidencing AI-driven decisions within customer journeys. With regulatory scrutiny intensifying and adoption accelerating, the council marks a significant step towards establishing the machine-based oversight frameworks that safe and accountable AI deployment will require.
Copyright © 2026 FinTech Global