By the end of 2025, artificial intelligence had moved decisively from theory to practice across financial crime functions.
At industry events ranging from Transform Finance and ACAMS chapter meetings to Money20/20 and senior 1LoD gatherings, the same conclusion surfaced repeatedly: AI is no longer experimental, according to Quantifind.
It is becoming operational infrastructure for financial institutions, forcing leaders to confront a more pressing question: not whether AI has potential, but whether they are prepared to use it.
This shift has reframed industry conversations. Instead of debating whether AI should be used, financial crime leaders are now assessing whether their organisations are genuinely ready to deploy it safely, effectively, and at scale. Insights shared throughout the 2025 conference circuit point to a growing consensus that readiness, not technology, will define success as institutions prepare for 2026.
There is good news. AI adoption in financial crime is accelerating and delivering measurable value. Analysts have consistently reinforced this momentum. McKinsey observed in 2025 that “risk and compliance teams are accelerating deployment of domain-specific AI models, supported by clearer regulatory expectations.” Deloitte’s Financial Crime Trends report highlighted that many firms have “moved beyond experimentation into structured AI-enabled workflows,” while Forrester identified explainable AI as “a top priority for financial crime platforms in 2025.” Gartner echoed this, noting that demand for “AI with transparent, traceable logic” is shaping AML procurement decisions.
Regulators have also played a role in building confidence. FATF’s updated guidance on responsible AI in AML reinforced the importance of explainability, while the OCC’s 2025 supervisory priorities encouraged adoption where strong oversight is in place. In the UK, the FCA’s AI & Innovation Review emphasised that explainable models are essential in regulated financial services. Together, these signals marked 2025 as a turning point, with AI increasingly viewed as core infrastructure for modern FIUs.
Despite this progress, readiness remains the biggest obstacle. Across multiple industry panels, it became clear that technology itself is no longer the limiting factor. Instead, institutions are grappling with practical organisational challenges such as skills, data quality, governance, workflow alignment, and user competency. As one panelist put it, “AI governance starts with user governance.” Another added, “You cannot operationalize what you do not understand.”
This gap matters because the role of the investigator is changing rapidly. Investigators are moving away from manual fact collection toward interpreting, validating, and explaining AI-generated intelligence. Regulatory guidance issued in 2025 reinforces this evolution, with FATF, the OCC, the FCA, and the Basel Committee all stressing documented human oversight, transparency, and user competency as prerequisites for responsible AI deployment.
Several themes now define what every FIU should understand heading into 2026. First, AI does not simply mean generative tools. As one speaker put it, "ChatGPT cannot run your investigations. Purpose-built models can." Generative AI lacks the auditability required for compliance decisions, while models designed specifically for sanctions screening, network detection, and investigative workflows are far more appropriate.
Second, explainability is non-negotiable. Regulators are not opposed to AI itself, but to black-box decisioning. “If you cannot show how you got the answer with AI, you will not pass an exam.” Transparency and evidence lineage now matter more than predictive complexity.
Third, legacy systems are a major constraint. As one speaker noted, “Most banks do not have an AI problem. They have a plumbing problem.” Fragmented data and outdated infrastructure limit the effectiveness of advanced analytics, highlighting the need for modern intelligence layers that can work across silos.
Fourth, model risk management now extends to users. “Human oversight is part of the system. Train the human, not just the model.” Investigators, QA teams, and supervisors must all be equipped to understand, challenge, and document AI-driven insights.
Finally, AI literacy is emerging as a competitive advantage. “The institutions that understand AI will outpace those that simply deploy it.” By 2026, FIUs that invest in readiness, governance, and user capability will be better positioned to scale intelligence while maintaining regulatory confidence.
Copyright © 2026 FinTech Global