Financial crime detection is being quietly rebuilt from the ground up. What was once a rules-driven discipline – defined by static thresholds, retrospective reviews, and an ever-growing backlog of alerts – is now being reshaped by machine intelligence that can detect patterns, context, and intent in real time.
The shift is not just technological; it is structural. Detection is moving from linear screening toward dynamic, adaptive systems that learn continuously from data, behaviour, and risk signals across the enterprise.
For Scott Nice, CRO at Label, the distinction is clear: AI excels where scale and complexity overwhelm traditional approaches.
“AI clearly outperforms rules when the job is to find patterns across a volume of data that no human team — and frankly no traditional rules engine — can work through in a meaningful way,” he says. This is particularly true in dynamic environments, where behaviour evolves over time, risk is distributed across multiple weak signals, or where firms must distinguish genuinely suspicious activity from legitimate but unusual customer behaviour.
Rules, in his view, remain useful — but limited. “They are usually best at catching what is already known.” AI, by contrast, is strongest when the question shifts from detection to discovery: “What are we missing? What looks off? What deserves a closer look?”
However, Nice is unequivocal that human judgement remains critical — especially at the point where detection becomes decision. “That is not the part firms should try to automate away.” Assessing credibility, weighing evidence, and interpreting context are still inherently human tasks.
He draws a parallel with broader compliance functions, from tax to customer due diligence. Automation can handle data validation, identify inconsistencies, and surface risk — but “once the facts stop being clean and straightforward, someone still needs to understand the context and stand behind the decision.”
In the view of Phil Cotter, CEO of SmartSearch, machine intelligence is already outperforming traditional rules-based systems in areas defined by scale, speed and complexity. Cotter cited research showing that 68% of regulated firms spend 25-50% of their time on manual, repetitive compliance tasks, with a further 13% spending the majority of their time on low-value activity.
He said, “These are exactly the processes – monitoring, screening, documentation – where AI and automation deliver immediate impact, freeing up an estimated 36% of working time. AI is also far better suited to identifying emerging risks highlighted in the National Risk Assessment, such as synthetic identity fraud (22%) and abuse of digital identity (24%), where static rules struggle to keep pace with evolving typologies. However, human judgement remains critical in high-risk, context-heavy decisions.”
He added, “Areas like enhanced due diligence, complex ownership structures (already a challenge for 52% of firms), and interpreting nuanced risk signals still require oversight. It is not about replacing humans with AI; it is about AI taking on those mundane, manual tasks that are time-consuming and open to errors, so that professionals can focus on the big decisions that actually matter.”
The future of financial crime detection
Nice sees the trajectory of financial crime detection as unmistakably network-driven.
“The direction of travel is clearly more network-based, because financial crime does not respect the boundaries of a single institution,” he said. Many of the most meaningful signals, he argues, only emerge when data is viewed across relationships — counterparties, entities, ownership structures, and flows over time. Firms operating in isolation will always be working with an incomplete picture.
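The network-based signals Nice describes can be illustrated with a minimal sketch: given a set of payment relationships, flag every counterparty within a small number of hops of a known-risky entity. This is an illustration only, with hypothetical function and variable names, not any vendor's actual detection logic.

```python
from collections import deque

def flag_within_hops(edges, risky, max_hops=2):
    """Breadth-first search from known-risky entities over an undirected
    transaction graph, returning every counterparty within max_hops."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    flagged = set()
    seen = set(risky)
    queue = deque((entity, 0) for entity in risky)
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop limit
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                flagged.add(neighbour)
                queue.append((neighbour, depth + 1))
    return flagged
```

A firm screening only its own ledger sees a subset of `edges`; the point of the network model is that the same traversal over pooled relationship data surfaces connections no single institution can see.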
Yet this shift does not remove institutional responsibility. “Firms are still accountable for their own customers, their own monitoring, and their own decisions,” he remarked. The future, therefore, is not a replacement of one model with another, but a convergence of both.
That hybrid model places a premium on data quality. Network intelligence is only as strong as the underlying inputs. “If customer data is weak, if entity structures are incomplete, or if controlling person information is outdated, then even very advanced detection will only take you so far.”
In practice, Nice argues, the effectiveness of network-based detection is anchored in the discipline of due diligence and ongoing monitoring at the firm level.
Cotter stated that the direction of travel is clearly toward a more network-based model of financial crime detection. Financial crime is increasingly interconnected, spanning jurisdictions, sectors and digital ecosystems, yet many firms are still operating in silos with legacy systems.
He remarked, “Our research shows that more than half of identity verification checks (54%) are still conducted manually, limiting firms’ ability to see the bigger picture across customer networks, beneficial ownership structures and transaction flows. As firms invest in technology – our research shows 55% are already investing or planning to invest in digital verification tools – detection will shift from isolated, institution-level monitoring to a more holistic, data-driven view of risk.”
For Cotter, this will be essential for tackling challenges like cryptocurrency-related money laundering and rapid changes in the sanctions landscape, which 19% and 14% of firms respectively identify as key challenges.
What firms must prove to regulators
For Nice, regulatory trust hinges less on the model itself and more on the environment surrounding it.
“What firms really need to prove is something quite simple, but not easy — that the use of AI is controlled, understandable, and genuinely improving outcomes.” Claims of efficiency or model sophistication will not suffice. Regulators will want to see how systems are governed, what data they rely on, how outputs are reviewed, and how performance is tracked over time.
The real test, he suggests, is explainability in practice. “Can the firm explain why the model flagged something, how it was reviewed, what evidence was considered, and how the final decision was reached?”
Beyond that, firms must demonstrate ongoing control — ensuring data remains current, monitoring for drift or bias, and responding when things go wrong. “Trust will come not from the sophistication of the model itself, but from the control environment around it.”
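One simple way to operationalise the ongoing monitoring Nice describes is to compare a model's alert rate in the current period against a reference period and flag when it moves materially, a crude proxy for drift. The function name and tolerance below are illustrative assumptions, not a regulatory standard.

```python
def alert_rate_drift(reference, current, tolerance=0.5):
    """Return True when the current alert rate deviates from the
    reference-period rate by more than `tolerance` in relative terms.

    `reference` and `current` are sequences of 0/1 alert outcomes."""
    ref_rate = sum(reference) / len(reference)
    cur_rate = sum(current) / len(current)
    if ref_rate == 0:
        # Any alerts at all count as drift from a silent baseline.
        return cur_rate > 0
    return abs(cur_rate - ref_rate) / ref_rate > tolerance
```

Real control environments would track many such metrics (score distributions, feature stability, reviewer override rates), but the principle is the same: a governed process notices when behaviour changes and triggers review.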
Ultimately, Nice believes AI will only be fully accepted when it no longer resembles a black box. Instead, it must operate as “a well-governed part of a broader compliance process” — transparent, accountable, and embedded within the firm’s decision-making framework.
Meanwhile, Cotter said the biggest barrier to trust is not awareness – it is evidence. While 82% of firms believe their IDV processes will be robust enough over the next 24 months, only 27% are very confident, pointing to a gap between perception and demonstrable capability.
Cotter said, “To gain regulatory trust, firms will need to prove three things. The first is effectiveness: that AI meaningfully improves detection outcomes, particularly in high-risk areas like fraud, sanctions, and emerging NRA threats.” The second is control and governance, which Cotter defines as ensuring AI outputs are explainable, auditable and aligned with regulatory expectations – especially as only just over a quarter of firms are very confident in their understanding of sector-specific obligations.
The last area is consistency: AI-driven processes that deliver reliable, repeatable results at scale, replacing the fragmented manual approaches still used by many firms.
Cotter finished, “Crucially, regulators will expect AI to enhance, not obscure, accountability. In an environment where 77% of firms cite reputational damage as a major concern and 87% would sever ties following a compliance failure, trust will ultimately be built on transparency, not just performance.”
More to be done
Iain Armstrong, executive director of FCC Strategy at ComplyAdvantage, said that rules-based transaction monitoring was designed to catch known patterns – typologies the industry had already identified and codified.
He explained, “Machine learning improves on this significantly: it detects statistical anomalies rather than named patterns, and it reduces false positive volumes to the enormous benefit of operations teams.”
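The contrast Armstrong draws can be sketched in a few lines: a static rule flags transactions against a fixed threshold, while a statistical approach flags amounts that are anomalous relative to a customer's own history. This is a deliberately simplified illustration (a z-score against the customer's mean), not a production model, and the names and cutoffs are assumptions.

```python
import statistics

def rule_flags(amounts, threshold=10_000):
    # Static rule: flag any transaction at or above a fixed threshold.
    return [i for i, a in enumerate(amounts) if a >= threshold]

def anomaly_flags(amounts, z_cutoff=3.0):
    # Statistical anomaly: flag amounts far from this customer's own norm,
    # measured in standard deviations from their historical mean.
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing is anomalous
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_cutoff]
```

A customer who usually moves small sums and suddenly sends a few thousand sits well under a fixed £10,000 rule but stands out immediately against their own baseline, which is the kind of miss Armstrong says anomaly detection corrects.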
However, Armstrong said he would like to see more acknowledgement of the fact that money laundering depends on layering and dispersal across institutions and jurisdictions.
He said, “A sophisticated machine learning model at one PSP still sees only that PSP’s data. Greater application of these technologies in public-private intelligence sharing is needed.”
When it comes to the question of human judgment, Armstrong stressed that AI shifts the cognitive load from detection to assessment, something he sees as a genuine improvement.
“But it raises the bar for analysts rather than replacing them. ‘Reasonable grounds to suspect’ is the legal standard, and it still requires a person to stand behind it,” finished Armstrong.
A structural shift
In the view of Sebastian Hetzler, co-CEO of IMTF, financial crime detection is undergoing a structural shift, from static, rule-based monitoring to more dynamic, intelligence-driven approaches.
He said, “At the same time, modern rule-based systems have significantly evolved, achieving high precision and low false positives through the holistic integration of diverse AFC data, and therefore they remain essential for ensuring consistent baseline coverage.”
For Hetzler, machine intelligence complements this by identifying complex patterns and relationships across large datasets, making it particularly effective for detecting both known and emerging typologies.
He said, “The real value lies in combining both approaches: rules providing control and safeguard mechanisms, while AI enhances detection through scale, adaptability, and contextual insight. The most effective approaches combine both, supported by human expertise to ensure context, interpretation, and accountability.”
Hetzler concluded, “AI is not replacing rules—it is redefining detection. Rules provide the necessary safety net, while machine intelligence brings the scale and adaptability needed to detect increasingly complex financial crime.”
Copyright © 2026 FinTech Global