Financial institutions racing to adopt artificial intelligence risk replicating other organisations’ mistakes rather than solving their own problems.
That was one of the central warnings to emerge from a recent webinar hosted by RegTech firm Hawk in partnership with ACAMS.
The panel, moderated by Hawk senior product marketing manager Erica Brackman, brought together senior voices from across compliance, risk advisory, and financial crime prevention: Adrianna Fabijanska, global head of financial crime compliance for investment banking at ING; Michael Morrison, VP of compliance technology product management at Wintrust Financial Corporation; and Kyle Daddio, partner in risk advisory services at Grant Thornton (US).
Define the problem first
A recurring theme throughout the discussion was that AI without a clearly defined purpose is destined to underperform. Morrison said, “Good AI isn’t just accurate, it’s operationally embedded and defensible. This starts at the point of selecting the right AI model by establishing what problems you’re trying to solve with it.”
Fabijanska agreed, adding that data quality is equally fundamental. She warned that poor data governance leads directly to poor AI outcomes, and that organisations should invest in structuring their data and understanding its lineage before deployment — not after encountering a wave of unexplained false positives.
Don’t follow the crowd
Daddio cautioned against what he called a growing “copycat” mentality in the industry, where firms rush to replicate a competitor’s AI implementation without considering whether it suits their own risk profile. He said, “What really ends up happening is you’re doing what was good for somebody else, not what’s good for your organization.”
Instead, he urged firms to set long-term goals, involve the board early, and resist the pressure to adopt AI reactively. Brackman added that vendor selection is central to this, noting that every provider claims to cut false positives — but what matters is whether the solution is genuinely tailored to the organisation’s specific risks and systems.
Governance as a long-term asset
Rather than viewing governance as a drag on innovation, the panel argued it is what makes AI programmes sustainable. Morrison outlined the core components of a defensible framework: a clear purpose statement, documented data lineage, defined performance metrics, and rigorous change management tracking to capture model updates over time.
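For readers who want a concrete picture, Morrison’s four components could be captured in something as simple as a structured record. The sketch below is purely illustrative — the class and field names are assumptions for this article, not an actual Hawk or Wintrust schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelChange:
    """One entry in the change-management log for a deployed model."""
    changed_on: date
    description: str
    approved_by: str

@dataclass
class ModelGovernanceRecord:
    """Illustrative container for the four components Morrison names:
    purpose, data lineage, performance metrics, and change tracking."""
    purpose_statement: str                  # the problem the model exists to solve
    data_lineage: list[str]                 # documented sources feeding the model
    performance_metrics: dict[str, float]   # e.g. precision, false-positive rate
    change_log: list[ModelChange] = field(default_factory=list)

    def is_examiner_ready(self) -> bool:
        # A record with no stated purpose or no documented lineage
        # cannot be defended to an auditor or examiner.
        return bool(self.purpose_statement and self.data_lineage)

record = ModelGovernanceRecord(
    purpose_statement="Reduce false positives in sanctions screening",
    data_lineage=["core_banking.transactions", "kyc.customer_profiles"],
    performance_metrics={"precision": 0.91, "false_positive_rate": 0.06},
)
record.change_log.append(
    ModelChange(date(2025, 3, 1), "Retrained on Q1 data", "model-risk-committee")
)
print(record.is_examiner_ready())  # True
```

The point of the structure is not the code itself but the discipline it encodes: every model update lands in the change log, and a model with a blank purpose statement is flagged as indefensible by construction.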
Fabijanska highlighted a specific organisational risk that is easy to overlook. She said, “Just as much as the person who designed the model knows how it works, if an analyst can’t explain why they’re making the decision they are — or if an examiner comes and asks a question and there’s only one person who can answer it — the AI you’ve designed is flawed.” Building broad internal literacy across teams, she argued, is what makes an organisation genuinely regulator-ready.
Morrison recommended that firms begin with narrow, lower-complexity use cases to surface problems early and build credibility with auditors before scaling up. This cautious, incremental approach, he suggested, is far more likely to earn regulatory confidence than diving straight into large-scale implementation.
Hawk’s broader conclusion was that putting these principles into practice requires technology capable of automating the heavy lifting — from documentation and change tracking to explainable alert outputs — so that compliance teams can manage the model lifecycle without depending entirely on data science expertise.
For more insights from the discussion, read the full report here.
Copyright © 2026 FinTech Global