
AI model governance: the gap holding back financial crime AI

A new report from Hawk and Chartis has found that nine in ten financial institutions now actively encourage the use of artificial intelligence in financial crime and compliance (FCC) operations.

Yet despite this widespread enthusiasm, a significant gap has emerged — one that many FCC teams were ill-equipped to handle: governing the AI models they deploy.

The research, which surveyed 125 compliance and risk leaders at banks globally, found that more than half of the technical challenges holding institutions back from expanding AI in anti-financial crime programmes were directly linked to model governance.

In other words, building a model is just the beginning. The harder work lies in validating, operationalising, and maintaining those models over time — and most teams simply do not have the resources to do it properly.

Data quality tops the list of concerns

Limited or poor-quality training data was the most widely cited challenge, with 91% of respondents placing it in their top five concerns. Without clean data and clear data lineage, models absorb noise alongside signal, producing avoidable false positives. Regulators increasingly expect institutions to demonstrate that the data underpinning their models is fit for purpose, making data quality as much a governance obligation as a technical one.

Integration with existing systems came in second, flagged by 86% of respondents. A well-constructed model is of little use if it cannot reliably connect to the systems feeding it data or acting on its outputs. These integration gaps slow deployment, introduce manual workarounds, and make it harder to maintain consistent model behaviour — all of which complicate governance documentation.

Difficulty interpreting or trusting model outputs was highlighted by 83% of respondents. If compliance teams cannot understand why a model flagged a particular transaction, they cannot act on it with confidence, nor can they explain their decisions to auditors or regulators. Explainability, the report argues, is not a nice-to-have — it is a governance requirement. Black-box models fundamentally undermine the human oversight that effective financial crime controls depend upon.
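As a hypothetical illustration of what explainable output can look like in practice (this sketch is not drawn from the report, and the rule descriptions are invented), an alert might carry plain-language reason codes alongside its risk score so an analyst or auditor can see why a transaction was flagged:

```python
# Hypothetical sketch: attach human-readable reason codes to a model alert.
# The function name, score, and rule descriptions are illustrative only.

def explain_alert(score, triggered_rules):
    """Bundle a risk score with the plain-language reasons behind it.

    `triggered_rules` is a list of (fired, description) pairs; only the
    descriptions of rules that actually fired are kept.
    """
    return {
        "risk_score": round(score, 2),
        "reasons": [desc for fired, desc in triggered_rules if fired],
    }

alert = explain_alert(
    0.87,
    [
        (True,  "Transaction volume 5x above the customer's 90-day average"),
        (False, "Counterparty in a high-risk jurisdiction"),
        (True,  "Rapid movement of funds within 24 hours of deposit"),
    ],
)
print(alert["reasons"])  # only the two rules that fired
```

The point of the structure is auditability: the same record the analyst reads can be logged verbatim for regulators, rather than reconstructing a justification after the fact.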

Nearly three quarters (73%) of respondents identified data and model governance as a standalone challenge, while 70% cited degrading model performance over time. Financial crime typologies evolve constantly, and a model that was effective at deployment can become a liability without active monitoring and retraining cycles in place.
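The monitoring-and-retraining point can be made concrete. As a minimal sketch (the threshold, metric, and function name are assumptions, not anything prescribed by the report), a team might track a model's precision on analyst-reviewed alerts each month and flag it for retraining when it drifts too far below its validated baseline:

```python
# Hypothetical sketch: detect model drift from a rolling performance metric.
# The 10% relative tolerance is illustrative, not a regulatory standard.

def needs_retraining(monthly_precision, baseline, tolerance=0.10):
    """Return True if the latest precision has drifted more than
    `tolerance` (relative) below the validated baseline."""
    if not monthly_precision:
        return False  # no observations yet, nothing to compare
    latest = monthly_precision[-1]
    return latest < baseline * (1 - tolerance)

# Example: a model validated at 0.80 precision, reviewed monthly.
history = [0.79, 0.77, 0.74, 0.69]  # precision on analyst-reviewed alerts
print(needs_retraining(history, baseline=0.80))  # 0.69 < 0.72 -> True
```

Even a simple check like this turns "degrading performance" from an abstract risk into a scheduled, documented governance event.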

Governance pressures mount after deployment

The report also examined how challenges shift once models move from pilot into production. Pre-deployment concerns around data quality and integration do not disappear, but new pressures emerge at scale.

Some 43% of respondents reported increased concern about the inability to update models once they are live. Data science teams are frequently overstretched, making updates slow, infrequent, and reactive — leaving institutions exposed to emerging threats their models were never trained to detect.

A further 38% reported heightened concern about sustaining governance across a growing model inventory, noting that consistent documentation, version control, and audit trails become significantly harder to maintain as the number of deployed models grows.

Meanwhile, 33% said that interpreting and trusting model outputs remained an ongoing challenge well beyond the initial deployment stage.

What good model governance looks like

The report outlines three pillars of effective model governance for FCC teams. First, thorough documentation throughout the entire model development lifecycle — not just when it is requested by a regulator. This should capture the model’s purpose, data sources, performance metrics, and any changes made over time.

Second, building genuine trust in model outputs by ensuring teams can understand and justify the decisions their models make. Explainable AI is increasingly a regulatory expectation, not merely an industry aspiration.

Third, maintaining model effectiveness after deployment through regular retraining cycles. A model trained on historical data will gradually lose relevance if institutions do not build structured review and update processes into their governance frameworks.
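The documentation pillar in particular lends itself to a structured record. As a minimal sketch (the schema and field names are assumptions, not Hawk's or Chartis's), a governance record per model might capture exactly the elements the report lists: purpose, data sources, performance metrics, and a change log:

```python
# Hypothetical sketch of a per-model governance record. The schema is
# illustrative and not taken from any vendor's actual data model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelGovernanceRecord:
    model_id: str
    purpose: str
    data_sources: List[str]
    performance_metrics: Dict[str, float]
    change_log: List[str] = field(default_factory=list)

    def log_change(self, note: str) -> None:
        """Append an auditable entry to the model's change history."""
        self.change_log.append(note)

record = ModelGovernanceRecord(
    model_id="tm-screening-v2",
    purpose="Transaction monitoring: structuring typology",
    data_sources=["core_banking.transactions", "kyc.customer_profiles"],
    performance_metrics={"precision": 0.81, "recall": 0.64},
)
record.log_change("2026-01: retrained on Q4 data; precision 0.78 -> 0.81")
```

Maintaining this record throughout the lifecycle, rather than assembling it when a regulator asks, is the difference the report is pointing at.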

Hawk’s Analytics Studio platform is positioned as a direct response to these challenges, offering automated documentation, human-readable decision explanations, and the ability for compliance teams to retrain models independently — without relying on data science resources.

For more insights, read the full report here.


Copyright © 2026 FinTech Global
