A question that compliance professionals rarely ask aloud — but probably should — is how many of their high-risk clients are actually high risk.
According to Muinmos, in many institutions, a substantial slice of the client book ends up in that category not because of any demonstrated behaviour, but because of assumptions embedded in frameworks designed for a different era of financial services.
That tension was at the heart of a recent Compliance Café Connect session, hosted in partnership with FAI Comply and summarised in a new Muinmos write-up. The session brought together compliance and risk professionals from multiple jurisdictions to explore how institutions can shift away from static, assumption-led risk classification towards approaches that are more dynamic, proportionate and, crucially, defensible.
Muinmos CEO Remonda Kirketerp-Møller shared her observations from working with institutions globally, outlining both what is working and where she continues to see the same mistakes being made.
The limits of periodic reviews
Traditional anti-money laundering (AML) risk categorisation was built for a slower-moving world — one with less available data, more face-to-face client interaction and a more predictable regulatory environment. Risk was assessed at onboarding, refreshed at fixed intervals and governed by predefined scoring rules. That model is now under strain.
The consequences are practical rather than abstract. When too many clients are classified as high risk, compliance teams become overwhelmed, attention is diluted and genuinely suspicious activity becomes harder to detect. There is also a commercial dimension: high-risk classifications trigger additional document requests, lengthen onboarding and restrict services — experiences that drive legitimate clients to competitors.
Regulators, too, are shifting their expectations. Bodies including FATF, the Basel Committee and supervisory authorities across the UK and EU are aligned on the principle that AML must be risk-based, outcomes-focused and proportionate. The question has moved from how many controls an institution has to whether it can demonstrate that its decisions were reasonable, proportionate and consistently applied.
Risk changes — even when clients do not
One of the more important ideas to emerge from the session was the concept of temporal risk: a client’s risk profile can shift without the client doing anything at all. Sanctions regimes can be introduced overnight. Geopolitical developments alter indirect exposure. Ownership structures change without triggering an immediate review cycle.
Kirketerp-Møller was direct in her assessment: the periodic refresh model will become redundant. The data required to assess risk is increasingly available in real time, and the industry needs to move towards continuous monitoring rather than relying on six-monthly or annual reviews.
Kirketerp-Møller said, “If you work with providers that enable continuous monitoring, you don’t need to think about it again. Focus your attention on the areas that require genuine human judgement — and let technology handle what it can handle reliably.”
Sanctions screening illustrates the point clearly. In many jurisdictions, the legal requirement is not simply to screen at onboarding but to monitor on an ongoing basis. A client who appeared clean at the point of entry may not remain so — and a client who set out to deceive the institution at the outset will have known precisely what checks were being run. The fuller picture often only emerges through subsequent behaviour and transaction patterns.
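The gap between screening once at onboarding and screening on an ongoing basis can be sketched in a few lines of Python. This is an illustrative toy only: the client names, the list contents and the exact-match logic are invented for the example, and real screening relies on curated list feeds and fuzzy name matching.

```python
# Toy sketch: point-in-time vs ongoing sanctions screening.
# Names, list contents and exact matching are hypothetical.

def screen(clients, sanctions_list):
    """Return the clients whose names appear on the current list."""
    listed = {name.lower() for name in sanctions_list}
    return [c for c in clients if c.lower() in listed]

clients = ["Acme Trading Ltd", "Northwind Capital"]

# At onboarding the book looks clean against the list of that day...
assert screen(clients, ["Blocked Person A"]) == []

# ...but the list can change overnight, so the same check must be
# re-run against every list update, not on a six-monthly cycle.
updated_list = ["Blocked Person A", "Northwind Capital"]
hits = screen(clients, updated_list)
```

The point of the sketch is that nothing about the client changed between the two checks; only the list did, which is why a periodic refresh cycle can miss the exposure entirely.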
Why copying someone else’s framework never works
Kirketerp-Møller identified a pattern she sees repeatedly: institutions taking a compliance framework from another organisation and assuming it will function in their own context. It does not, because a framework is only as effective as its alignment with the specific client base, product set, jurisdictional footprint and counterparty ecosystem of the institution deploying it.
The same logic applies to technology selection. Her advice to compliance leaders looking to modernise was to understand their own logic first — before searching for a provider to support it. Know your clients. Know your risks. Know your data. Build a framework that can be seen, tested and explained, and only then look for technology that can give it operational scale.
A RegTech platform is a medium, not a solution. The underlying logic has to originate from the institution itself.
Freeing compliance professionals to do the work that matters
Technology’s role in this evolution is not to replace compliance professionals but to free them from the work that machines can perform reliably, allowing them to concentrate on the judgement calls that require human input. Manual passport review, spreadsheet-based sanctions checks and tick-box onboarding processes are not merely inefficient — they introduce risk, because they leave human error in processes where consistency is most critical.
Automating straight-through processing for routine, rules-based decisions is not a threat to compliance teams. It is what allows them to operate as genuine analysts rather than process administrators.
Tackling over-classification in practice
One of the most common questions raised in the session concerned institutions with a large high-risk client population and the challenge of reducing over-classification without introducing regulatory risk.
The answer begins with understanding why that population is so large. In many cases, a single default attribute — operating in crypto, for example, or carrying out fully remote onboarding — is pushing entire segments of the client base into the same risk band. When every client is high risk, the category loses its meaning entirely.
Kirketerp-Møller’s recommended approach was clear: identify the baseline default attributes, define the mandatory responses to them, and then concentrate risk management effort on the factors that are genuinely variable. That is where real risk intelligence sits. A regulator reviewing the framework would expect to see not a uniform high-risk population, but a calibrated approach that reflects the actual distribution of risk within the client base.
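As a rough illustration of that calibration, the toy Python sketch below contrasts a naive model, where any single default attribute forces a high rating, with one that maps default attributes to fixed controls and lets genuinely variable factors drive the rating. All attribute names, weights and thresholds here are invented for the example, not taken from any real framework.

```python
# Toy sketch: why one default attribute flattens a client book,
# and how separating controls from rating restores calibration.
# All attributes, weights and thresholds are hypothetical.

def naive_rating(client):
    """Any single 'default' attribute forces a HIGH rating."""
    if client.get("crypto") or client.get("remote_onboarding"):
        return "HIGH"
    return "LOW"

def calibrated_rating(client):
    """Default attributes trigger mandatory controls; the rating is
    driven by the genuinely variable factors."""
    controls = []
    if client.get("crypto"):
        controls.append("enhanced source-of-funds checks")
    if client.get("remote_onboarding"):
        controls.append("liveness-verified ID")

    score = {"low": 0, "medium": 2, "high": 4}[client.get("country_risk", "low")]
    score += 3 if client.get("adverse_media") else 0
    score += 2 if client.get("complex_ownership") else 0

    rating = "HIGH" if score >= 5 else "MEDIUM" if score >= 2 else "LOW"
    return rating, controls

clients = [
    {"crypto": True, "country_risk": "low"},
    {"crypto": True, "country_risk": "high", "adverse_media": True},
    {"remote_onboarding": True, "country_risk": "medium"},
]

naive = [naive_rating(c) for c in clients]              # every client HIGH
calibrated = [calibrated_rating(c)[0] for c in clients] # ratings now vary
```

Under the naive model all three clients land in the same band; under the calibrated one the same book spreads across low, medium and high, which is closer to the distribution a regulator would expect to see.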
Auditability as a baseline expectation
Across the session, one theme recurred consistently: whatever framework an institution builds, it must be explainable. Regulators increasingly expect compliance decisions to be traceable end to end — what information was used, what logic was applied and who approved the outcome.
That expectation extends to the technology supporting those decisions. Institutions cannot simply point to a piece of software as their answer. They need to articulate how it is configured, why those parameters were chosen, and how the outputs align with the institution’s own risk appetite and client profile.
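One minimal way to capture that end-to-end traceability is a decision record that stores the inputs used, the version of the logic applied, the outcome and the approver in a replayable form. The Python sketch below is a hypothetical schema for illustration, not any regulator's or vendor's format.

```python
# Toy sketch: a replayable audit record for a risk decision.
# Field names and the JSON format are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class RiskDecision:
    client_id: str
    inputs: dict            # what information was used
    ruleset_version: str    # what logic was applied
    outcome: str            # the resulting classification
    approved_by: str        # who approved the outcome
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_record(self) -> str:
        """Serialise the full decision so it can be reviewed or replayed."""
        return json.dumps(asdict(self), sort_keys=True)

decision = RiskDecision(
    client_id="C-1042",
    inputs={"country_risk": "medium", "adverse_media": False},
    ruleset_version="2024.11-r3",
    outcome="MEDIUM",
    approved_by="analyst.jsmith",
)
record = decision.to_audit_record()
```

Because the record pins the ruleset version alongside the inputs, a reviewer can re-run the same logic on the same data and check that it produces the same outcome.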
The compliance leaders navigating this landscape most effectively are those who treat technology as an enabler of explainable, auditable and consistently applied decisions — not as a substitute for having a coherent framework in the first place.
As the regulatory environment grows more complex, with sanctions regimes expanding, digital onboarding becoming the norm and data expectations rising, the institutions best placed to adapt will be those that prioritise clarity of thinking over volume of controls.
Read the full Muinmos post here.
Copyright © 2026 FinTech Global