The accountability problem no one has solved
Compliance has always been built on a simple premise: when something goes wrong, someone is accountable. That assumption is now under strain.
Decisions that once relied on human judgement are increasingly shaped — and in some cases made — by automated systems. Risk scores are generated automatically. Customers are onboarded or rejected based on models. Alerts are prioritised, suppressed, or escalated without human review. Regulatory obligations are interpreted, mapped, and operationalised by machines.
None of this is hypothetical; it is already happening inside large financial organisations. Despite this, the frameworks used to assign accountability have barely changed. Compliance officers still ‘own’ risk outcomes, and businesses still rely on governance models designed for a human-dominated world. The result is a growing gap between how compliance decisions are made and how responsibility for those decisions is formally assigned. This is the accountability gap.
An uncomfortable truth is emerging: the industry has not yet agreed where accountability should sit in an automated compliance environment, or how it should be demonstrated.
This feature is the first in The Accountability Gap series. It examines how automation has outpaced responsibility, why existing accountability models are beginning to fracture, and why this issue can no longer be treated as a future concern. In the parts that follow, we will explore what decisions machines should be allowed to make, how firms can govern AI without paralysing themselves, and what regulators are likely to expect next. But first, the problem itself needs to be confronted.
Who is accountable?
Mike Lubansky, SVP of Strategy at Red Oak, believes that nothing has changed from an accountability perspective.
“The firm is still fully accountable, but they can no longer point to a single human decision-maker in the way traditional supervisory models assume,” he said. “Operational responsibility is becoming fragmented in ways regulators haven’t fully addressed. AI redistributes operational decision-making without shifting the legal responsibility.”
Lubansky added that regulators will continue to anchor accountability to the registered entity, the designated supervisory principal and the documented supervisory system.
He added, “As such, it’s critical that firms continue to require a ‘human-in-the-loop’ for many of these processes and have a clear audit trail of the rationale and decisions made by AI.”
For Areg Nzsdejan, CEO of Cardamon, it depends on multiple factors: what the decision is, where it sits in the process and how much authority the AI system has been given.
He said, “The way we think about it at Cardamon is that our platform isn’t a tool – it’s a set of digital teammates. These teammates work for compliance experts, who act as orchestrators. The AI does the heavy lifting: scanning, mapping, highlighting risk, and proposing actions. But it does not have executive decision-making power.”
Ultimately, accountability sits with the orchestrator, claims Nzsdejan. “Humans remain responsible because they retain control over final decisions. This may change in the future, and certain AI systems may be empowered to make binding decisions for narrowly defined tasks. But that’s not how we see the world today – and not how regulators see it either,” he said.
Mike O’Keefe, global head of digital transformation & innovation at Corlytics, stressed that traditionally, accountability in compliance was clear cut.
“Humans made decisions, signed off on controls, and assumed responsibility for outcomes. But as firms increasingly rely on AI and automation to interpret regulatory requirements, monitor transactions, flag suspicious activity and evaluate risk scores, the human chain of responsibility becomes obscured,” he said.
O’Keefe went on, “If an AI model incorrectly classifies a regulatory compliance obligation, suggests a policy or control update, clears a high-risk transaction or misclassifies customer activity, is the accountable party the compliance officer? The data science team that built the model? The vendor who supplied it? Or the executive who approved its deployment?”
Regulators have indicated that accountability cannot be outsourced, whether to third-party vendors or to algorithms. “But as AI systems become more autonomous and more complex, this accountability gap becomes harder to close,” said O’Keefe.
Tim Khamzin, founder and CEO of Vivox AI, also made clear that when AI or automation is involved in a compliance decision, accountability does not disappear. “On the contrary, it concentrates. Responsibility still sits with the firm and the individuals who own the process, particularly risk and financial-crime leaders. What AI changes is not who is accountable, but how that accountability must be exercised.”
In a similar vein, CEO of Flagright Baran Ozkan said, “When AI makes a compliance decision, accountability does not move to the model or the vendor. It stays with the regulated firm, specifically the senior leaders who own the control framework and risk appetite.”
When accountability frameworks break
Which accountability frameworks break under AI? This is a critical question being asked inside the industry. Lubansky believes that several traditional accountability frameworks begin to strain under AI, including reasonable supervision models, vendor accountability assumptions and established control testing and audit frameworks.
“These approaches were designed for environments with clear review chains, predictable decision rules, and identifiable points of human judgment,” he said. He remarked that models often generate recommendations based on complex patterns rather than explicit rules, producing outcomes that may be statistically defensible but difficult to explain on an individual basis.
Lubansky said, “While supervisors are still expected to review and approve decisions, accountability breaks down if they cannot clearly articulate how an AI system arrived at a particular outcome—exposing firms to regulatory and legal risk.”
Businesses have traditionally relied on SOC reports, vendor representations, and contractual indemnities to manage third-party risk. AI complicates this model in several ways, says Lubansky: vendors may not fully disclose training data or model changes, models may be updated continuously, and multiple vendors may contribute to a single decision path, which makes attribution challenging.
Lubansky explained, “Control testing and audit frameworks are also challenged by AI. Traditional compliance testing assumes stable logic, repeatable outcomes, and sample-based validation. AI can challenge these assumptions because: the same input may yield different outputs over time, model tuning can alter behavior without explicit code changes, and sample-based testing may fail to capture rare but high-risk edge cases.”
O’Keefe, meanwhile, stressed that many existing frameworks rely on assumptions that do not hold in fully automated algorithmic environments.
The first area is model risk management frameworks. “While they provide structure for validation, documentation, and oversight, they were built for deterministic statistical models – not adaptive, opaque machine learning systems that evolve over time,” he said.
Another area is in expert-in-the-loop models, with firms assuming that a person meaningfully reviews and influences each decision made. However, AI systems increasingly operate at speeds and volumes impossible for humans to oversee. Human reviews, O’Keefe said, could become a formality rather than a safeguard.
The third area is traditional compliance sign-off structures, where compliance was once based on human decision-making authority. “But if the ‘decision’ is an algorithmic output, the sign-off could become ambiguous. Is the compliance officer accountable for the decision?” said O’Keefe.
A final notable area is vendor accountability provisions. Here, contracts can shift risk to vendors, but they cannot shift regulatory responsibility, said O’Keefe. “If a vendor’s AI fails, firms remain liable even if they lack full transparency of model operation,” he said.
Traditional accountability frameworks were built for rules-based systems that produce deterministic outcomes, added Khamzin. He stated, “Modern AI systems are probabilistic by design. They reduce false positives and improve detection accuracy, but they also introduce new challenges around explainability, auditability and governance. That is where many existing frameworks need to be adapted.”
For Ozkan, the frameworks that break under AI are the ones that confuse activity with accountability, for example ‘the system flagged it’ instead of ‘a named owner approved the policy and its thresholds.’
Nzsdejan expressed that most accountability frameworks assume a single setup: a person makes a decision, follows a defined process, and uses fixed inputs.
However, as he outlines, AI doesn’t work like that. “Decisions are spread across models, data sources, and workflows, which makes single-person sign-off harder; documenting process steps alone isn’t enough when regulators increasingly care about why an outcome occurred; and the line between what a firm owns versus what a vendor provides becomes blurred as models and data change over time. The approaches that still work are those that clearly define who owns the decision, make AI outputs transparent and reviewable, and ensure there is a clear path for human escalation and override.”
The double-edged sword of automation
Automation can improve outcomes in compliance. The double-edged sword, however, is that it can also leave firms exposed.
For Nzsdejan, if improvement means speed alone, then automation can increase exposure. Faster wrong decisions, he stresses, are still wrong – just at scale.
He said, “What we optimise for is quicker and more accurate compliant outcomes, which in turn create more security for firms. In our view, you cannot genuinely improve outcomes while leaving firms exposed. If exposure increases, incentives are misaligned.”
The key for the Cardamon CEO is incentive design. He explains simply, “Automation should reduce uncertainty, not just effort. AI should surface risk, not hide it behind confidence. Humans should be accountable for decisions, not burdened with busywork. When those incentives are aligned, automation doesn’t weaken accountability – it sharpens it.”
At the same time, Lubansky states that whilst automation has many benefits, it can also create exposure if not designed and managed properly.
He explained, “When implemented well, automation can improve throughput, lower error rates, and scale review capacity. However, it can also remove visible decision checkpoints and encourage over-reliance on system outputs—particularly when controls, documentation, and accountability are not clearly defined.”
A couple of examples of this are marketing review automation and communication supervision triage automation. On the former, approval cycles are faster, but without the right controls and audit trails, firms may struggle to document why content was approved, weakening defensibility after the fact. On the latter, automation can improve prioritization, but it can also create ambiguity when alerts are suppressed or items are not flagged, making it difficult to determine who was accountable for the decision.
“In practice, automation often improves operational performance while weakening defensibility. Firms may appear stronger on efficiency metrics, yet struggle months later to explain decisions, reconstruct rationale, or demonstrate active supervision rather than passive reliance on automated systems,” concluded Lubansky.
Khamzin stressed that a key bottleneck here is side-by-side evolution. “We have seen automation materially improve compliance outcomes while simultaneously leaving firms exposed, not because the technology failed, but because accountability models failed to evolve alongside it. Regulators are increasingly clear that companies cannot outsource responsibility to algorithms.”
This, Khamzin claims, is also why regulatory and ethical guardrails matter so deeply. “As accuracy improves, the real compliance challenge shifts from decision-making to understanding, governing and evidencing how those decisions are made.”
For Ozkan, whilst automation can absolutely improve outcomes, it can leave businesses exposed if they cannot recreate the decision later.
He said, “In practice, closing the accountability gap means traceability: what data was used, what policy was applied, what the model recommended, and who had the authority to override it.”
There is no question that automation has delivered meaningful improvements, said O’Keefe. Automation and AI can detect anomalies that humans miss, process data at unprecedented scale and reduce human error. Many firms report better detection rates, fewer false positives, stronger audit trails and more consistent application of regulatory rules with more robust policies and controls.
However, he believes these gains need to be protected to ensure that the appropriate education, training and decision-making processes are in place.
He explained, “Ensure a robust operating model is defined: insist that compliance teams examine outputs and then make the decision, to prevent an over-reliance on automation and ‘compliance deskilling’, where teams lose the ability to challenge the AI output. Define where accountability sits as part of the operating model to prevent confusion, speed up remediation and reduce regulatory risk.
“Furthermore, remove opaque decision-making by increasing explainability, making regulatory exams more assured. Vendors must frequently test for algorithmic drift to prevent degradation of models and be mindful of data quality issues to prevent skewed AI outputs.”
An absence of accountability?
A further interesting argument on this question was that the absence of accountability frameworks is the real issue.
Anthony Quinn, CEO of Arctic Intelligence, remarked, “The industry continues to debate whether AI can be trusted in compliance, but this misses the real issue. The true risk is not automation itself, it is the absence of accountability frameworks designed for automated decision-making. Regulators have been clear that responsibility cannot be outsourced to technology, yet many firms continue to deploy AI on top of governance models built for manual processes.”
This, Quinn outlines, creates a dangerous gap where decisions are faster and more consistent, but ownership is less clear.
“At Arctic Intelligence, we see this accountability gap as the defining compliance challenge of the next decade. Firms that fail to redesign accountability alongside automation will ultimately find that efficiency comes at the cost of defensibility,” he concluded.
Closing the gap
Looking ahead, Lubansky believes the future of compliance accountability isn’t about choosing between humans and machines. It’s about designing systems where automation is traceable, explainable, and embedded within documented supervisory workflows. That level of accountability cannot be assumed; it has to be engineered.
He remarked, “This is where Red Oak is helping firms close the gap. By connecting content, review, distribution, and supervision within a single compliance connectivity platform, Red Oak makes automated decisions with compliance-grade AI that’s purpose-built for accuracy, architected for auditability, and trusted by professionals who understand what’s at stake.”
Additionally, O’Keefe stressed that the path forward requires a recognition that executives in each firm need to take responsibility for the AI systems deployed.
He explained, “Accountability must be anchored in roles, not individual decisions, focusing on governance, oversight, model assurance, and continuous monitoring. A leading global FSI firm has trialled the use of fully automated AI regulatory traceability and has learned to their chagrin that the AI systems, while improving, are incapable of making the decisions that humans do, resulting in a large focus on re-training the vendor-provided model.
“This firm is a leader in managing risk and has taken the opportunity to make the shift back to human-based decisions aided, but not led, by their AI model. Until that shift happens across the industry, firms will continue reaping the benefits of automation while quietly carrying a growing and underappreciated regulatory risk.”