{"id":7202,"date":"2026-03-25T10:43:53","date_gmt":"2026-03-25T10:43:53","guid":{"rendered":"https:\/\/fintech.global\/globalregtechsummit\/?p=7202"},"modified":"2026-03-25T10:43:55","modified_gmt":"2026-03-25T10:43:55","slug":"who-owns-compliance-decisions-in-automated-systems","status":"publish","type":"post","link":"https:\/\/fintech.global\/globalregtechsummit\/who-owns-compliance-decisions-in-automated-systems\/","title":{"rendered":"Who owns compliance decisions in automated systems?"},"content":{"rendered":"<p><strong>Automation is steadily moving from the margins of financial services into its operational core. Surveillance systems flag misconduct, onboarding platforms assess risk, and AI models increasingly recommend \u2014 or even execute \u2014 compliance actions. Yet as decision-making becomes embedded in automated systems, a fundamental question is becoming harder to answer: who actually owns the decision?<\/strong><\/p><p>For decades, accountability in financial services was structurally simple. Humans made judgments, documented them, and regulators knew where responsibility sat. Automation complicates that model.<\/p><p>When compliance outcomes are shaped by machine learning models, external vendors, and complex data pipelines, the chain of responsibility becomes less visible \u2013 even as regulators continue to expect clear accountability.<\/p><p><strong>How firms define accountability<\/strong><\/p><p>A question on the mind of many in the RegTech industry right now is how businesses are defining accountability when compliance decisions are partially or fully automated.<\/p><p>For Janet Bastiman, chief data scientist at&nbsp;<a href=\"https:\/\/www.napier.ai\/\">Napier AI<\/a>, responsibility still sits with the human even for fully automated decisioning, which is why we recommend human-in-the-loop approach. 
Regulators, she outlined, have been clear about human accountability in their frameworks, even when providing for AI implementations in AML.<\/p><p>She explained, \u201cHumans must remain in the loop because they are ultimately responsible for the decisions. AI can generate alerts, make recommendations, suggest rules and automations that better match analyst decisions, generate natural-language explanations, but these all have to be under the oversight of human responsibility.\u201d<\/p><p>Similarly, Stephen Lovell, CPTO at RegTech firm&nbsp;<a href=\"https:\/\/www.vixio.com\/\">Vixio<\/a>, stressed that regulatory accountability hasn\u2019t changed, as responsibility still sits with the regulated entity, the accountable senior manager and, ultimately, the board.<\/p><p>\u201cAutomation does not transfer liability. It changes how inputs are gathered and processed \u2013 not who is answerable,\u201d he said. \u201cThe most mature organisations will treat AI as a decision-support layer, not a decision-maker.\u201d<\/p><p>Scott Parkin,&nbsp;<a href=\"https:\/\/zeidler.group\/\">Zeidler\u2019s<\/a>&nbsp;head of US, made clear that in financial services all businesses are required to have a series of policies and procedures for their functions, and that the heart and soul of these policies is simple \u2013 accountability.<\/p><p>He said, \u201cThis industry has decades of legislative and regulatory history culminating in stringent requirements for accountability but there is one common thread, there must be a human accountable.\u201d<\/p><p>As LegalTech and RegTech develop, mature and become more prevalent, Parkin said, the question for firms is how to integrate the technology into their existing and complicated policies and procedures that nevertheless end with a human being accountable.<\/p><p>He remarked, \u201cThe key for firms across financial services is deciding what aspects of their policies and procedures technology can perform without 
impacting the final accountability. Ultimately, firms need to determine how the accountable human oversees the integration of use.\u201d<\/p><p>Companies,&nbsp;<a href=\"https:\/\/labeltech.io\/\">Label<\/a>&nbsp;CRO Scott Nice added, are increasingly confident in automating execution but far less precise in articulating accountability. Automation doesn\u2019t transfer responsibility; it only operationalises pre-defined logic.<\/p><p>\u201cWhen a system validates customer data, suppresses an alert, or generates a reportable outcome, that result reflects prior human decisions. Who approved the rule logic? Who defined materiality thresholds? Who accepted residual risk? The phrase \u2018system decision\u2019 is misleading. Systems execute configured rules. They do not assume regulatory liability.\u201d<\/p><p>For Nice, mature firms separate rule design ownership, escalation ownership and model validation ownership. \u201cThat separation is what regulators will probe when outcomes are challenged,\u201d he said.<\/p><p><strong>Where the responsibility sits<\/strong><\/p><p>Where does the responsibility sit when automated decisions are challenged by regulators?<\/p><p>Here, Lovell suggested that when a regulator challenges an automated outcome, businesses must be able to explain three key things. First, the dataset \u2013 what information was used, and was it complete and current? Next, the processing logic \u2013 how was the information transformed into an output? Lastly, consistency and reproducibility \u2013 would the same inputs produce the same output?<\/p><p>\u201cIf firms can articulate those three layers, they move from \u2018black box AI\u2019 to defensible, explainable automation. Without that, trust erodes quickly \u2013 both internally and externally,\u201d said Lovell.<\/p><p>Meanwhile, Nice suggested that when regulators question an automated outcome, the issue immediately becomes governance, not technology. 
\u201cThe real question is: Why was the system configured this way? Responsibility sits with the regulated entity and, ultimately, accountable executives,\u201d he said.<\/p><p>For Nice, if no one is able to explain the decision logic, the change management process, the rationale for thresholds and the oversight framework, then the weakness is structural. \u201cAutomation amplifies governance quality. It does not replace it,\u201d he explained succinctly.<\/p><p>Bastiman, on the other hand, thinks that decisions themselves should not be automated, stating, \u201cFor example, the auto-discounting of alerts based on risk-based scoring should align to a rule that had human oversight before implementation and is clearly linked to a risk-based assessment.<\/p><p>\u201cThe expectations from regulators regarding explanations for decisions have not changed, so while AI is a great tool in helping draw together the data points that may be used to make a decision as well as helping generate natural language explanations to populate SARs, the decision to discount, escalate or report to the regulator has to be made by a human.\u201d<\/p><p>This, for Bastiman, is why it is so essential that risk-based assessments be well documented and operationalised, and why AI cannot be opaque in compliance workflows.<\/p><p>She continued, \u201cAnalysts need to understand exactly why the AI makes suggestions or raises flags, so they can document their decisions. 
Any rules created from AI recommendations would be based on collating the best of human decisions, with well-evidenced historical alert decisions as the basis for future automated discounting.\u201d<\/p><p>The view held by Parkin is that it would take an \u2018earth-shattering\u2019 and foundational regulatory change for compliance decisions and accountability for such decisions to be automated.<\/p><p>He remarked, \u201cHuman accountability is woven into the fabric of the legal and regulatory framework for financial services \u2013 a fortiori for firms subject to any level of fiduciary duty \u2013 and this is not likely to change any time soon.\u201d<\/p><p>Parkin described it as a \u2018Sisyphean exercise\u2019 to try to use AI tools to make compliance decisions. \u201cThese AI tools are incredible and will make firms astronomically more efficient, optimizing nearly every area of compliance except the final decisions where a human is accountable.\u201d<\/p><p>He continued by stressing that regulators will expect the same, and firms will need to be careful when implementing AI tools to ensure they can demonstrate how a human is overseeing the technology.<\/p><p>\u201cEven a shred of evidence indicating that a firm\u2019s compliance function is delegating their accountability to technology is an enormous risk,\u201d Parkin finished.<\/p><p><strong>Sufficient human oversight<\/strong><\/p><p>Another key question being posed here is how much human oversight is considered sufficient in automated compliance workflows.<\/p><p>On this, Bastiman commented, \u201cOversight should be a given \u2013 all regulatory frameworks demand transparency, explainability and auditability for financial crime compliance. 
Prescriptive approaches to \u2018how much oversight\u2019 are likely to result in the check-box compliance approaches of old, whereas most regulators are transitioning to a more outcomes-based approach with a focus on collaborating to define best practice.\u201d<\/p><p>For Bastiman, the goal should be to ensure that the humans-in-the-loop can explain the automations, and that any automated explanations are natural language, meaning they can be understood by humans.<\/p><p>Oversight should be proportionate to risk, stated Lovell. For the Vixio CPTO, low-impact, repeatable workflows may only require review by exception.<\/p><p>He said, \u201cHigh-impact regulatory interpretation \u2013 particularly where customer harm or enforcement exposure exists \u2013 should require documented human rationale. Human sign-off is not necessarily a sign of mistrust in AI. More often, it reflects the contextual nature of compliance.\u201d<\/p><p>He added that two businesses can read the same rule and reach different but ultimately legitimate conclusions based on factors like risk appetite, customer base, business model and jurisdictional footprint.<\/p><p>\u201cNo general-purpose model inherently understands those nuances as they apply to your business,\u201d Lovell said.<\/p><p>Lovell also provided a succinct answer to whether human sign-offs reflect risk or a lack of trust. \u201cNeither \u2013 they reflect responsibility. To fully automate interpretive regulatory judgement, a firm would need vast domain-specific training data, significant compute infrastructure and stable, agreed interpretations across regulators. That environment rarely exists.\u201d<\/p><p>Instead, what Lovell believes is emerging as the sustainable model is AI for preparation and humans for accountability.<\/p><p>He explained, \u201cThe opportunity is not to remove the human from compliance. 
It is to elevate the human, giving them better information, clearer impact analysis and structured workflows that make decisions explainable. If we get that balance right, AI does not create an accountability gap; it closes one.\u201d<\/p><p>Label\u2019s CRO, meanwhile, believes that on this point companies often drift toward extremes. \u201cEither they re-check automated outputs manually, undermining efficiency, or they treat the system as a black box and assume statistical confidence equals defensibility.\u201d<\/p><p>For Nice, sufficient oversight should be risk-based, documented, periodically validated and exception-driven.<\/p><p>He explained, \u201cIf every decision requires human duplication, the architecture is flawed. If no one can explain a decision path, oversight is insufficient. The correct model is defined intervention thresholds with structured human escalation, not parallel processing of automated outputs.\u201d<\/p><p>The final point on this question comes from Parkin, who explained that, in his view, this is one area where compliance departments do have some insight into what is expected by regulators.<\/p><p>He said, \u201cThe use of technology to assist, augment, and optimize the compliance function is not new in any way. 
Compliance teams have dealt with this for a long time and generally understand what level of oversight by humans is needed to deploy technology for use in the compliance infrastructure.\u201d<\/p><p>The question here, Parkin posed, is whether the use of AI technology requires more, or potentially less, human oversight than non-AI technology deployed historically.<\/p><p>\u201cI am of the opinion that the current legal and regulatory framework governing financial services, specifically their compliance teams, for deploying technology is sufficient and clear,\u201d said Parkin.<\/p><p>He continued, \u201cThe current processes can be replicated for AI with a caveat that the human overseeing the AI technology needs to understand it, as opposed to simply being able to use the internet. While the quantity of human oversight might be generally the same, the quality of the oversight might, and perhaps should, be higher as the humans involved would need to be more technologically savvy than was required historically.\u201d<\/p><p><strong>Are governance frameworks keeping up?<\/strong><\/p><p>One of the biggest challenges for many in the industry to consider is whether the governance frameworks running adjacent to automation are keeping pace. On this, Bastiman is of the mind that governance frameworks have often lagged behind operations under the weight of regulatory burden.<\/p><p>\u201cBut with the shift to outcomes-based approaches by the likes of the Financial Conduct Authority, governance is becoming less onerous \u2013 although more important,\u201d she remarked. 
\u201cWorking with the right partners can reduce the governance overhead, as the governance of underlying AI models is managed by the solution provider.\u201d<\/p><p>Bastiman added that governance of self-built or black-box AI models could be challenging for financial institutions, so picking partners with a compliance-first approach to AI can help them leverage automation without compromising on compliance.<\/p><p>Parkin, however, is more bearish on this point, believing they are not keeping pace. \u201cA firm\u2019s policies and procedures are extensive, complicated, and typically a result of multiple iterations that have changed and evolved with the market and the firm itself over time. However, similar to how policies and procedures took years to keep pace with the benefits of the internet, they are generally not keeping pace.\u201d<\/p><p>More frustrating for compliance teams, Parkin added, is that policies and procedures not only need to keep pace with the current AI tech being used, but also with the unparalleled speed with which AI itself is evolving.<\/p><p>Nice made a similar point, stating that in many cases, tech adoption is outpacing governance sophistication.<\/p><p>\u201cBoards approve automation strategies without always interrogating rule version control, audit traceability, change approval protocols and model drift.\u201d In Nice\u2019s view, governance frameworks need to evolve to include formal rule ownership, automated decision logging, defined change control and periodic rule effectiveness testing. \u201cAutomation is not the risk; the risk occurs when automation is unsupervised,\u201d Nice finished.<\/p><p>Lovell, meanwhile, stressed that governance is improving, but unevenly. Many firms initially approached AI governance as a technology risk problem. 
In reality, it is a regulatory accountability problem.<\/p><p>The businesses Lovell believes are moving fastest are embedding clear usage boundaries, escalation paths, audit trails and traceability, defined human decision points and model version control.<\/p><p>Meanwhile, as detailed recently by&nbsp;<a href=\"https:\/\/www.norm.ai\/\">Norm Ai<\/a>, one of the persistent challenges in financial regulation is that governance frameworks rarely arrive before the technology they must oversee.<\/p><p>As Dan Berkovitz noted in discussion at the Central Park AI Forum, many of the most significant financial laws have historically emerged only after market failures \u2013 from post-Depression securities legislation to the reforms that followed the 2008 financial crisis.<\/p><p>\u201cIt\u2019s very difficult to get prospective legislation, forward looking ahead, anticipating issues, and the political will to address them,\u201d he said. \u201cBut after a crisis, there\u2019s motivation.\u201d For automated compliance systems, this raises an uncomfortable question: will accountability frameworks evolve before AI-driven decisions reshape how firms manage regulatory risk?<\/p><p><strong>The accountability gap<\/strong><\/p><p>Another perspective on this debate came from Andrew Davies, global head of FCC strategy at&nbsp;<a href=\"https:\/\/complyadvantage.com\/\">ComplyAdvantage<\/a>, who discussed the accountability gap in AI-driven compliance, stating that this is often less about the technology and more about the transparency of the models and architecture beneath it.<\/p><p>He said, \u201cAt ComplyAdvantage, we believe that for a compliance decision to be truly safe to automate, it must be defensible. 
Responsibility sits with the firm, but that burden is only manageable when the AI can provide a single, immutable audit trail explaining the \u2018why\u2019 behind every action.\u201d<\/p><p>Davies mentioned that he sees a clear distinction in which actions and decisions are suitable for automation. \u201cTasks involving the triage of low-risk, high-volume false positives \u2013 which can currently account for up to 85% of an analyst\u2019s manual workload \u2013 are not just safe to automate; they are a defensive necessity in an era of real-time payments. By using agentic AI to remediate these noisy cases, firms allow their human experts to focus on the truly complex 10-15% of cases that require nuanced, human judgment.\u201d<\/p><p>In the view of ComplyAdvantage\u2019s Davies, the risk isn\u2019t over-automation per se, but black-box automation.<\/p><p>He explained, \u201cRegulators correctly demand to know the underlying logic of a decision. This is why human sign-offs should not reflect a lack of trust in AI, but rather a validation of a glass box approach \u2013 where natural language rules and clear reasoning chains allow compliance officers to stay in the driver\u2019s seat.\u201d<\/p><p>Ultimately, Davies remarked, AI should be a participant in the compliance workflow, not a replacement for it.<\/p><p>He concluded, \u201cGovernance frameworks must shift from periodic model reviews to continuous, real-time monitoring against golden datasets to ensure that as the machine learns, it remains aligned with the firm\u2019s specific risk appetite and regulatory obligations.\u201d<\/p><p><a href=\"https:\/\/regtechanalyst.com\/\">Keep up with all the latest RegTech news here&nbsp;<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>Automation is steadily moving from the margins of financial services into its operational core. 
Surveillance systems flag misconduct, onboarding platforms assess risk, and AI models increasingly recommend \u2014 or even execute \u2014 compliance actions. Yet as decision-making becomes embedded in [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":7204,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-7202","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/7202","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/comments?post=7202"}],"version-history":[{"count":1,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/7202\/revisions"}],"predecessor-version":[{"id":7205,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/7202\/revisions\/7205"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/media\/7204"}],"wp:attachment":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/media?parent=7202"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/categories?post=7202"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/tags?post=7202"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}