{"id":6495,"date":"2026-02-24T12:43:17","date_gmt":"2026-02-24T12:43:17","guid":{"rendered":"https:\/\/fintech.global\/globalregtechsummitusa\/?p=6495"},"modified":"2026-02-24T12:43:17","modified_gmt":"2026-02-24T12:43:17","slug":"the-accountability-problem-no-one-has-solved","status":"publish","type":"post","link":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/","title":{"rendered":"The accountability problem no one has solved"},"content":{"rendered":"<p><strong>Compliance has always been built on a simple premise: when something goes wrong, someone is accountable. That assumption is now under strain.<\/strong><\/p>\n<p>Decisions that once relied on human judgement are increasingly shaped \u2014 and in some cases made \u2014 by automated systems. Risk scores are generated automatically. Customers are onboarded or rejected based on models. Alerts are prioritised, suppressed, or escalated without human review. Regulatory obligations are interpreted, mapped, and operationalised by machines.<\/p>\n<p>None of this is hypothetical, and it is already happening inside of large financial organisations. Despite this, the frameworks used to assign accountability have barely changed. Compliance officers still \u2018own\u2019 risk outcomes, and businesses still rely on governance models designed for a human-dominated world. The result is a growing gap between how compliance decisions are made and how responsibility for those decisions is formally designed. This is the accountability gap.<\/p>\n<p>An uncomfortable truth being realised is that the industry has not yet agreed where accountability should sit in an automated compliance environment, or how it should be demonstrated.<\/p>\n<p>This feature is the first in\u00a0<em>The Accountability Gap<\/em>\u00a0series. 
It examines how automation has outpaced responsibility, why existing accountability models are beginning to fracture, and why this issue can no longer be treated as a future concern. In the parts that follow, we will explore what decisions machines should be allowed to make, how firms can govern AI without paralysing themselves, and what regulators are likely to expect next. But first, the problem itself needs to be confronted.<\/p>\n<p><strong>Who is accountable?<\/strong><\/p>\n<p>Mike Lubansky, SVP of Strategy at\u00a0<a href=\"https:\/\/www.redoak.com\/\">Red Oak<\/a>, believes that nothing has changed from an accountability perspective.<\/p>\n<p>\u201cThe firm is still fully accountable, but they can no longer point to a single human decision-maker in the way traditional supervisory models assume,\u201d he said. \u201cOperational responsibility is becoming fragmented in ways regulators haven\u2019t fully addressed. AI redistributes operational decision-making without shifting the legal responsibility.\u201d<\/p>\n<p>Lubansky added that regulators will continue to anchor accountability to the registered entity, the designated supervisory principal and the documented supervisory system.<\/p>\n<p>He continued, \u201cAs such, it\u2019s critical that firms continue to require a \u2018human-in-the-loop\u2019 for many of these processes and have a clear audit trail of the rationale and decisions made by AI.\u201d<\/p>\n<p>For Areg Nzsdejan, CEO of\u00a0<a href=\"https:\/\/cardamon.ai\/\">Cardamon<\/a>, it depends on multiple factors: what the decision is, where it sits in the process and how much authority the AI system has been given.<\/p>\n<p>He said, \u201cThe way we think about it at Cardamon is that our platform isn\u2019t a tool \u2013 it\u2019s a set of digital teammates. These teammates work for compliance experts, who act as orchestrators. The AI does the heavy lifting: scanning, mapping, highlighting risk, and proposing actions. 
But it does not have executive decision-making power.\u201d<\/p>\n<p>Ultimately, accountability sits with the orchestrator, claims Nzsdejan. \u201cHumans remain responsible because they retain control over final decisions. This may change in the future, and certain AI systems may be empowered to make binding decisions for narrowly defined tasks. But that\u2019s not how we see the world today \u2013 and not how regulators see it either,\u201d he said.<\/p>\n<p>Mike O\u2019Keefe, global head of digital transformation &amp; innovation at\u00a0<a href=\"https:\/\/www.corlytics.com\/\">Corlytics<\/a>, stressed that traditionally, accountability in compliance was clear-cut.<\/p>\n<p>\u201cHumans made decisions, signed off on controls, and assumed responsibility for outcomes. But as firms increasingly rely on AI and automation to interpret regulatory requirements, monitor transactions, flag suspicious activity, or evaluate risk scores, the human chain of responsibility becomes obscured,\u201d he said.<\/p>\n<p>O\u2019Keefe went on, \u201cIf an AI model incorrectly classifies a regulatory compliance obligation, suggests a policy or control update, clears a high-risk transaction or misclassifies customer activity, is the accountable party the compliance officer? The data science team that built the model? The vendor who supplied it? Or the executive who approved its deployment?\u201d<\/p>\n<p>Regulators have indicated that accountability cannot be outsourced, whether to third-party vendors or to algorithms. \u201cBut as AI systems become more autonomous and more complex, this accountability gap becomes harder to close,\u201d said O\u2019Keefe.<\/p>\n<p>Tim Khamzin, founder and CEO of\u00a0<a href=\"https:\/\/www.vivox.ai\/\">Vivox AI<\/a>, also made clear that when AI or automation is involved in a compliance decision, accountability does not disappear. \u201cOn the contrary, it concentrates. 
Responsibility still sits with the firm and the individuals who own the process, particularly risk and financial-crime leaders. What AI changes is not who is accountable, but how that accountability must be exercised.\u201d<\/p>\n<p>In a similar vein, CEO of\u00a0<a href=\"https:\/\/www.flagright.com\/\">Flagright<\/a>\u00a0Baran Ozkan said, \u201cWhen AI makes a compliance decision, accountability does not move to the model or the vendor. It stays with the regulated firm, specifically the senior leaders who own the control framework and risk appetite.\u201d<\/p>\n<p><strong>When accountability frameworks break<\/strong><\/p>\n<p>Which accountability frameworks break under AI? This is a critical question being asked inside the industry. For Lubansky, several traditional accountability frameworks begin to strain under AI, including reasonable supervision models, vendor accountability assumptions and established control testing and audit frameworks.<\/p>\n<p>\u201cThese approaches were designed for environments with clear review chains, predictable decision rules, and identifiable points of human judgment,\u201d he said. He remarked that models often generate recommendations based on complex patterns rather than explicit rules, producing outcomes that may be statistically defensible but difficult to explain on an individual basis.<\/p>\n<p>Lubansky said, \u201cWhile supervisors are still expected to review and approve decisions, accountability breaks down if they cannot clearly articulate how an AI system arrived at a particular outcome\u2014exposing firms to regulatory and legal risk.\u201d<\/p>\n<p>Businesses have traditionally relied on SOC reports, vendor representations, and contractual indemnities to manage third-party risk. AI complicates this model in several ways, says Lubansky. 
Firstly, vendors may not fully disclose training data or model changes; secondly, models may be updated continuously; and thirdly, multiple vendors may contribute to a single decision path, which makes attribution challenging.<\/p>\n<p>Lubansky explained, \u201cControl testing and audit frameworks are also challenged by AI. Traditional compliance testing assumes stable logic, repeatable outcomes, and sample-based validation. AI can challenge these assumptions because: the same input may yield different outputs over time, model tuning can alter behavior without explicit code changes, and sample-based testing may fail to capture rare but high-risk edge cases.\u201d<\/p>\n<p>O\u2019Keefe, meanwhile, stressed that many existing frameworks rely on assumptions that do not hold in fully automated algorithmic environments.<\/p>\n<p>The first area is model risk management frameworks. \u201cWhile they provide structure for validation, documentation, and oversight, they were built for deterministic statistical models \u2013 not adaptive, opaque machine learning systems that evolve over time,\u201d he said.<\/p>\n<p>Another area is expert-in-the-loop models, with firms assuming that a person meaningfully reviews and influences each decision made. However, AI systems increasingly operate at speeds and volumes impossible for humans to oversee. Human reviews, O\u2019Keefe said, could become a formality rather than a safeguard.<\/p>\n<p>The third area is traditional compliance sign-off structures, which were once based on human decision-making authority. \u201cBut if the \u2018decision\u2019 is an algorithmic output, the sign-off could become ambiguous. Is the compliance officer accountable for the decision?\u201d said O\u2019Keefe.<\/p>\n<p>A final notable area is vendor accountability provisions. Here, contracts can shift risk to vendors, but they cannot shift regulatory responsibility, said O\u2019Keefe. 
\u201cIf a vendor\u2019s AI fails, firms remain liable even if they lack full transparency of model operation,\u201d he said.<\/p>\n<p>Traditional accountability frameworks were built for rules-based systems that produce deterministic outcomes, added Khamzin. He stated, \u201cModern AI systems are probabilistic by design. They reduce false positives and improve detection accuracy, but they also introduce new challenges around explainability, auditability and governance. That is where many existing frameworks need to be adapted.\u201d<\/p>\n<p>For Ozkan, the frameworks that break under AI are the ones that confuse activity with accountability, for example \u2018the system flagged it\u2019 instead of \u2018a named owner approved the policy and its thresholds.\u2019<\/p>\n<p>Nzsdejan noted that most accountability frameworks assume a single setup: a person makes a decision, follows a defined process, and uses fixed inputs.<\/p>\n<p>However, as he outlines, AI doesn\u2019t work like that. \u201cDecisions are spread across models, data sources, and workflows, which makes single-person sign-off harder; documenting process steps alone isn\u2019t enough when regulators increasingly care about why an outcome occurred; and the line between what a firm owns versus what a vendor provides becomes blurred as models and data change over time. The approaches that still work are those that clearly define who owns the decision, make AI outputs transparent and reviewable, and ensure there is a clear path for human escalation and override.\u201d<\/p>\n<p><strong>The double-edged sword of automation<\/strong><\/p>\n<p>Automation can improve outcomes in compliance. The double-edged sword is that it can also leave firms exposed.<\/p>\n<p>For Nzsdejan, if improvement means speed alone, then automation can increase exposure. 
Faster wrong decisions, he stresses, are still wrong \u2013 just at scale.<\/p>\n<p>He said, \u201cWhat we optimise for is quicker and more accurate compliant outcomes, which in turn create more security for firms. In our view, you cannot genuinely improve outcomes while leaving firms exposed. If exposure increases, incentives are misaligned.\u201d<\/p>\n<p>The key for the Cardamon CEO is incentive design. He explains simply, \u201cAutomation should reduce uncertainty, not just effort. AI should surface risk, not hide it behind confidence. Humans should be accountable for decisions, not burdened with busywork. When those incentives are aligned, automation doesn\u2019t weaken accountability \u2013 it sharpens it.\u201d<\/p>\n<p>At the same time, Lubansky states that whilst automation has many benefits, it can also create exposure if not designed and managed properly.<\/p>\n<p>He explained, \u201cWhen implemented well, automation can improve throughput, lower error rates, and scale review capacity. However, it can also remove visible decision checkpoints and encourage over-reliance on system outputs\u2014particularly when controls, documentation, and accountability are not clearly defined.\u201d<\/p>\n<p>A couple of examples of this are marketing review automation and communication supervision triage automation. On the former, approval cycles are faster, but without the right controls and audit trails, firms may struggle to document why content was approved, weakening defensibility after the fact. On the latter, automation can improve prioritization, but it can also create ambiguity when alerts are suppressed or items are not flagged, making it difficult to determine who was accountable for the decision.<\/p>\n<p>\u201cIn practice, automation often improves operational performance while weakening defensibility. 
Firms may appear stronger on efficiency metrics, yet struggle months later to explain decisions, reconstruct rationale, or demonstrate active supervision rather than passive reliance on automated systems,\u201d concluded Lubansky.<\/p>\n<p>On this point, Khamzin stressed that the big bottleneck is side-by-side evolution. \u201cWe have seen automation materially improve compliance outcomes while simultaneously leaving firms exposed, not because the technology failed, but because accountability models failed to evolve alongside it. Regulators are increasingly clear that companies cannot outsource responsibility to algorithms.\u201d<\/p>\n<p>This, Khamzin claims, is also why regulatory and ethical guardrails matter so deeply. \u201cAs accuracy improves, the real compliance challenge shifts from decision-making to understanding, governing and evidencing how those decisions are made.\u201d<\/p>\n<p>For Ozkan, whilst automation can absolutely improve outcomes, it can leave businesses exposed if they cannot recreate the decision later.<\/p>\n<p>He said, \u201cIn practice, closing the accountability gap means traceability: what data was used, what policy was applied, what the model recommended, and who had the authority to override it.\u201d<\/p>\n<p>There is no question that automation has delivered meaningful improvements, said O\u2019Keefe. Automation and AI can detect anomalies that humans miss, process data at unprecedented scale and reduce human error. 
Many firms report better detection rates, fewer false positives, stronger audit trails and more consistent application of regulatory rules with more robust policies and controls.<\/p>\n<p>However, he believes these gains need to be protected to ensure that the appropriate education, training and decision-making processes are in place.<\/p>\n<p>He explained, \u201cEnsure a robust operating model is defined: insist that compliance teams examine outputs and then make the decision to prevent an overreliance on automation and prevent \u2018compliance deskilling\u2019, where teams lose the ability to challenge the AI output. Define where accountability sits as part of the operating model to prevent confusion, speed up remediation and reduce regulatory risk.<\/p>\n<p>\u201cFurthermore, remove opaque decision-making by increasing explainability, making regulatory exams more assured. Vendors must frequently test for algorithmic drift to prevent degradation of models and be mindful of data quality issues to prevent skewed AI outputs.\u201d<\/p>\n<p><strong>An absence of accountability?<\/strong><\/p>\n<p>A further interesting argument on this question was that the absence of accountability frameworks is the real issue.<\/p>\n<p>Anthony Quinn, CEO of\u00a0<a href=\"https:\/\/arctic-intelligence.com\/\">Arctic Intelligence<\/a>, remarked, \u201cThe industry continues to debate whether AI can be trusted in compliance, but this misses the real issue. The true risk is not automation itself; it is the absence of accountability frameworks designed for automated decision-making. 
Regulators have been clear that responsibility cannot be outsourced to technology, yet many firms continue to deploy AI on top of governance models built for manual processes.\u201d<\/p>\n<p>This, Quinn outlines, creates a dangerous gap where decisions are faster and more consistent, but ownership is less clear.<\/p>\n<p>\u201cAt Arctic Intelligence, we see this accountability gap as the defining compliance challenge of the next decade. Firms that fail to redesign accountability alongside automation will ultimately find that efficiency comes at the cost of defensibility,\u201d he concluded.<\/p>\n<p><strong>Closing the gap<\/strong><\/p>\n<p>Looking ahead, Lubansky believes the future of compliance accountability isn\u2019t choosing between humans and machines. It\u2019s about designing systems where automation is traceable, explainable, and embedded within documented supervisory workflows. That level of accountability cannot be assumed; it has to be engineered.<\/p>\n<p>He remarked, \u201cThis is where Red Oak is helping firms close the gap. By connecting content, review, distribution, and supervision within a single compliance connectivity platform, Red Oak makes automated decisions with compliance-grade AI that\u2019s purpose-built for accuracy, architected for auditability, and trusted by professionals who understand what\u2019s at stake.\u201d<\/p>\n<p>Additionally, O\u2019Keefe stressed that the path forward requires a recognition that executives in each firm need to take responsibility for the AI systems deployed.<\/p>\n<p>He explained, \u201cAccountability must be anchored in roles, not individual decisions, focusing on governance, oversight, model assurance, and continuous monitoring. 
A leading global FSI firm has trialled the use of fully automated AI regulatory traceability and has learned, to its chagrin, that the AI systems, while improving, are incapable of making the decisions that humans do, resulting in a large focus on re-training the vendor-provided model.<\/p>\n<p>\u201cThis firm is a leader in managing risk and has taken the opportunity to make the shift back to human-based decisions aided, but not led, by their AI model. Until that shift happens across the industry, firms will continue reaping the benefits of automation while quietly carrying a growing and underappreciated regulatory risk.\u201d<\/p>\n<p data-start=\"4043\" data-end=\"4155\"><a class=\"decorated-link\" href=\"https:\/\/regtechanalyst.com\/\" target=\"_new\" rel=\"noopener\" data-start=\"4043\" data-end=\"4119\">Read the daily RegTech news<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Compliance has always been built on a simple premise: when something goes wrong, someone is accountable. That assumption is now under strain. 
Decisions that once relied on human judgement are increasingly shaped \u2014 and in some cases made \u2014 by [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":6497,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false},"categories":[38,16],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The accountability problem no one has solved - Global RegTech Summit USA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The accountability problem no one has solved - Global RegTech Summit USA\" \/>\n<meta property=\"og:description\" content=\"Compliance has always been built on a simple premise: when something goes wrong, someone is accountable. That assumption is now under strain. 
Decisions that once relied on human judgement are increasingly shaped \u2014 and in some cases made \u2014 by [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/\" \/>\n<meta property=\"og:site_name\" content=\"Global RegTech Summit USA\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-24T12:43:17+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2026\/02\/Copy-of-RegTech-Analyst-Series-Title-Story-Design-v1-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1536\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/\",\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/\",\"name\":\"The accountability problem no one has solved - Global RegTech Summit USA\",\"isPartOf\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#website\"},\"datePublished\":\"2026-02-24T12:43:17+00:00\",\"dateModified\":\"2026-02-24T12:43:17+00:00\",\"author\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16\"},\"breadcrumb\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/fintech.global\/globalregtechsummitusa\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The accountability problem no one has solved\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#website\",\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/\",\"name\":\"Global RegTech Summit USA\",\"description\":\"The world&#039;s largest gathering of RegTech leaders &amp; 
innovators\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/fintech.global\/globalregtechsummitusa\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16\",\"name\":\"Editorial\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g\",\"caption\":\"Editorial\"},\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The accountability problem no one has solved - Global RegTech Summit USA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/","og_locale":"en_US","og_type":"article","og_title":"The accountability problem no one has solved - Global RegTech Summit USA","og_description":"Compliance has always been built on a simple premise: when something goes wrong, someone is accountable. That assumption is now under strain. 
Decisions that once relied on human judgement are increasingly shaped \u2014 and in some cases made \u2014 by [&hellip;]","og_url":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/","og_site_name":"Global RegTech Summit USA","article_published_time":"2026-02-24T12:43:17+00:00","og_image":[{"width":2560,"height":1536,"url":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2026\/02\/Copy-of-RegTech-Analyst-Series-Title-Story-Design-v1-scaled.jpg","type":"image\/jpeg"}],"author":"Editorial","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Editorial","Est. reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/","url":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/","name":"The accountability problem no one has solved - Global RegTech Summit USA","isPartOf":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#website"},"datePublished":"2026-02-24T12:43:17+00:00","dateModified":"2026-02-24T12:43:17+00:00","author":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16"},"breadcrumb":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/the-accountability-problem-no-one-has-solved\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/fintech.global\/globalregtechsummitusa\/"},{"@type":"ListItem","position":2,"name":"The accountability problem no one has 
solved"}]},{"@type":"WebSite","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#website","url":"https:\/\/fintech.global\/globalregtechsummitusa\/","name":"Global RegTech Summit USA","description":"The world&#039;s largest gathering of RegTech leaders &amp; innovators","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/fintech.global\/globalregtechsummitusa\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16","name":"Editorial","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g","caption":"Editorial"},"url":"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/"}]}},"featured_image_src":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2026\/02\/Copy-of-RegTech-Analyst-Series-Title-Story-Design-v1-600x400.jpg","featured_image_src_square":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2026\/02\/Copy-of-RegTech-Analyst-Series-Title-Story-Design-v1-600x600.jpg","author_info":{"display_name":"Editorial","author_link":"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/"},"_links":{"self":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6495"}],"collection":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/users\/7"}],"r
eplies":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/comments?post=6495"}],"version-history":[{"count":1,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6495\/revisions"}],"predecessor-version":[{"id":6498,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6495\/revisions\/6498"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/media\/6497"}],"wp:attachment":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/media?parent=6495"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/categories?post=6495"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/tags?post=6495"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}