{"id":6260,"date":"2025-11-11T13:06:55","date_gmt":"2025-11-11T13:06:55","guid":{"rendered":"https:\/\/fintech.global\/globalregtechsummitusa\/?p=6260"},"modified":"2025-11-11T13:06:55","modified_gmt":"2025-11-11T13:06:55","slug":"is-explainable-ai-the-missing-link-in-regulatory-compliance","status":"publish","type":"post","link":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/","title":{"rendered":"Is explainable AI the missing link in regulatory compliance?"},"content":{"rendered":"<p><strong>As financial institutions turn to AI to automate compliance, a key question arises: do we truly understand these systems\u2019 decisions? The black-box nature of many models challenges transparency and trust. Explainable AI could change that, offering clarity around how algorithms reach conclusions. If successful, it might be the missing link that makes AI in compliance truly accountable. Key industry thought leaders examined this question in the first part of a two-part series.<\/strong><\/p>\n<p>As AI systems become increasingly embedded in financial compliance operations, a critical tension has emerged between the sophistication of AI models and the fundamental regulatory requirements that govern their use, claims Oisin Boydell, chief data officer at\u00a0<a href=\"https:\/\/www.corlytics.com\/\">Corlytics<\/a>.<\/p>\n<p>He said, \u201cThe question facing compliance professionals today is not whether AI can support regulatory obligations, but whether it can do so in a manner that satisfies the transparency and accountability standards that regulators demand.\u201d<\/p>\n<p>For Boydell, auditability, attestation, traceability and transparency form the cornerstone of effective regulatory compliance. 
These principles, he claims, enable firms to demonstrate adherence to regulatory frameworks and provide regulators with the assurance that compliance decisions are sound, defensible and properly documented.<\/p>\n<p>\u201cHowever, as AI increasingly supports and, in some cases, replaces human decision-making in compliance management, the spotlight has shifted to a more complex challenge: ensuring that AI-driven compliance decisions can meet these same standards of transparency and explainability,\u201d Boydell commented.<\/p>\n<p>Boydell also stressed that advanced AI models \u2013 particularly LLMs and deep learning systems \u2013 present a fundamental paradox.<\/p>\n<p>He explained, \u201cAs these systems become more capable and sophisticated, their internal decision-making processes become increasingly opaque\u2014even to the AI scientists and model developers who created them. These \u201cblack box\u201d models can deliver impressive performance but understanding precisely how they arrive at specific conclusions remains a significant challenge.\u201d<\/p>\n<p>Such opacity, Boydell underlines, creates a critical issue for regulated industries. Financial institutions must document and justify AI-driven decisions to regulators, ensuring that processes are understandable and auditable.<\/p>\n<p>\u201cYet the very characteristics that make advanced AI models powerful\u2014their ability to identify complex, non-linear patterns across vast datasets\u2014also make them difficult to interpret in ways that humans can readily understand and validate,\u201d he said.<\/p>\n<p>However, there remains a key explainability challenge. \u201cExplainable AI techniques aim to bridge this gap by providing insights into how AI models reach their conclusions. In theory, XAI enables organizations to trace the logic behind each prediction, identify potential biases or errors, and build trust among stakeholders. 
However, the field of explainable AI remains an emerging research area, and the challenge is far from solved,\u201d said Boydell.<\/p>\n<p>A key area discussed around explainability has been the need to consider the human-in-the-loop approach. Relying solely on XAI techniques may not provide the level of transparency that regulators require or the level of trust that compliance professionals need when making critical decisions, stated Boydell.<\/p>\n<p>\u201cUnderstanding how an AI model functions internally may not even be informative or possible in many cases,\u201d Boydell added. \u201cThe solution lies not in making AI completely transparent at the algorithmic level, but in integrating AI within human workflows that enable effective oversight.\u201d<\/p>\n<p>The Corlytics CDO remarked that a human-in-the-loop approach embeds human oversight within AI-driven processes, enabling compliance professionals to verify AI decisions by providing them with key information, relevant details, and the full context of compliance determinations.<\/p>\n<p>\u201cRather than attempting to explain the internal mechanics of complex models, this approach focuses on giving compliance professionals the tools and information they need to validate AI outputs efficiently,\u201d said Boydell.<\/p>\n<p>He went on, \u201cThis partnership model harnesses the strengths of automation\u2014speed, consistency, and the ability to process large volumes of data\u2014while preserving the nuanced judgment and accountability that trained professionals provide. By using AI across the full regulatory lifecycle, while retaining human oversight for final decisions, organizations can build trust in AI-based regulatory compliance solutions through verified outcomes rather than algorithmic transparency alone.\u201d<\/p>\n<p>While AI models may function as black boxes, compliance operations cannot. 
Transparency and the ability for compliance professionals to understand, trust, and verify AI-generated decisions remain critical, particularly as regulators demand documentation and justification of automated processes.<\/p>\n<p>Boydell said, \u201cAt Corlytics, we embed this human-in-the-loop approach across our AI-driven compliance solution, from horizon scanning, regulatory change management, through obligations and requirements analysis, and policies and controls management, and the mapping and connections between all these components. We combine AI decision-making with human-focused workflows to leverage AI efficiencies whilst supporting this critical human oversight.\u201d<\/p>\n<p>Incorporating AI within human-in-the-loop workflows that enable oversight of key decisions offers a practical solution to the explainability challenge, detailed Boydell.<\/p>\n<p>\u201cThis approach acknowledges the limitations of current XAI techniques while still meeting regulatory requirements for transparency, auditability, and accountability. As AI adoption in compliance continues to grow, building trust through verified, human-supervised processes is essential for managing the complex and highly fluid regulatory landscape and enabling a trusted, compliant future across regulated industries,\u201d he said.<\/p>\n<p><strong>Changing the game<\/strong><\/p>\n<p>In the view of\u00a0<a href=\"https:\/\/www.b-next.com\/\">b-next<\/a>, AI has changed the way compliance teams operate. It processes vast amounts of data, detects patterns of suspicious behavior, and highlights risks that would otherwise go unnoticed. But as automation becomes more common, so does the question of trust, stressed the firm.<\/p>\n<p>\u201cCan compliance teams, regulators, and clients truly understand and rely on the decisions an algorithm makes?\u201d said b-next. \u201cThat is where XAI comes in. It promises to open the black box of machine learning and show how conclusions are reached. 
In an industry built on accountability and evidence, this kind of transparency is no longer optional; it is essential.\u201d<\/p>\n<p>b-next believes clarity in automated compliance is key. \u201cMost AI systems are designed for performance, not explanation. They flag anomalies and assign risk scores but often fail to communicate why something was flagged. In compliance, that lack of clarity is a serious issue. Every alert has consequences. It can lead to an investigation, a trading restriction, or even a report to regulators.\u201d<\/p>\n<p>For the company, XAI changes this dynamic by showing which variables influenced a model\u2019s decision. It connects patterns, data points, and logic into a narrative that humans can understand. Instead of simply trusting the system, compliance officers can see and verify its reasoning. This makes AI a partner, not a mystery, the firm claims.<\/p>\n<p>b-next added, \u201cWhen teams can interpret automated outcomes, they can act faster, explain findings internally, and stand behind their conclusions with confidence.\u201d<\/p>\n<p>According to the firm, every compliance professional knows that regulatory audits can be demanding. When auditors or regulators review surveillance systems, they are not only interested in what was detected but how it was detected. They want to ensure that logic, data, and governance are sound.<\/p>\n<p>\u201cExplainable AI can simplify that process,\u201d stressed b-next. \u201cWhen systems generate human-readable explanations, firms can demonstrate the inner workings of their algorithms without needing complex technical interpretations. This cuts down on audit time, reduces miscommunication, and increases regulator confidence in the technology being used. Essentially, explainable AI allows compliance to be both more efficient and more defensible.\u201d<\/p>\n<p>For b-next, one of the biggest challenges in explainability is balancing openness with the need to protect proprietary models. 
\u201cFirms want to show how their systems reach conclusions, but they do not want to expose the algorithms themselves.<\/p>\n<p>\u201cLayered explainability provides a solution,\u201d the firm continued. \u201cIt allows organizations to share understandable summaries of model logic, such as which factors had the greatest influence on a decision, without revealing the technical details of the model\u2019s design. This achieves transparency without giving away trade secrets, ensuring compliance teams and regulators have what they need while innovation remains protected.\u201d<\/p>\n<p>In summary, b-next believes that XAI is not a passing trend.<\/p>\n<p>The firm explained, \u201cIt represents a necessary evolution in how compliance technology operates. The ability to interpret and justify automated decisions will soon become a regulatory expectation, not an advantage. More importantly, it is a step toward rebuilding trust in the relationship between humans and machines. Compliance officers can move from asking \u201cWhy did the system do this?\u201d to confidently saying \u201cHere\u2019s why the system made this decision.\u201d<\/p>\n<p>\u201cIn an environment where accountability is everything, XAI might just be the missing link that connects automation with understanding, efficiency with transparency, and technology with human judgment,\u201d the company concluded.<\/p>\n<p><strong>The critical bridge<\/strong><\/p>\n<p>In the view of RegTech firm\u00a0<a href=\"https:\/\/www.vivox.ai\/\">Vivox.ai<\/a>, in a financial ecosystem increasingly reliant on automation, XAI has emerged as the critical bridge between innovation and accountability.<\/p>\n<p>A Vivox spokesperson said, \u201cAs regulators sharpen their focus on how AI decisions affect customers and compliance outcomes, the ability to show how a model reaches its conclusions is no longer a nice-to-have\u2014it\u2019s a regulatory necessity.\u201d<\/p>\n<p>The company gave the example of the EU AI Act, which 
makes this explicit. It explained, \u201cUnder the new regime, financial institutions deploying high-risk AI systems\u2014such as those used in AML or KYB checks\u2014must ensure their models are transparent, traceable, and auditable. The emphasis is on human oversight and explainability, ensuring that decisions impacting access to financial services can be reviewed and justified.\u201d<\/p>\n<p>In a similar vein, the UK\u2019s FCA has been advancing its stance on AI assurance, focusing on model risk management, fairness and governance.<\/p>\n<p>\u201cIts guidance underscores that assurance must be \u201cproportionate, evidence-based, and explainable\u201d\u2014a principle that resonates strongly with the compliance community,\u201d said Vivox.<\/p>\n<p>Meanwhile, for FinTechs, this shift isn\u2019t theoretical. Vivox pointed to a real-world example: a European FinTech unicorn that recently adopted Vivox\u2019s AI KYB agent to automate the onboarding of corporate clients.<\/p>\n<p>\u201cAs part of its governance process, the company implemented a tapered review framework for model output validation\u2014an approach aligned with regulatory expectations for human-in-the-loop assurance,\u201d said Vivox.<\/p>\n<p>Explaining the process, Vivox said that in the early phase, 100% of AI-generated KYB assessments were manually reviewed during the first four weeks. By Weeks 4\u20136, after incorporating customisations to reflect the FinTech\u2019s internal policies, reviewers again validated nearly all outputs against the baseline.<\/p>\n<p>\u201cConfidence grew as results consistently matched human reviewers\u2019 expectations. 
Over subsequent weeks, the sample size of manual checks decreased\u2014to 70\u201380% by Week 8 and 50\u201360% by Week 12\u2014transitioning toward a sampling-based degradation-monitoring model,\u201d said Vivox.<\/p>\n<p>However, the company detailed that something unexpected happened: the rollout accelerated.<\/p>\n<p>It explained, \u201cThanks to the agent\u2019s high accuracy and transparency, the fintech compressed what was planned as a 12-week phased launch into just four weeks from technical discovery to production. Explainability didn\u2019t slow adoption\u2014it enabled it, by giving risk teams confidence that model decisions could be understood, audited, and defended.\u201d<\/p>\n<p>XAI also plays a crucial role in meeting the GDPR\u2019s \u2018right to explanation\u2019 obligations, said Vivox.<\/p>\n<p>\u201cWhen automated systems make decisions about customers, institutions must be able to articulate the logic behind those decisions. For compliance functions, this capability simplifies internal investigations and reduces audit burdens\u2014regulators can see exactly how a decision was reached without demanding opaque technical justifications,\u201d the firm explained.<\/p>\n<p>Vivox concluded by highlighting that XAI provides a pragmatic balance between compliance transparency and proprietary model protection. \u201cModern explainability techniques\u2014such as decision trace visualisation and confidence attribution\u2014allow firms to disclose reasoning without revealing sensitive intellectual property,\u201d said Vivox.<\/p>\n<p>The company finished, \u201cAs the regulatory perimeter expands, explainability will likely become a differentiator, not a constraint. Firms that can both comply and clarify\u2014showing regulators, auditors, and customers that their AI works as intended\u2014will move faster and build greater trust. In that sense, explainable AI isn\u2019t just the missing link in compliance. 
It\u2019s the foundation of the next era of responsible financial automation.\u201d<\/p>\n<p><strong>The power of explainability<\/strong><\/p>\n<p>According to Baran Ozkan, co-founder and CEO of\u00a0<a href=\"https:\/\/www.flagright.com\/\">Flagright<\/a>, explainability turns an automated decision from a black box into an auditable story.<\/p>\n<p>He remarked, \u201cWhen a model can show which signals mattered, how they combined, and what alternatives would have changed the outcome, regulators and customers can see that the result was reasoned, not arbitrary. That transparency supports core duties under modern privacy laws, including the need to inform people about automated decisions, offer a meaningful way to contest them, and prove human oversight where required.\u201d<\/p>\n<p>Ozkan added that it also shortens audits. \u201cIf every alert carries a reason code, feature attributions, the data lineage behind those features, and a clear control that was triggered, examiners spend less time chasing spreadsheets and more time validating outcomes,\u201d he said.<\/p>\n<p>However, Ozkan noted that the hard part here is balancing openness with protection of proprietary models, stating that the practical approach is layered disclosure.<\/p>\n<p>\u201cFirms keep weights and architecture private, while exposing regulator\u2011grade artifacts such as reason codes, surrogate explanations that are faithful within a defined window, counterfactual examples, and signed decision logs. That gives supervisors what they need to test fairness and consistency without forcing full model handover. 
At Flagright we design for explainability by default,\u201d he said.<\/p>\n<p>Detailing the Flagright offering, Ozkan mentioned that every score ships with human-readable rationales, immutable evidence, and a simulator that shows how different facts would have changed the decision.<\/p>\n<p>\u201cThe goal is simple: speed for operations, clarity for auditors, and recourse for customers,\u201d he concluded.<\/p>\n<p>Ryan Swann, CEO and founder of\u00a0<a href=\"https:\/\/www.risksmart.com\/\">RiskSmart<\/a>, succinctly outlined a key benefit of explainable AI.<\/p>\n<p>\u201cIf you can clearly say \u201chere\u2019s why this decision happened,\u201d customers understand it, teams can challenge it, and audits go smoother. Keep it simple (a core value at RiskSmart). Plain\u2011English summary, clear reasons for the specific case, and a short pack of proof you can show a regulator. Be honest about limits and test that your explanations match what the system really did,\u201d he remarked.<\/p>\n<p><a class=\"decorated-link\" href=\"https:\/\/regtechanalyst.com\/\" target=\"_new\" rel=\"noopener\">Read\u00a0the daily RegTech news<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>As financial institutions turn to AI to automate compliance, a key question arises: do we truly understand these systems\u2019 decisions? The black-box nature of many models challenges transparency and trust. 
Explainable AI could change that, offering clarity around how algorithms [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":6262,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_mi_skip_tracking":false},"categories":[38],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Is explainable AI the missing link in regulatory compliance? - Global RegTech Summit USA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Is explainable AI the missing link in regulatory compliance? - Global RegTech Summit USA\" \/>\n<meta property=\"og:description\" content=\"As financial institutions turn to AI to automate compliance, a key question arises: do we truly understand these systems\u2019 decisions? The black-box nature of many models challenges transparency and trust. 
Explainable AI could change that, offering clarity around how algorithms [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/\" \/>\n<meta property=\"og:site_name\" content=\"Global RegTech Summit USA\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-11T13:06:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2025\/11\/pierre-bamin-5B0IXL2wAQ0-unsplash-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1707\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/\",\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/\",\"name\":\"Is explainable AI the missing link in regulatory compliance? 
- Global RegTech Summit USA\",\"isPartOf\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#website\"},\"datePublished\":\"2025-11-11T13:06:55+00:00\",\"dateModified\":\"2025-11-11T13:06:55+00:00\",\"author\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16\"},\"breadcrumb\":{\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/fintech.global\/globalregtechsummitusa\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Is explainable AI the missing link in regulatory compliance?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#website\",\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/\",\"name\":\"Global RegTech Summit USA\",\"description\":\"The world&#039;s largest gathering of RegTech leaders &amp; innovators\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/fintech.global\/globalregtechsummitusa\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16\",\"name\":\"Editorial\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g\",\"caption\":\"Editorial\"},\"url\":\"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Is explainable AI the missing link in regulatory compliance? - Global RegTech Summit USA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/","og_locale":"en_US","og_type":"article","og_title":"Is explainable AI the missing link in regulatory compliance? - Global RegTech Summit USA","og_description":"As financial institutions turn to AI to automate compliance, a key question arises: do we truly understand these systems\u2019 decisions? The black-box nature of many models challenges transparency and trust. 
Explainable AI could change that, offering clarity around how algorithms [&hellip;]","og_url":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/","og_site_name":"Global RegTech Summit USA","article_published_time":"2025-11-11T13:06:55+00:00","og_image":[{"width":2560,"height":1707,"url":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2025\/11\/pierre-bamin-5B0IXL2wAQ0-unsplash-scaled.jpg","type":"image\/jpeg"}],"author":"Editorial","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Editorial","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/","url":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/","name":"Is explainable AI the missing link in regulatory compliance? 
- Global RegTech Summit USA","isPartOf":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#website"},"datePublished":"2025-11-11T13:06:55+00:00","dateModified":"2025-11-11T13:06:55+00:00","author":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16"},"breadcrumb":{"@id":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/is-explainable-ai-the-missing-link-in-regulatory-compliance\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/fintech.global\/globalregtechsummitusa\/"},{"@type":"ListItem","position":2,"name":"Is explainable AI the missing link in regulatory compliance?"}]},{"@type":"WebSite","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#website","url":"https:\/\/fintech.global\/globalregtechsummitusa\/","name":"Global RegTech Summit USA","description":"The world&#039;s largest gathering of RegTech leaders &amp; innovators","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/fintech.global\/globalregtechsummitusa\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/d25d670fca037052a277394a71dbed16","name":"Editorial","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/fintech.global\/globalregtechsummitusa\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e25caf13ff74e4ec69c5895b17b6b1e0?s=96&d=mm&r=g","caption":"Editorial"},"url":"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/"}]}},"featured_image_src":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2025\/11\/pierre-bamin-5B0IXL2wAQ0-unsplash-600x400.jpg","featured_image_src_square":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-content\/uploads\/2025\/11\/pierre-bamin-5B0IXL2wAQ0-unsplash-600x600.jpg","author_info":{"display_name":"Editorial","author_link":"https:\/\/fintech.global\/globalregtechsummitusa\/author\/editorial\/"},"_links":{"self":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6260"}],"collection":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/comments?post=6260"}],"version-history":[{"count":1,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6260\/revisions"}],"predecessor-version":[{"id":6263,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/posts\/6260\/revisions\/6263"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp
\/v2\/media\/6262"}],"wp:attachment":[{"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/media?parent=6260"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/categories?post=6260"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummitusa\/wp-json\/wp\/v2\/tags?post=6260"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}