{"id":4935,"date":"2026-05-07T08:52:42","date_gmt":"2026-05-07T08:52:42","guid":{"rendered":"https:\/\/fintech.global\/globalregtechsummitapac\/?p=4935"},"modified":"2026-05-07T08:52:42","modified_gmt":"2026-05-07T08:52:42","slug":"what-regulators-will-expect","status":"publish","type":"post","link":"https:\/\/fintech.global\/globalregtechsummitapac\/what-regulators-will-expect\/","title":{"rendered":"What regulators will expect"},"content":{"rendered":"<div class=\"flex max-w-full flex-col gap-4 grow\">\n<div class=\"min-h-8 text-message relative flex w-full flex-col items-end gap-2 text-start break-words whitespace-normal outline-none keyboard-focused:focus-ring [.text-message+&amp;]:mt-1\" dir=\"auto\" data-message-author-role=\"assistant\" data-message-id=\"7d6a2a95-1af3-4e9a-99fc-d51cd797c4a6\" data-turn-start-message=\"true\" data-message-model-slug=\"gpt-5-3\">\n<div class=\"flex w-full flex-col gap-1 empty:hidden\">\n<div class=\"markdown prose dark:prose-invert w-full wrap-break-word light markdown-new-styling\">\n<p data-start=\"0\" data-end=\"285\"><strong>AI is no longer peripheral \u2013 it is embedded in decision-making, risk, and control. As that shift accelerates, tolerance for ambiguity around accountability is collapsing. Regulators are no longer asking whether firms use AI, but whether they understand, control, and can stand behind it.<\/strong><\/p>\n<p data-start=\"287\" data-end=\"337\">This is where the accountability gap becomes real. The core tension remains \u2013 machines can act, but responsibility is human. And while firms debate how far to push AI, regulators are converging on a clearer standard of what \u201cacceptable\u201d looks like.<\/p>\n<p data-start=\"537\" data-end=\"814\">Their focus is not philosophical, but practical: can you evidence control, explain outcomes, and intervene when things go wrong? The principle is simple\u2014if AI materially influences an outcome, it must be governable to the same standard as a human decision-maker, if not higher. The question is no longer whether AI will be regulated. It\u2019s whether firms are ready for how.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"z-0 flex min-h-[46px] justify-start\">\n<p>The focus of the fourth and final part of the\u00a0<em>Accountability Gap<\/em>\u00a0Series will focus on what regulators will expect around AI and compliance. This completes the full series, following the\u00a0<a href=\"https:\/\/regtechanalyst.com\/governing-ai-without-slowing-down\/\">previous instalment<\/a>\u00a0which asked: how do firms govern AI without slowing compliance to a crawl? The first two had focused on the\u00a0<a href=\"https:\/\/regtechanalyst.com\/the-accountability-problem-no-one-has-solved\/\">accountability problems that haven\u2019t been solved within AI in compliance<\/a>\u00a0and\u00a0<a href=\"https:\/\/regtechanalyst.com\/what-decisions-can-machines-be-allowed-to-make\/\">what decisions machines can be allowed to make<\/a>, respectively.<\/p>\n<p><strong>Accountability signals<\/strong><\/p>\n<p>One of the first key questions to ask on this topic is what accountability signals regulators are sending already.<\/p>\n<p>According to Mike Lubansky, SVP of strategy at\u00a0<a href=\"https:\/\/www.redoak.com\/\">Red Oak<\/a>, regulators have been consistent on one core principle: accountability cannot be outsourced. Not to vendors, and not to AI. 
Lubansky states that regulators are sending clear signals in three areas. The first is a shift from outcomes to process plus evidence. "It is no longer enough to show that a decision was reasonable. Firms must show how the decision was reached, what data and logic were used, and what controls were in place at the time. This reflects a broader shift toward evidence-based supervision, where decisions must be reconstructable."

The second area outlined by Lubansky is a move from point-in-time controls to continuous oversight. He explained, "Static governance is no longer sufficient. Regulators increasingly expect: ongoing validation of automated systems, monitoring for drift and performance degradation, and documented change management for models and rules. This aligns with what firms are already experiencing: automation is not a one-time implementation – it is a continuously governed system."

The third and final shift is from human-in-the-loop to human accountability by design. Here, Lubansky detailed that early guidance emphasised human review. However, that expectation is evolving.

"The focus is now on clear ownership of decisions and systems, defined escalation and override mechanisms, and demonstrable supervisory engagement. In other words, regulators are less concerned with whether a human touched every decision, and more concerned with whether accountability is structurally embedded in the process," said Lubansky.

Lubansky finished on this point by stressing that regulators are not asking firms to slow down automation – they are instead asking them to prove that automation operates inside a system of control that is visible, testable and accountable.
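To make the continuous-oversight expectation concrete, here is a minimal sketch of one common drift check: comparing a feature's live distribution against its training baseline using the population stability index. The metric is standard industry practice, but the 0.2 threshold and the escalation step are illustrative assumptions, not anything a regulator prescribes.

```python
# Minimal sketch of the kind of ongoing drift check Lubansky describes.
# The PSI metric is a common industry choice; the 0.2 threshold is an
# illustrative assumption, not a regulatory figure.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's live distribution against its training baseline."""
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch out-of-range live values
    b_frac = np.histogram(baseline, cuts)[0] / len(baseline) + 1e-6
    l_frac = np.histogram(live, cuts)[0] / len(live) + 1e-6
    return float(np.sum((l_frac - b_frac) * np.log(l_frac / b_frac)))

def drifted_features(baseline: dict, live: dict, threshold: float = 0.2) -> list[str]:
    """Names of features whose drift breaches the threshold - candidates for escalation."""
    return [f for f in baseline if population_stability_index(baseline[f], live[f]) > threshold]
```

In a governed deployment, a breach here would feed the documented escalation path rather than silently triggering a retrain.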
Meanwhile, Scott Nice, CRO at [Label](https://labeltech.io/), made clear his view that regulators have been steadily widening expectations for a number of years now – not necessarily by introducing entirely new frameworks, but by tightening how existing regulations are interpreted and enforced.

"What is becoming clear is that regulators are no longer satisfied with firms simply having controls in place; they expect those controls to be demonstrably effective, consistently applied, and fully auditable," said Nice.

While regulatory approaches still vary by jurisdiction, with some being more audit-driven and others more enforcement-led, there is a common direction of travel, Nice believes.

He added, "Firms are expected to be able to evidence not just what decisions were made, but how and why those decisions were reached. The signal being sent is that compliance must move from being reactive and procedural to being defensible, traceable, and embedded within core operations."

Areg Nzsdejan, CEO of RegTech firm [Cardamon](https://cardamon.ai/), emphasised that regulators don't want to wait for a crisis to set expectations on AI – they're already trying to signal this through existing frameworks.

He said, "The FCA's Consumer Duty expects firms to evidence customer outcomes – not just processes. Where AI influences those outcomes, firms need enough explainability, governance and monitoring to understand, challenge and evidence its impact."

Nzsdejan gave the example of the PRA's model risk management principles, which reinforce that banks need robust controls over models, including documentation, validation and oversight. "And under the EU AI Act, certain AI use cases in financial services – such as creditworthiness assessments and life/health insurance risk assessment or pricing – are explicitly treated as high-risk. None of these are targeted specifically at compliance AI."

Nzsdejan finished on this point by making clear that these frameworks are setting the underlying expectation that firms can show how their AI works, when it works, and what happens when it doesn't.

"The signal is consistent across jurisdictions: accountability follows the decision, not the technology," he said.

Supradeep Appikonda, COO and co-founder of [4CRisk.ai](https://www.4crisk.ai/), added his view that regulators are beginning to require firms that use AI to prove how they control it, backed up by conformity assessments and mitigation plans.

He gave the example of the FTC and SEC, which want to see a "responsible person" in place to oversee specific automated decisions made by AI. "That person and their team must understand the AI's logic well enough to intervene, override, or shut down the system if it deviates from expected performance," Appikonda said.

Additionally, Ryan Swann, CRO of Risksmart, said that regulators are making one thing deeply clear – accountability doesn't disappear with automation.

He said, "From the FCA to global supervisory bodies, there's a consistent signal that firms must retain clear ownership of outcomes – regardless of how decisions are made. 'Black box' systems are no longer acceptable without traceability, governance, and human oversight."

Allison Lagosh, VP and head of compliance for [Saifr](https://saifr.ai/), made the final point on this topic, stating that regulators are signalling that accountability remains fully human and unchanged regardless of AI use.

She pointed to the 2026 FINRA Annual Regulatory Oversight Report, which sends several key signals. The first is that technology neutrality is not flexibility.

She explained, "FINRA explicitly states that existing rules apply without exception to GenAI. Firms cannot argue novelty, experimentation, or vendor complexity to dilute accountability under Rules 3110 (Supervision), 2210 (Communications), and recordkeeping obligations."

Secondly, supervisory responsibility cannot be delegated to machines. In its report, FINRA detailed that AI may assist decisions, but registered principals remain responsible for outcomes. "This is especially pointed out where AI is used for marketing content, customer communications, AML alerts, and recommendations," said Lagosh.

The third signal is that governance visibility matters as much as outcomes. "Regulators are signaling that they expect firms to show how AI is governed – approval processes, testing evidence, escalation paths, and senior-level reporting – not just that controls exist on paper."

In the view of [Copla](https://copla.com/) CEO and co-founder Aurimas Bakas, the clearest signal is structural: regulators are moving beyond narrative.

He said, "Under DORA's ICT third-party reporting requirements, firms must submit machine-readable, field-level data – register entries, contract classifications, dependency mappings – in place of policy documents. The FCA's incoming register of material arrangements under PS26/2 follows the same logic. When a regulator builds a data intake schema, they're telling you exactly what they intend to audit.

"That's the accountability signal. The firms that read it early are already building the underlying data infrastructure. The rest will be doing remediation under pressure."
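What "machine-readable, field-level data" means in practice is easiest to see in miniature. The sketch below renders a single third-party arrangement as a structured record that a data intake schema could validate field by field. Every field name here is a simplified assumption for illustration, not the actual DORA register-of-information or FCA template.

```python
# Illustrative sketch of a register entry as structured data rather than
# prose policy. Field names are simplified assumptions, not the real
# DORA register-of-information or FCA PS26/2 schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class ThirdPartyArrangement:
    provider: str
    service: str
    contract_ref: str
    criticality: str               # e.g. "critical or important" vs "other"
    supports_functions: list[str]  # dependency mapping back to business functions
    substitutable: bool

entry = ThirdPartyArrangement(
    provider="Example Cloud Ltd",           # hypothetical vendor
    service="managed model hosting",
    contract_ref="CT-2024-0117",            # hypothetical reference
    criticality="critical or important",
    supports_functions=["transaction monitoring"],
    substitutable=False,
)

# A regulator's intake can accept or reject this record field by field -
# which is exactly the audit Bakas says the schema telegraphs.
print(json.dumps(asdict(entry), indent=2))
```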
**What must be proved about AI**

"What must firms prove about AI decisions before something goes wrong?" is fast becoming a central test of modern compliance – and the standard is exacting.

As Nice makes clear, the question is not whether AI is perfect, but whether it is controlled. "Firms need to be able to demonstrate that any AI-driven decisioning, particularly where more autonomous or agentic models are being used, is operating within a clearly defined and controlled framework," he says. Regulators are less concerned with flawless outputs than with evidence that systems are governed, tested, and understood.

That evidence has to be concrete. Firms must be able to show "how models are configured, what data they are trained on, how outputs are validated, and what controls exist to detect and manage errors." In short, the AI lifecycle needs to be transparent and defensible end-to-end.

Critically, this cannot be static. "Governance cannot be static; it needs to be continuous, with ongoing monitoring, testing, and recalibration," Nice adds. Oversight has to evolve alongside the models themselves.

The reason is scale. "A single flaw in logic or data can be multiplied across thousands or millions of decisions." That amplification effect is what regulators are zeroing in on. The implication is that firms must prove, in advance, that they understand the risks of AI at scale and have built safeguards to contain them.

Cardamon CEO Nzsdejan cuts more directly to the gap most firms are still missing. "This is the question most firms are underestimating," he says, warning that AI accountability is too often framed as a future regulatory task rather than a current operational demand. "The instinct is to treat AI accountability as a regulatory problem. That's the wrong lens."

In reality, expectations are straightforward and immediate. Firms must be able to show that they understood what the AI was deciding and why; that accountability sat with a named individual; that decisions can be reconstructed end-to-end – "what data, what logic, what outcome" – and that controls were in place and tested.

The imbalance here is stark for businesses. "Most firms can answer question two; very few can confidently answer one, three, or four." Assigning ownership is easy; proving understanding, traceability, and control is not – and that is exactly where regulators will focus.

Lubansky, meanwhile, detailed that the regulatory bar is quietly shifting from reactive explanation to proactive defensibility.

He commented, "Firms should assume they will be expected to prove, in advance, that decisions are reconstructable, decisions align to firm-defined policy, oversight is active, and the system is built for audit, not just output."

On the first of these, he explained that for any given outcome, firms must be able to show the inputs used, the logic or policy applied, the model or rule version at the time, the outcome produced, and any human intervention or override. "Most firms cannot consistently do this today," he said.
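Lubansky's reconstructability list maps almost directly onto a data structure. The sketch below is one hedged way to capture it – inputs, the policy and model versions in force, the outcome, and any override – with a hash so the stored record itself can withstand examination. The field names and fingerprint scheme are illustrative choices, not a prescribed format.

```python
# Minimal sketch of a reconstructable decision record, following the
# elements Lubansky lists. Field names are illustrative, not prescribed.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class DecisionRecord:
    decision_id: str
    inputs: dict[str, Any]                # the data the system actually saw
    policy_version: str                   # firm policy as translated into logic
    model_version: str                    # model/rule version at decision time
    outcome: str
    human_override: Optional[str] = None  # who intervened, and to what effect
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Tamper-evident hash, so the record can be shown unaltered later."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()
```

The point is not these particular fields but that every element Lubansky names is written down at decision time, not reconstructed afterwards.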
On aligned decisions, Lubansky explained that regulators are not evaluating AI in isolation; they are evaluating whether the system reflects the firm's stated risk appetite, whether policies are correctly translated into system logic, and whether decisions are consistent with regulatory obligations. If a system produces a "correct" outcome for the wrong reason, it is still a governance failure, he said.

Active oversight, Lubansky went on, is demonstrated through ongoing monitoring and testing, defined escalation paths, evidence of challenge and override, and clear ownership of system performance.

Finally, on the system being built for audit, Lubansky explained this is where many firms fall short. "It is not enough that a system produces good outcomes. It must produce: traceable decisions, consistent documentation, and audit-ready evidence."

Bakas has two parts to his response here. First, that a human reviewed and approved the output – with genuine oversight rather than passive sign-off. Second, that the logic behind the decision is traceable.

He said, "If an AI system generates a risk profile or a policy document, the firm needs to show which regulatory framework it was mapped to, what inputs shaped it, and when it was last reviewed. In any defensible compliance setup, the audit trail has to be native to how the decision was made."

Swann added, "The expectation is shifting from reactive justification to proactive assurance. Firms need to demonstrate that AI models are explainable, tested, and governed before they're deployed – not after an incident. That means clear audit trails, documented decision logic, and evidence that risks have been anticipated, not just managed post-event."

Appikonda, for his part, said, "Regulators will ask for a firm's code, and pre-incident paper trails, when reviewing incidents. Firms must show that training and validation data was representative, high-quality, screened for bias, and tested with edge use cases and potential attacks.

"In addition, firms need to show that they are performing real-time logging of system performance and provide mandatory reporting of serious incidents or malfunctions within strict windows, often within hours."
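Those two obligations – continuous performance logging and escalation within a strict window – pair naturally. The sketch below is a hedged illustration of that pairing; the four-hour window and the severity rule are assumptions made for the example, since actual deadlines vary by regime.

```python
# Sketch pairing real-time logging with a hard reporting clock.
# The four-hour window is an illustrative assumption; real regimes differ.
import logging
from datetime import datetime, timedelta, timezone

log = logging.getLogger("ai_system")
REPORTING_WINDOW = timedelta(hours=4)   # assumption, not a statutory figure

def record_incident(severity: str, description: str) -> datetime:
    """Log a malfunction and return the deadline for notifying the regulator."""
    detected = datetime.now(timezone.utc)
    log.error("incident severity=%s detected=%s: %s",
              severity, detected.isoformat(), description)
    deadline = detected + REPORTING_WINDOW
    if severity == "serious":
        # In production this would open a compliance case and start the
        # mandatory-notification clock, not just write a log line.
        log.critical("serious incident - notify regulator by %s",
                     deadline.isoformat())
    return deadline
```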
Lagosh pushes the standard even further upstream. The requirement is not to explain AI after the fact, but to evidence control before anything breaks. "Firms must be able to prove explainability, supervision, and control in advance – not after harm occurs."

That proof rests on a few non-negotiables. First, traceability: firms need to be able to reconstruct decisions end-to-end – "prompts, outputs, model versions, data sources, and human reviews" – with evidence strong enough to withstand examination. Second, testing: not just at deployment, but continuously, with clear records covering accuracy, bias, drift, stress scenarios, and unintended consequences, particularly in high-risk areas.

Human oversight is equally explicit. "Qualified humans must review and approve high-impact outputs," with the authority – and competence – to override the system where needed. And none of this works without clear ownership. Supervisory responsibility has to be defined across the organisation, from business lines to the enterprise level, including third-party exposure.
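Read as a control, Lagosh's oversight requirement is a gate rather than a checkbox. A minimal sketch, assuming an impact flag and a reviewer model of our own invention:

```python
# Sketch of an approval gate for high-impact outputs: a named reviewer
# with genuine override authority must sign off before release.
# The impact flag and Reviewer model are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reviewer:
    name: str
    can_override: bool   # authority, not mere presence, per Lagosh's point

def release(output: str, impact: str, reviewer: Optional[Reviewer]) -> str:
    if impact != "high":
        return output    # low-impact paths may auto-release under policy
    if reviewer is None:
        raise PermissionError("high-impact output requires a qualified reviewer")
    if not reviewer.can_override:
        raise PermissionError(f"{reviewer.name} lacks override authority")
    # The approval itself belongs in the decision record sketched earlier.
    return output
```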
The conclusion is blunt: it is not enough to say nothing has gone wrong. Firms must be able to show that AI decisions were designed, tested, reviewed, and governed responsibly from the outset.

**Is waiting for clarity a good idea?**

Is waiting for clarity a defensible strategy or a risk? On this question, Lubansky stated that waiting for regulatory clarity is increasingly a risk, for several reasons.

He said, "While detailed rules are still evolving, regulatory direction is clear, and the underlying expectations remain the same. It is an extension of existing supervisory principles into automated environments."

Lubansky emphasised that automation is outpacing governance, and without corresponding investment in governance and auditability, there's a widening exposure gap. Many firms have already deployed automation, scaled AI-assisted workflows, and reduced human touchpoints.

"Enforcement will be backward-looking. Regulators will not assess firms based on what guidance existed at the time; they will assess whether the firm can explain and defend its decisions, whether appropriate controls were in place, and whether risks were reasonably foreseeable. The risk is not that firms move too quickly with AI – it is that they move quickly without building the accountability infrastructure required to defend it," said Lubansky.

In his answer to the same question, Nice is unequivocal: waiting is becoming harder to defend by the day.

"Waiting for regulatory clarity is increasingly difficult to justify," he says, particularly for firms already deploying AI in live environments. The direction of travel is clear – even where formal guidance is still evolving, regulators are signalling that responsibility does not wait. "The responsibility sits with the firm to ensure control, regardless of whether detailed guidance exists."

That shifts waiting from caution to exposure. "A firm that chooses to wait is effectively accepting unmanaged risk," both operationally and from a regulatory standpoint. The bar is not perfection, but proof – evidence that risks are understood and actively managed.

Swann added that waiting for regulatory clarity might feel safe, but in practice it creates exposure. "The direction of travel is already visible: more scrutiny, more accountability, and higher expectations around transparency. Firms that delay action risk falling behind – not just in compliance, but in operational resilience."

Appikonda was even more blunt on this topic, stating that inaction here is a "form of negligent oversight". He said that firms are expected to be "compliant by design" and cannot shift the blame to the AI software vendor.

He explained, "Under the EU AI Act and various U.S. state laws, the burden of proof has shifted to deployers using AI, who are now responsible for the failures of the provider companies that built the AI model. Another new concept shows how firms can be proactive: regulators are requiring pre-use notices. As an example, if an AI is going to price a product dynamically or screen a resume, the consumer must be told before the interaction and be able to 'opt out'."

Lagosh is more direct: waiting is not caution – it is exposure. "Waiting for clarity is a risk – and regulators are signaling little patience for delay."

The reason is simple: the baseline already exists. "The message is not 'wait for AI-specific rules,' but rather 'apply existing rules now.'" Firms that hold back risk being seen not as prudent, but as knowingly under-supervised.

Supervision, in this context, is judged on effort and structure, not perfection. Regulators expect to see tangible progress – clear inventories of AI use, updated policies, and evidence of training and oversight. Inaction is harder to defend than an imperfect but active framework.

And once something goes wrong, the window closes quickly. "Post-incident explanations are weaker than pre-incident controls." With risks like bias, hallucinations, and data exposure already well understood, firms will be pressed on why safeguards were not in place earlier.

The conclusion is unambiguous: waiting is not neutral; it compounds regulatory, reputational, and enforcement risk. If firms intend to rely on AI, they are expected to be governing it now, not later.

Nzsdejan was also clear on the topic, stressing it is a risk, not a strategy. He said, "The firms waiting for regulators to publish explicit AI rules before building accountability frameworks are making a costly assumption: that current frameworks don't apply. They do.

"You don't need an AI-specific regulation to be expected to explain a decision your AI made. Consumer Duty applies. Model risk guidance applies. Senior manager accountability applies. The enforcement risk is a firm that cannot explain a decision and has no mechanism to reconstruct it when a regulator asks."

Bakas remarked, "It's a risk that firms are mispricing. The argument for waiting – 'the regulation is still settling' – collapses the moment something goes wrong, because regulators will ask what was in place at the time. DORA's RTS on ICT third-party reporting came into effect on March 31.

"Firms that filed incomplete or erroneous submissions are already receiving correction requests. The firms in the best position right now are the ones who treated the first filing as a capability-building exercise and built something they can iterate on."
**Remaining accountable**

In the opinion of Tim Khamzin, founder and CEO of [Vivox AI](https://www.vivox.ai/), regulators aren't asking whether you use AI anymore – they're asking whether you remain accountable for it.

He remarked, "If a firm cannot clearly explain why a decision was made, who validated it, and how it can be challenged, that is a control failure, not a technology issue. What's changing is the burden of proof.

"Before anything goes wrong, firms need to demonstrate that decisions are traceable, that governance is embedded, and that human oversight is real rather than symbolic."

As Khamzin remarked, that's becoming the baseline, not best practice. "Waiting for regulatory clarity is the wrong strategy. By the time it arrives, expectations will already have moved. The firms that are getting ahead are treating AI decisions like regulated decisions today, with evidence, auditability and clear ownership from day one."

Lubansky concluded the discussion with a simple point: "Regulators are unlikely to resist AI adoption in compliance. The real point of scrutiny will be whether firms can stand behind the decisions these systems produce, with clear evidence, structured governance, and demonstrable control."

[Read the daily RegTech news](https://regtechanalyst.com/)