How data can help navigate a shifting rulebook


Regulations are no longer a fixed set of rules—they’re a constantly shifting landscape that challenges even the most seasoned compliance teams. Staying ahead isn’t just about interpreting the rulebook; it’s about seeing what’s coming next. Data is proving to be the tool that brings clarity to this complexity, uncovering patterns and insights that were once hidden. From advanced analytics to intelligent automation, institutions are learning to navigate change with confidence.

The financial world is rapidly evolving as new technologies reach the global stage. For ALMIS CEO Luke DiRollo, financial regulation is no exception – something he says is ‘putting it lightly’.

He explained, “It really kicked off with the major failures across the financial sector that triggered systemic consequences for individuals and businesses everywhere. Leaving aside the rather heated debates about how governments propped up failing institutions (“too big to fail” and all that), what we’ve been left with is simple: global financial markets are now buried under mountains of regulation.”

DiRollo cited the UK banking sector, which reportedly spends £33.9bn every year on regulatory compliance. “That’s 13% of all operating costs – a staggering share of overhead devoted solely to understanding, implementing, and maintaining compliance,” he said. “Since 2020, 84% of firms say these costs have risen – in many cases “substantially.” As an industry, we should be worried. This trajectory isn’t sustainable. And the cost will always, eventually, hit consumers.”

Every piece of legislation brings its own flavours of scrutiny, said DiRollo. He added that some of us chase scientific precision; others prioritise speed and efficiency. “We’ll keep debating the detail forever, but we also need to step back and consider the foundational framework – the mechanism that underpins how banks respond to regulation in the first place,” he said.

DiRollo remarked that regulatory needs change fast, business models evolve and new products appear overnight – and regulators must respond whilst also juggling social, economic, political and technological pressures.

He said, “They’re struggling to keep up as it is. We’ve barely touched the surface when it comes to understanding the impact of deposit aggregators or “buy now, pay later” products, both of which clearly shift risk profiles. And here’s a favourite example: global liquidity and funding rules still don’t really consider the speed at which outflows occur in an age of instant online banking.”

Despite all this, he added, he believes we all know what the ‘holy grail’ looks like – granular data.

“But what actually is granular data?” said DiRollo. “How does it truly move the needle? And why, after talking about it for over a decade, does it still feel like a distant utopia? Here’s the candid truth: granular data means the death of “RegTech” as we know it. We can’t keep treating regulatory reporting as a separate, siloed discipline living somewhere off to the side of the “real” business.”

DiRollo continued, “Granular data means collecting, standardising and democratising information so it can serve every part of the organisation. Business decisions in uncertainty should flow from the same data. Regulatory reports should be produced from the same data. And the only thing stopping us is alignment on a standard, normalised, extensible data model that works across the business. And ideally across the sector. When you boil a bank down to its essence, this is entirely achievable.”

ALMIS, the CEO noted, has been empowering banks with actionable insight for over 35 years. “The core needs of CFOs haven’t fundamentally changed in that time. Where new regulations introduced alternative views or new aggregations, we simply extended the data model, adding customer or product attributes where necessary. This approach has massively reduced the data-collection burden,” he detailed.

Once the data model is consistent, reporting logic becomes distributable, he claims. “Whether by us, by a community, or ideally by the regulator. It saves users huge amounts of time and effort. And because it’s a model built by a community, it continues to pay dividends.”
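The idea of one extensible data model feeding both management insight and regulatory returns can be sketched in a few lines. This is a hypothetical illustration, not ALMIS’s actual model: the record type, the `behavioural_bucket` attribute, and the `aggregate` helper are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    """One granular record: a single holding with extensible attributes."""
    product: str
    balance: float
    attributes: dict = field(default_factory=dict)  # extended as new rules arrive

def aggregate(positions, key):
    """Management reports and regulatory returns read the same granular
    records, differing only in how they group them."""
    totals = {}
    for p in positions:
        bucket = p.attributes.get(key, "unclassified")
        totals[bucket] = totals.get(bucket, 0.0) + p.balance
    return totals

book = [
    Position("retail_deposit", 500.0, {"behavioural_bucket": "stable"}),
    Position("retail_deposit", 200.0, {"behavioural_bucket": "less_stable"}),
    Position("corporate_loan", 300.0, {"behavioural_bucket": "stable"}),
]

# A new regulatory view only needs a new attribute, not new data collection.
print(aggregate(book, "behavioural_bucket"))
# {'stable': 800.0, 'less_stable': 200.0}
```

Because every report is an aggregation over the same records, the reporting logic itself becomes a shareable artefact, which is the distribution point DiRollo describes.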

A key point for DiRollo is his belief that regulators and practitioners both want granular data.

“We’re all aligned,” he said. “The only disagreement is on which data points matter. Risk-weighted assets, behavioural outflows, repricing cashflows – these are not “regulatory outputs.” They’re fundamental data items that every bank should already hold if it wants to manage its risks properly.

“If we invest in that shared, extensible data model, everyone benefits. Executives get better insight. Regulators get cleaner data and can adjust more quickly. The sector gets a future-proof foundation. And maybe, just maybe, we can stop adding more weight to that ever-growing regulatory mountain,” DiRollo concluded.

Changing times

Areg Nzsdejan, CEO and co-founder of Cardamon, believes firms are increasingly treating regulation as data rather than documents.

Instead of relying on manual PDF reviews, firms are now using automated pipelines to continuously ingest updates from hundreds of regulatory sources. “We structure the text into machine-readable components and apply NLP models to classify themes, detect impact, and highlight changes against previous versions,” Nzsdejan explains.

These insights are then routed in real time to the right owners across product, compliance, legal, and engineering, with each update mapped directly to the affected policies, controls, and business areas. The result is a transformation from slow, reactive horizon scanning to a fast, data-driven model where firms can interpret regulatory change instantly, act on it sooner, and maintain a clear audit trail without operational drag. Nzsdejan notes that “this is an enormous amount of effort, which is being enabled by companies like ourselves at Cardamon.”
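The classify-and-route step Nzsdejan describes can be illustrated with a toy pipeline. In production this would be a trained NLP model; here simple keyword rules stand in for it, and the theme names, keywords, and owner mapping are all hypothetical.

```python
# Toy stand-in for an NLP classification step: tag each regulatory update
# with themes, then route it to the owning teams. Keyword matching
# substitutes for a trained model in this sketch.
THEMES = {
    "liquidity": ["outflow", "liquidity", "funding"],
    "conduct":   ["consumer", "fairness", "disclosure"],
    "capital":   ["risk-weighted", "capital", "buffer"],
}
OWNERS = {"liquidity": "treasury", "conduct": "compliance", "capital": "finance"}

def classify(update_text: str) -> list:
    """Return every theme whose keywords appear in the update."""
    text = update_text.lower()
    return [theme for theme, kws in THEMES.items() if any(k in text for k in kws)]

def route(update_text: str) -> list:
    """Map detected themes to the teams that own them."""
    return sorted({OWNERS[t] for t in classify(update_text)})

print(route("New guidance on deposit outflow assumptions and capital buffers"))
# ['finance', 'treasury']
```

The real value described in the article is what sits around this step: mapping each routed update to the affected policies and controls so the audit trail is preserved end to end.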

On whether machine learning can truly capture the nuance and intent behind evolving regulations, Nzsdejan says: “Machine learning can interpret regulation far better than most people expect, but it needs deep context to do so.” Modern models can recognise definitions, carve-outs, thematic patterns, and even subtle shifts in supervisory expectations by analysing how language changes over time.

They can compare obligations, infer likely impacts, and flag contradictions faster and more consistently than any manual review. “But intent—the political, economic, and supervisory rationale behind a rule—often sits between the lines,” he adds. ML can surface signals and probabilities, yet humans are still needed to interpret proportionality, fairness, risk appetite, and the real expectations of regulators. In practice, the best outcomes come from “pairing machine-driven analysis with human expertise overlaid.”
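Detecting “subtle shifts in supervisory expectations by analysing how language changes over time” starts with version comparison. A minimal sketch using Python’s standard `difflib` (the rule text is invented for illustration):

```python
import difflib

# Two hypothetical versions of the same rule text, one line per provision.
old = ["Firms must report liquidity positions monthly.",
       "Outflow rates are set at 5 percent."]
new = ["Firms must report liquidity positions weekly.",
       "Outflow rates are set at 5 percent.",
       "Instant payment channels must be monitored in real time."]

# Keep only added/removed provisions, dropping the diff's file headers.
changes = [line for line in difflib.unified_diff(old, new, lineterm="")
           if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
print(changes)
```

An ML layer would then classify each surfaced change (tightened frequency, new obligation, unchanged provision) rather than leave the reading to a human scanning full documents.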

When asked about the biggest barriers preventing organisations from turning regulatory text into actionable intelligence, Nzsdejan points out that they are structural rather than technological. Regulations are “dense, cross-referenced, inconsistent across jurisdictions, and often updated in fragmented ways,” making them extremely hard to convert into clean, machine-readable data. Within firms, knowledge is dispersed across legal, compliance, product, engineering, and operations, with no single owner responsible for translating rules into concrete requirements.

Even when teams understand the regulation, they often struggle to connect it to specific products, controls, customer segments, or processes. Without a consistent taxonomy or operating model, insights remain trapped in PDFs, summaries, or inboxes rather than flowing into workflows. “Most organisations don’t lack regulatory information—they lack the structure, alignment, and governance to turn it into operational action,” he explains. One of Cardamon’s key value propositions, Nzsdejan says, is “converting legalese into actionable intelligence using our proprietary technology.”

Looking ahead, will regulators themselves start to rely on similar technology to oversee compliance more dynamically? Nzsdejan observes that the shift is already underway. Supervisory bodies are experimenting with AI-driven thematic reviews, automated document ingestion, market surveillance models, and more structured rulebooks that can be interpreted by technology rather than humans. Some even provide APIs to access these systems. Their goal is clear: “to shift from retrospective, sample-based supervision to real-time, data-driven oversight.”

As financial services become faster and more digital, regulators cannot rely on periodic audits and manual reviews to keep pace. Over time, supervised firms and regulators will increasingly use similar technology stacks—not because it’s fashionable, but because it’s the only scalable way to understand risk and regulatory change at modern market speed. Nzsdejan concludes, “We see a world where we can connect this ecosystem and reduce friction for all parties.”

Leveraging new technologies

Businesses are using AI, automation, NLP and advanced analytics to transform regulatory change management from a manual, reactive process into a proactive, data-driven one, claims Cathy Vasilev, co-founder and CCO at Red Oak.

She said, “This seeks to develop real-time monitoring, predictive insights, automated impact analysis, and integrated workflows to obtain the data needed to fuel the compliance program. Of course, this requires technology in order to meet this need; manual processes are no longer sufficient.”

Is machine learning able to capture the nuance and intent behind evolving regulations? Vasilev believes not, because RegTech is a deeply nuanced industry.

She explained, “AI cannot ingest and understand a rulebook with that nuance in mind. Regulators often use principle-based or intentionally vague language in rules. There are also important limitations when regulations hinge on context, ambiguity, or political/legal interpretation.”

Vasilev added that ML does not understand political context, regulatory mood, or unwritten expectations, such as information in no-action letters. ML can often misapply definitions across jurisdictions unless tightly constrained. “In other words, you need a human in the loop to help get the complete picture,” said Vasilev.

Meanwhile, Rick Grashel, co-founder and CTO at Red Oak, believes that the biggest barrier preventing organisations from taking regulatory text and turning it into actionable intelligence is a lack of complete context.

He explained, “Regulatory text is not enough to be able to take action. Details about each firm—including the type of business, audiences serviced, jurisdictions serviced, types of products, size of firm, information systems involved, etc.—must be considered alongside the regulatory text to begin to form actionable intelligence. And once actionable intelligence is developed, tactical plans of action have to be put into place in order to implement that intelligence.”

He added that this is the ‘entire purpose’ of a firm’s compliance organisation: to create a complete compliance program around the regulations, given the firm’s specific details, risk appetite and needs.

Keeping the human in the loop

According to Baran Ozkan, CEO of Flagright, firms are moving from static policy binders to live rule intelligence.

He said, “Data teams stream official updates, use natural language processing to extract obligations, map them to a common control taxonomy, and push policy‑as‑code updates into monitoring and reporting systems. Dashboards show what changed, which controls are affected, and what evidence must be gathered next. Machine learning helps with the heavy lifting, such as clustering similar obligations across jurisdictions and suggesting control mappings, but it does not replace legal interpretation.”
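The obligation-to-control mapping Ozkan describes can be sketched as a lookup against a shared taxonomy, with the machine suggesting and a human confirming. This is a hypothetical illustration, not Flagright’s implementation; the taxonomy, themes, and citation format are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligation:
    source: str  # citation back to the rulebook, preserved for provenance
    text: str
    theme: str   # assigned upstream by the NLP extraction step

# Hypothetical common control taxonomy shared across jurisdictions:
# each control names the obligation themes it satisfies.
CONTROL_TAXONOMY = {
    "transaction_monitoring": {"aml", "sanctions"},
    "regulatory_reporting":   {"liquidity", "capital"},
}

def suggest_controls(ob: Obligation) -> list:
    """Suggest candidate control mappings. These are suggestions only;
    a human reviewer confirms them before any policy-as-code update ships."""
    return sorted(c for c, themes in CONTROL_TAXONOMY.items()
                  if ob.theme in themes)

ob = Obligation(source="Reg X, Art. 4(1)",
                text="Monitor transactions for sanctions exposure.",
                theme="sanctions")
print(suggest_controls(ob))
# ['transaction_monitoring']
```

Keeping the `source` citation on every obligation is what makes the later dashboard view (“what changed, which controls are affected”) traceable back to the rulebook.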

For Ozkan, the safest pattern is retrieval-augmented generation with citations, explicit confidence thresholds, and mandatory human sign‑off before anything reaches production.
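That gating pattern – citations required, an explicit confidence threshold, and mandatory sign-off – reduces to a simple decision function. A minimal sketch with an assumed threshold value; the statuses and cut-off are illustrative, not Flagright’s:

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off, tuned per deployment

def gate(interpretation: str, citations: list, confidence: float) -> str:
    """Gate a model-generated interpretation before it can affect production.
    Nothing ships without citations, and even high-confidence output still
    requires human sign-off."""
    if not citations:
        return "rejected: no source citation"
    if confidence < CONFIDENCE_THRESHOLD:
        return "queued for human review"
    return "queued for human sign-off"  # sign-off is mandatory either way

print(gate("Report liquidity weekly", ["PRA rulebook, ch. 3"], 0.62))
# queued for human review
```

The design choice worth noting is that the high-confidence path still ends at a human: confidence only decides how much scrutiny the output gets, never whether it bypasses review.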

“The biggest barriers are messy source formats, clashing definitions between regulators, and fragmented internal systems that cannot absorb changes quickly. Supervisors are starting to adopt similar technology for their own oversight, yet they will expect firms to prove provenance for every interpretation,” said Ozkan.

The Flagright CEO concluded by citing the company’s view: it is vital to keep the human as the editor, keep every link from decision to source, and make policy changes deploy like software so an organisation is able to adapt in hours and not quarters.



Copyright © 2025 FinTech Global

