Digital communications governance: AI in action

Artificial intelligence has moved well beyond buzzword status in financial services. It is now embedded in the day-to-day processes firms use to manage, monitor and make sense of the enormous volumes of digital communications their employees generate.

At the centre of this shift is Digital Communications Governance and Archiving (DCGA) — a discipline that, according to Theta Lake, is rapidly evolving from manual oversight into an AI-driven compliance function.

The scale of the opportunity is significant. Gartner predicts that by 2030, 70% of enterprises using DCGA solutions will have adopted AI-driven features and processes, up from 40% in 2025, as data complexity and governance demands continue to intensify.

According to Gartner Peer Insights, organisations use DCGA tools to proactively manage, monitor, collect and archive communications content — functions that are now critical to meeting a growing number of regulatory compliance mandates and expanding governance requirements.

The challenge facing compliance teams has never been more acute. From Zoom transcripts and Microsoft Teams chats to digital whiteboards, AI-generated summaries and AI-assisted responses — often referred to as aiComms — financial services firms are contending with an almost unmanageable communications landscape. Reviewers struggle to keep pace with the sheer volume of content, frequently drowning in false positives while missing crucial contextual signals.

AI is increasingly the answer to that problem. Notably, 94% of financial services firms report that they are already using, or planning to use, AI-based detection capabilities. The Financial Industry Regulatory Authority (FINRA) has also highlighted AI’s capacity to capture and surveil large volumes of structured and unstructured data — spanning text, speech, voice, image and video — to identify patterns and anomalies that allow firms to monitor conduct in a more risk-based and efficient manner.

So how are firms actually putting AI to work in DCGA? The following six use cases illustrate where real-world adoption is taking hold.

Summarising communications content

DCGA platforms are using AI to analyse diverse communications across multiple modalities and languages — including video, audio, chat and AI interactions. Theta Lake, for instance, generates summaries that capture key themes and participants from these interactions. With 82% of firms now using at least four communications and collaboration tools — from Zoom, Slack and Microsoft Teams to Asana, Monday.com and Mural — the ability to condense large volumes of text and visual information is proving especially valuable for supervision teams. It reduces the time and manual effort required to extract essential information, helping compliance reviewers prepare for regulatory inquiries and check data before it is passed to outside counsel.
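
To make the idea concrete, here is a deliberately simplified sketch of extractive summarisation — scoring sentences by word frequency and keeping the highest-scoring one. The function names, scoring and sample transcript are illustrative only; production DCGA platforms such as Theta Lake use far richer multimodal models.

```python
import re
from collections import Counter

def extractive_summary(text: str, n: int = 1) -> str:
    # Score each sentence by the frequency of its words across the
    # whole transcript and keep the top-scoring sentence(s): a crude
    # stand-in for model-driven summarisation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return " ".join(scored[:n])

transcript = ("The pricing review covered the new fee schedule. "
              "Bob asked about the fee schedule rollout date. "
              "Lunch options were also discussed briefly.")
print(extractive_summary(transcript))
```

Even this naive approach surfaces the thematically central sentence and drops the small talk — the same compression, at much higher fidelity, that supervision teams rely on when preparing for regulatory inquiries.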

Summarising communications over time

AI can also be used to reconstruct and condense fragmented communications into coherent, digestible summaries, allowing compliance reviewers to understand and act on data far more quickly. Theta Lake automatically reconstructs entire conversation histories — including weeks- or months-long threads spanning multiple platforms — into concise snapshots. This capability enables compliance teams to quickly grasp the essence of lengthy exchanges, pinpoint risks with greater ease and work more efficiently across the entire supervision workflow.
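
The core of thread reconstruction is merging fragments from every platform into one chronological view. The sketch below shows that step only, with hypothetical message data; it is not Theta Lake's implementation, which also handles identity resolution and summarisation on top.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    platform: str
    sender: str
    sent_at: datetime
    text: str

def reconstruct_thread(messages):
    # Merge fragments from all platforms into one chronological
    # transcript, as a compliance reviewer would read it.
    ordered = sorted(messages, key=lambda m: m.sent_at)
    return [f"[{m.sent_at:%Y-%m-%d %H:%M}] {m.platform}/{m.sender}: {m.text}"
            for m in ordered]

fragments = [
    Message("email", "alice", datetime(2025, 3, 2, 9, 15), "Following up on the figures."),
    Message("chat",  "bob",   datetime(2025, 3, 1, 16, 40), "Can we take this to email?"),
    Message("sms",   "alice", datetime(2025, 3, 3, 8, 5),  "Call me before the close."),
]
for line in reconstruct_thread(fragments):
    print(line)
```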

Detecting risks across communications channels

AI is being leveraged to detect risks across voice, video, chat, email and AI interactions, with the ability to interpret contextual signals such as images, GIFs and emoji reactions. Theta Lake applies machine learning (ML) and natural language processing (NLP) to identify compliance, privacy and security risks in what is spoken, shown or shared. This includes detecting when a confidential document appears in a screen share, whether an AI notetaker is present in a meeting, or whether a required disclaimer was given. Critically, because this multi-layered AI understands full context rather than simply matching keywords, it remains resilient to misspellings, transcription errors and poor-quality optical character recognition (OCR) — limitations that continue to trip up traditional lexicon-based tools. AI-driven behavioural analytics can also go further, constructing a narrative around clusters of material non-public information (MNPI) triggers, anomalous communication patterns and unexpected participant networks to fill the contextual gaps that keyword matching cannot reach.
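
The resilience gap between keyword matching and context-aware detection can be illustrated with a minimal sketch: an exact-match lexicon misses an obfuscated spelling that a similarity-based matcher still catches. The lexicon and threshold below are invented for illustration; real platforms use trained models rather than string similarity.

```python
import difflib

LEXICON = {"confidential", "insider", "guarantee"}

def lexicon_match(text: str) -> bool:
    # Naive keyword matching: exact tokens only, so obfuscated
    # spellings such as "c0nfidential" slip through.
    return any(word in LEXICON for word in text.lower().split())

def fuzzy_match(text: str, cutoff: float = 0.8) -> bool:
    # Tolerant matching: compare each token against the lexicon by
    # similarity ratio, so transcription errors and OCR noise still
    # trigger a review.
    return any(
        difflib.get_close_matches(token, LEXICON, n=1, cutoff=cutoff)
        for token in text.lower().split()
    )

msg = "keep this c0nfidential until the deal closes"
print(lexicon_match(msg))  # False: exact lexicon misses the obfuscation
print(fuzzy_match(msg))    # True: similarity matching flags it
```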

Governing the use of AI tools

As firms roll out productivity tools powered by generative AI — such as Microsoft Copilot and Zoom AI Companion — the need to govern those tools themselves has become a compliance priority. The prompts employees enter, and the responses generated, may contain sensitive customer data, employee information, intellectual property or other confidential material. In some cases, prompts may even represent attempts to jailbreak or circumvent firm or third-party technical controls. Theta Lake’s forensic-level inspection of AI interactions allows organisations to identify sensitive data exposure, monitor for missing disclosures and detect risky user behaviour — enabling firms to support AI-driven productivity without compromising security or disrupting workflows.
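
A first line of defence for prompt governance can be sketched as pattern scanning over outbound prompts. The two patterns below (a US SSN shape and a bare account-number shape) are illustrative assumptions only; forensic-level inspection of the kind described above goes well beyond regular expressions.

```python
import re

# Illustrative patterns only; a production system would use far
# richer detectors for customer data, MNPI and credentials.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,12}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    # Inspect a generative-AI prompt before it leaves the firm and
    # report which sensitive-data patterns it matches.
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

print(scan_prompt("Summarise the account 1234567890 dispute, SSN 123-45-6789"))
```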

Pinpointing where risks occur

Modern workplaces generate communications that span multiple platforms in quick succession — a conversation may begin in a chat thread, move to email, shift to a mobile message and end on social media. Capturing and displaying these fragmented threads in a single unified view gives compliance teams a far clearer picture of cross-platform interactions. Theta Lake’s visual interface uses AI to highlight the precise moments where a risk has been flagged, directing reviewers to the exact point in a meeting, chat or call where a sensitive topic arose. This “single pane of glass” approach allows a human reviewer to focus on what matters and make the final determination on any action required.

Explaining AI decisions

Transparency and defensibility are essential when AI models are used to flag compliance risks. AI explainability features are now being built directly into DCGA platforms to provide plain-language rationales for why a particular communication was identified as risky. Theta Lake’s detection annotation feature does precisely this — if a conversation triggers a collusion detection, the system surfaces specific evidence, such as phrases like “keep this between you and me” or an emoji used to obscure intent. The audit-ready summary produced at the point of detection allows human reviewers to verify the alert and, crucially, explain and defend the decision to regulators.
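
The shape of such an annotation can be sketched in a few lines: a detection that returns not just a flag but the exact evidence and a plain-language rationale. The cue list and output fields are invented for illustration, not Theta Lake's actual detection annotation format.

```python
COLLUSION_CUES = ["keep this between you and me", "off the record", "🤫"]

def annotate(text: str):
    # Surface the exact evidence behind a flag so a human reviewer
    # can verify the alert and defend the decision to a regulator.
    evidence = [cue for cue in COLLUSION_CUES if cue in text.lower()]
    if not evidence:
        return None
    return {
        "risk": "potential collusion",
        "evidence": evidence,
        "rationale": "Flagged because the message contains "
                     + "; ".join(repr(e) for e in evidence),
    }

alert = annotate("Keep this between you and me until Friday 🤫")
print(alert["rationale"])
```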

Taken together, these use cases demonstrate that AI in DCGA has moved firmly from theoretical promise into practical, everyday deployment. Firms are moving beyond unsustainable manual review processes and using AI-powered supervision to maintain complete oversight of both human and AI-generated communications — without sacrificing the productivity that modern collaboration tools deliver.

Copyright © 2026 FinTech Global
