AI is no longer a passive productivity tool sitting quietly in the background of the workplace. Across financial services and other regulated industries, AI assistants, meeting summarisation tools and increasingly autonomous agents are becoming embedded in everyday communications.
Adoption is accelerating at pace, but governance frameworks are struggling to keep up, leaving organisations exposed to regulatory, data security and conduct risks, according to Theta Lake.
The scale of this challenge is evident in research from Theta Lake’s 7th Annual Digital Communications Governance Report, which surveyed 500 IT and compliance leaders. The findings show that 99% of financial services firms intend to expand their use of AI, yet 88% are already encountering governance and data security challenges. This growing gap between adoption and oversight is quickly becoming a barrier to confident AI deployment, making early investment in AI governance a strategic necessity rather than a compliance afterthought.
One of the most significant shifts expected in 2026 is the treatment of AI as an active participant in workplace communications. AI-generated interactions are no longer limited to internal drafts or experimental use cases. AI tools are now composing client emails, responding to prompts with regulated information, and summarising meetings that form part of official records. These interactions are conversational and iterative, meaning single prompt-and-response snapshots fail to capture the full context required for meaningful supervision. As a result, AI-generated communications linked to regulated activity must be captured, supervised and archived in the same way as human communications, with controls applied at the point of creation.
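To make capture at the point of creation concrete, here is a minimal sketch of what such a control might look like, assuming a generic chat-completion client. Every name in it (llm_client, ArchiveStore, supervised_completion) is illustrative rather than a reference to any real product, and a production archive would need the immutability, retention and access controls this toy version omits.

```python
# Illustrative sketch only: capturing AI-generated communications at the
# point of creation. All names here are hypothetical.
import json
import uuid
from datetime import datetime, timezone


class ArchiveStore:
    """Stand-in for a compliant archive (append-only JSON lines file)."""

    def __init__(self, path: str):
        self.path = path

    def write(self, record: dict) -> None:
        # One JSON record per line; a real archive would also enforce
        # immutability (WORM storage) and retention policies.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")


def supervised_completion(llm_client, archive: ArchiveStore,
                          user_id: str, prompt: str) -> str:
    """Request a completion and archive the full prompt/response pair,
    with metadata, before the output is handed back to the user."""
    response = llm_client.complete(prompt)  # hypothetical client interface
    archive.write({
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "channel": "ai_assistant",
        "prompt": prompt,
        "response": response,
    })
    return response
```

The key design point is ordering: the record is written before the response is returned, so no AI-generated output can reach a client or an official record without first passing through the archive.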
Governance expectations are also expanding beyond AI outputs to include human-to-AI and AI-to-AI behaviours. As employees interact more frequently with AI systems, new forms of risk are emerging, including attempts to bypass safeguards through techniques such as “jailbreaking”. Even well-designed guardrails cannot fully prevent the accidental exposure of personally identifiable information (PII), material non-public information (MNPI) or confidential internal data. Effective governance therefore depends on visibility into prompts, behaviours and outputs, enabling organisations to detect unsafe content, identify misuse and monitor unsanctioned AI tools before risks escalate.
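As a loose illustration of prompt-level visibility, the sketch below screens prompts for simple PII patterns before they reach a model. The pattern set is a deliberately crude placeholder, and the function names are assumptions; real deployments would combine far richer detectors with behavioural monitoring.

```python
# Illustrative sketch only: flagging possible PII in prompts before they
# reach an AI system. The patterns below are simplistic placeholders.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),     # National Insurance no.
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card match
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]


def guarded_submit(prompt: str, submit_fn, alert_fn) -> str | None:
    """Block the prompt and raise an alert if PII is detected;
    otherwise forward it to the model via submit_fn."""
    hits = scan_prompt(prompt)
    if hits:
        alert_fn(f"Prompt blocked: possible {', '.join(hits)} detected")
        return None
    return submit_fn(prompt)
```

The same scan could equally run over model outputs and archived records, which is how visibility extends from prompts to downstream content.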
At the same time, organisations are becoming increasingly sceptical of vendor assurances around “responsible AI”. As AI-washing proliferates, firms are demanding independent, verifiable evidence of governance maturity. This is driving growing interest in ISO/IEC 42001 certification, one of the first international standards for AI management systems. The standard offers a certifiable, auditable framework that aligns closely with emerging regulation, including the EU AI Act, and is expected to become a baseline requirement for clients, partners, boards and regulators by 2026.
Regulatory scrutiny of AI-generated communications is also intensifying. Regulators have made it clear that accountability does not change simply because AI is involved. FINRA’s 2026 Annual Regulatory Oversight Report introduces a dedicated section on generative AI, reinforcing that firms remain responsible for all regulated communications, regardless of whether they are produced by humans or machines. In the UK, the FCA has echoed this position, signalling that existing regulatory frameworks already apply to AI-enabled activities. As a result, firms will increasingly be expected to demonstrate robust capture, supervision and control of AI-generated content during examinations.
Finally, the rapid spread of AI across multiple collaboration platforms is forcing organisations to rethink governance strategies. With most firms using four or more unified communications and collaboration (UCC) platforms, and AI embedded directly into tools such as Microsoft Teams, Zoom and Webex, fragmented governance is becoming a material risk. Effective oversight in 2026 will require a unified, cross-platform approach that applies consistent controls to all AI-generated communications, regardless of where they originate or how they interact.
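One way to read a “unified, cross-platform approach” in practice is a normalisation layer that maps each platform’s messages into a single supervision record, so one set of policy checks covers everything. The sketch below is hypothetical: the field names and payload shapes are assumptions, not the real Teams or Zoom schemas.

```python
# Illustrative sketch only: normalising AI-generated messages from several
# UCC platforms into one record format so a single set of controls applies.
from dataclasses import dataclass


@dataclass
class SupervisionRecord:
    platform: str         # e.g. "teams", "zoom", "webex"
    author: str           # human user or AI assistant identifier
    is_ai_generated: bool
    text: str


def from_teams(payload: dict) -> SupervisionRecord:
    # Field names are assumed for illustration, not the real Teams schema.
    return SupervisionRecord("teams", payload["from"],
                             payload.get("bot", False), payload["body"])


def from_zoom(payload: dict) -> SupervisionRecord:
    # Field names are assumed for illustration, not the real Zoom schema.
    return SupervisionRecord("zoom", payload["sender"],
                             payload.get("ai_companion", False),
                             payload["message"])


def supervise(record: SupervisionRecord, policy_checks) -> list[str]:
    """Apply the same policy checks to every record, whatever its origin."""
    return [issue for check in policy_checks
            for issue in check(record)]
```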
Looking ahead, organisations that invest in governance frameworks designed specifically for AI communications will be best positioned to unlock AI’s value with confidence. Without visibility into prompts, behaviours and downstream outputs, risks will remain hidden. Those that close this gap will not only meet regulatory expectations, but also enable safer, more scalable AI adoption across the digital workplace.