The last-mile problem in AI security explained

Artificial intelligence is fundamentally reshaping how organisations operate. Businesses across every sector are racing to deploy AI tools in pursuit of greater productivity, efficiency and strategic advantage. Yet as this transformation accelerates, a critical new challenge has emerged: governing what happens when humans and intelligent systems communicate.

According to Theta Lake, this new layer of interaction — increasingly referred to as AI communications, or aiComms — sits at the intersection of innovation and risk. And as AI moves from pilot projects to enterprise-wide deployment, most organisations are discovering that their existing security and governance frameworks simply were not built to handle it.

Theta Lake recently highlighted what exactly AI communications are, and why they represent the last mile in AI security.

What exactly are aiComms?

When employees use tools such as Microsoft Copilot, Zoom AI Companion or large language models (LLMs) from providers including Anthropic, OpenAI and Google, they generate an entirely new category of workplace communication. These are aiComms — and they introduce a new kind of participant into the workplace: generative and agentic AI that interacts directly with staff.

Crucially, these interactions extend well beyond internal or temporary exchanges. AI is drafting client-facing emails, surfacing data in response to queries, and summarising meetings. Each of these outputs carries potential compliance, oversight and ethical implications — and many organisations are struggling to keep pace.

The scale of the governance challenge

Recent research covering 500 financial services firms underscores just how acute this challenge has become. Whilst nearly all (99%) of those surveyed are actively deploying AI, 88% report difficulties with AI governance and data security. The volume and complexity of the communications being generated are growing exponentially, and traditional governance frameworks are not equipped to manage them.

The risks fall across three broad categories. On the security and privacy front, 45% of firms say they struggle to detect whether confidential or sensitive data has been exposed within generative AI outputs. Personal data, credit card details or confidential client information can surface in prompts or responses without triggering existing data loss prevention tools.
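To make that gap concrete, below is a minimal Python sketch of pattern-based inspection applied to the text of prompts and responses themselves, rather than to files or network traffic. The patterns and the `scan_ai_message` helper are illustrative assumptions, not any vendor's implementation.

```python
import re

# Illustrative patterns only; a real DLP policy would cover many more
# identifier types (national IDs, account numbers, client names, MNPI terms).
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def luhn_valid(candidate: str) -> bool:
    """Validate a candidate card number with the Luhn checksum to cut false positives."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return len(digits) >= 13 and total % 10 == 0

def scan_ai_message(text: str) -> list[str]:
    """Flag sensitive data in either a user prompt or an AI-generated response."""
    findings = [f"possible card number: {m.group()}"
                for m in CARD_RE.finditer(text) if luhn_valid(m.group())]
    findings += [f"email address: {m}" for m in EMAIL_RE.findall(text)]
    return findings

# The same check applies to what users type in and what the model sends back.
print(scan_ai_message("Summarise the file for jane.doe@example.com, card 4539 1488 0343 6467"))
```

The Luhn check matters because, at the volume aiComms generate, pattern matching alone would drown reviewers in false positives.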

Compliance and recordkeeping present their own difficulties. Some 47% of organisations report challenges ensuring that AI-generated content meets regulatory requirements. Regulatory bodies have been unambiguous on this point: existing rules apply to AI-generated content just as they do to human communications. FINRA’s 2026 Annual Regulatory Oversight Report reinforces that firms remain responsible for their communications regardless of whether a human or a machine produced them. In the UK, the FCA has similarly confirmed that established regulatory frameworks extend to AI-enabled activities.

Behavioural and ethical risks round out the picture. Around 41% of firms are identifying new and concerning user behaviours as staff interact with AI tools. These include jailbreaking — deliberately circumventing AI guardrails to access restricted data or manipulate outputs — as well as subtler forms of misuse such as prompt steering, where employees use iterative queries to access information beyond their authorisation level or probe system boundaries.
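As a rough illustration of what catching such behaviour might involve, the sketch below flags a user who repeatedly touches restricted topics within a short window. The `RESTRICTED_TOPICS` list, window and threshold are hypothetical placeholders; a production system would classify intent rather than match keywords.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical watch-list; a real deployment would classify intent with a
# model rather than match keywords.
RESTRICTED_TOPICS = {"compensation", "salary", "ignore previous instructions"}
WINDOW = timedelta(minutes=30)
THRESHOLD = 3  # repeated probing inside the window triggers an alert

_history: dict[str, deque] = defaultdict(deque)

def flag_prompt_steering(user: str, prompt: str, ts: datetime) -> bool:
    """Flag users who iteratively probe restricted topics in a short window."""
    if not any(topic in prompt.lower() for topic in RESTRICTED_TOPICS):
        return False
    hits = _history[user]
    hits.append(ts)
    while hits and ts - hits[0] > WINDOW:
        hits.popleft()
    return len(hits) >= THRESHOLD
```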

Who is responsible?

As organisations work to define accountability for AI governance, responsibility is often fragmented. IT security teams are focused on attacks, vulnerabilities and data protection, but frequently lack the contextual visibility needed to monitor user behaviour. Compliance teams, meanwhile, are equipped for supervising regulated interactions but may not have the remit or tooling to detect unethical AI use — such as an employee attempting to access confidential files without leaving an audit trail, or quietly querying colleagues’ compensation data through a series of innocuous-looking prompts.

The last-mile problem

Traditional security frameworks are designed to protect systems, networks and data. But the last mile — the point at which humans and AI actually interact — is where intent, context and compliance converge, and where current governance strategies most commonly fall short.

This is where sensitive data can leak through prompts or AI-generated summaries. It is where AI systems can be manipulated into revealing information they should not. And it is where employees can inadvertently breach policy simply by having a natural conversation with an AI tool.

Security guardrails alone are insufficient. Even when controls are in place, AI systems can inadvertently expose personally identifiable information, client data, material non-public information (MNPI) or confidential internal documents through user behaviour patterns that no perimeter defence was designed to catch. Organisations need genuine behavioural visibility — the capacity to observe how users and AI systems interact, detect anomalies, and understand context across multiple tools and communication channels.
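One simple way to picture that capacity is baseline deviation: comparing a user's activity today with their own history, aggregated across channels. The sketch below is a deliberately simplified assumption (a z-score over daily counts), not a production anomaly detector.

```python
from statistics import mean, stdev

def anomaly_score(baseline: list[int], today: int) -> float:
    """Z-score of today's activity against the user's own history.

    baseline: daily counts of flagged AI interactions for one user,
    aggregated across chat, email and meeting-assistant channels.
    """
    if len(baseline) < 2:
        return 0.0  # not enough history to judge
    sigma = stdev(baseline)
    return 0.0 if sigma == 0 else (today - mean(baseline)) / sigma

# A user who normally triggers one or two hits a day, then nine in one day:
print(anomaly_score([1, 0, 2, 1, 1, 0, 2], 9))  # large positive score -> review
```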

Enabling safe AI adoption

Blanket restrictions on AI access are not a viable solution. Overblocking drives employees towards unmonitored shadow IT alternatives and stifles the innovation these tools are meant to enable. The more sustainable path lies in building visibility, oversight and accountability into how AI is used — not just how it is deployed.

In practice, this means capturing and supervising AI interactions in their full conversational context rather than examining prompt-response pairs in isolation. It means detecting behavioural patterns such as jailbreaking or unethical prompt steering, inspecting content for sensitive data exposure or potential misconduct, and being able to reconstruct full conversation threads — spanning chat, audio and email — to ensure accuracy and traceability. It also means being able to demonstrate to regulators that oversight mechanisms are functioning effectively, not just theoretically in place.
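A hypothetical capture schema helps make this concrete: if every prompt and response is stored with a thread identifier, channel and timestamp, whole conversations can be rebuilt across chat, audio transcripts and email. The field names below are illustrative assumptions, not any product's data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIInteraction:
    thread_id: str          # ties a prompt, its response and follow-ups together
    channel: str            # "chat", "audio-transcript", "email", ...
    actor: str              # user identity, or the AI system itself
    timestamp: datetime
    content: str
    flags: list[str] = field(default_factory=list)  # DLP / behaviour findings

def reconstruct_thread(records: list[AIInteraction], thread_id: str) -> list[AIInteraction]:
    """Rebuild one conversation in order, across every channel it touched."""
    return sorted((r for r in records if r.thread_id == thread_id),
                  key=lambda r: r.timestamp)
```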

Organisations that combine content inspection, behavioural analytics and contextual supervision will be best placed to realise the full potential of AI whilst managing the risks. By securing the last mile of AI communications, firms can pursue innovation with confidence, protect sensitive data, and meet the evolving expectations of regulators worldwide.

Read the full Theta Lake post here. 
