AI security reaches a turning point for enterprises

AI security has reached a turning point, with the latest Cyber 60 CISO survey highlighting how artificial intelligence has moved rapidly from experimentation to essential infrastructure.

Almost half of organisations now say AI is critical to both business operations and security strategy, while three quarters report having experienced, or at least suspected, an AI-related security incident, according to Theta Lake.

The implication is clear: AI risk is no longer theoretical. It is operational, legal and reputational, with consequences that extend well beyond the IT function.

Recent high-profile incidents illustrate how quickly AI tools can create exposure. Confidential data has been inadvertently shared through generative AI tools, automated systems have provided incorrect guidance with legal consequences, and AI-driven decision-making has triggered claims of discrimination. These cases reinforce a central reality for organisations: responsibility for AI outputs sits squarely with the firm. Without governance, oversight and visibility, AI can shift from competitive advantage to material liability in a matter of moments.

AI is now shaping how work is created, interpreted and acted upon. Generative tools accelerate communication and decision-making, but they also introduce new attack vectors such as prompt injection, model manipulation and jailbreak behaviour. Manual reviews and static, rules-based controls cannot keep up with the volume and speed of AI-influenced content. As a result, organisations are increasingly turning to supervisory and review-focused AI to monitor and mitigate risks in real time.

A defining challenge emerging from the survey is that AI is simultaneously becoming the largest source of new risk and one of the most indispensable capabilities within the enterprise. As adoption accelerates across departments, security leaders are being forced to rethink long-standing assumptions about trust, control and accountability. Traditional security models, designed for predictable systems and human-driven workflows, are proving insufficient in an AI-mediated environment.

One of the most striking findings is that while AI-related incidents are already widespread, visibility gaps persist. Many organisations still treat "AI risk" as a single category, when in reality it spans two very different surfaces. External threats involve attackers using AI to scale phishing, fraud or malware; these can often be addressed with existing security tooling. Internal AI usage, however, presents a far broader and more complex challenge.

Internal risk arises from how employees use AI tools, both approved and unsanctioned. Sensitive data can be entered into prompts, outputs may be inaccurate or biased, and autonomous actions may be taken without adequate human oversight. Critically, when regulators or courts ask how a decision was reached, organisations must be able to explain what the AI produced and how it influenced human judgement. Liability does not disappear simply because an algorithm was involved.

This internal risk surface grows rapidly as AI adoption scales. Every employee becomes an AI operator, every prompt a potential compliance record, and every automated recommendation something that must be defensible. Blocking AI usage outright tends to drive it underground, creating shadow AI and further reducing visibility. The sustainable alternative is enablement with governance, allowing approved tools while ensuring prompts, responses and outcomes are captured in context.

Visibility across communication channels has therefore become foundational. AI-generated content rarely exists in isolation; it flows between chat, meetings, documents and email. Capturing AI-influenced communications with full conversational context allows security, compliance and IT teams to understand how information evolved and how decisions were shaped, restoring the oversight needed to govern AI safely at scale.

The survey also shows a shift in CISO priorities away from raw AI capability and towards governance. More than half plan to evaluate model access controls and secure inference platforms, signalling a focus on intentional, observable usage. Governance is increasingly defined by supervision and accountability rather than restriction, enabling innovation without sacrificing control.

Vendor strategy now plays a decisive role in purchasing decisions. Organisations are scrutinising how AI is designed, trained and governed within vendor platforms, with explainability and reliability emerging as core requirements. For regulated workflows, AI outputs must be traceable and defensible, making transparency and human-in-the-loop oversight essential.

Prompt manipulation and shadow AI are also emerging as key operational risks. These issues are less about user behaviour and more about access and oversight. When AI operates outside observable channels, organisations lose the context needed to understand intent and risk. As a result, the focus is shifting from stopping usage to making usage visible.

Finally, AI risk surface assessments are becoming a priority. Most organisations have already conducted, or plan to conduct, assessments to identify where AI intersects with communication and decision-making. The aim is not to slow adoption, but to ensure AI-assisted work can be located, understood and governed before unseen risk accumulates.

Taken together, the findings point to a new requirement for organisations: AI must be observable and reviewable within communication systems. Visibility, governance and trust in AI design will define whether AI accelerates progress or quietly compounds risk.
