Managers turn blind eye to unapproved AI use


A new study by Cybernews has revealed that 59% of employees in the U.S. use AI tools that their employers have not approved, and most of them share sensitive data through these systems.

The research highlights the widespread problem of “shadow AI” in the workplace — the use of unauthorised AI tools — and the growing gap between corporate policies and employee behaviour.

According to the report, 75% of employees using unapproved AI systems admitted to sharing potentially sensitive information such as customer data, employee details, and internal company documents. Despite 89% of employees acknowledging the risks, many continue to rely on these tools, underscoring a lack of oversight and communication between employers and their teams.

Cybernews security researcher Mantas Sabeckis said, “If employees use unapproved AI tools for work, there’s no way to know what kind of information is shared with them. Since tools like ChatGPT feel like you’re chatting with a friend, people forget that this data is actually shared with the company behind the chatbot. As it turns out, many managers quietly give a thumbs-up to using these tools, even if they’re not officially approved. That creates a gray zone where employees feel encouraged to use AI, but companies lose oversight of how and where sensitive information is being shared.”

The survey found that 93% of executives and senior managers use unapproved AI tools, the highest share of any job level. The managers and team leaders responsible for enforcing compliance are thus among the most frequent users, creating what researchers describe as a paradox of poor example-setting at the top.

Žilvinas Girėnas, head of product at nexos.ai, explained, “When employees paste sensitive data into unapproved AI tools, there’s no guarantee of where that data will end up. It might be stored, used to train someone else’s model, exposed in logs, or even sold to third parties. That means customer details, contracts, or internal documents can quietly leak outside the company without anyone noticing.”

IBM has estimated that shadow AI can increase the average cost of a data breach by $670,000, underlining the financial and reputational stakes of unregulated AI use. Yet the Cybernews study found that 23% of employers still have no formal policy governing AI use in the workplace.

Sabeckis added, “While awareness of the risks of irresponsible AI use does exist, employees still need more knowledge. It would be a shame if the only actual way to stop employees from using unapproved AI tools at work were an actual data breach. For many companies, even a single data breach can be impossible to recover from.”

Girėnas concluded, “Shadow AI thrives in silence. When managers turn a blind eye and there’s no clear policy, employees assume it’s fine to use whatever tool gets the job done. That’s how sensitive data ends up in places it should never be. AI use in the workplace should never live in a gray zone, and leaders need to set clear rules and give employees secure options before the shortcuts turn into breaches.”

The report also found that only one-third of employees using company-approved AI tools believe they meet their needs, suggesting that usability and performance gaps may be driving staff towards unauthorised alternatives. Sabeckis warned that companies must balance security and innovation by adopting secure, compliant AI systems.
