DeepL reveals rise of AI in financial services

Artificial intelligence is reshaping how banks and FinTech firms communicate with customers across borders, according to new research from DeepL.

The global AI product and research company found that AI now plays an integral role in customer interactions across the UK financial sector, with 37% of all communications involving AI tools.

As nearly two in five client engagements are now cross-border, institutions are turning to AI to deliver faster, more reliable, multilingual customer experiences.

The survey, which included 1,500 professionals across Europe — with 500 from the UK — highlights how financial institutions are embedding AI into everyday operations.

From instant translation and fraud monitoring to virtual assistants, AI is helping banks and FinTechs improve the quality and consistency of customer service. Yet, the study also warns of rising risks from “shadow AI,” where employees use unapproved tools that could compromise security and compliance.

AI’s impact on customer communications is clear. Currently, 37% of client interactions in UK financial services involve AI, a figure expected to rise to nearly half within a year.

The top use cases include AI-powered translation (52%), virtual assistants for customer queries (51%), fraud detection (50%), and automated account support (48%).

With 39% of customer work in the UK now conducted across borders, financial firms face mounting pressure to deliver seamless communication in multiple languages.

DeepL’s data reveals that 85% of professionals believe language gaps slow down activity for non-English speakers, and 84% struggle to recruit staff who can communicate across regions.

Seven in ten UK finance professionals said AI improves both the speed and quality of customer service, while an equal number noted that customers are more satisfied when served in their native language.

However, as AI becomes more embedded in customer-facing processes, concerns around unauthorised use have grown. The study found that 65% of UK financial professionals admit employees are using unapproved AI tools to communicate with customers. Such practices heighten cybersecurity and regulatory risks, especially when sensitive data is processed through insecure systems.

DeepL chief revenue officer David Parry-Jones said, “In financial services, where every interaction is highly regulated and reputational risk is acute, staff will inevitably look for workarounds if the tools provided don’t meet their needs.

“The real risk is not employees experimenting with AI, but companies failing to give them secure, fit-for-purpose solutions. By building a collaborative approach between IT and frontline teams, organisations can avoid shadow AI, protect against cybersecurity threats, and still realise the full benefits of trusted AI.”

Copyright © 2025 FinTech Global
