AI in the crosshairs: FBI’s stark warning of emerging threats from hackers


The FBI has issued a warning about the growing threat from criminal and nation-state hackers targeting AI technologies.

The threats are aimed at big tech companies and startups working on AI, particularly those developing large language models such as those behind OpenAI’s ChatGPT and Google’s Bard. The FBI’s warning highlights a growing risk of intellectual property theft and the compromise of powerful chatbots.

China poses distinct threats in the AI domain, and FBI officials have warned of a probable rise in attacks targeting US companies, universities, and government research facilities for their AI advancements. The concern covers both legal and illegal technology-acquisition methods, including foreign commercial investments.

Additionally, the FBI alerted the public to cybercriminals using AI to amplify traditional crimes, such as fraud and extortion. Threats range from the creation of synthetic content for extortion schemes to the refinement of phishing emails and malware.

The Biden administration has been attempting to counter these threats, particularly in the AI race with China, through measures such as banning the export of certain high-end GPUs to China. It is also conducting defensive cybersecurity briefings for leading AI firms regarding their data models.

FBI officials are also concerned about AI-assisted criminal activities such as ransom demands, fraud against the elderly, and the creation of synthetic “deepfake” content for extortion. The FBI continues to monitor reports from victims and to develop relationships with AI companies in order to mitigate these risks.

“In the field of AI it is clear that US talent is one of the most desirable aspects in the AI supply chain that our adversaries need,” an official said. “The US sets the gold standard globally for the quality of research development, and nation states are actively using lucrative as well as diverse means to recruit such talent and transfer cutting edge AI research and development to aid their military and civilian programs.”

“Tools from AI are readily and easily applied to our traditional criminal schemes, whether ransom requests from family members, or abilities to generate synthetic content or identities online, attempts to bypass banking or other financial institutions’ security measures, attempts to defraud the elderly, that’s where we’ve seen the largest part of the activity,” a senior FBI official said in the call with reporters.


Copyright © 2023 FinTech Global

