There has been a staggering 704% increase in face swap attacks, a form of deepfake, from the first half (H1) to the second half (H2) of 2023, according to a comprehensive report by iProov.
According to the research, this alarming surge can be attributed to the growing sophistication of Generative AI solutions, as well as the prevalence of malicious activities targeting digital identities.
Furthermore, the report outlines a 672% increase in the deployment of deepfake media alongside metadata spoofing tools, amplifying the complexity of cyber threats faced by organisations and individuals.
Emulators, another tool employed by threat actors, witnessed a notable 353% surge in usage, particularly in video injection attacks. Additionally, mobile web platforms experienced a 255% uptick in digital injection attacks during the same period, highlighting the evolving nature of cyber threats.
Andrew Newell, Chief Scientific Officer, iProov, explained, “Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesised media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification.”
Unsurprisingly, the report underscored the urgent need for robust remote identity verification solutions amidst the escalating threat landscape.
As organisations and governments increasingly rely on digital ecosystems to provide remote access and services, ensuring the integrity of identity verification processes has never been more critical, as cybercriminals add advanced AI tools to their arsenal in a bid to exploit vulnerabilities in these systems.
Newell added, “While the data in our report highlights that face swaps are currently the deepfake of choice for threat actors, we don’t know what’s next. The only way to stay one step ahead is to constantly monitor and identify their attacks, the attack frequency, who they’re targeting, the methods they’re using, and form a set of hypotheses as to what motivates them.”
The release of the threat intelligence report comes shortly after a finance worker at a multinational firm paid out $25m to a group of fraudsters who used deepfake technology to impersonate the company’s chief financial officer, according to a report by CNN.
At a briefing on Friday, Hong Kong police detailed a complex scam in which the employee was tricked into joining a video conference call with what he believed were several of his colleagues. In reality, every other participant was a deepfake replica.
Senior Superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK that everyone the employee saw on the multi-person video conference was fabricated.
The worker first grew suspicious after receiving a message purportedly from the company’s UK-based chief financial officer discussing the need for a confidential transaction. Though initially wary that it might be a phishing attempt, the employee set aside his doubts after seeing what appeared to be familiar colleagues on the video call.
Trusting that the individuals on the call were genuine, the employee consented to transferring a total of HK$200m (equivalent to approximately $25.6m), as disclosed by Chan.
Copyright © 2024 FinTech Global