Attacks that use deepfake technology to bypass identity verification systems and deceive victims in online meetings surged sharply in 2024, according to a recent report from iProov, a company specializing in facial-recognition identity verification. The report found that face swap attacks, in which scammers use AI-based deepfake tools to replace their faces with someone else's in real time, rose by a staggering 300 percent over the year.
In addition to the spike in face swap attacks, iProov observed a 783 percent rise in injection attacks targeting mobile web apps and a 2,665 percent increase in the use of virtual camera software to facilitate fraud. Virtual camera software is commercially available and commonly used for legitimate purposes, such as enhancing one's appearance during video calls, but malicious actors can exploit it to inject AI-generated fake video feeds and impersonate other people, making the deception much harder to detect.
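To make the injection vector concrete, here is a minimal, hypothetical Python sketch of the weakest possible countermeasure: checking a reported camera device name against a blocklist of well-known virtual camera drivers. The function name and driver list are illustrative assumptions, not part of iProov's product; as the report implies, a name check alone is trivially bypassed by a renamed driver, which is why vendors rely on deeper signals such as liveness detection.

```python
# Hypothetical sketch: flag camera device names that match known
# virtual-camera drivers. A renamed driver defeats this check, so real
# defenses must combine it with stronger signals (liveness detection,
# device attestation, feed analysis).

KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the device name matches a known virtual-camera driver."""
    name = device_name.strip().lower()
    return any(known in name for known in KNOWN_VIRTUAL_CAMERAS)
```

For example, `looks_like_virtual_camera("OBS Virtual Camera")` returns `True`, while a typical hardware webcam name such as `"Integrated Webcam HD"` does not match.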
Andrew Newell, iProov's chief scientific officer, emphasized the scale and sophistication of these attacks, noting that more than 120 tools are actively used to manipulate scammers' faces during live calls. Combined with the various injection methods and delivery mechanisms available, he said, these tools yield more than 100,000 potential attack combinations, and he expressed concern over this growing complexity.
Despite the self-serving nature of the report, iProov recommended that organizations adopt multiple defensive layers against deepfake attacks rather than relying on a single security measure. The rise of identity-spoofing attacks that leverage real-time video poses a significant threat, as advances in AI tools have made the traditional telltale signs of face swapping harder to spot during video calls.
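The layered-defense recommendation can be sketched as a simple verification pipeline in which each layer is an independent check and a session is approved only if every layer passes. The layer names, thresholds, and session fields below are purely illustrative assumptions, not iProov's actual checks:

```python
# Hypothetical sketch of a layered verification pipeline. Each layer is
# an independent pass/fail check; the session is approved only if every
# layer passes, so defeating one check is not enough.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    name: str
    check: Callable[[dict], bool]

def verify_session(session: dict, layers: list[Layer]) -> tuple[bool, list[str]]:
    """Run every layer; return (approved, names of failed layers)."""
    failures = [layer.name for layer in layers if not layer.check(session)]
    return (not failures, failures)

# Illustrative layers: device integrity, liveness, and feed metadata.
LAYERS = [
    Layer("device_integrity", lambda s: s.get("virtual_camera") is False),
    Layer("liveness", lambda s: s.get("liveness_score", 0.0) >= 0.9),
    Layer("feed_metadata", lambda s: s.get("feed_latency_ms", 9999) < 250),
]
```

The design point is that the layers fail independently: a face swap that fools the liveness model may still be caught by the device-integrity or feed-metadata check, which is exactly why a single-measure defense is weaker.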
The spread of deepfake technology to a wider range of criminals, driven by online marketplaces selling identity-verification spoofing tools, has further worsened the threat landscape. Crime-as-a-service marketplaces give cybercriminals ready access to deepfake tooling, democratizing its availability and amplifying the potential impact of fraud.
As deepfake technology becomes more accessible to criminals, security experts anticipate a shift toward using deepfakes to pursue financial gains far larger than traditional phishing scams can yield. The difficulty of detecting deepfake videos, combined with users' lack of critical scrutiny when confronted with potential fakes, raises concerns about the effectiveness of current security measures against this evolving threat.
The report’s findings underscore the urgent need for enhanced cybersecurity measures and user education to combat the rising tide of deepfake attacks in online environments. With the proliferation of advanced AI-based tools and the democratization of deepfake technology, vigilance and readiness are crucial to safeguard individuals and organizations from falling victim to increasingly sophisticated fraud schemes.