Defenders Prepare for a Future of Detecting Deepfakes

Fraudsters have reportedly been using videoconferencing in their business email compromise (BEC) campaigns since 2021, taking advantage of the shift to remote work during the pandemic. Posing as business executives, they use deep neural networks to mimic a targeted executive’s voice and instruct employees to transfer money. The FBI’s 2022 Internet Crime Report revealed that such scams cost businesses worldwide more than $2.7 billion, highlighting a clear need for better defences that can detect AI-generated images and audio, according to Vijay Balasubramaniyan, CEO of Pindrop.

Such defences will need to involve liveness detection: verifying that a live human is actually present at the other end of the camera or microphone. The current state of the art goes beyond facial recognition and voice matching to analyze a file’s metadata and recording environment, looking for signs that an image or voice has been injected into the camera’s pipeline or otherwise spoofed. Most companies, however, do not yet have this kind of technology in place, according to Balasubramaniyan.
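To make the metadata side of that analysis concrete, here is a minimal sketch in Python using the Pillow library. It flags images whose EXIF records look inconsistent with a genuine camera capture. The helper name, tag selection, and flag wording are illustrative assumptions rather than any vendor’s actual method; production liveness systems examine far richer signals than EXIF alone.

```python
from PIL import Image

# Standard EXIF tag IDs: 271 = Make, 272 = Model, 305 = Software.
CAMERA_TAGS = {271: "Make", 272: "Model"}
SOFTWARE_TAG = 305

def metadata_flags(path: str) -> list[str]:
    """Return rough warning flags for an image file.

    A genuine camera capture usually carries Make/Model tags; an
    image written out by editing or generation software often lacks
    them and may carry a Software tag instead. This is a cheap
    heuristic, not a deepfake detector.
    """
    exif = Image.open(path).getexif()
    flags = []
    for tag_id, name in CAMERA_TAGS.items():
        if tag_id not in exif:
            flags.append(f"missing EXIF {name} tag")
    if SOFTWARE_TAG in exif:
        flags.append(f"Software tag present: {exif[SOFTWARE_TAG]}")
    return flags

# Usage: print(metadata_flags("suspect.jpg"))
```

EXIF can be trivially forged or stripped, which is why injection detection also has to examine the capture pipeline itself rather than the file alone.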

While current defences outperform human judgment, they require continuous upgrades to remain effective against deepfakes. Detection of pre-recorded clips, for instance, will give way to voice conversion attacks, in which AI mimics the target’s voice in real time. Cybersecurity experts will therefore never have a permanent solution, according to Stuart Wells, chief technology officer of Jumio, who acknowledges that defence will play out as an ongoing game between attackers and defenders.
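To illustrate why pre-recorded clips are the easier case, the sketch below shows the kind of simple signal-level heuristic an early detector might apply: measuring how much of a clip’s energy sits above 8 kHz, since some synthesis pipelines produce band-limited audio. The cutoff and threshold are illustrative assumptions; detectors from vendors such as Pindrop or Jumio rely on trained models, not a single statistic.

```python
import numpy as np

def high_band_energy_ratio(samples: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 8000.0) -> float:
    """Fraction of spectral energy above cutoff_hz.

    Speech synthesised at a 16 kHz sample rate carries almost no
    energy above 8 kHz, whereas genuine microphone audio usually
    does. A toy heuristic only, and easy for an attacker to evade.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0.0:
        return 0.0
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

# Example with one second of stand-in audio; 1% is an untuned threshold.
rate = 44100
clip = np.random.randn(rate)
if high_band_energy_ratio(clip, rate) < 0.01:
    print("possible synthetic or band-limited audio")
```

Real-time voice conversion defeats clip-level checks like this one, which is one reason Wells expects no permanent solution.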

Balasubramaniyan, however, suggests that attackers with ample resources and plenty of source material will always be able to produce convincing deepfakes; detecting those is challenging, which makes it more vital to prevent deepfakes from being created at scale. In that scenario, raising the bar so that only attackers with immense storage and with algorithms not readily available to the public could create deepfakes would serve as the most effective barrier.
