Deepfakes and other generative-AI attacks are becoming increasingly common. The use of AI-generated text in email is also on the rise: security firms are detecting messages likely created by machines rather than humans, and text generated by large language models (LLMs) now makes up roughly 12% of email content, up from 7% in late 2022, leaving human-written email at about 88%.
To strengthen defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) recently released three guidance documents for security organizations. These complement the group's previously published AI cybersecurity and governance checklist and comprise a guide for preparing for deepfake events, a framework for establishing AI security centers of excellence, and a curated database of AI security solutions.
Scott Clinton, co-project lead at OWASP, emphasizes the importance of giving practical guidance to organizations adopting AI. Despite the risks of AI-based attacks, companies are eager to use AI as a competitive advantage, and in his view security measures should not hinder that innovation but should instead enable businesses to adopt AI safely and effectively.
In one real-world example of the growing threat, a job candidate interviewing at security vendor Exabeam turned out to be a deepfake: during the final interview round, the interviewee exhibited unusual behavior and visual anomalies that raised suspicion among the company's security team. Following the incident, Exabeam's CISO, Kevin Kirkwood, and GRC team lead, Jodi Maas, recognized the need for better procedures to detect and prevent GenAI-based attacks.
The proliferation of deepfake incidents has heightened concern among IT professionals, many of whom worry about the potential impact on their organizations. As AI-generated video becomes more realistic, identifying and mitigating deepfake attacks grows increasingly difficult, and companies are urged to deploy technical countermeasures against the evolving threat.
Eyal Benishti, founder and CEO of Ironscales, underscores the need for organizations to prepare for increasingly sophisticated deepfakes. Detecting them and verifying the authenticity of digital communications will require advanced defenses and a shift in mindset; as AI technology evolves, companies must adapt their security strategies to the unique challenges deepfake attacks pose.
The rise of deepfakes and other generative-AI attacks demands a proactive approach to cybersecurity. Robust defenses, collaboration among security experts, continued development of technical countermeasures, and ongoing vigilance are all essential to protecting organizations against the growing threat of AI-based attacks.