AI in cybersecurity was a hot topic at the Gartner Security and Risk Management Summit in National Harbor, Maryland this week, where experts stressed that security professionals must understand both AI-driven threats and the defensive strategies available to counter them.
Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, described how attackers are using AI to sharpen phishing and social engineering tactics, citing deepfakes as a particular concern. He and Director Analyst Kevin Schmidt noted, however, that AI has not yet produced genuinely new attack techniques; instead, it enhances existing methods such as business email compromise (BEC) and voice scams.
Despite these threats, security tools that use AI are themselves still in the early stages of development. AI assistants are seen as a promising application in cybersecurity, offering support with tasks such as patching, mitigations, alerts, and threat intelligence. D’Hoinne cautioned, however, that such tools should supplement human security staff rather than replace them, so that analysts' critical thinking skills are not lost.
A key theme of the conference was the precision required when engineering AI prompts for security work. Schmidt explained that prompts for AI assistants must be specific to work around the limitations of large language models (LLMs), and he stressed the need to validate outputs and provide oversight, particularly for junior staff. General-purpose chatbots such as ChatGPT, he noted, should be used only with noncritical data because their accuracy cannot be guaranteed.
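To make that contrast concrete, here is an illustrative pair of prompts (the author's own hypothetical examples, not Schmidt's verbatim wording): the vague version leaves an LLM guessing, while the specific one pins down the data source, time window, criteria, and output format.

```python
# Illustrative prompts only; hypothetical examples, not
# Schmidt's actual wording from the summit.

# Too vague: no log source, time window, or output format.
vague_prompt = "Check our logs for anything suspicious."

# Specific: names the data, the time window, what to look for,
# and the format the analyst wants back.
specific_prompt = (
    "Analyze the attached firewall logs from the last 24 hours. "
    "List the top 10 source IPs by denied connections, flag any "
    "destination ports outside 80, 443, and 22, and summarize the "
    "results in a table with columns: source IP, port, count."
)
```

The specific prompt also makes the output easier to validate, since the analyst knows exactly what shape of answer to expect.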
Schmidt walked through examples of effective and ineffective AI security prompts for security operations teams. Clear, specific prompts, such as asking an assistant to create a SIEM detection rule for suspicious logins or to analyze firewall logs for patterns and anomalies, produce outputs that security teams can act on.
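As a sketch of the kind of detection logic a well-prompted assistant might generate for the suspicious-login example, the following Python flags accounts where repeated failed logins are followed by a success from a previously unseen country. The field names and threshold are hypothetical and not tied to any particular SIEM.

```python
from collections import defaultdict

def flag_suspicious_logins(events, fail_threshold=3):
    """Flag accounts with >= fail_threshold consecutive failed logins
    followed by a success from a country not seen before.
    Assumes events are ordered by time; field names are illustrative."""
    failures = defaultdict(int)        # consecutive failures per user
    seen_countries = defaultdict(set)  # countries with prior successful logins
    alerts = []
    for e in events:
        user, ok, country = e["user"], e["success"], e["country"]
        if not ok:
            failures[user] += 1
            continue
        # Successful login: alert if it follows a burst of failures
        # and comes from a country this user has never logged in from.
        if failures[user] >= fail_threshold and country not in seen_countries[user]:
            alerts.append(user)
        failures[user] = 0
        seen_countries[user].add(country)
    return alerts
```

In practice, this is exactly the kind of output Schmidt says must still be validated by a human before deployment.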
Schmidt also pointed to AI tools for incident investigation and for identifying web application vulnerabilities. With sufficiently detailed prompts, security teams can streamline these processes and prioritize their security efforts more effectively.
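In the same spirit, a detailed prompt about web application vulnerabilities might yield a first-pass filter like this sketch, which scans access-log lines for a few common SQL-injection signatures. The pattern list is illustrative and far from exhaustive; real vulnerability scanning requires a proper tool.

```python
import re

# A few common SQL-injection signatures (illustrative, not exhaustive).
SQLI_PATTERNS = [r"union\s+select", r"or\s+1=1", r"'--"]

def flag_sqli_requests(access_log_lines):
    """Return access-log lines whose request matches a known
    SQL-injection pattern, case-insensitively."""
    compiled = [re.compile(p, re.IGNORECASE) for p in SQLI_PATTERNS]
    return [line for line in access_log_lines
            if any(p.search(line) for p in compiled)]
```

Output like this gives a team a prioritized starting point, but, as the analysts stressed, it still needs human review before anything is acted on.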
On the tooling side, SecOps AI assistants such as CrowdStrike Charlotte AI, Microsoft Copilot for Security, and SentinelOne Purple AI offer potential solutions for cybersecurity teams, while startups such as AirMDR, Crogl, Dropzone, and Radiant Security are emerging as players in this rapidly evolving field.
Overall, the Gartner Security and Risk Management Summit underscored the value of AI in cybersecurity strategies while stressing the need for precise prompting, output validation, and human oversight. As AI continues to evolve, staying informed about new threats and defensive capabilities will be crucial for cybersecurity professionals safeguarding their organizations.

