The rapid evolution of the IT industry has taken an interesting turn with the rise of generative AI. Just a year ago, at Black Hat 2022, Chief Information Security Officers (CISOs) were hesitant to discuss AI at all. By RSAC 2023, conversations about generative AI dominated the security industry, and at Black Hat USA 2023 the focus had shifted to managing the technology as an aid to human operators. This change in perspective reflects a move from exaggerated hype to a more practical outlook.
Generative AI is expected to play a significant role in cybersecurity products, services, and operations in the coming years, driven in part by a shortage of cybersecurity professionals that is likely to persist for the foreseeable future. Rather than replacing human workers, generative AI is seen as a tool to enhance the effectiveness of cybersecurity professionals. The goal is to empower each professional to perform at a higher level, particularly Tier 1 analysts, who can benefit from the additional context, confidence, and prescriptive options generative AI provides.
However, while the potential of generative AI is recognized, there are limitations that need to be addressed. One of the key limitations discussed by industry experts is the quality of training data: the effectiveness of any AI deployment depends directly on the quality of the data used to train the models. Companies are increasingly acknowledging the importance of domain expertise, focusing AI instances on specific topics or areas of interest to optimize training. This approach helps the models deliver more reliable, relevant results.
Another limitation highlighted is trust. People tend to be skeptical of AI and view AI engines as “black boxes” that produce results without transparency. To build trust in generative AI, security and IT departments need to explain how models are trained, how responses are generated, and how they are used. Human workers must be able to trust the responses they receive from generative AI in order to fully leverage it as an aid; without that trust, its potential will be severely limited.
A point of confusion that emerged during the conferences was a lack of specificity when referring to “AI.” Many participants meant generative AI, or large language models (LLMs), when talking about the potential of the technology. However, others pointed out that AI has already been used in security products for years. This disconnect highlights the importance of defining terms and being clear about which type of AI is under discussion.
The AI used in security products for years typically employs smaller models and generates responses quickly, making it useful for automation. On the other hand, generative AI can handle a broader range of questions using models built from extensive data sets. While generative AI may not match the speed of traditional AI, it offers more comprehensive and nuanced responses. It is essential to recognize these distinctions to effectively communicate the capabilities and limitations of AI in cybersecurity.
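The contrast can be made concrete with a deliberately simplified sketch. Both functions below are hypothetical illustrations, not code from any real product: the first stands in for the small, fast models that have long powered automated detection, while the second merely assembles the kind of richer prompt a generative model would receive (the model call itself is omitted, since any LLM client could be substituted).

```python
def fast_triage(log_line: str) -> str:
    """Stand-in for traditional security AI: a tiny, fast rule set
    suited to high-volume automation. Keywords are illustrative only."""
    indicators = ("failed login", "privilege escalation", "malware")
    return "alert" if any(k in log_line.lower() for k in indicators) else "ok"


def build_llm_prompt(log_line: str, context: str) -> str:
    """Generative-AI path: assemble a context-rich prompt so a large model
    can return a broader, more nuanced (but slower) response."""
    return (
        "You are assisting a Tier 1 SOC analyst.\n"
        f"Context: {context}\n"
        f"Event: {log_line}\n"
        "Explain the likely cause and suggest prescriptive next steps."
    )


print(fast_triage("User admin FAILED LOGIN from 10.0.0.5"))  # alert
```

The design point is the trade-off itself: the first path answers in microseconds and is easy to automate, while the second trades speed for the context and prescriptive guidance that benefit a human analyst.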
Overall, the emergence of generative AI as a topic in cybersecurity is undeniable. It offers valuable opportunities for enhancing the industry’s capabilities. However, it is crucial to address limitations, such as data quality and trust, and to define terms accurately to have meaningful conversations about AI. As the industry continues to explore the potential of generative AI, further discussions and articles on the subject are expected to shape its future in cybersecurity.

