Large-language-model (LLM) systems have the potential to greatly assist security-operations and threat-intelligence teams by addressing issues such as understaffing, data overload, and competing demands. However, many companies are hesitant to adopt this technology due to a lack of experience with LLMs. Despite this hesitancy, organizations that implement LLMs can enhance their ability to synthesize intelligence from raw data and deepen their threat-intelligence capabilities.
Successfully implementing LLMs depends on security leadership focusing these programs. John Miller, head of Mandiant's intelligence analysis group, emphasizes applying LLMs to solvable problems and evaluating their utility within an organization's own environment. Because neither success stories nor failure stories are yet widely available, Miller aims to provide a framework that helps organizations navigate the uncertainty and understand the potential impact of LLMs.
During their presentation at Black Hat USA, Miller and Ron Graf from Mandiant’s Google Cloud intelligence-analytics team plan to showcase how LLMs can augment security workers, accelerating and deepening cybersecurity analysis.
To establish a robust threat-intelligence capability, security professionals need three essential components, according to Miller: relevant threat data, the ability to process and standardize that data, and the ability to interpret how that data relates to their security concerns. LLMs can bridge the gap by letting other groups within the organization request data with natural-language queries and receive answers in plain, non-technical language. That simplifies getting at valuable information, such as trends in a particular class of threats or threats specific to certain markets.
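To make that pattern concrete, here is a minimal sketch of such a query flow. Everything in it is hypothetical: the ThreatRecord type, the sample records, and the function names are invented for illustration, and call_llm is a stub standing in for whichever model endpoint an organization actually uses.

```python
from dataclasses import dataclass

@dataclass
class ThreatRecord:
    actor: str
    sector: str       # market the activity targeted
    technique: str    # e.g., a MITRE ATT&CK technique name
    summary: str

# A tiny stand-in for a processed, standardized threat-data store.
RECORDS = [
    ThreatRecord("FIN-example", "retail", "phishing",
                 "Credential phishing against point-of-sale vendors."),
    ThreatRecord("APT-example", "healthcare", "exploit public-facing app",
                 "Exploitation of unpatched VPN appliances."),
]

def call_llm(prompt: str) -> str:
    """Placeholder: swap in whichever model client the organization uses."""
    return "(model answer would appear here)"

def answer_question(question: str, sector: str) -> str:
    """Retrieve records relevant to a market sector, then ask the model
    to summarize them for a non-technical audience."""
    relevant = [r for r in RECORDS if r.sector == sector]
    context = "\n".join(f"- {r.actor} ({r.technique}): {r.summary}"
                        for r in relevant)
    prompt = (
        "You are briefing a non-technical business audience.\n"
        f"Question: {question}\n"
        f"Relevant threat records:\n{context}\n"
        "Answer in two or three plain-language sentences."
    )
    return call_llm(prompt)

print(answer_question("What threats are we seeing against retail?", "retail"))
```

The design point is the separation of duties: retrieval runs against the organization's own standardized data, and the model's job is only to translate that data into plain language for the requesting team.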
Miller highlights that leaders who successfully integrate LLM-driven capabilities into their threat intelligence can expect a higher return on investment: these capabilities can significantly enhance an organization's threat-intelligence function, answering pertinent questions with existing resources.
While LLMs and AI-augmented threat intelligence can improve an organization's ability to make use of enterprise security datasets, there are potential pitfalls. One is relying solely on LLMs to produce coherent threat analysis: LLMs sometimes "hallucinate," inventing connections that do not exist or fabricating answers outright. To address this, organizations can use competing models to perform integrity checks on one another's output and reduce the rate of hallucinations. Additionally, "prompt engineering," or optimizing how a question is posed, can elicit answers that more accurately reflect reality.
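One way such an integrity check could be wired up is sketched below, under the assumption of two independently hosted models. The stubs call_model_a and call_model_b, the prompts, and the SUPPORTED/UNSUPPORTED convention are all illustrative, not drawn from the talk.

```python
def call_model_a(prompt: str) -> str:
    """Placeholder for the primary model."""
    return "Actor X targeted retail point-of-sale systems via phishing."

def call_model_b(prompt: str) -> str:
    """Placeholder for an independently trained second model."""
    return "SUPPORTED: the claim matches the supplied records."

def cross_check(question: str, source_data: str) -> dict:
    """Ask one model for an analysis, then ask a second, independent model
    whether that analysis is supported by the source data. Anything the
    second model cannot verify is flagged for review instead of being
    published automatically."""
    answer = call_model_a(
        f"Using only this data:\n{source_data}\n\nQuestion: {question}"
    )
    verdict = call_model_b(
        "Does the ANSWER below make any claim not supported by DATA? "
        "Reply SUPPORTED or UNSUPPORTED, with a one-line reason.\n"
        f"DATA:\n{source_data}\n\nANSWER:\n{answer}"
    )
    return {
        "answer": answer,
        "verdict": verdict,
        "needs_review": not verdict.startswith("SUPPORTED"),
    }

result = cross_check("Who targeted retail?",
                     "- Actor X: phishing against retail POS vendors")
print(result["needs_review"])   # False when the second model agrees
```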
However, the best approach, according to Graf, is to keep humans in the loop: by pairing AI with human analysts, organizations get improved downstream performance while still reaping the benefits of augmented threat intelligence.
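In practice, that pairing can be as simple as routing every model-generated answer through an analyst review queue before it is published. The sketch below is purely illustrative; the Draft type, the status values, and the function names are invented for this example rather than taken from any vendor's workflow.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    question: str
    model_answer: str
    status: str = "pending_review"   # pending_review -> approved | revised
    analyst_note: str = ""

review_queue: list[Draft] = []

def submit_for_review(question: str, model_answer: str) -> Draft:
    """Model output always enters the queue as a draft, never as a report."""
    draft = Draft(question, model_answer)
    review_queue.append(draft)
    return draft

def analyst_decision(draft: Draft, approve: bool, note: str = "") -> None:
    """Nothing leaves the queue without an explicit analyst decision."""
    draft.status = "approved" if approve else "revised"
    draft.analyst_note = note

draft = submit_for_review("Top threats to healthcare?", "(model answer)")
analyst_decision(draft, approve=True, note="Checked against source reporting.")
```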
This approach has gained momentum, with cybersecurity firms exploring ways to use LLMs to enhance their core capabilities. Microsoft, for example, launched Security Copilot, an AI-powered tool that helps cybersecurity teams investigate breaches and hunt for threats. Threat-intelligence firm Recorded Future also introduced an LLM-enhanced capability that saves its security professionals significant time by distilling vast amounts of data into concise summary reports.
Ultimately, threat intelligence is a “Big Data” problem, requiring comprehensive visibility into all aspects of an attack. LLMs coupled with human expertise can help synthesize this data effectively, empowering analysts to be more efficient in their roles.
Overall, while the adoption of LLMs in the field of threat intelligence may be hindered by a lack of experience, organizations that embrace this technology can overcome challenges and achieve more effective cybersecurity analysis.
