CyberSecurity SEE

AI in Its "Teenage Years": Not Yet Sufficient for Cyberthreat Intelligence

The use of large language models (LLMs) in cybersecurity has been a topic of interest and discussion among cybersecurity teams. At the recent Black Hat conference, members of the Google Cloud team presented on how LLM technology, such as GPT-4 and PaLM, could be applied in the field of cyberthreat intelligence (CTI). The presentation highlighted the potential benefits of incorporating LLMs into cybersecurity practice, particularly in addressing resourcing issues.

Many medium to large companies are currently grappling with a shortage of experienced cybersecurity professionals. When the term "cyberthreat intelligence" comes up, these companies often respond that they are only beginning to explore it. This leaves them in a difficult position: they are trying to stand up threat intelligence programs while also dealing with resource limitations.

The capabilities of LLM technology offer a promising solution to this problem. One of the core elements of a successful threat intelligence program is processing capability, which LLMs can significantly assist with. For example, LLMs can analyze large volumes of data, such as log data, that would otherwise be overlooked due to its sheer volume. This capability allows cybersecurity teams to automate the processing of this data and generate insights that can answer critical questions from the business.
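As a rough illustration of the log-processing idea, the sketch below batches log lines into prompt-sized chunks and asks a model to summarize each batch. The `call_llm` helper is a hypothetical stand-in, not part of any real SDK; a real deployment would replace it with a vendor client of its own.

```python
from typing import List

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would send
    # `prompt` to an LLM API and return the model's response.
    return f"[summary of {prompt.count(chr(10)) + 1} prompt lines]"

def chunk_logs(lines: List[str], max_lines: int = 100) -> List[List[str]]:
    """Split a large log file into batches small enough for one prompt."""
    return [lines[i:i + max_lines] for i in range(0, len(lines), max_lines)]

def summarize_logs(lines: List[str], max_lines: int = 100) -> List[str]:
    """Ask the model to surface notable events in each batch of log lines."""
    summaries = []
    for batch in chunk_logs(lines, max_lines):
        prompt = (
            "You are a cyberthreat-intelligence assistant. Summarize any "
            "suspicious activity in these log lines:\n" + "\n".join(batch)
        )
        summaries.append(call_llm(prompt))
    return summaries
```

The point of the sketch is the workflow shape rather than the specific prompt: volume that no analyst could read end to end is reduced to a handful of summaries a human can actually review.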

However, it is important to note that LLM technology may not be suitable for every task within cybersecurity. The presentation at Black Hat emphasized that LLMs should focus on tasks that require less critical thinking and involve large volumes of data. Tasks that demand more critical thinking, such as document translation for attribution purposes, should still be handled by human experts. An inaccurate attribution could have significant consequences for a business, which underlines the importance of human expertise in certain areas of cybersecurity.
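That split between low-stakes bulk work and high-stakes judgment calls can be made explicit in a triage step. The sketch below is an assumption-laden illustration: the task names and the set of high-criticality tasks are invented for the example, not taken from the presentation.

```python
# Hypothetical set of tasks deemed too consequential to automate;
# attribution is the example the Black Hat talk singled out.
HIGH_CRITICALITY = {"attribution", "incident_response_decision"}

def route_task(task: str) -> str:
    """Send high-stakes tasks to analysts; bulk work can go to the LLM."""
    return "human_analyst" if task in HIGH_CRITICALITY else "llm_pipeline"
```

Keeping the routing rule as a small, auditable allowlist makes it easy for a team to review, and to move tasks out of the LLM pipeline as soon as the stakes change.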

It is also crucial to understand that while LLM technology has its place in the CTI workflow, it is still in its developmental stage. It cannot be trusted to consistently produce correct results, especially in more critical circumstances. There have been instances where LLM output has been questionable, highlighting the need for caution in relying solely on this technology. A keynote presenter at Black Hat aptly described it as being "like a teenager, it makes things up, it lies, and makes mistakes." For now, LLM technology should be used for lower-priority, less critical tasks, while human experts handle the more crucial decision-making processes.

Looking to the future, it is evident that AI will play a larger role in cybersecurity decision-making. Tasks such as automating firewall rules, prioritizing and patching vulnerabilities, and disabling systems due to threats will likely be handed off to AI in the coming years. However, at this stage, it is paramount to rely on human expertise to make these critical decisions. Rushing to implement technology that is still in its infancy into such crucial roles could have detrimental consequences for cybersecurity.

In conclusion, the use of LLM technology in cybersecurity, particularly in the field of CTI, shows promise but must be approached with caution. While LLMs can assist in processing and interpretation tasks, they should not replace the expertise of human professionals. Cybersecurity teams must carefully evaluate the suitability of LLMs for each task and prioritize human decision-making in more critical scenarios. As AI continues to evolve, it is essential to strike a balance between leveraging its capabilities and ensuring the security and accuracy of cybersecurity practices.
