The emergence of OpenAI’s ChatGPT has stirred concerns about the potential impact of generative AI chatbots and large language models (LLMs) on cybersecurity. The security risks introduced by these new technologies have led some countries, US states, and enterprises to impose bans on the use of generative AI technology such as ChatGPT on data security, protection, and privacy grounds.
However, generative AI chatbots and LLMs can enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.
One of the ways generative AI models can help improve security is by enhancing the scanning and filtering of security vulnerabilities. The Cloud Security Alliance (CSA) found that OpenAI’s Codex API is an effective vulnerability scanner for programming languages such as C, C#, Java, and JavaScript. A scanner could be developed to detect and flag insecure code patterns in various languages, helping developers address potential vulnerabilities before they become critical security risks.
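The following is a minimal sketch of what such a scanner might look like, assuming the OpenAI Python client (v1.x) and a chat-capable model; the model name, prompt wording, and sample snippet are illustrative assumptions, not a prescribed configuration.

```python
# Hedged sketch of an LLM-backed vulnerability scan over a code snippet.
# Assumes the OpenAI Python client >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SNIPPET = """
char buf[16];
strcpy(buf, user_input);   /* C snippet under review */
"""

def flag_insecure_patterns(code: str) -> str:
    """Ask the model to list likely insecure patterns with suggested fixes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your organization has approved
        messages=[
            {"role": "system",
             "content": "You are a static-analysis assistant. List insecure code "
                        "patterns with CWE identifiers and a short remediation."},
            {"role": "user", "content": f"Review this code:\n{code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_insecure_patterns(SNIPPET))
```

The model's findings are a first-pass triage aid; they still need to be verified by a developer or a conventional static-analysis tool before anything is treated as a confirmed vulnerability.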
Generative AI models can also add valuable context to threat identifiers that might otherwise be missed by human security personnel. For example, ChatGPT can explain MITRE ATT&CK identifier T1059.001 and describe how PowerShell can be abused in cybersecurity attacks. Generative AI can also help build detection rules and assist with add-ons for reverse engineering frameworks such as IDA and Ghidra, making it useful for threat hunting queries, as sketched below.
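As a rough illustration of that workflow, the sketch below asks a chat model to explain the technique and draft a hunting rule; the model name and prompt are assumptions, and any generated rule is a starting point for an analyst, not a production detection.

```python
# Hedged sketch: turn a MITRE ATT&CK technique ID into context plus a draft hunting rule.
# Assumes the OpenAI Python client >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

technique = "T1059.001"  # Command and Scripting Interpreter: PowerShell

prompt = (
    f"Explain MITRE ATT&CK technique {technique} in two sentences, then "
    "draft a Sigma-style rule that flags encoded PowerShell command lines."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whichever your tenant exposes
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)  # review before loading anything into a SIEM
```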
Generative AI can also be used to address supply chain security risks by identifying potential vulnerabilities of vendors. SecurityScorecard launched a new security ratings platform to integrate OpenAI’s GPT-4 system and natural language global search. Customers can ask open-ended questions about their business ecosystem, including details about their vendors, and quickly obtain answers to drive risk management decisions.
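The general pattern behind that kind of natural-language search can be sketched as follows; this is not SecurityScorecard's API, and the vendor data, model name, and prompt are hypothetical stand-ins used only to show how risk data and an open-ended question might be combined.

```python
# Illustrative sketch of natural-language querying over vendor risk data.
# The ratings below are hypothetical; assumes the OpenAI Python client >= 1.0.
import json
from openai import OpenAI

client = OpenAI()

vendor_ratings = [  # hypothetical ratings exported from a risk platform
    {"vendor": "Acme Hosting", "grade": "C", "open_findings": 14},
    {"vendor": "Globex Payroll", "grade": "A", "open_findings": 1},
]

question = "Which vendors should we prioritize for a security review, and why?"

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system", "content": "Answer using only the supplied vendor data."},
        {"role": "user", "content": f"Data: {json.dumps(vendor_ratings)}\n\n{question}"},
    ],
)
print(answer.choices[0].message.content)
```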
LLMs can not only generate text but also detect and watermark AI-generated text, a capability that could become a common feature of email protection software. It is also realistic to expect LLMs to spot atypical sender addresses or domains and help screen out phishing emails and polymorphic code.
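A minimal sketch of that kind of LLM-assisted phishing triage is shown below, assuming the same OpenAI client as the earlier examples; the email, sender domain, model name, and verdict format are all illustrative, and in practice such a check would sit behind the mail gateway with human review of the output.

```python
# Hedged sketch of LLM-assisted phishing triage on a single message.
# Assumes the OpenAI Python client >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

email = {  # hypothetical message under review
    "from": "it-support@micr0soft-helpdesk.example",
    "subject": "Urgent: password expires in 1 hour",
    "body": "Click here to keep your account active: http://reset.example/login",
}

verdict = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system",
         "content": "Classify the email as PHISHING or BENIGN and list the "
                    "indicators (sender domain, urgency cues, suspicious links)."},
        {"role": "user", "content": str(email)},
    ],
)
print(verdict.choices[0].message.content)
```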
Generative AI chatbots and LLMs can strengthen security defenses over time, but whether they help rather than hurt an organization's cybersecurity posture ultimately comes down to internal communication and response. Harnessing their potential to improve cybersecurity is not without risk, and companies must carefully consider their implementation strategies and ensure their use is safe and secure. Regular updates and human oversight must be in place to ensure LLMs function correctly and live up to their potential to support organizational goals and defend the firm against cyber threats.
