In 2024, the cybersecurity landscape witnessed a significant uptick in AI-related threats, particularly targeting large language models (LLMs) like ChatGPT, Copilot, and Gemini. According to the annual “State of Cybercrime” report by KELA, discussions revolving around exploiting these models surged by 94% compared to the previous year, indicating a disturbing trend of cybercriminals harnessing advanced AI tools for nefarious purposes.
One concerning development highlighted in the report is the emergence of new jailbreaking techniques being developed and shared on underground forums such as HackForums and XSS. These techniques are designed to bypass the safety mechanisms of LLMs and enable the creation of malicious content like phishing emails and malware code. Among these techniques, word transformation stands out as one of the most effective methods, successfully evading detection in 27% of safety tests. By replacing sensitive words with synonyms or breaking them into substrings, cybercriminals are able to circumvent security measures, posing a significant challenge for cybersecurity experts.
The report also sheds light on the alarming increase in compromised accounts on popular LLM platforms. ChatGPT experienced a staggering rise in compromised accounts, jumping from 154,000 in 2023 to 3 million in 2024, marking a growth of nearly 1,850%. Similarly, Gemini witnessed a surge from 12,000 to 174,000 compromised accounts, reflecting a 1,350% increase. These stolen credentials, often obtained through infostealer malware, can be exploited to further compromise LLMs and their associated services, underscoring a serious cybersecurity threat.
Moreover, the report identifies emerging threats such as prompt injection and agentic AI. Prompt injection is singled out as a critical menace against generative AI applications, while agentic AI introduces a new attack vector with its autonomous decision-making capabilities. To counter these evolving risks, organizations are advised to implement robust security measures, including secure LLM integrations and advanced deepfake detection technologies. As AI-powered cyber threats continue to evolve, proactive threat intelligence and adaptive defense strategies are deemed essential to upholding security standards in the digital realm.
In conclusion, the proliferation of AI-related threats in the cybersecurity landscape demands heightened vigilance and innovative defense strategies from organizations and cybersecurity professionals. By staying ahead of malicious actors and continuously adapting to new threats, the resilience of digital ecosystems can be safeguarded against the growing sophistication of cyber threats. The call for proactive security measures and collaboration between stakeholders remains imperative in fortifying defenses and preserving the integrity of AI technologies amid an increasingly hostile cyber environment.

