Nation-State Hackers Exploit Gemini AI Tool

Nation-state threat actors have been identified as frequent abusers of Google’s generative AI tool, Gemini, in support of their malicious cyber operations. An analysis by the Google Threat Intelligence Group (GTIG) found that APT groups from Iran, China, Russia, and North Korea are using the large language model (LLM) for a range of malicious activities, chiefly research, vulnerability exploitation, malware development, and the creation and localization of content such as phishing emails.

The GTIG has not yet observed any original or persistent attempts by nation-state threat actors to use prompt attacks or other AI-specific threats; so far, the tool has primarily been used to enhance productivity. There have been only a “handful” of unsuccessful attempts to bypass Gemini’s safety controls using publicly available jailbreak prompts, and in each case Gemini returned safety fallback responses and refused to follow the threat actors’ instructions.

The GTIG researchers noted that, rather than enabling disruptive change, generative AI currently allows threat actors to operate at a faster pace and with greater volume. However, with new AI models and agentic systems emerging almost daily, they anticipate that threat actors will evolve their use of AI accordingly.

Among Iranian government-backed actors, APT42 accounted for the heaviest Gemini use. The group relied on Gemini for reconnaissance on potential targets, vulnerability research, and crafting legitimate-looking phishing emails. Iranian actors targeted defense experts and organizations, foreign governments, and individual dissidents.

Chinese APT groups used Gemini mainly for reconnaissance of US military and IT organizations. They also turned to the tool for help with initial compromise, post-compromise activity, and solving technical challenges; one PRC-backed group, for instance, asked Gemini how to silently deploy a Microsoft Outlook plugin to every computer in an environment.

North Korean state actors used Gemini across multiple stages of the attack lifecycle, including researching how to compromise Gmail accounts and other Google services. They also leveraged Gemini to support IT worker schemes that generate revenue for the DPRK government, and some North Korean APT groups attempted to use it for development and scripting tasks, such as writing sandbox-evasion code.

Russian nation-state groups showed limited engagement with Gemini compared with other nations; observed uses included rewriting malware into another programming language and adding encryption functionality to existing code. The low engagement may reflect Russian actors avoiding Western-controlled platforms such as Gemini to evade monitoring of their activities, opting instead for AI tools from Russian firms or locally hosted LLMs.

Overall, the abuse of generative AI tools by nation-state threat actors represents a significant cybersecurity risk that could lead to more sophisticated and widespread malicious activity in the future. The continued evolution of AI technology will shape the landscape of cyber warfare, requiring constant vigilance and innovation in defense mechanisms to counter these threats effectively.
