
Google’s AI Boosting Cyber Operations for Multiple Hacker Groups


Google, a prominent player in artificial intelligence, is best known in the AI space for its advanced Gemini models, which are used across academia, professional settings, and everyday life. However, recent reports indicate that malicious actors worldwide are leveraging Google's AI services to enhance their cyber operations.

The Google Threat Intelligence Group (GTIG) released a blog post detailing the detection of more than 57 malicious groups, tied to countries including China, Iran, North Korea, and Russia, using Gemini in their attacks. Many of these groups have connections to government agencies, raising national security concerns. While the rise of DeepSeek, a popular AI tool, has caught the attention of US officials, the abuse of established AI platforms by the wrong hands poses a significant threat of its own.

These attackers, identified as Advanced Persistent Threat (APT) groups, have turned to Google’s AI to streamline their operations. According to the GTIG report, APTs have utilized Gemini for tasks ranging from coding and payload development to reconnaissance and post-compromise activities like defense evasion. Iranian group APT42 stands out as the heaviest user of Gemini, targeting NGOs, media outlets, academic institutions, and activist groups globally.

Chinese APT groups use Gemini for activities such as reconnaissance, coding, and exploiting network vulnerabilities. Russian APT groups, for their part, rely on Gemini to add encryption to malicious code and rewrite it in other programming languages. North Korean APT groups have also employed Google's AI to infiltrate Western IT companies through remote job applications, showcasing their sophisticated cyber tactics.

Furthermore, Google has identified restriction-free LLMs (Large Language Models) that bypass ethical and security safeguards. Available on underground forums, tools such as WormGPT, WolfGPT, and FraudGPT are marketed for generating phishing emails, business email compromise (BEC) attack templates, and malicious websites. The GTIG is actively developing defenses against prompt injection attacks and emphasizes the importance of industry-government collaboration for national and economic security.

In conclusion, the exploitation of Google’s AI by malicious groups highlights the evolving landscape of cybersecurity threats. As technology continues to advance, it is crucial for companies, governments, and security experts to stay vigilant and collaborate in mitigating these risks effectively. Google’s efforts to defend against malicious activities demonstrate the ongoing commitment to safeguarding users and strengthening national security in the face of emerging cyber threats.
