HomeCII/OTZero-Click GenAI Worm Causes Malware Infections, Model Contamination

Zero-Click GenAI Worm Causes Malware Infections, Model Contamination

Published on

spot_img

Researchers have discovered a new method for spreading malware through generative AI (GenAI) apps like ChatGPT by utilizing clever prompt engineering and injection techniques. This worm, named “Morris II,” is capable of tricking AI models into replicating malicious prompts as output, allowing the malware to propagate to further AI agents.

In a controlled experiment, a team of Israeli researchers demonstrated how attackers could design “adversarial self-replicating prompts” to deceive generative AI models. These prompts can be used for various malicious activities such as stealing information, spreading spam, and poisoning models. The researchers created an email system that operates using generative AI and sent a prompt-laden email to showcase how the malware could contaminate the system’s database and force it to exfiltrate sensitive data.

Additionally, the researchers showed how an adversarial prompt could be encoded in an image to coerce the email assistant into forwarding the poisoned image to other hosts. This method enables attackers to automatically propagate spam, propaganda, malware payloads, and other malicious instructions through a chain of AI-integrated systems.

The emergence of malware targeting AI models reflects the continuation of traditional security threats in a new technological landscape. Andrew Bolster, senior R&D manager for data science at Synopsys, compares these attacks to well-known injection methods like SQL injection, emphasizing the importance of safeguarding AI systems against malicious inputs.

The researchers drew parallels between the Morris worm, a self-propagating malware from 1988, and modern AI malware. Just as the Morris worm exploited vulnerabilities to influence computer functions, hackers today leverage GenAI prompts to manipulate AI systems. Bolster suggests that developers may need to break up AI models into smaller components to separate data and control aspects, mitigating the risk of malicious inputs affecting system behavior.

Moving forward, the shift towards a distributed multiple agent approach in AI development could enhance security by implementing runtime content gateways and constraints on AI functionalities. By restructuring AI systems and incorporating strict protocols to distinguish between user input and machine output, developers aim to fortify AI models against exploitation and propagation of malware. This strategic approach mirrors the evolution from monolithic software architectures to microservices, enabling better control and monitoring of AI functionalities.

Source link

Latest articles

The Rising Tide of Software Supply Chain Attacks

Software supply chain attacks have become a significant concern in recent years. According to...

DragonRank SEO Manipulator Targets Asia and Europe

A cyber threat named "DragonRank" has been discovered by Cisco Talos, targeting countries across...

TfL Confirms Customer Data Breach, Arrest of 17-Year-Old Suspect

Transport for London (TfL) has disclosed that a breach in its systems has led...

Innovator Spotlight: Lineaje from Cyber Defense Magazine

Attending Black Hat 2024 with Cyber Defense Magazine proved to be an enlightening experience...

More like this

The Rising Tide of Software Supply Chain Attacks

Software supply chain attacks have become a significant concern in recent years. According to...

DragonRank SEO Manipulator Targets Asia and Europe

A cyber threat named "DragonRank" has been discovered by Cisco Talos, targeting countries across...

TfL Confirms Customer Data Breach, Arrest of 17-Year-Old Suspect

Transport for London (TfL) has disclosed that a breach in its systems has led...
en_USEnglish