
OpenAI reveals how threat actors are exploiting ChatGPT


Nation-state actors are using ChatGPT for malicious activities such as malware debugging, according to a report OpenAI released on Wednesday. The report, titled “Influence and Cyber Operations: An Update,” detailed how threat actors have leveraged OpenAI models, primarily ChatGPT, to carry out various threat activities, as well as how OpenAI intervened to disrupt them. According to the report, OpenAI has thwarted “more than 20 operations and deceptive networks worldwide that attempted to exploit our models” since the beginning of 2024.

The report highlighted a range of threat actor use cases, from the common practice of generating spear phishing emails to more novel approaches. For instance, an Iranian threat actor known as Storm-0817 used OpenAI models to help develop and debug basic Android malware and its associated command-and-control infrastructure. Storm-0817 also used ChatGPT to create an Instagram scraper, translate LinkedIn profiles into Persian, and perform other tasks.

Moreover, a suspected China-based threat actor named “SweetSpecter” launched unsuccessful phishing attacks against OpenAI and used ChatGPT to debug code for a cybersecurity tool extension and a framework for sending malicious text messages. The report also documented activity from a group called “CyberAv3ngers,” linked to the Iranian Islamic Revolutionary Guard Corps, which targeted water utilities last year. CyberAv3ngers used ChatGPT for debugging, vulnerability research, scripting advice, and gathering specific information on industrial protocols and internet-exposed ports.

While the use of generative AI in developing malware is not a new phenomenon, OpenAI’s report indicated how threat groups are capitalizing on tools like ChatGPT to enhance their tactics, techniques, and procedures. The report also shed light on the use of ChatGPT in influence operations, particularly in the context of election threats. It referenced a previous OpenAI report from August, which detailed how an Iranian influence operation called Storm-2035 used ChatGPT to generate content for social media and websites on subjects like the U.S. presidential election and the conflict in Gaza.

Despite these uses of AI for malicious activity, threat actors have so far not achieved significant breakthroughs in creating new malware or building viral audiences, according to OpenAI’s assessment. The report cited instances of political influence campaigns generating low-engagement content in various languages, including Russian and Turkish. OpenAI disrupted these campaigns by banning the accounts involved.

The report noted one exception: a Russian-speaking user on X (formerly Twitter) posted what appeared to be an AI-generated comment during an argument about Donald Trump, claiming to have run out of ChatGPT credits. The post drew high engagement, but it was later found to have been written manually to attract attention on social media rather than actually generated by AI.

TechTarget Editorial reached out to OpenAI for further insights on the matter, but the company had not responded at the time of publication. The evolving landscape of cyber threats and the adoption of AI tools by threat actors continue to pose challenges for cybersecurity experts and organizations worldwide.

In conclusion, the use of AI, specifically ChatGPT, by nation-state actors for malicious activities underscores the need for greater vigilance and proactive measures to combat the evolving threat landscape in cyberspace. OpenAI’s efforts in monitoring and disrupting such activities demonstrate the importance of ongoing research and collaboration in addressing cybersecurity challenges posed by advanced technologies.
