
ChatGPT Users Vulnerable to Credential Theft


New findings from cybersecurity firm Group-IB have raised concerns about the growing number of compromised ChatGPT accounts and the potential for threat actors to exploit that access to gather sensitive information for targeted attacks. ChatGPT, a chatbot developed by OpenAI, has become a popular tool for individuals and organizations alike. However, its default behavior of storing past user queries and AI responses within each account has made those accounts attractive targets.

Group-IB’s recent report highlighted the growing threat to ChatGPT credentials over the past year. The stored information within these accounts can be a gateway for threat actors to gain unauthorized access and obtain users’ personal and professional data, leaving individuals at risk of identity theft, financial fraud, targeted scams, and other malicious activities. Dmitry Shestakov, the head of threat intelligence at Group-IB, emphasized the potential dangers and urged caution.

To conduct their research, Group-IB's team analyzed 101,134 infected devices containing ChatGPT data stolen by information stealers. By monitoring dark web communities and illicit marketplaces, researchers were able to identify compromised ChatGPT credentials. The Asia-Pacific region accounted for the largest share of affected accounts, though compromises were observed worldwide. The number of compromised accounts rose steadily over the period studied, peaking at 26,802 in May 2023.

The primary malware used to compromise ChatGPT credentials is known as "Raccoon," a notorious information stealer that has drawn significant attention in the cybersecurity community. In March 2022, Mark Sokolovsky, a Ukrainian national, was apprehended in the Netherlands for operating Raccoon as a malware-as-a-service offering. He was later indicted by the U.S. Department of Justice for his involvement in cybercrime activities.

Information stealer malware such as Raccoon is designed to extract credentials saved in infected web browsers, including sensitive data like bank card details and cryptocurrency wallet information. The stolen data is typically compiled into a log file, which threat actors then sell or exploit for their own malicious activities.

Following the publication of Group-IB’s report, OpenAI, the creators of ChatGPT, issued a statement clarifying that the compromised accounts were not the result of a breach within their system. OpenAI attributed the issue to commodity malware present on users’ devices. They assured users that the company adheres to industry best practices for user authentication and authorization. OpenAI urged users to protect themselves by using strong passwords and only installing verified and trusted software on their personal computers.

The implications of compromised ChatGPT accounts extend beyond personal risk. With the integration of the chatbot into workplaces, the potential for exploitation of confidential data increases. Employees, including management and sales agents, often use ChatGPT to enhance their communication, such as drafting outgoing emails. If these emails contain valuable and proprietary information, cybercriminals can reap significant rewards by gaining unauthorized access.

Moreover, organizations have integrated ChatGPT into their development workflows, using the AI to support developers in areas such as code review and refinement. However, this integration also poses risks. If developers inadvertently submit complete code that includes embedded service credentials, a compromised account can expose those credentials and, with them, the wider infrastructure, opening the door to future security breaches.
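One practical safeguard against this leakage path is to scan code for credential-like strings before pasting it into any external AI tool. The sketch below is purely illustrative: the pattern names and the `find_secrets` helper are inventions for this example (real scanners such as gitleaks or truffleHog use far larger rule sets), but the regex-based approach is the standard technique.

```python
import re

# Illustrative patterns only; production secret scanners ship hundreds of rules.
SECRET_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics.
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic "name = 'value'" assignments where the name suggests a credential.
    "Credential assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token|passwd|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
    # PEM private key headers.
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(source: str) -> list[tuple[str, str]]:
    """Return (rule label, matched text) pairs for anything credential-like."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((label, match.group(0)))
    return hits

if __name__ == "__main__":
    snippet = 'db_password = "hunter2-prod-2023"\nkey = "AKIAABCDEFGHIJKLMNOP"'
    for label, matched in find_secrets(snippet):
        print(f"{label}: {matched}")
```

A check like this, run as a pre-commit hook or a clipboard filter, catches the most common leak before the code ever leaves the developer's machine.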

To mitigate the risk of unauthorized access, Group-IB researchers recommend implementing multiple layers of security: strong passwords, multifactor authentication, and regularly updated software. Despite the arrest of Mark Sokolovsky and the resulting disruption to Raccoon's operations, information stealers remain an ongoing threat. Group-IB identified over 96 million logs actively being sold on underground markets between July 2021 and June 2022. Shestakov predicted that ChatGPT accounts would remain a target and that the tactics cybercriminals use to extract personal data would continue to evolve.

As the popularity of ChatGPT grows, cybersecurity experts anticipate that more accounts will appear in information stealer logs. Cybercriminals are increasingly leveraging stolen credentials to launch sophisticated cyber attacks, including ransomware attacks. As a result, it is crucial for users to remain vigilant and take appropriate security measures to protect their accounts and personal information.

In conclusion, the compromise of ChatGPT accounts has become a significant concern due to the access it provides to users’ sensitive information. Group-IB’s research highlights the need for heightened security and awareness among ChatGPT users. OpenAI and cybersecurity experts emphasize the importance of implementing strong security measures to mitigate the risks associated with compromised credentials. As the threat landscape evolves, it is essential for individuals and organizations to stay informed and proactive in protecting themselves against potential cyber threats.

