Hackers have found a new tool in their cyber arsenal: the DeepSeek and Qwen AI models, whose capabilities they are exploiting to develop advanced malware. These models, known for their ability to generate complex content, have become attractive to cybercriminals because they impose fewer restrictions than more established models such as ChatGPT.
With only minimal safeguards in place, even low-skilled hackers can bypass basic security measures and create harmful content using readily available scripts and tools. The lack of robust anti-abuse mechanisms in these newer AI models makes them especially vulnerable to misuse, giving cybercriminals an easy path to turning their capabilities toward malicious ends.
One technique cybercriminals are using is jailbreaking: crafting prompts that manipulate the models into producing uncensored or unrestricted output. By instructing a model to override its previous instructions, or to help defeat protections in banking applications, attackers can have it generate dangerous tools such as infostealers. These tools are designed to extract personal information like login credentials and financial details, posing a serious threat to individuals and organizations alike.
Researchers at Check Point have found that cybercriminals are using these models to create scripts that bypass fraud-protection systems, with financial institutions a particular target. By extracting sensitive data from unsuspecting users while evading traditional security controls, attackers can exploit weaknesses in those systems to carry out large-scale financial theft. This highlights the growing risk AI models pose to industries that depend on secure online transactions.
Beyond data theft, hackers are also using models like Qwen and DeepSeek to streamline spam distribution. By automating the sending of malicious emails and messages, cybercriminals can reach far more victims with far less effort. The widespread misuse of AI in cybercrime underscores the need for organizations to strengthen their defenses against these evolving, AI-powered attacks.
As AI use in cybercrime continues to grow, organizations must proactively develop measures to detect and mitigate the risks these advanced models introduce. By staying ahead of the curve and strengthening cybersecurity protocols, businesses can better protect themselves and their customers from the ever-evolving tactics of criminals exploiting AI for nefarious purposes.

