CyberSecurity SEE

DeepSeek AI Fails Several Security Tests

A recent study conducted by AppSOC revealed alarming results regarding the security of the Chinese generative AI model DeepSeek-R1. Across a battery of 6,400 security tests, the model failed at alarming rates, highlighting a widespread lack of guardrails and susceptibility to critical vulnerabilities.

Researchers at AppSOC found that DeepSeek-R1 performed poorly in various key areas, including jailbreaking, prompt injection, malware generation, supply chain issues, and toxicity. The failure rates ranged from 19.2% to 98%, indicating significant weaknesses in the model’s security features.

One of the most concerning findings was the model’s ability to generate malware and viruses, with a failure rate of 98.8% for malware creation and 86.7% for virus code generation. These high failure rates pose a significant threat to enterprise users and provide malicious actors with new avenues for exploiting vulnerabilities in business applications.

Mali Gorantla, co-founder and chief scientist at AppSOC, warned organizations against using the current version of DeepSeek for enterprise applications due to its lackluster performance in security tests. Gorantla emphasized that even a 2% failure rate is considered unacceptable for most enterprise applications, making DeepSeek unsuitable for business-related AI use.

AppSOC's overall security assessment gave DeepSeek a high-risk score of 8.3 out of 10. The researchers advised organizations to refrain from using the model for applications involving personal information, sensitive data, or intellectual property due to the vulnerabilities identified during testing.

AppSOC utilized model scanning and red teaming to evaluate DeepSeek’s security risks in critical categories such as jailbreaking, prompt injection, malware creation, supply chain issues, and toxicity. The model showed failure rates of 19.2% or higher in most categories, with a median failure rate of 46%; even the lower rates, the researchers noted, still represent significant security threats.

Despite the initial excitement surrounding DeepSeek’s release and claims of exceptional performance and efficiency, the model has faced criticism and controversy. Researchers were able to jailbreak DeepSeek shortly after its release, exposing operational instructions, and there have been allegations of intellectual property theft and malicious campaigns targeting the model.

Organizations considering the use of DeepSeek for business applications should proceed with caution and implement strict security measures to mitigate risks. Gorantla recommended using discovery tools to audit models within the organization, scanning for vulnerabilities before deployment, and continuously monitoring AI systems for security weaknesses. By taking these proactive steps, organizations can protect themselves from potential threats posed by using vulnerable AI models like DeepSeek.
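As a rough illustration of the kind of metric cited in AppSOC's findings, the sketch below computes per-category failure rates from red-team test results. The categories, sample counts, and function names are hypothetical and for illustration only; they are not AppSOC's actual tooling or data.

```python
# Hypothetical sketch: aggregating red-team test outcomes into
# per-category failure rates (percentages). Illustrative only;
# not AppSOC's methodology or data.
from collections import defaultdict

def failure_rates(results):
    """results: iterable of (category, passed) tuples, where
    passed is True if the model resisted the attack prompt.
    Returns {category: failure rate as a percentage}."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for category, passed in results:
        totals[category] += 1
        if not passed:
            failures[category] += 1
    return {c: 100.0 * failures[c] / totals[c] for c in totals}

# Illustrative run: a model that fails most malware-generation
# prompts but resists most toxicity prompts.
sample = (
    [("malware", False)] * 49 + [("malware", True)] * 1
    + [("toxicity", False)] * 2 + [("toxicity", True)] * 8
)
rates = failure_rates(sample)
print(rates)  # {'malware': 98.0, 'toxicity': 20.0}
```

A continuous-monitoring pipeline of the sort Gorantla describes would re-run such a suite on each model version and alert when any category's rate crosses an acceptance threshold (he cites even 2% as unacceptable for most enterprise applications).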
