CyberSecurity SEE

Threat Actor Offers Tools for Bypassing Two-Factor Authentication to ProKYC Customers

Threat actors have been exploiting a newly uncovered deepfake tool, ProKYC, to circumvent two-factor authentication on cryptocurrency exchanges. The tool is designed for New Account Fraud (NAF) attacks, enabling threat actors to create synthetic accounts whose identities are verified by spoofing facial recognition checks.

The ramifications of this exploit are significant, as threat actors can then engage in activities such as money laundering, creating mule accounts, and other fraudulent practices. In 2023 alone, losses from such attacks exceeded $5.3 billion. The sophistication of ProKYC underscores the increasing threat that deepfake technology poses to financial institutions around the world.

Emerging artificial intelligence-powered tools have significantly enhanced cybercriminals’ ability to bypass multi-factor authentication (MFA) by generating meticulously forged documents. In the past, fraudsters relied on low-quality scanned documents obtained from the dark web. However, with AI-driven tools, cybercriminals can now forge highly detailed documents that closely resemble authentic ones, making it easier to deceive security systems and gain unauthorized access to sensitive information.

ProKYC’s deepfake tool, available for purchase on the dark web, takes advantage of deep learning technology to outsmart authentication processes by generating counterfeit documents and realistic videos of fictitious identities. By doing so, cybercriminals can bypass facial recognition systems with ease. The effectiveness of this tool is evident in its ability to sidestep ByBit’s security measures, posing a grave threat to online platforms by undermining their authentication protocols and enabling fraudulent activities.

To carry out a new account fraud attack, the attacker uses AI-generated deepfakes to create a synthetic identity, complete with forged government documents such as an Australian passport, along with a video designed to defeat facial recognition checks. The video mimics natural head movements and is fed into the verification system in place of a live camera feed, tricking the system into approving the fraudulent account.
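Because the forged video is injected in place of a live camera feed, one common defensive heuristic is to check whether the reported capture device is a known virtual-camera driver. The sketch below is illustrative only: the device names and matching logic are assumptions, not ProKYC-specific detections, and a real system would rely on platform-level device enumeration and attestation rather than string matching.

```python
# Hypothetical server-side heuristic: flag verification sessions whose
# reported camera device name matches a known virtual-camera driver.
# The driver names below are illustrative examples, not an exhaustive list.

KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "e2esoft vcam",
}

def is_suspected_injection(device_name: str) -> bool:
    """Return True if the camera device name looks like a virtual camera."""
    name = device_name.strip().lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)
```

A match here would not be proof of fraud on its own, since virtual cameras have legitimate uses; it is one signal to combine with liveness checks.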

Detecting such attacks is challenging: biometric authentication controls that are too strict generate false positives for legitimate users, while controls that are too lax increase the risk of fraud. Telltale signs of digital forgery include unusually high image and video quality, inconsistencies in facial features, and unnatural eye and lip movements during biometric authentication.
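The trade-off described above is often handled by scoring several weak forgery signals together and escalating borderline cases to manual review rather than hard-failing them. The sketch below is a minimal illustration of that idea: the signal names, weights, and threshold are all hypothetical, and real systems would derive such scores from trained liveness-detection models.

```python
# Illustrative risk-scoring sketch combining the forgery signals mentioned
# above. All weights and the threshold are hypothetical placeholders.

def deepfake_risk_score(quality: float, landmark_inconsistency: float,
                        blink_irregularity: float) -> float:
    """Combine normalized 0-1 signals into a risk score (higher = riskier).

    quality: unusually high/clean video quality relative to the claimed device
    landmark_inconsistency: frame-to-frame facial landmark mismatch
    blink_irregularity: deviation of eye/lip motion from natural patterns
    """
    weights = (0.2, 0.4, 0.4)  # hypothetical weighting of the three signals
    signals = (quality, landmark_inconsistency, blink_irregularity)
    return sum(w * s for w, s in zip(weights, signals))

def should_escalate(score: float, threshold: float = 0.5) -> bool:
    """Route high-risk verifications to manual review instead of rejecting
    outright, balancing false positives against fraud risk."""
    return score >= threshold
```

Escalating instead of blocking is what lets the threshold stay aggressive without locking out legitimate users whose lighting or camera happens to trip a signal.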

According to insights from Cato Networks, organizations must be proactive in defending against AI threats by gathering threat intelligence from various sources, including human and open-source intelligence. As threat actors continually evolve their use of deepfake technologies and software, it is crucial to stay updated on the latest trends in cybercrime to effectively combat these threats.

In conclusion, the emergence of tools like ProKYC’s deepfake technology underscores the ever-evolving landscape of cybersecurity threats. Financial institutions and organizations must remain vigilant and adopt robust cybersecurity measures to safeguard against the growing sophistication of cybercriminal tactics.
