
AI-Enabled Voice Cloning Used to Fabricate Kidnapping Video


A recent incident involving a cybercriminal’s attempt to extort $1 million from a woman in Arizona has shed light on the growing danger of voice cloning enabled by artificial intelligence (AI). The criminal claimed to have kidnapped the woman’s daughter and used deepfake technology to create convincing audio recordings of the girl’s distress. This incident is just one example of how cybercriminals are exploiting AI tools to scam people.

In June, the FBI issued a warning to consumers about criminals using manipulated videos and photos to target individuals in extortion attempts. These scams, which often involve deepfake technology, have added a new twist to imposter scams and have resulted in billions of dollars in losses for US consumers.

Creating deepfake videos and audio is relatively easy for attackers, requiring only small samples of biometric content. Trend Micro, a cybersecurity company, reported that even a few seconds of audio pulled from social media platforms like Facebook, TikTok, and Instagram are enough for threat actors to clone someone’s voice. With a plethora of AI-enabled voice cloning tools available at minimal cost, cybercriminals face a low barrier to entry.

Moreover, the Dark Web provides threat actors with a vast amount of identity-containing data that can be correlated with publicly available information to identify potential targets for scams. Trend Micro researchers also mentioned the emergence of specific tools on the Dark Web that enable virtual kidnapping scams, allowing threat actors to refine their attacks.

AI tools like ChatGPT enable attackers to combine data from various sources to identify and target individuals for voice cloning and other scams. These tools allow imposters to generate automated conversations that can be used to deceive victims. Threat actors are also expected to use SIM-jacking — hijacking an individual’s phone number by fraudulently transferring it to a SIM card they control — in imposter scams like virtual kidnapping. By cutting the victim off from incoming calls, this tactic increases the chances of a successful ransom payout.

While cybercriminals are leveraging AI-enabled technologies for their scams, voice cloning vendors are becoming increasingly aware of the risks associated with their tools. Companies like ElevenLabs and Microsoft are considering additional measures to mitigate the misuse of their voice cloning technologies. Facebook parent company Meta has decided to exercise caution in making its generative AI speech tool, VoiceBox, generally available, citing concerns about potential misuse.

The incident involving the attempted extortion of a mother in Arizona highlights the urgent need for stronger cybersecurity measures to combat the growing threat of AI-enabled voice cloning. As cybercriminals continue to exploit these technologies, individuals and companies must remain vigilant and take steps to protect themselves from falling victim to scams. With the proliferation of deepfakes and other AI-driven cyber threats, it is crucial to stay informed and adopt robust security practices to safeguard against these risks.
