
How Deepfake Doctors Peddle Bogus Cures on TikTok


Beware of AI-Generated ‘TikDocs’ Exploiting Public Trust in Medical Advice to Sell Dubious Supplements

Generative AI, once the preserve of research laboratories, is now in the hands of everyday users, including those with nefarious intentions. Rather than harnessing these capabilities for creativity or positive innovation, some individuals are leveraging deepfake technology to deceive and mislead. The technology, which enables the creation of hyper-realistic video, images, and audio, has moved beyond celebrity impersonations and attempts to sway public opinion into outright malicious applications, including identity theft and a range of scams.

Recent investigations by ESET researchers in Latin America have uncovered a troubling trend on social media platforms like TikTok and Instagram. Here, AI-generated avatars are masquerading as medical professionals—gynecologists, dietitians, and other healthcare experts—to promote questionable supplements and wellness products. These videos, crafted with meticulous polish and apparent sincerity, blur the lines between genuine medical guidance and persuasive marketing tactics. By wrapping their sales pitches in a veneer of professional authority, these deepfake avatars effectively exploit the public’s trust in the medical profession, turning ethical practices into avenues for unethical profit.

The Structure of Deception

Typically, these videos adhere to a strikingly similar formula: a talking avatar occupies a corner of the screen while delivering health and wellness tips that appear authoritative. The advice often leans heavily on embracing “natural” remedies, subtly guiding viewers toward specific products for sale. The tactic not only capitalizes on the trust placed in medical experts but does so in a manner that is both highly unethical and disturbingly effective.

In one alarming instance, a deepfake “doctor” promotes a “natural extract” as a superior alternative to Ozempic, a medication known for its weight loss benefits. The avatar’s persuasive pitch promises remarkable results, all while directing viewers to an Amazon page where the product is labeled as “relaxation drops” or “anti-swelling aids,” without any legitimate connection to the touted benefits.

Moreover, some deepfake avatars have escalated their deceptions by endorsing unapproved medications or offering false cures for serious illnesses, occasionally even hijacking the appearance of well-known, real-life doctors. This level of impersonation raises significant ethical concerns and endangers public health by promoting harmful falsehoods.

AI’s Role in the Misleading Landscape

These videos are produced with legitimate AI tools that let users turn short clips into sleek talking avatars. While this technology can help content creators and influencers enhance their output, it also presents a significant risk, turning an innocuous marketing tool into a conduit for misinformation and deception.

ESET’s findings revealed that over 20 TikTok and Instagram accounts are currently utilizing these deepfake avatars to push their dubious health products. One case featured a “gynecologist” claiming 13 years of experience, a persona directly traced back to a common app’s avatar library. This misuse, while clearly violating the terms and conditions of these AI tools, starkly illustrates how accessible technology can be weaponized for deceit.

The implications extend beyond simply being conned into purchasing worthless supplements; the danger lies in undermining public confidence in sound medical advice. Such videos can promote detrimental “cures” or remedies and complicate the pursuit of appropriate medical treatment.

Recognizing and Combating Deepfake Deception

As artificial intelligence continues to evolve, identifying these deceptive videos becomes increasingly complex, posing challenges even for individuals adept in technology. However, there are several indicators to aid in detection:

  • Mismatched lip movements and unnatural facial expressions are common giveaways.
  • Look for visual anomalies, such as blurred edges or abrupt lighting changes.
  • A robotic or overly polished voice often indicates a lack of authenticity.
  • Check the account history: new profiles with minimal followers or no substantial background are suspicious.
  • Be wary of hyperbolic claims featuring “miracle cures” or “guaranteed results,” particularly if they lack credible validation.
  • Always verify information against trusted medical resources and refrain from sharing suspect content. Additionally, report dubious videos to the platform.

As AI technology advances, distinguishing authentic content from fabricated media is likely to become an ever-growing challenge. This reality underscores the critical necessity of not only establishing robust technological safeguards but also enhancing collective digital literacy. Such measures will empower individuals to better navigate the landscape of misinformation and scams, ultimately protecting their health and financial well-being against potentially harmful deceptions.
