Attackers Utilize AI-Based Facebook Ad Lures to Hijack Business Accounts

A recent report reveals that a threat actor has been abusing paid Facebook ads to lure victims with the promise of AI technology. The goal of the scheme is to spread a malicious Chrome browser extension that steals users’ credentials so the attacker can take over their business accounts.

The fraudulent pages and ads have since been removed by Meta, Facebook’s parent company, after Trend Micro reported the suspicious activity. Senior threat researchers from Trend Micro, Jindrich Karasek and Jaromir Horejsi, shared their findings in a blog post.

The ads used in this campaign feature fake profiles of marketing companies or departments that claim to utilize AI to enhance productivity, increase reach and revenue, or offer assistance with teaching. Some of these deceptive ads even mention access to the conversational AI chatbot Google Bard, which is currently only available on a limited basis, in an attempt to attract victims.

The researchers noted certain telltale signs that can help identify fake profiles. These signs include purchased or bot followers, fake reviews left by other hijacked or inauthentic profiles, and a limited online history.

The primary targets of this campaign are business social networking managers or administrators and marketing specialists, since these individuals often hold administrative roles for a company’s social networking presence. In one attack, a Trend Micro researcher assisting with incident response observed the threat actor adding suspicious users to the victim’s Meta Business Manager. The threat actor never attempted to contact the victim but spent the victim’s prepaid promotion budget to push their own content, demonstrating an intent to exploit stolen accounts for malicious purposes.

The operation works by enticing Facebook users to click on the ads, redirecting them to a simple website that highlights the advantages of using large language models (LLMs) and provides a link to download the alleged “AI package.” To avoid detection by antivirus software, the attacker distributes the package as an encrypted archive, usually hosted on cloud storage platforms like Google Drive or Dropbox, using simple passwords such as “999” or “888.”
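
For defenders, one practical signal in this delivery chain is the password-protected archive itself, since legitimate software is rarely distributed that way. The Python sketch below is a hypothetical triage helper (not part of Trend Micro’s tooling) that flags downloaded ZIP files containing encrypted members and tests the trivial passwords reported in this campaign; the file name and password list are illustrative assumptions.

```python
import zipfile

# Trivial passwords reported in this campaign; extend as needed (assumption:
# archives are ZIPs -- the same idea applies to RAR/7z with other libraries).
COMMON_LURE_PASSWORDS = [b"999", b"888"]

def inspect_archive(path: str) -> None:
    """Flag ZIP archives with encrypted members (a common AV-evasion trick)
    and report whether one of the trivial campaign passwords opens them."""
    with zipfile.ZipFile(path) as zf:
        encrypted = [info.filename for info in zf.infolist() if info.flag_bits & 0x1]
        if not encrypted:
            print(f"{path}: no encrypted members")
            return
        print(f"{path}: {len(encrypted)} encrypted member(s) -- treat as suspicious")
        for pwd in COMMON_LURE_PASSWORDS:
            try:
                with zf.open(encrypted[0], pwd=pwd) as member:
                    member.read(16)  # read a few bytes to confirm the password works
                print(f"{path}: opens with trivial password {pwd.decode()!r}")
                return
            except (RuntimeError, zipfile.BadZipFile):
                continue
        print(f"{path}: password not in the trivial list")

if __name__ == "__main__":
    inspect_archive("ai_package.zip")  # hypothetical downloaded file name
```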

Once the package is opened and decrypted with the correct password, it typically contains a single MSI installer file. This file drops various files linked to a Chrome extension that aims to steal Facebook cookies, the user’s access token, the browser’s user agent, as well as the user’s managed pages, business account information, and advertisement account information. The extension also attempts to access the user’s IP address.
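
This kind of data access is only possible because of the permissions the extension requests: a Chrome extension typically needs the "cookies" permission plus host access to facebook.com to harvest session cookies through the extension API. The Python sketch below is a rough, hypothetical audit script (not from the report) that inspects an unpacked extension’s manifest.json and flags permission combinations consistent with this kind of session theft; the risk lists and paths are assumptions for illustration.

```python
import json
from pathlib import Path

# Heuristic lists (assumptions, not from the report): permissions and host
# patterns a cookie/session-stealing extension typically requests.
RISKY_PERMISSIONS = {"cookies", "webRequest", "tabs", "storage"}
RISKY_HOST_HINTS = ("facebook.com", "<all_urls>", "*://*/*")

def audit_manifest(extension_dir: str) -> None:
    """Print a rough risk summary for an unpacked Chrome extension."""
    manifest = json.loads(Path(extension_dir, "manifest.json").read_text())
    perms = set(manifest.get("permissions", []))
    # Manifest V3 lists host access separately; V2 mixed it into "permissions".
    hosts = list(manifest.get("host_permissions", [])) + [
        p for p in perms if "://" in p or p == "<all_urls>"
    ]
    flagged_perms = sorted(perms & RISKY_PERMISSIONS)
    flagged_hosts = [h for h in hosts if any(hint in h for hint in RISKY_HOST_HINTS)]
    if flagged_perms or flagged_hosts:
        print(f"{extension_dir}: review manually")
        print(f"  permissions: {flagged_perms}")
        print(f"  host access: {flagged_hosts}")
    else:
        print(f"{extension_dir}: no obvious cookie-theft permissions")

if __name__ == "__main__":
    audit_manifest("unpacked_extension")  # hypothetical path to the dropped files
```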

This campaign is notable for exploiting the growing interest in AI technology. The Trend Micro researchers highlighted that while early adoption of AI can provide a competitive advantage in various industries, it also presents opportunities for cybercriminals. In a similar campaign discovered earlier this year, attackers concealed the RedLine Stealer behind seemingly legitimate sponsored ads on compromised Facebook business and community pages that promoted free downloads of AI chat apps.

A report released by Deep Instinct underscores how widely generative AI is seen as beneficial: 70% of surveyed security professionals say it enhances employee productivity and collaboration, and 63% believe it improves employee morale.

In addition to removing the fraudulent ads and pages, Meta has assured Trend Micro that it will continue strengthening its detection systems to identify similar deceptive ads and pages, benefiting from insights gained from both internal and external threat research. Trend Micro advises deploying an antivirus solution with web reputation services as a preventative measure against such threats. They also emphasize the importance of scanning files downloaded from the internet and staying vigilant against threat actors who exploit the hype surrounding new AI developments.

To avoid falling victim to this type of campaign, individuals should be cautious of specific warning signs. These include a visually appealing landing site containing a link to a malicious file, a promise of access to Google Bard despite its limited availability, an overly attractive service offering, inconsistencies in promotional posts, and the presence of a password-protected file on the landing site. By remaining vigilant and following these precautions, users can better protect themselves from such malicious schemes.
