Threat actors have recently been using artificial intelligence to create decoy ads that outsmart malvertising-detection systems on the Google Ads platform, security researchers report. The technique involves purchasing Google Search ads and using AI to generate landing pages with unique, non-malicious content. These decoy pages exist to funnel visitors toward phishing sites where sensitive data, including credentials, can be stolen.
Malvertising campaigns, in which malicious ads are designed to appear legitimate and lure unsuspecting users, have been on the rise. These ads often impersonate well-known brands and present content that looks genuine but redirects users to phishing pages or delivers malware to their devices. While consumers are the traditional targets of malvertising, recent campaigns have shifted toward corporate users; examples include attempts to plant the Lobshot backdoor on corporate systems and a phishing campaign aimed at Lowe's employees.
Researchers at Malwarebytes have observed an increase in fake content created purely for deception, known in the criminal underground as "white pages." These visually convincing decoys hide the malicious activity behind them, fooling detection engines with seemingly harmless content. The trend has grown since Microsoft's 2022 decision to block macros by default in Office files downloaded from the internet, which pushed threat actors toward alternative malware-distribution methods such as malvertising.
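The cloaking idea behind white pages can be illustrated with a short sketch: the attacker's server decides per request whether to serve the benign decoy or the real phishing page. This is a minimal, hypothetical illustration of the general technique; every identifier, user-agent string, and IP prefix below is invented for the example and does not describe any specific campaign.

```python
# Hypothetical sketch of the server-side "cloaking" decision behind a
# white page: suspected scanners get the benign decoy, likely victims
# get the phishing page. All values here are illustrative assumptions.

KNOWN_SCANNER_AGENTS = ("googlebot", "adsbot", "headlesschrome")
DATACENTER_PREFIXES = ("66.249.", "64.233.")  # example crawler IP ranges

def choose_page(user_agent: str, ip: str, referer: str) -> str:
    """Return 'white' (benign decoy) or 'black' (phishing page)."""
    ua = user_agent.lower()
    # Known crawler user agents are served the harmless decoy.
    if any(bot in ua for bot in KNOWN_SCANNER_AGENTS):
        return "white"
    # Requests from datacenter/crawler IP ranges look like scanners.
    if any(ip.startswith(prefix) for prefix in DATACENTER_PREFIXES):
        return "white"
    # Real victims arrive by clicking the ad; a request without the
    # ad-click marker suggests a direct scan, not a click-through.
    if "ad_click" not in referer:
        return "white"
    return "black"
```

The point of the sketch is why detection is hard: a scanner that fails any one of these checks only ever sees the AI-generated white page, so the malicious content never enters its analysis pipeline.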
Despite efforts by major ad platforms such as Google to combat malvertising, malicious actors continue to find ways to evade detection. A recent Malwarebytes study identified Amazon as the most spoofed brand in malvertising campaigns, indicating how widespread the threat has become. AI-generated decoy content adds a new layer of complexity, making it harder for security measures to identify and block malicious ads.
Malwarebytes provided examples of decoy ads spotted on Google Ads, including campaigns targeting users searching for specific apps such as Securitas OneID and Parsec. The ads led to pages whose content, from images to website design, was entirely AI-generated to appear authentic. While such pages may look blatantly fake to a human observer, their unique AI-generated content makes them difficult for automated detection systems to flag as malicious.
The use of AI in malvertising represents a new challenge for cybersecurity professionals as threat actors continue to innovate and adapt. As the volume of malvertising rises, organizations and individuals must remain vigilant and implement robust security measures against these evolving threats.
