
Act quickly to put a stop to employee curiosity about ‘free’ AI apps

The proliferation of free AI-themed apps aimed at employees seeking to enhance their work efficiency has become a significant security concern. Since the release of ChatGPT in late 2022, the market has been flooded with free AI apps, some of them created by malicious actors. One such example, reported by Malwarebytes, involves an AI video editor that actually installs the Lumma Stealer malware on users’ systems.

Promises such as “Create breathtaking videos in minutes” and “No special skills required” have lured unsuspecting victims into downloading the malicious app. Meanwhile, AI use in the enterprise keeps growing, with 81% of users reporting improved productivity, yet many companies still lack AI usage guidelines. That gap has given rise to Shadow AI: employees using unapproved AI applications without supervision.

To address this growing threat, CISOs must develop strategies to mitigate the risks posed by fake AI apps. A first step is for management to decide whether AI is permitted in the workplace at all and, if so, to establish clear guidelines for its use. IT departments should also prevent the installation of unapproved applications, for example by restricting administrator access and requiring approval for any new software installation (a sketch of such an audit follows).
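For illustration, here is a minimal Python sketch of the kind of audit an IT team might run on a Windows endpoint to flag software that is not on an approved list, using only the standard-library winreg module. The allowlist entries are placeholders, and the script reads only the machine-wide uninstall hive; a real deployment would also cover the WOW6432Node and per-user hives, and would more likely rely on endpoint-management tooling than on an ad-hoc script.

```python
import winreg

# Placeholder allowlist of approved application display names; a real
# deployment would pull this from the organization's asset inventory.
APPROVED = {"Microsoft Office", "Google Chrome", "Zoom"}

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_programs():
    """Yield DisplayName values from the HKLM uninstall registry hive."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as key:
        count = winreg.QueryInfoKey(key)[0]  # number of subkeys
        for i in range(count):
            with winreg.OpenKey(key, winreg.EnumKey(key, i)) as subkey:
                try:
                    name, _ = winreg.QueryValueEx(subkey, "DisplayName")
                    yield name
                except FileNotFoundError:
                    continue  # subkey has no DisplayName value

def unapproved_software():
    """Return installed programs missing from the allowlist, sorted."""
    return sorted(set(installed_programs()) - APPROVED)

if __name__ == "__main__":
    for name in unapproved_software():
        print(f"Unapproved software detected: {name}")
```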

According to Pieter Arntz, a Malwarebytes intelligence researcher, cybercriminals are increasingly exploiting the popularity of AI to deceive users and distribute malware. He warns users to be cautious with free AI tools and recommends browser extensions that block malware and phishing attempts. Lumma Stealer in particular hunts for cryptocurrency wallets and other sensitive information on victims’ machines, making an infection a direct path to a data breach.

CISOs should prioritize security-awareness training for employees, phishing-resistant multifactor authentication, and network monitoring for suspicious activity to prevent infections from malicious AI apps. In the case of the AI video editor distributing Lumma Stealer, IT professionals should watch for the specific file names flagged in the report on Windows and macOS systems to detect signs of infection; a simple sweep along the lines sketched below can help.
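The following Python sketch illustrates such a sweep of common drop locations. The file names in INDICATORS are placeholders rather than the actual indicators of compromise; substitute the exact names Malwarebytes published, and adjust SEARCH_ROOTS for the platforms you manage. Run it with sufficient privileges to read the directories in question.

```python
import os
from pathlib import Path

# Placeholder indicator file names -- NOT the real IOCs. Substitute the
# exact file names published in Malwarebytes' report on this campaign.
INDICATORS = {"fake-ai-editor-setup.exe", "fakeaieditor.dmg"}

# Common drop locations to sweep; extend per platform and policy.
SEARCH_ROOTS = [
    Path.home() / "Downloads",
    Path(os.environ.get("TEMP", "/tmp")),
]

def scan(roots, indicators):
    """Walk each root and collect paths whose file names match an indicator."""
    wanted = {name.lower() for name in indicators}
    hits = []
    for root in roots:
        # os.walk silently skips roots that do not exist or are unreadable
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                if name.lower() in wanted:
                    hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for hit in scan(SEARCH_ROOTS, INDICATORS):
        print(f"Possible indicator of compromise: {hit}")
```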

Overall, the rise of fake AI apps underscores the need for vigilance and proactive cybersecurity in today’s digital landscape. As AI’s popularity continues to grow, organizations must stay ahead of evolving threats and educate employees on the dangers of malicious software posing as legitimate applications. Companies that do so can safeguard their data and mitigate the risks these fake apps pose.
