The rapid integration of artificial intelligence (AI) within enterprise environments presents not only a host of new security vulnerabilities but also resurrects long-standing security failures, according to a high-ranking spokesperson from Mandiant, a leading cybersecurity firm. During a recent discussion with Infosecurity at the Google Cloud Next 26 event, Jurgen Kutscher, Vice President of Mandiant Consulting, a segment of Google Cloud, highlighted the troubling trends observed as AI technologies are implemented in business settings.
He noted a significant oversight: as organizations rush to deploy AI solutions, they often neglect the most fundamental security controls that have historically protected their infrastructure. "A lot of the old problems are new again," observed Kutscher, pointing out that while enterprises are preoccupied with innovative AI threats—such as large language model poisoning—they frequently abandon basic security hygiene. This oversight could potentially expose organizations to a range of cyber threats that could have been mitigated with proper measures.
Mandiant’s red team, tasked with simulating real-world attacks to identify security weaknesses, has done extensive work in this area. Kutscher elaborated on the findings from various red-team engagements, which aimed to mimic the strategies employed by genuine adversaries. The team’s investigations revealed alarming security deficiencies that stemmed from mismanagement of AI technology within organizations. For instance, Kutscher mentioned instances where AI-enabled systems allowed attackers to manipulate data classifications, resulting in successful bypassing of protections like Data Loss Prevention (DLP) solutions.
One particularly concerning example involved an AI system deployed in a financial institution that was vulnerable to fundamental security missteps. Kutscher recounted how during their testing, the team discovered an unencrypted communication stream between the AI and the browser, underscoring how easily foundational security measures can be overlooked. Such negligence not only increases the risk of data breaches but also illustrates the potential for exploitation by malicious entities.
Furthermore, Kutscher revealed that during several engagements, Mandiant’s red team was able to gain initial access through social engineering tactics. Once they gained foothold in the system, the AI was manipulated to carry out additional actions, including data exfiltration and policy alterations. “Once we’re inside, we’ve had the AI do the rest for us, including data theft and everything. And I’m talking about authorized AI deployments, not even shadow AI cases where employees have deployed AI workflows without the company’s oversight,” he stated. This scenario raises serious concerns about the control organizations maintain over their AI integrations.
To mitigate these risks, Kutscher stressed the urgent need for organizations to establish robust AI security governance processes as a priority. He advised that creating comprehensive policies and governance structures is significantly easier than rectifying uncontrolled AI usage after problems arise. Organizations should reexamine their secure architectural practices and consider conducting red-team validations to ensure that critical assets are properly segmented and safeguarded from potential threats.
While Kutscher acknowledged the promising capabilities of AI in enhancing cybersecurity defenses, he cautioned that Chief Information Security Officers (CISOs) should not presume that the adoption of AI technologies absolves them from maintaining essential cybersecurity protocols. "It’s possible that these mistakes partly come from the fact that CISOs aren’t always involved in the deployment of AI workflows," he reflected, suggesting various contributing factors to the lack of security surrounding AI implementations. Such deficiencies represent a significant risk for businesses navigating the complex landscape of digital security.
The conversation surrounding AI and cybersecurity is increasingly urgent as the technology continues to evolve and integrate into business processes. Kutscher’s insights serve as a reminder that while the allure of AI innovation is strong, organizations must balance it with vigilant security measures to protect sensitive data and maintain integrity in their operational frameworks. Prioritizing AI security governance today can be crucial for preventing larger issues in the future, ensuring that enterprises harness the potential of AI without compromising their security posture.
