AI Agents Making Purchases Require Security Teams to Rethink Risk

In a recent interview featured on Help Net Security, Donald Kossmann, the Chief Technology Officer of Chargebacks911, discussed the emerging landscape of digital commerce influenced by what he terms “agentic commerce.” This innovative framework outlines a future in which artificial intelligence (AI) agents autonomously handle purchasing decisions on behalf of users or organizations, a development that is reshaping traditional notions of security, fraud, and governance.

Kossmann emphasized that the advancement of AI agents has reached a pivotal point where these digital entities possess the ability to shop, negotiate prices, select suppliers, and carry out transactions independently. This evolution presents significant challenges to the established, primarily click-based approaches to digital commerce, prompting a critical reassessment of how security measures are defined and implemented in this new context.

A central point of concern raised by Kossmann is a prevalent yet underappreciated assumption: that a transaction sanctioned by a technical authority inevitably aligns with the genuine intentions of the user. While this notion held true in traditional commerce practices, the rise of agentic systems blurs those lines. When AI agents act persistently on behalf of customers, the focus shifts from the risk of credential theft to what Kossmann describes as “intent drift.” Essentially, even if agents operate within their defined permissions, they may still produce results that diverge from user expectations or desires. This unpredictability creates a gray area in payments and dispute management, complicating matters in ways that existing security protocols are ill-equipped to handle.

Current security frameworks, particularly those based on OAuth-style delegated authorization, are viewed as inadequate for the persistent nature of AI agents wielding financial authority. While these frameworks serve as a basic starting point, they were not designed with the continuous operational capacity of AI agents in mind. Kossmann pointed out that traditional access models assume user-initiated actions that are relatively bounded. In contrast, persistent agents introduce unique risk factors, such as permissions remaining valid long after user intent has shifted. To address these challenges, Kossmann advocates for the evolution of these frameworks. This includes the introduction of more granular, revocable, and context-aware permissions, along with enhanced audit capabilities that detail not only what was authorized but how that authority was exercised over time.
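The kind of grant Kossmann describes — narrowly scoped, self-expiring, revocable at any moment, and auditable over time — can be sketched as a simple data structure. This is an illustrative model, not a real OAuth extension; the class and field names below are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """Hypothetical delegated-authority grant for a purchasing agent.

    Unlike a static OAuth scope that stays valid until someone remembers
    to revoke it, this grant expires on its own, can be killed instantly,
    and records how the delegated authority was actually exercised.
    """
    agent_id: str
    scopes: frozenset        # e.g. {"purchase:office-supplies"}
    expires_at: float        # epoch seconds; authority lapses automatically
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def is_valid(self, scope: str) -> bool:
        return (not self.revoked
                and time.time() < self.expires_at
                and scope in self.scopes)

    def exercise(self, scope: str, detail: str) -> bool:
        """Use the authority, recording what was attempted and whether it was allowed."""
        allowed = self.is_valid(scope)
        self.audit_log.append((time.time(), scope, detail, allowed))
        return allowed

    def revoke(self) -> None:
        """User intent has shifted: terminate the grant immediately."""
        self.revoked = True
```

The `audit_log` is the key difference from a plain bearer token: it captures not just what was authorized but how the authority was used over time, which is the property Kossmann argues current delegation models lack.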

Kossmann outlined four critical controls enterprises should implement before permitting AI agents to connect with corporate systems. Firstly, he stressed the importance of establishing tight, time-bound permissions, ensuring agents lack open-ended purchasing authority. Spending limits, category controls, supplier restrictions, and expiration conditions should be explicitly defined. Secondly, organizations need decision transparency: detailed logs explaining why an agent chose a particular supplier and executed a particular transaction.

The third control is real-time human override capability. In instances where an agent’s behavior appears erratic or unexpected, security teams must retain the ability to pause or revoke that authority immediately. Lastly, Kossmann highlighted the need for robust post-transaction evidence capture. In light of potential disputes or audit challenges, enterprises must be able to demonstrate both the permissions granted to agents and the actions they undertook.
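Taken together, the four controls could be prototyped as a small gateway sitting between the agent and corporate payment systems. The sketch below is one possible shape under assumed names and limits, not a reference implementation:

```python
import time

class PurchasingGateway:
    """Hypothetical gateway enforcing the four controls described above:
    time-bound permissions, decision transparency, human override,
    and post-transaction evidence capture."""

    def __init__(self, spend_limit, categories, suppliers, ttl_seconds):
        self.spend_limit = spend_limit               # control 1: spending limit
        self.categories = set(categories)            # control 1: category control
        self.suppliers = set(suppliers)              # control 1: supplier restriction
        self.expires_at = time.time() + ttl_seconds  # control 1: expiration
        self.paused = False                          # control 3: human override
        self.decision_log = []                       # control 2: transparency
        self.evidence = []                           # control 4: evidence capture

    def pause(self):
        """Control 3: a human operator halts the agent's authority instantly."""
        self.paused = True

    def purchase(self, amount, category, supplier, rationale):
        """Authorize a purchase, logging the agent's stated rationale either way."""
        reasons = []
        if self.paused:
            reasons.append("authority paused by human operator")
        if time.time() >= self.expires_at:
            reasons.append("permission expired")
        if amount > self.spend_limit:
            reasons.append("spend limit exceeded")
        if category not in self.categories:
            reasons.append("category not permitted")
        if supplier not in self.suppliers:
            reasons.append("supplier not approved")

        approved = not reasons
        # Control 2: record why the agent chose this supplier and transaction.
        self.decision_log.append({"rationale": rationale, "approved": approved,
                                  "denial_reasons": reasons})
        if approved:
            # Control 4: retain evidence linking the grant to the action taken.
            self.evidence.append({"amount": amount, "category": category,
                                  "supplier": supplier, "ts": time.time()})
        return approved
```

Note that the decision log records denials as well as approvals; in a dispute, the record of what the agent tried and was refused can matter as much as what it actually bought.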

Amid the evolution of agentic commerce, Kossmann acknowledged a less-discussed risk: the potential for shopping agents to become prime targets for threat actors aiming for behavioral profiling on a scale never previously imagined. Well-trained shopping or procurement agents amass valuable behavioral data encompassing purchasing preferences, supplier relationships, and timing patterns. If compromised, these agents could provide malicious actors with deep insights into individual behaviors and corporate procurement strategies. The stakes are not merely confined to fraud; they extend into realms of competitive intelligence, targeted manipulation, and intricate social engineering.

The implications of agent-to-agent commerce could also inject a new layer of opacity into interactions. Unlike human transactions, which exhibit natural friction points — moments when intent, consent, and identity can be assessed — autonomous agent interactions may erase many of these checkpoints. This raises substantial privacy concerns, as the potential for excessive data exposure between agents grows, especially in scenarios where negotiation, personalization, or dynamic pricing models necessitate shared behavioral signals.

In terms of security, organizations must shift their evaluation strategies when dealing with AI vendors that incorporate purchasing autonomy into their platforms. Rather than leaning on traditional vendor assessments, organizations should concentrate on how these vendors govern decision-making processes. Key areas for scrutiny include how the vendor scopes agent permissions and their methodologies for logging and explaining decisions.

With the advent of autonomous agents, as opposed to human counterparts, security teams will need to revamp third-party risk management frameworks. It is no longer sufficient to assess whether the vendor itself is trustworthy; organizations must also evaluate whether the decisions made by the autonomous systems acting on their behalf can be trusted.

Kossmann’s insights reflect a significant shift in the landscape of digital commerce, emphasizing the urgent need for an evolution in security practices as AI agents become increasingly autonomous and capable. By addressing these challenges head-on, organizations can better navigate the complex interplay between emerging technologies and persistent governance needs.
