Okta’s Arkadiusz Krowczynski on Why Governing AI Agents Starts With Identity
The rapid deployment of artificial intelligence agents poses significant security challenges for enterprises. As businesses integrate these AI solutions, blind spots emerge around access, ownership, and governance of the agents. Addressing them requires a robust identity security fabric built on visibility, control, and governance, says Arkadiusz Krowczynski, a principal product acceleration specialist at Okta.
According to Krowczynski, the identity security fabric operates on three layers. The first provides visibility into where AI agents are deployed and who owns them, so organizations know who is accountable for each agent. The second establishes control over which applications and data the agents can access, ensuring their capabilities are not misused. The third is governance: maintaining security over time through access reviews and deactivating agents that begin to operate outside their intended parameters.
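The three layers described above can be illustrated with a minimal sketch. This is a hypothetical model for illustration only, not Okta's implementation: the `AgentRegistry` class, its method names, and the scope strings are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Visibility layer: every agent has a known owner and deployment location."""
    agent_id: str
    owner: str
    location: str
    allowed_scopes: set = field(default_factory=set)  # control layer
    active: bool = True

class AgentRegistry:
    """Illustrative registry applying visibility, control, and governance checks."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # Visibility: no agent operates without a registered, accountable owner.
        self._agents[record.agent_id] = record

    def check_access(self, agent_id: str, scope: str) -> bool:
        # Control: grant access only to explicitly allowed applications and data.
        agent = self._agents.get(agent_id)
        return bool(agent and agent.active and scope in agent.allowed_scopes)

    def review(self, agent_id: str, expected_scopes: set) -> None:
        # Governance: periodic access review; deactivate an agent whose
        # granted scopes fall outside its intended parameters.
        agent = self._agents.get(agent_id)
        if agent and not agent.allowed_scopes <= expected_scopes:
            agent.active = False
```

In this sketch, a periodic `review` call plays the role of the access reviews Krowczynski describes, with deactivation as the enforcement step when an agent drifts beyond its mandate.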
With a structured identity fabric in place, Krowczynski says, organizations can secure AI agents throughout their lifecycle: before, during, and after authentication. This strategy not only mitigates risk but also lets companies innovate and adapt as AI technologies evolve. He emphasizes the balance required: "You will stay secure, but you'd still be able to innovate, work faster, and adopt capabilities of AI agents."
Krowczynski shared this perspective in a video interview with Information Security Media Group (ISMG) at the Gartner Identity and Access Management Summit, where he also discussed several critical areas in the governance of AI agents:
- The evolving threat landscape, including the shift from traditional phishing attacks to more sophisticated, AI-weaponized attacks on token-based access.
- The importance of governance controls not just within an organization but also across third-party suppliers and the broader supply chain.
- The risks that arise when business teams deploy AI agents without full visibility from the IT department.
As enterprises accelerate their reliance on AI applications, Krowczynski's insights are timely. He works at the intersection of engineering, product strategy, and the requirements of Global 2000 C-suite executives, translating complex technical road maps into pragmatic business outcomes that enable secure digital transformation while expediting market entry for enterprise-scale identity and security solutions.
Krowczynski's message lands at a critical juncture: the need to govern AI agents responsibly has never been more pressing. Failure to manage these capabilities effectively may create significant vulnerabilities that jeopardize both operational integrity and data security. As AI proliferates across sectors, the frameworks established today will determine how safely and effectively these technologies operate in the future.

