Costanoa Ventures’ John Cowgill on Moving From Static Analysis to Runtime Defense
Artificial intelligence-generated code is being produced at an astonishing pace, presenting significant challenges for security teams struggling to keep up. According to John Cowgill, a partner at Costanoa Ventures, the associated risks are shifting from isolated, line-level flaws to systemic issues that span entire systems.
Cowgill emphasized that while contemporary AI coding models tend to produce code that is more secure at the individual line level, this improvement often conceals a more profound issue. Specifically, even if individual lines of code pass muster, the overall system can still be vulnerable, brittle, and insecure. This paradox raises essential questions about the safeguarding mechanisms currently in place. “We’re going to need to have dynamic analysis running at all times in application security,” he noted, underscoring the urgency of adapting to new realities in cybersecurity.
Cowgill articulated a vision of a future marked by what he termed AI Security 2.0, contrasting it with the previous model, AI Security 1.0. In the earlier phase, security measures largely focused on perimeter defenses, using prompt filtering and controlled large language models (LLMs) as barriers against potential threats. The shift to AI Security 2.0 means continuously monitoring the activities of AI agents in real time across distributed systems. "This approach is vital for ensuring that security teams can effectively respond to vulnerabilities as they unfold," he explained.
Cowgill recently shared these insights during a video interview with the Information Security Media Group at RSAC Conference 2026, where he discussed several key themes. Among them was an alarming prediction that 2026 is shaping up to be the year of what he dubbed the "vulnpocalypse." The term encapsulates the growing concern that as AI-generated code becomes more prevalent, so do the vulnerabilities associated with it. He elaborated on this point, arguing that the sheer volume of new features and functionality introduced by AI creates an ever-expanding attack surface for cybercriminals.
During the interview, Cowgill also touched upon the critical advantage AI agents can offer organizations in managing vulnerabilities. In today's cybersecurity landscape, AI agents can aid in the triage and prioritization of vulnerabilities, streamlining the identification of issues that need immediate attention. Eventually, the goal is for these agents to handle not just detection but also remediation, charting a path toward a more secure future.
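The interview does not describe any specific tooling, but the triage-and-prioritization idea can be illustrated with a minimal sketch. The sketch below is entirely hypothetical: the `Finding` fields and the weighting factors are illustrative assumptions, not a real vendor's scoring model. It simply combines a severity score with exposure context, the kind of contextual ranking an AI agent might automate across a large backlog.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A single vulnerability finding (hypothetical schema for illustration)."""
    cve_id: str
    cvss: float           # base severity score, 0.0-10.0
    internet_facing: bool # is the affected asset reachable from the internet?
    exploit_known: bool   # is a public exploit available?

def triage_score(f: Finding) -> float:
    """Weight raw severity by exposure context (illustrative multipliers)."""
    score = f.cvss
    if f.internet_facing:
        score *= 1.5
    if f.exploit_known:
        score *= 2.0
    return score

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order the backlog so the highest contextual risk comes first."""
    return sorted(findings, key=triage_score, reverse=True)

backlog = [
    Finding("CVE-A", cvss=9.8, internet_facing=False, exploit_known=False),
    Finding("CVE-B", cvss=6.5, internet_facing=True, exploit_known=True),
    Finding("CVE-C", cvss=7.2, internet_facing=False, exploit_known=False),
]

for f in prioritize(backlog):
    print(f.cve_id, round(triage_score(f), 1))
```

Note how the medium-severity but internet-facing, actively exploited CVE-B outranks the higher-CVSS CVE-A; context-aware ranking like this is what distinguishes triage from a simple severity sort.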
Looking ahead, Cowgill identified what it would take for a new class of AI-driven detection and response vendors to carve out a niche in the burgeoning runtime security market. He argued that companies must build solutions that keep pace with rapid developments in AI while also addressing the intricacies of runtime environments, which are often complex and specific to individual organizations.
John Cowgill leads the cybersecurity practice at Costanoa Ventures, where he invests in technologies that integrate applied AI with national security. Prior to this role, he gained significant experience as a consultant at McKinsey & Company, where he provided advisory services to various sectors, including consumer goods, healthcare, and technology. It’s clear that his insights into the future of application security and AI-driven solutions will have long-lasting implications in the industry.
