Byron V. Acohido Reports on Evolving Cybersecurity Landscape at RSAC 2025
SAN FRANCISCO — Amid the bustling atmosphere of the annual RSAC 2025 conference, a significant discussion is unfolding about the shifting dynamics of cybersecurity. More than 40,000 cybersecurity professionals and executives have gathered at San Francisco’s Moscone Center to confront familiar challenges that have grown increasingly pressing, while also weighing new technology’s impact on their field. One transformation stands out: the arrival of large language models (LLMs) and generative AI (GenAI) has sparked unprecedented change in both offensive and defensive cybersecurity strategies.
As the landscape grows more complex, bad actors have quickly adopted GenAI to make their attacks faster and more effective. At the same time, experts emphasize that defenders are leveraging the same powerful tools to bolster their security measures. Incrementally, natural-language AI technology is beginning to reshape defenses, aiming to counterbalance the threats posed by malicious actors.
One of the central themes emerging at the conference is the recalibration of how industry leaders are utilizing LLMs. Byron V. Acohido, the author of this piece, had the opportunity to engage with several innovators, each working diligently to tailor LLMs and GenAI to support their cybersecurity efforts.
For instance, Brian Dye, CEO of Corelight, known for its open-source-based network evidence solutions, described how the field is splitting along organizational lines: smaller security teams are scrambling to adopt vendor-curated AI solutions, while larger enterprises are building customized LLMs suited to their unique circumstances. This divide reflects a broader industry trend, with organizations of different sizes finding different paths to integrating AI effectively into their operations.
Similarly, John DiLullo, CEO of Deepwatch, a managed detection and response firm, has made an unexpected discovery regarding LLMs: properly calibrated and human-vetted models are outperforming junior analysts in generating incident reports. The consistency, accuracy, and reduced error rate in automated reports represent a significant advancement, indicating that technology is transforming traditional roles within the workforce.
Jamison Utter, a security evangelist at A10 Networks, offers a broader view of the competitive landscape. He points out that adversaries are rapidly advancing their capabilities, employing AI to create malware and orchestrate attacks at unprecedented speed. To remain effective, defenders must not only adopt AI themselves but also learn to think and react at the tempo AI makes possible.
The emerging consensus among industry leaders highlights a necessary shift in mindset for cybersecurity professionals. Rather than merely deploying AI tools, experts are now focusing on the calibration and orchestration of these solutions. Cybersecurity practitioners are learning that they must cultivate an intuitive understanding of how to effectively integrate AI into their workflows.
Key capabilities are developing around knowing when to trust AI-generated outputs, when to verify those outputs, and when to introduce friction into automated processes to ensure accuracy. This nuanced approach to AI technology echoes experiences shared by Acohido regarding his own use of AI tools like ChatGPT-4o for reporting. As he noted, speed alone is insufficient; the true value lies in understanding when to rely on AI versus when to engage human judgment.
Dye metaphorically described AI as a triage engine, excelling in straightforward attack scenarios but struggling in situations that demand nuanced understanding. He observed that the perennial challenge of needing to accomplish more with limited resources continues to shape the discussion around AI integration. "Help me understand what this alert means in English," he suggested, indicating that clear communication remains paramount.
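Dye’s triage-engine framing can be sketched as a simple routing pattern: straightforward alerts receive an automated plain-English summary, while anything outside the known-simple cases is escalated to a human analyst. This is an illustrative sketch only, not any vendor’s implementation; the `summarize_alert` function stands in for a real LLM call, and the fixed set of “simple” alert types is a purely hypothetical heuristic.

```python
# Sketch of an AI "triage engine" for security alerts: route the easy
# cases to an automated plain-English summary, escalate the rest.
# The model call is stubbed; in practice it would be an LLM request.

from dataclasses import dataclass

# Hypothetical set of alert types simple enough to auto-summarize.
SIMPLE_TYPES = {"port_scan", "failed_login", "known_malware_signature"}

@dataclass
class Alert:
    alert_type: str
    source_ip: str
    details: str

def summarize_alert(alert: Alert) -> str:
    """Stand-in for an LLM call that explains an alert in plain English."""
    return (f"A {alert.alert_type.replace('_', ' ')} was observed from "
            f"{alert.source_ip}: {alert.details}")

def triage(alert: Alert) -> dict:
    """Auto-summarize straightforward alerts; flag nuanced ones for a human."""
    if alert.alert_type in SIMPLE_TYPES:
        return {"disposition": "auto_summarized",
                "summary": summarize_alert(alert)}
    # Anything unfamiliar gets human review, adding deliberate
    # friction exactly where nuanced understanding is required.
    return {"disposition": "escalate_to_analyst", "summary": None}
```

A real deployment would replace `summarize_alert` with a vetted LLM call and score alert complexity from enrichment data rather than a fixed type list.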
DiLullo echoed this sentiment, emphasizing that analysts can trust AI to generate foundational reports while still exercising caution and oversight. Making educated inferences remains essential in cybersecurity, and LLMs are proving useful at surfacing insights that may not be readily apparent to human analysts.
Utter’s team has also recognized the importance of AI-driven telemetry but notes the necessity of thoughtful constraints in how these tools are employed. Making deliberate decisions about how to orchestrate AI’s role in security operations is becoming an essential skill.
As the conference continues and discussions evolve, the narrative focuses on the intentional orchestration of AI and LLMs within cybersecurity frameworks. Vendors that adapt to treat AI as a powerful yet flawed collaborator stand to gain a competitive advantage. The future of human-centered security is not a matter of replacing human roles but an evolution in craftsmanship, where human insight and technological capability work harmoniously.
As this exploration continues at RSAC, Acohido and other industry observers are poised to adapt and refine their strategies in the face of a rapidly changing technological landscape. The emerging discipline of prompt engineering, in which practitioners shape AI outputs without surrendering judgment, promises to redefine standards of practice. Each thoughtful decision made today charts a course for the industry’s response to future challenges, keeping human ingenuity central to navigating the complexities of cybersecurity.
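The prompt-engineering idea of “shaping AI outputs without surrendering judgment” can be illustrated with a guarded report template: the prompt constrains the model to the supplied evidence, forces it to flag unsupported conclusions, and marks the result as a draft pending analyst sign-off. The template and the `build_report_prompt` helper below are hypothetical illustrations, not any firm’s actual practice.

```python
# Illustrative prompt-engineering pattern: constrain an LLM's incident
# report to supplied evidence, force it to mark uncertainty, and reserve
# final judgment for a human reviewer. Template text is hypothetical.

REPORT_PROMPT = """You are drafting a security incident report for human review.
Use ONLY the evidence provided. For any conclusion you cannot support
from the evidence, write 'UNVERIFIED' and explain what is missing.

Evidence:
{evidence}

Produce sections: Summary, Timeline, Impact, Recommended Actions.
End with: 'Draft only - requires analyst sign-off.'"""

def build_report_prompt(evidence_lines: list[str]) -> str:
    """Assemble the guarded prompt from raw evidence lines."""
    evidence = "\n".join(f"- {line}" for line in evidence_lines)
    return REPORT_PROMPT.format(evidence=evidence)
```

The point of the pattern is that the human decides what evidence goes in and vets what comes out; the prompt merely narrows the space of acceptable drafts.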