New York, NY, March 17th, 2026, CyberNewswire
In a revealing report released by GitGuardian, a security company known for its GitHub-focused secrets detection tools, the impact of AI adoption on software delivery and security practices is brought to light. The 5th edition of the "State of Secrets Sprawl" report highlights significant shifts in the developer landscape, as the arrival of mainstream AI in coding workflows has coincided with an alarming increase in leaked secrets.
According to the report, commits assisted by Claude Code, a prominent AI coding tool, leak secrets at a rate of 3.2%, more than double the 1.5% baseline leak rate. The report attributes this leakage to the rapid acceleration in software development enabled by AI. As GitGuardian's findings suggest, while the software ecosystem is thriving, security practices are lagging, producing a widening gap between growth and remediation efforts.
The Year Software Changed Forever
The year 2025 has been marked as a transformative period for software engineering, with an unprecedented 43% year-over-year increase in public code commits. This surge is coupled with a concerning trend: secrets are leaking 1.6 times faster than the active developer population is growing. The consequences are stark: newly leaked secrets registered on GitHub rose 34% year over year, with approximately 29 million secrets detected over the year, the highest annual count recorded to date.
Implications for CISOs
For Chief Information Security Officers (CISOs), the report underscores persistent challenges in safeguarding Non-Human Identities (NHIs). Exposed credentials, which have consistently paved the way for breaches, are exacerbated by AI's involvement in the software creation process, and this technological advancement has not been accompanied by comparable improvements in governance frameworks, amplifying vulnerabilities in systems where sensitive information resides. Developers, particularly those without formal security training, may be unaware of the implications of embedding sensitive data in their projects, often overriding AI assistants' warnings against including such information.
New Categories of Risk
The report details several categories of risk that have emerged with the rise of AI-assisted coding. Notably, leaks of AI service credentials surged 81% year over year, totaling over 1.27 million incidents, exposing a blind spot in security programs built around traditional development workflows. The report also points to a rising danger in Model Context Protocol (MCP) server configurations, where recommended setup practices frequently involve hardcoding credentials in configuration files, a pattern that exposed over 24,000 unique secrets in 2025.
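To illustrate the configuration pattern at issue, the fragment below sketches a hypothetical MCP-style server entry (the server name, key name, and value are invented for illustration) with an API key hardcoded directly in the file, which is exactly the kind of secret that leak scanners flag:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "example-mcp-server",
      "env": {
        "EXAMPLE_API_KEY": "sk-example-1234567890-hardcoded"
      }
    }
  }
}
```

A safer pattern is to keep the credential out of the file entirely, for instance by storing it in an OS keychain or a secret manager and injecting it into the server's environment at launch, so the configuration file can be committed or shared without exposing the key.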
Expanding Attack Surfaces
AI’s influence has dramatically expanded the attack surfaces available to malicious actors. Exposed secrets are not limited to source code; internal repositories carry the highest risk, being nearly six times more likely than public repositories to harbor hardcoded secrets. Further complicating the landscape, approximately 28% of data breaches stem from leaks in collaboration and productivity tools, which often reach broader audiences and automate processes without adequate oversight. Moreover, as AI agents integrate more deeply into the development ecosystem, developer machines emerge as critical vectors for risk. Security experts caution that when AI agents possess local credentials, they can transform developer systems into significant attack surfaces, amplifying organizational vulnerabilities.
Eric Fourrier, CEO of GitGuardian, emphasized the importance of understanding and mitigating these risks. He noted that as AI agents require local credentials for cross-system connectivity, protecting developer laptops is vital. "We’ve developed a local scanning and identities inventory tool to safeguard these assets," he stated, highlighting the need for security teams to precisely identify which machines contain sensitive secrets.
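As a rough illustration of what local secret scanning involves (this is a toy sketch, not GitGuardian's product; the regex patterns are simplified versions of well-known key formats), a minimal scanner over a developer machine's files might look like:

```python
import re
from pathlib import Path

# Illustrative patterns only: simplified forms of well-known key formats.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for every suspected secret in text."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

def scan_tree(root: str, suffixes=(".py", ".json", ".env", ".yaml")) -> dict:
    """Scan matching files under root; skip files that cannot be read."""
    results = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            try:
                findings = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue
            if findings:
                results[str(path)] = findings
    return results
```

A real inventory tool would add entropy checks, vendor-specific validators, and allow-listing to keep false positives manageable; the sketch only shows the basic shape of the problem.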
Governance Over Detection
The report further emphasizes an urgent need for improved governance as the industry confronts a growing backlog of unresolved security issues. Long-lived secrets, which constitute around 60% of policy violations, highlight the necessity of shifting toward ephemeral, least-privilege access. Compounding this challenge, 46% of critical secrets lack effective vendor validation mechanisms, making it difficult for organizations to prioritize vulnerabilities based on real-world exploitability.
Lastly, GitGuardian disclosed that 64% of the valid secrets it identified in 2022 remained valid in 2026. This stagnation is primarily due to the absence of a robust governance framework that makes remediation of leaked secrets repeatable and effective.
In conclusion, GitGuardian posits that to face these challenges, security strategies must evolve to regard non-human identities as crucial assets, incorporating dedicated governance mechanisms, contextual analysis, and automated remediation strategies across both code-based and non-code surfaces.
For additional insights, the full report can be accessed directly on GitGuardian’s website.
About GitGuardian
GitGuardian is recognized as a comprehensive NHI Security platform, supporting organizations in safeguarding Non-Human Identities (NHIs) and adhering to industry standards. With a focus on protecting against attacks targeting service accounts and applications, GitGuardian ensures robust secrets security across development environments. The platform has earned the trust of over 600,000 developers and leading enterprises such as Snowflake and BASF. More information can be found at www.gitguardian.com.
Contact
Holly Hagerman, PR Partner
Connect Marketing
Email: [protected email]

