A House committee has advanced a bill that would direct the National Institute of Standards and Technology (NIST) to establish a formal process for reporting security vulnerabilities in artificial intelligence (AI) systems. As with many security-related initiatives, however, funding remains a major concern.
The AI Incident Reporting and Security Enhancement Act was approved by voice vote in the House Science, Space, and Technology Committee on Wednesday. Introduced by a bipartisan group of representatives from North Carolina, California, and Virginia, the bill would authorize NIST to incorporate AI systems into the National Vulnerability Database (NVD).
The NVD is the federal government's central repository for tracking security vulnerabilities in software and hardware. If passed by Congress and enacted into law, the bill would add responsibilities for the already overstretched NIST teams that maintain the NVD. Earlier this year, NIST temporarily stopped updating data on reported vulnerabilities, citing budget constraints, flat staffing, and a surge in email traffic related to the database.
The bill does acknowledge that NIST's added workload would be contingent on available funding. Rep. Deborah Ross (D-N.C.), one of its sponsors, said the committee recognizes the substantial challenges NIST faces in maintaining the database and is working to help the agency address them and secure the necessary funding.
Even as the committee approved the bill, some members raised concerns about its terminology. Terms such as “substantial artificial intelligence security incident” and “intelligence incident” were flagged as needing clarification to improve the bill’s chances of passage. The push for specificity follows the Supreme Court’s decision overturning the Chevron doctrine, which has heightened the focus on clarity and precision in legislative language.
The bill would also require NIST to collaborate with other federal agencies, including the Cybersecurity and Infrastructure Security Agency, as well as private sector organizations, standards bodies, and civil society groups, to develop a common vocabulary for reporting AI-related cybersecurity incidents.
As the bill moves through the legislative process, the tension between strengthening AI security and securing the resources to support that work remains a central concern. Its potential impact on AI security and vulnerability reporting underscores the need to resolve the funding questions and refine the bill's language before it can be successfully implemented.