California Governor Gavin Newsom has signed a groundbreaking bill into law, Assembly Bill 963, aimed at regulating artificial intelligence (AI) models capable of causing significant harm. The bill targets three categories of critical harm: using AI to create or deploy weapons of mass destruction that lead to mass casualties, conducting cyberattacks on critical infrastructure that result in mass casualties or at least $500 million in damages, and causing harm in a manner that would constitute a crime if committed by a human.
Additionally, developers of AI models covered under the bill are required to implement a kill switch, a shutdown capability intended to limit disruption to critical infrastructure. The bill also mandates that these models adhere to stringent cybersecurity and safety protocols and undergo rigorous testing, assessment, and auditing. However, some AI experts have raised concerns that the bill's provisions may be overly burdensome.
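In practice, a shutdown capability can be as simple as a gate that every inference request must pass through. The sketch below is purely illustrative, assuming a hypothetical Python wrapper around an arbitrary model callable; the class and method names are invented for this example and are not drawn from the bill's text or any vendor's implementation.

```python
import threading


class KillSwitchModel:
    """Hypothetical wrapper illustrating a shutdown capability:
    a thread-safe flag gates every inference call and can be
    flipped by an operator at any time."""

    def __init__(self, model):
        self._model = model
        self._shutdown = threading.Event()

    def infer(self, prompt: str) -> str:
        # Refuse all new work once the switch has been thrown.
        if self._shutdown.is_set():
            raise RuntimeError("model has been shut down by operator")
        return self._model(prompt)

    def emergency_shutdown(self) -> None:
        # Irreversible in this sketch: once set, infer() always refuses.
        self._shutdown.set()


if __name__ == "__main__":
    wrapped = KillSwitchModel(lambda p: f"echo: {p}")
    print(wrapped.infer("hello"))   # normal operation
    wrapped.emergency_shutdown()    # operator throws the switch
    try:
        wrapped.infer("hello again")
    except RuntimeError as e:
        print(e)                    # further requests are refused
```

A real deployment would face harder questions than this sketch suggests, such as who holds the switch and how to stop copies of model weights already distributed, which is part of why critics view the requirement as burdensome.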
David Brauchler, head of AI and machine learning for North America at NCC Group, believes the bill may be an overreaction to perceived risks. He argued that fears of AI models going rogue are largely unfounded given how the systems are currently built and deployed, and that the bill's requirements are excessive relative to the low immediate or near-term risk those systems pose.
Furthermore, the critical-harm obligations outlined in the bill could prove too cumbersome even for major players in the AI industry. Benjamin Brooks, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and former head of public policy at Stability AI, highlights the broad scope of the critical-harm definition: developers would need to provide assurances and guarantees across a wide array of risk areas, which may be difficult to fulfill when releasing AI models to the public.
The bill's emphasis on preventing AI-related harms reflects growing concern over the technology's potential negative impacts. While the legislation is intended to protect against catastrophic misuse, critics argue that its provisions are too stringent and impractical given the current state of AI technology.
As AI continues to advance and integrate into various industries, finding a balance between innovation and regulation remains a complex challenge. Lawmakers, experts, and industry stakeholders must work together to establish effective guidelines that promote the safe and responsible use of AI while fostering continued technological development.
Overall, Governor Newsom's signing of Assembly Bill 963 represents a significant step toward addressing the risks associated with AI and highlights the ongoing debate over the regulation of emerging technologies. As AI becomes more pervasive, thoughtful and nuanced regulatory frameworks will play a crucial role in shaping the future of AI innovation and use.