California Governor Gavin Newsom’s decision to veto SB-1047, a bill aimed at imposing safety obligations on developers of advanced artificial intelligence (AI) models, has drawn mixed reactions from stakeholders. While supporters, including leading AI researchers, the Center for AI Safety (CAIS), and SAG-AFTRA, backed the bill for its potential to enhance safety and privacy in AI development, Newsom found it overly stringent and poorly tailored to the specific risks posed by AI technologies.
In his veto announcement, Newsom highlighted the need for a more nuanced approach to regulating AI systems, particularly those deployed in high-risk environments or involved in critical decision-making. He pointed out that SB-1047 applied stringent standards to even basic functions, which he believed was not the most effective way to address actual threats posed by AI technology. Newsom also mentioned other AI-related bills that he had signed recently to govern the use of generative AI tools in the state.
SB-1047, authored by California state Senator Scott Wiener with co-authors Richard Roth, Susan Rubio, and Henry Stern, aimed to impose oversight on companies such as OpenAI, Meta, and Google that develop frontier AI models costing more than $100 million to train. The bill would have required developers of large language models (LLMs) to ensure their technologies could not enable "critical harm," defined to include incidents causing mass casualties or more than $500 million in damages. SB-1047 also included a "kill switch" provision requiring the capability to fully shut down a covered model in certain circumstances.
Despite bipartisan support for SB-1047 in the legislature and endorsements from prominent AI researchers such as Geoffrey Hinton and Yoshua Bengio, Newsom vetoed the bill on the grounds that it was too broad and could hinder innovation in the AI sector. Even Elon Musk, whose AI company would have been affected by the bill, expressed support for its passage, citing concerns about the potential existential risks of unregulated AI development.
However, a coalition including the Bay Area Council, Chamber of Progress, TechFreedom, and the Silicon Valley Leadership Group viewed SB-1047 as flawed, arguing it was premised on hypothetical doomsday scenarios rather than factual evidence. The group contended that current large language models, such as those powering ChatGPT, pose no existential threat, and criticized the bill for holding developers accountable for how others use their products.
Arlo Gilbert, CEO of data-privacy firm Osano, supported Newsom’s veto, citing the need for a more comprehensive and targeted approach to AI regulation. He emphasized the importance of balancing policy and technology advancements to ensure the responsible development of AI technologies. Melissa Ruzzi, director of artificial intelligence at AppOmni, acknowledged the challenges of regulating AI but stressed the necessity of starting somewhere to address issues surrounding AI development.
In conclusion, Newsom’s veto of SB-1047 has sparked debates about the appropriate level of regulation needed to ensure the safe and responsible use of AI technologies. While some see the bill as a necessary step to mitigate potential risks, others believe it is premature and could stifle innovation in the AI industry. Moving forward, finding a balance between regulation and innovation will be crucial in shaping the future of AI development in California and beyond.

