Newsom Blocks AI Bill: ‘We Need Broader Regulation’ | Turtles AI
California Governor Gavin Newsom recently vetoed an AI bill, SB 1047, arguing that the regulation it outlined was not an adequate way to ensure public safety. The bill focused on large AI systems and did not account for a wide range of smaller, more specialized models, which Newsom said could be just as dangerous. The veto is not a definitive rejection, however; Newsom has asked lawmakers to return to the table and craft more comprehensive, flexible legislation.
Key points:
- The bill would have regulated only large AI models, excluding smaller, specialized ones that Newsom considers potentially just as dangerous.
- Newsom criticized the approach for its lack of adaptability to a rapidly evolving technology and insufficient empirical analysis.
- The governor fears that overly restrictive regulation could discourage AI companies from remaining in California, while stressing the importance of safe regulation.
- Despite the veto, Newsom supports the need for strong regulation and has urged lawmakers to draft an improved bill.
On Sunday, Governor Gavin Newsom exercised his veto over a bill that would have introduced new regulations for AI in California. The bill, known as SB 1047, was intended to establish a control framework for large AI models, but Newsom pointed out that this approach was too narrow. In an official statement, he reiterated the need for action to prevent any technology-related disasters, but expressed concerns about several aspects of the proposed text. The main point of criticism is that the legislation would have covered only the largest AI models, while smaller, more specialized systems would have been excluded from scrutiny. According to the governor, the latter could pose an equally serious, if not greater, threat, and the risk is that the law would create a false sense of security among the public.
Newsom emphasized that adaptability is a priority in regulating a technology that is still evolving. In fact, the bill would not have considered the context in which AI systems are used, such as their application in critical decisions or the use of sensitive data. As a result, even basic functions performed by a large system would have had to meet strict standards, something the governor said is not the best approach to protect citizens from real technological threats. For Newsom, more calibrated legislative action is needed, based on concrete analysis and able to adapt to ongoing technological innovations.
SB 1047, if passed, would have required AI developers to adopt a number of safety measures to prevent their systems from being exploited to create weapons of mass destruction, cause at least $500 million in economic damage through cyberattacks, or commit crimes for which a human being would be criminally prosecuted. Other provisions included a requirement to implement a “kill switch,” a mechanism that allows an AI model to be shut down immediately, during both the training and operational phases. In addition, companies would have had to adopt strict cybersecurity measures to prevent misuse or unauthorized use of their systems, and to implement risk-management protocols with periodic audits and progress reports.
Despite the governor’s concerns, the proposal had gained support from some political and technology sectors. Among the supporters was Senator Scott Wiener, author of the bill, who called the veto a setback for those who believe in the need for oversight of large technology companies. According to Wiener, the absence of binding regulation for companies developing advanced technologies leaves the public exposed to potentially devastating risks. On the other hand, some experts believe Newsom’s veto was necessary: Dean Ball, a researcher at the Mercatus Center, argued that the size thresholds in the law were already outdated and no longer in line with the speed of technological advances.
Despite vetoing SB 1047, Newsom signed a bill that will require generative AI developers to publish detailed summaries of the datasets used to train their models by 2026. This will bring greater transparency to the sources of information AI models draw from, but the road to comprehensive AI regulation in California remains long and rocky.
The balance between innovation and safety remains a central theme in the debate, with Newsom aiming to keep the state at the forefront of technological development while also ensuring that the growth of AI occurs within a robust regulatory framework that protects the public.