New York Passes First Statewide Regulatory Framework for AI Safety
The New York legislature has approved the RAISE Act, the first US state measure imposing transparency requirements on the most powerful AI models. Safety standards, incident reports, fines of up to $30 million: the bill now goes to Governor Hochul.
Key points:
- Major AI labs would be required to publish safety plans and report critical incidents (harm to at least 100 people or damages exceeding $1 billion).
- The rules apply to companies that have spent over $100 million on computing resources and make their models available in New York.
- Penalties of up to $30 million for non-compliance, with enforcement entrusted to the state Attorney General.
- No kill-switch requirement and no liability for post-trained models; startups and academic research are exempt.
The bill, known as the RAISE Act and sponsored by Senator Andrew Gounardes and Assemblymember Alex Bores, was approved on June 12, 2025 by both houses of the state legislature and sent to Governor Kathy Hochul for signature or possible veto. It would require OpenAI, Google, Anthropic and similar companies, namely those that have invested more than $100 million in computational resources and offer their models to users in New York, to produce detailed safety and security reports, to report concerning model behavior or model theft, and to maintain plans for mitigating potentially catastrophic risks, defined as the death or injury of at least 100 people or damages exceeding $1 billion. The state Attorney General could impose fines of up to $30 million for non-compliance, and the bill also includes protections for internal whistleblowers.
The sponsors and the legislative committee note that the measure does not impede innovation: it does not mandate kill switches, does not hold companies liable for models modified after training, and exempts smaller companies and university research. Silicon Valley, represented among others by Anjney Midha of Andreessen Horowitz and by Y Combinator, has nonetheless voiced strong opposition, arguing that the law penalizes US industry and favors foreign competitors. Anthropic has not taken an official position, though co-founder Jack Clark has expressed concern about the impact on small and mid-sized companies; the bill's authors respond that the regulation explicitly excludes them.
Critical analyses question how effective the bill would actually be against AI risks, calling it bureaucratically complex and potentially unable to curb malicious uses such as bioterrorism or advanced hacking. Another concern is that labs might simply stop offering their models in the state, repeating a dynamic already seen in Europe after stringent regulation took effect. The RAISE Act's sponsors counter that New York, with the third-largest GDP among US states, is too important a market for companies to abandon.
The debate highlights a regulatory approach without precedent in the American landscape: mandatory transparency, independent auditing, rapid reporting of critical incidents, protected whistleblowers and severe sanctions amount to an attempt to reconcile safety with the vitality of technological innovation, but questions about its long-term effectiveness and its compatibility with that innovation remain open. It now remains to be seen whether Governor Hochul will sign the text as it stands or send it back for amendments.
A measure that, if signed, would place New York at the center of a new legal standard for governing the most powerful AI models.