Key takeaways
Governor Kathy Hochul signed the Responsible AI Safety and Enforcement Act into law on Friday, establishing comprehensive safety requirements for artificial intelligence developers operating in New York State.
The legislation positions New York as a national leader in AI regulation as federal efforts to address the technology remain stalled.
The RAISE Act requires large AI developers to create and publish detailed information about their safety protocols and report any incidents to the state within 72 hours of determining that an incident has occurred.
The law also establishes a new oversight office within the Department of Financial Services to assess frontier AI developers and ensure transparency through annual public reports.
Balancing innovation with accountability
"By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols," Governor Hochul said in an official statement.
"This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public."
The legislation targets what are known as "frontier" AI models: the most powerful artificial intelligence systems, trained with computing power exceeding 10^26 floating-point operations.
These advanced models have sparked concerns among experts about potential risks ranging from automated criminal activity to biological weapons development.
Under the new law, the Attorney General can bring civil actions against large frontier developers for failure to submit required reporting or making false statements.
Penalties run up to $1 million for a first violation and up to $3 million for each subsequent violation, significantly reduced from the original bill's proposed fines of $10 million and $30 million, respectively.
Legislative victory after intense negotiations
The bill's passage came after months of negotiations between Governor Hochul and the legislation's sponsors, State Senator Andrew Gounardes and Assemblymember Alex Bores.
Tech industry lobbying initially prompted Hochul to propose substantial changes that would have weakened the bill, but legislators successfully negotiated to restore several key provisions.
"This is an enormous win for the safety of our communities, the growth of our economy, and the future of our society," State Senator Andrew Gounardes said.
"The RAISE Act lays the groundwork for a world where AI innovation makes life better instead of putting it at risk. Big tech oligarchs think it's fine to put their profits ahead of our safety — we disagree. With this law, we make clear that tech innovation and safety don't have to be at odds. In New York, we can lead in both."
One significant victory for the bill's sponsors was maintaining the 72-hour incident reporting requirement.
The governor's initial proposal would have extended this to 15 days and only required reporting when an incident had definitively occurred, mirroring California's approach.
The final version requires companies to report within 72 hours of determining that an incident has occurred, or upon reasonably believing that one is imminent.
Setting a new national standard
Assemblymember Alex Bores emphasized the legislation's national significance.
"Today is a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms," Bores said.
"New York now has the strongest AI transparency law in the country. This bill moves beyond California's SB53 in significant ways and sets the stage for greater disclosure, learning, and legislative action in years to come.
"In New York, we defeated last-ditch attempts from AI oligarchs to wipe out this bill and, by doing so, raised the floor for what AI safety legislation can look like."
The legislation arrives amid broader concerns about AI governance.
California Governor Gavin Newsom signed similar safety legislation in September, creating what officials describe as a unified regulatory approach between the nation's two largest tech hubs.
The timing is particularly significant as President Trump recently signed an executive order directing federal agencies to challenge state AI laws, setting up potential legal battles ahead.
Major AI companies, including OpenAI and Anthropic, have expressed support for the legislation while calling for federal standards.
"The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them," Sarah Heck, head of external affairs at Anthropic, told The New York Times.
Kaitlin Asrow, Acting Superintendent of the New York State Department of Financial Services, said, "DFS has been a leader in developing rules that are facilitating the responsible adoption of artificial intelligence by financial services companies.
"DFS looks forward to supporting Governor Hochul's continued efforts to foster innovation and establish standards for the safe development of artificial intelligence models."