Everything You Need to Know About New York’s RAISE Act 

New York became the second state last year to enact a frontier model disclosure law when Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The new law requires frontier model providers to disclose certain safety processes for their models and report certain safety incidents to state regulators, with many similarities to California’s slate of AI laws passed last fall. The RAISE Act will take effect on January 1, 2027. This article covers who must comply with the RAISE Act, what transparency obligations the law creates, and how the law will be enforced.

Scope of the RAISE Act

The RAISE Act applies to “large frontier developers” that train, or initiate the training of, frontier models. An entity is a large frontier developer if it (collectively with its affiliates) had gross revenue exceeding $500 million in the previous calendar year. Frontier models developed by these companies are covered by the law if they are foundation models “trained using a quantity of computing power greater than 10^26 integer or floating-point operations.” The law is limited to frontier models that are “developed, deployed, or operat[ing] in whole or in part in New York state.” This means that the RAISE Act will reach only a handful of model providers, such as OpenAI, Anthropic, and Meta.

The law’s key requirements are also anchored around “catastrophic risks” posed by frontier models. The RAISE Act defines these as risks that are foreseeable and material throughout the frontier model’s lifecycle and that “materially” cause the death or serious injury of 50 or more people, or more than $1 billion in damage, from a single incident. The harm caused by a catastrophic risk must come from specific types of incidents, such as the model providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon. There are limited exceptions for what counts as a catastrophic risk, such as when the frontier model outputs information that is otherwise publicly available or when the harm stems from lawful federal government activity. The catastrophic risk language imposes additional limits on the law’s applicability and obligations.

Key Transparency and Reporting Requirements

Frontier model developers covered under the law must disclose certain information about their frontier models. The law requires developers to develop, implement, and publicly disclose a frontier AI framework that describes how they address certain safety activities, such as assessing the thresholds that could trigger a catastrophic risk, the mitigations that can be applied to catastrophic risks, and the processes for updating the frontier AI framework.

Frontier model developers are also required to update their frontier AI frameworks annually, as well as when their frontier models are materially modified. Updates to the framework resulting from model modifications require a published disclosure and justification within 30 days of the changes. Before a new or substantially modified version of a model is deployed, the frontier model developer must publish a transparency report on its website that contains information such as how consumers can communicate with the developer, the model’s release date, intended model uses, and model use restrictions.

The RAISE Act also imposes reporting obligations on frontier model providers affected by critical safety incidents. These incidents include unauthorized access to or modification of model weights that causes death or bodily injury, harm that results from a catastrophic risk, loss of model control that results in death or bodily injury, or a model using deceptive techniques against the frontier developer to subvert the developer’s controls or monitoring. Critical safety incidents must be reported to state regulators within 72 hours of determining that an incident has occurred. Incidents that pose an imminent risk of death or serious injury must be reported within 24 hours.

Enforcement and Penalties

The law empowers the Attorney General to bring civil suits for violations of the law and explicitly states that it does not create a private right of action. Penalties can reach $1 million for a first violation and up to $3 million for each subsequent violation. The law does not prevent frontier model developers from asserting that another “person, entity, or factor” caused the alleged harm.

FAQs About the RAISE Act

How does the RAISE Act compare to California’s SB-53?

The RAISE Act and SB-53 are substantially similar, with some minor differences. SB-53 gives developers 15 days to report critical incidents, whereas the RAISE Act requires reporting within 72 hours. Penalties under SB-53 are capped at $1 million. SB-53 establishes whistleblower protections for employees at frontier model companies who report violations of the law, whereas the RAISE Act does not address this specifically (note: there may be protections codified elsewhere under New York state law). The RAISE Act also explicitly scopes the law to models developed or deployed within New York state, whereas SB-53 does not include similar language.

How does the RAISE Act interact with the White House AI executive order?

Governor Hochul signed the RAISE Act in the wake of President Trump’s Ensuring a National Policy Framework for AI Executive Order (EO), which seeks to prohibit states from enacting their own AI laws. The EO directs the Department of Justice (DOJ) to identify state AI laws that unconstitutionally regulate interstate commerce and to bring legal challenges against them. Disclosure requirements for AI companies (such as those in the RAISE Act) are specifically mentioned as a category of law that will face evaluation from the DOJ. While the EO cannot prevent states from actually enacting AI laws, the threatened lawsuits and funding cuts are meant to deter them. It is possible that the prospect of such legal challenges was a motivating factor in Governor Hochul’s decision to sign the law.

What does the RAISE Act mean for AI governance professionals?

The law targets disclosure requirements for frontier model developers, which means that in the immediate future there may not be explicit requirements for downstream deployers. However, as model developers begin implementing their frontier AI frameworks, it is possible that third-party agreements or terms of service may impose new reporting obligations on downstream actors. For instance, model providers may shift some risk identification responsibilities to downstream deployers and users because those parties are better positioned to understand how risks are realized in the real world.
