NEW YORK STATE ASSEMBLY MEMORANDUM IN SUPPORT OF LEGISLATION submitted in accordance with Assembly Rule III, Sec 1(f)
 
BILL NUMBER: A6453 REVISED 3/12/25
SPONSOR: Bores
 
TITLE OF BILL:
An act to amend the general business law, in relation to the training
and use of artificial intelligence frontier models
 
PURPOSE OR GENERAL IDEA OF BILL:
To require safety reports for powerful frontier artificial intelligence
models in order to limit critical harm.
 
SUMMARY OF PROVISIONS:
Section 1421 details transparency requirements regarding frontier model
training and deployment, including requiring safety plans, having them
audited by third parties, and disclosing safety incidents.
Section 1422 protects employees of artificial intelligence labs who
disclose irresponsible risks.
Section 1423 defines penalties.
Section 1424 clarifies that this bill does not preclude other laws and
responsibilities.
Section 1425 clarifies that any section preempted by federal law shall
not apply.
 
JUSTIFICATION:
Artificial intelligence is evolving faster than any technology in human
history. It is driving groundbreaking scientific advances leading to
life-changing medicines, unlocking new creative potential, and
automating mundane tasks. At the same time, experts and practitioners in
the field readily acknowledge the potential for serious risks. AI
companies, leading scientists, and international bodies are preparing
for a world in which AI can be used to conduct devastating cyberattacks,
aid in the production of bioweapons, and even circumvent controls
imposed by developers.
In March 2023, more than 1,000 tech leaders signed a letter calling for
a "pause for at least 6 months" on training frontier models until
international safety standards could be established. The signatories,
including figures like Rachel Bronson, Steve Wozniak, Andrew Yang, and
Elon Musk, span the political spectrum and underscore the urgent need
for caution (1).
Since then, AI models have become exponentially more powerful. We are
only a few years away from AI models that can write their own code;
already, 25% of new code at Google's parent company Alphabet is written
by AI (2). In December 2024, Apollo Research tested large AI models by
making them believe they would be shut down; models from every lab
tested (OpenAI, Anthropic, Google, and Meta) tried to make copies of
themselves on new servers and then lied to humans about their actions
(3).
Current models are already showing the potential to aid nefarious actors
in inflicting harm. China's government has employed Meta's AI model for
both broad military uses (4) and citizen surveillance (5). The
International AI Safety Report, written by over 100 experts from 30
countries (including the US) and led by the "godfather of AI" Yoshua
Bengio, identified several emerging risks, including that an existing
model produced plans for biological weapons "rated superior to plans
generated by experts with a PhD 72% of the time and provides details
that expert evaluators could not find online" (6).
Developers of this technology continue to sound the alarm. In reviewing
the safety of its latest model, OpenAI stated, "Several of our biology
evaluations indicate our models are on the cusp of being able to
meaningfully help novices create known biological threats, which would
cross our high-risk threshold. We expect current trends of rapidly
increasing capability to continue, and for models to cross this
threshold in the near future" (7).
Another leading lab, Anthropic, warned that "the window for proactive
risk prevention is closing fast" and that governments must put
regulation of frontier models in place by April 2026 at the latest (8).
Given New York's legislative calendar, that requires urgent action in
our 2025 session. Anthropic also acknowledged that while it would prefer
action at the federal level, "the federal legislative process will not
be fast enough to address risks on the timescale about which we're
concerned" and "urgency may demand it is instead developed by individual
states" (9).
Our laws have not kept up. We do not let people do things as mundane as
open a daycare center without a safety plan. This bill simply says that
companies spending hundreds of millions of dollars to train the most
advanced AI models need to take the following common-sense steps:
1. Have a safety plan to prevent severe risks (as most of them already
do);
2. Have a qualified third party review that safety plan;
3. Not fire or otherwise punish employees who flag risks; and
4. Disclose major security incidents, so that no one has to make the
same mistake twice.
The risks noted above are more than sufficient to justify the measures
taken in this bill, but we should be mindful that experts have
repeatedly warned of even more severe threats. In 2023, more than a
thousand experts, including the CEOs of Google DeepMind, Anthropic, and
OpenAI, and many world-leading academics, signed a letter stating that
"mitigating the risk of extinction from AI should be a global priority
alongside other societal-scale risks such as pandemics and nuclear war"
(10).
In the face of these dangers, we must keep a clear focus on the immense
promise of AI. Regulation needs to be targeted and surgical, reducing
risks while promoting AI's benefits. The RAISE Act is designed to do
exactly this. Limited to only a small set of very severe risks, the
RAISE Act will not require many changes from what the vast majority of
AI companies are currently doing; instead, it will simply ensure that no
company has an economic incentive to cut corners or abandon its safety
plan. Notably, this bill does not attempt to tackle every issue with AI:
important questions about bias, authenticity, workforce impacts, and
other concerns can and should be handled with additional legislation.
The RAISE Act focuses on severe risks that could cause over $1 billion
in damage or hundreds of deaths or injuries. For these kinds of risks,
this bill is the bare minimum that New Yorkers expect.
(1) https://futureoflife.org/open-letter/pause-giant-ai-experiments/
(2) https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai/
(3) https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
(4) https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/
(5) https://www.nytimes.com/2025/02/21/technology/openai-chinese-surveillance.html
(6) https://arxiv.org/pdf/2501.17805
(7) https://cdn.openai.com/deep-research-system-card.pdf
(8) https://www.anthropic.com/news/the-case-for-targeted-regulation
(9) Ibid.
(10) https://www.safe.ai/work/statement-on-ai-risk
 
PRIOR LEGISLATIVE HISTORY:
This is a new bill.
 
FISCAL IMPLICATIONS FOR STATE AND LOCAL GOVERNMENTS:
None
 
EFFECTIVE DATE:
This act shall take effect ninety days after it becomes law.