Enacts the "New York artificial intelligence consumer protection act", in relation to preventing the use of artificial intelligence algorithms to discriminate against protected classes.
STATE OF NEW YORK
________________________________________________________________________
768
2025-2026 Regular Sessions
IN ASSEMBLY (Prefiled)
January 8, 2025
___________
Introduced by M. of A. BORES -- read once and referred to the Committee
on Consumer Affairs and Protection
AN ACT to amend the general business law, in relation to preventing the
use of artificial intelligence algorithms to discriminate against
protected classes
The People of the State of New York, represented in Senate and Assembly, do enact as follows:
1 Section 1. Short title. This act shall be known and may be cited as
2 the "New York artificial intelligence consumer protection act".
3 § 2. The general business law is amended by adding a new article 45-A
4 to read as follows:
5 ARTICLE 45-A
6 NEW YORK ARTIFICIAL INTELLIGENCE CONSUMER PROTECTION ACT
7 Section 1550. Definitions.
8 1551. Required documentation.
9 1552. Risk management.
10 1553. Technical documentation.
11 1554. Required disclosure.
12 1555. Preemption.
13 1556. Enforcement.
14 § 1550. Definitions. For the purposes of this article, the following
15 terms shall have the following meanings:
16 1. "Algorithmic discrimination":
17 (a) shall mean any condition in which the use of an artificial intel-
18 ligence decision system results in any unlawful differential treatment
19 or impact that disfavors any individual or group of individuals on the
20 basis of their actual or perceived age, color, disability, ethnicity,
21 genetic information, English language proficiency, national origin,
22 race, religion, reproductive health, sex, veteran status, or other clas-
23 sification protected pursuant to state or federal law; and
1 (b) shall not include:
2 (i) the offer, license, or use of a high-risk artificial intelligence
3 decision system by a developer or deployer for the sole purpose of:
4 (A) such developer's or deployer's self-testing to identify, mitigate,
5 or prevent discrimination or otherwise ensure compliance with state and
6 federal law; or
7 (B) expanding an applicant, customer, or participant pool to increase
8 diversity or redress historic discrimination; or
9 (ii) an act or omission by or on behalf of a private club or other
10 establishment not open to the general public, as set forth in title II
11 of the Civil Rights Act of 1964, 42 U.S.C. § 2000a(e), as amended.
12 2. "Artificial intelligence decision system" shall mean any computa-
13 tional process, derived from machine learning, statistical modeling,
14 data analytics, or artificial intelligence, that issues simplified
15 output, including any content, decision, prediction, or recommendation,
16 that is used to substantially assist or replace discretionary decision
17 making for making consequential decisions that impact consumers.
18 3. "Bias and governance audit" means an impartial evaluation by an
19 independent auditor, which shall include, at a minimum, the testing of
20 an artificial intelligence decision system to assess such system's
21 disparate impact on employees because of such employees' age, race,
22 creed, color, ethnicity, national origin, disability, citizenship or
23 immigration status, marital or familial status, military status, reli-
24 gion, or sex, including sexual orientation, gender identity, gender
25 expression, pregnancy, pregnancy outcomes, and reproductive healthcare
26 choices.
27 4. "Consequential decision" shall mean any decision that has a materi-
28 al legal or similarly significant effect on the provision or denial to
29 any consumer of, or the cost or terms of, any:
30 (a) education enrollment or education opportunity;
31 (b) employment or employment opportunity;
32 (c) financial or lending service;
33 (d) essential government service;
34 (e) health care service, as defined in 42 U.S.C. § 324(d)(2),
35 as amended;
36 (f) housing or housing opportunity;
37 (g) insurance; or
38 (h) legal service.
39 5. "Consumer" shall mean any New York state resident.
40 6. "Deploy" shall mean to use a high-risk artificial intelligence
41 decision system.
42 7. "Deployer" shall mean any person doing business in this state that
43 deploys a high-risk artificial intelligence decision system.
44 8. "Developer" shall mean any person doing business in this state that
45 develops, or intentionally and substantially modifies, an artificial
46 intelligence decision system.
47 9. "General-purpose artificial intelligence model":
48 (a) shall mean any form of artificial intelligence decision system
49 that:
50 (i) displays significant generality;
51 (ii) is capable of competently performing a wide range of distinct
52 tasks; and
53 (iii) can be integrated into a variety of downstream applications or
54 systems; and
1 (b) shall not include any artificial intelligence model that is used
2 for development, prototyping, and research activities before such arti-
3 ficial intelligence model is released on the market.
4 10. "High-risk artificial intelligence decision system":
5 (a) shall mean any artificial intelligence decision system that, when
6 deployed, makes, or is a substantial factor in making, a consequential
7 decision; and
8 (b) shall not include:
9 (i) any artificial intelligence decision system that is intended to:
10 (A) perform any narrow procedural task; or
11 (B) detect decision-making patterns, or deviations from decision-mak-
12 ing patterns, unless such artificial intelligence decision system is
13 intended to replace or influence any assessment previously completed by
14 an individual without sufficient human review; or
15 (ii) unless the technology, when deployed, makes, or is a substantial
16 factor in making, a consequential decision:
17 (A) any anti-fraud technology that does not make use of facial recog-
18 nition technology;
19 (B) any artificial intelligence-enabled video game technology;
20 (C) any anti-malware, anti-virus, calculator, cybersecurity, database,
21 data storage, firewall, Internet domain registration, Internet website
22 loading, networking, robocall-filtering, spam-filtering, spellchecking,
23 spreadsheet, web-caching, web-hosting, or similar technology;
24 (D) any technology that performs tasks exclusively related to an enti-
25 ty's internal management affairs, including, but not limited to, order-
26 ing office supplies or processing payments; or
27 (E) any technology that communicates with consumers in natural
28 language for the purpose of providing consumers with information, making
29 referrals or recommendations, and answering questions, and is subject to
30 an acceptable use policy that prohibits generating content that is discri-
31 minatory or harmful.
32 11. "Intentional and substantial modification":
33 (a) shall mean any deliberate change made to:
34 (i) an artificial intelligence decision system that results in any new
35 reasonably foreseeable risk of algorithmic discrimination; or
36 (ii) a general-purpose artificial intelligence model that:
37 (A) affects compliance of the general-purpose artificial intelligence
38 model;
39 (B) materially changes the purpose of the general-purpose artificial
40 intelligence model; or
41 (C) results in any new reasonably foreseeable risk of algorithmic
42 discrimination; and
43 (b) shall not include any change made to a high-risk artificial intel-
44 ligence decision system, or the performance of a high-risk artificial
45 intelligence decision system, if:
46 (i) the high-risk artificial intelligence decision system continues to
47 learn after such high-risk artificial intelligence decision system is:
48 (A) offered, sold, leased, licensed, given or otherwise made available
49 to a deployer; or
50 (B) deployed; and
51 (ii) such change:
52 (A) is made to such high-risk artificial intelligence decision system
53 as a result of any learning described in subparagraph (i) of this para-
54 graph;
55 (B) was predetermined by the deployer, or the third party contracted
56 by the deployer, when such deployer or third party completed the initial
1 impact assessment of such high-risk artificial intelligence decision
2 system pursuant to subdivision three of section one thousand five
3 hundred fifty-two of this article; and
4 (C) is included in the technical documentation for such high-risk
5 artificial intelligence decision system.
6 12. "Person" shall mean any individual, association, corporation,
7 limited liability company, partnership, trust or other legal entity
8 authorized to do business in this state.
9 13. "Red-teaming" shall mean an exercise that is conducted to identify
10 the potential adverse behaviors or outcomes of an artificial intelli-
11 gence decision system and how such behaviors or outcomes occur, and
12 stress test the safeguards against such adverse behaviors or outcomes.
13 14. "Substantial factor":
14 (a) shall mean a factor that:
15 (i) assists in making a consequential decision;
16 (ii) is capable of altering the outcome of a consequential decision;
17 and
18 (iii) is generated by an artificial intelligence decision system; and
19 (b) includes, but is not limited to, any use of an artificial intelli-
20 gence decision system to generate any content, decision, prediction, or
21 recommendation concerning a consumer that is used as a basis to make a
22 consequential decision concerning such consumer.
23 15. "Synthetic digital content" shall mean any digital content,
24 including, but not limited to, any audio, image, text, or video, that is
25 produced or manipulated by an artificial intelligence decision system,
26 including, but not limited to, a general-purpose artificial intelligence
27 model.
28 16. "Trade secret" shall mean any form and type of financial, busi-
29 ness, scientific, technical, economic, or engineering information,
30 including, but not limited to, a pattern, plan, compilation, program
31 device, formula, design, prototype, method, technique, process, proce-
32 dure, program, or code, whether tangible or intangible, and whether
33 stored, compiled, or memorialized physically, electronically, graph-
34 ically, photographically, or in writing, that:
35 (a) derives independent economic value, whether actual or potential,
36 from not being generally known to, or readily ascertainable by proper
37 means by, other persons who can obtain economic value from its disclo-
38 sure or use; and
39 (b) is the subject of efforts that are reasonable under the circum-
40 stances to maintain its secrecy.
41 § 1551. Required documentation. 1. (a) Beginning on January first, two
42 thousand twenty-seven, each developer of a high-risk artificial intelli-
43 gence decision system shall use reasonable care to protect consumers
44 from any known or reasonably foreseeable risks of algorithmic discrimi-
45 nation arising from the intended and contracted uses of a high-risk
46 artificial intelligence decision system. In any enforcement action
47 brought on or after such date by the attorney general pursuant to this
48 article, there shall be a rebuttable presumption that a developer used
49 reasonable care as required pursuant to this subdivision if:
50 (i) the developer complied with the provisions of this section; and
51 (ii) an independent third party identified by the attorney general
52 pursuant to paragraph (b) of this subdivision and retained by the devel-
53 oper completed bias and governance audits for the high-risk artificial
54 intelligence decision system.
55 (b) No later than January first, two thousand twenty-six, and at least
56 annually thereafter, the attorney general shall:
1 (i) identify independent third parties who, in the attorney general's
2 opinion, are qualified to complete bias and governance audits for the
3 purposes of subparagraph (ii) of paragraph (a) of this subdivision; and
4 (ii) publish a list of such independent third parties available on the
5 attorney general's website.
6 2. Beginning on January first, two thousand twenty-seven, and except
7 as provided in subdivision five of this section, a developer of a high-
8 risk artificial intelligence decision system shall make available to
9 each deployer or other developer the following information:
10 (a) A general statement describing the reasonably foreseeable uses,
11 and the known harmful or inappropriate uses, of such high-risk artifi-
12 cial intelligence decision system;
13 (b) Documentation disclosing:
14 (i) high-level summaries of the type of data used to train such high-
15 risk artificial intelligence decision system;
16 (ii) the known or reasonably foreseeable limitations of such high-risk
17 artificial intelligence decision system, including, but not limited to,
18 the known or reasonably foreseeable risks of algorithmic discrimination
19 arising from the intended uses of such high-risk artificial intelligence
20 decision system;
21 (iii) the purpose of such high-risk artificial intelligence decision
22 system;
23 (iv) the intended benefits and uses of such high-risk artificial
24 intelligence decision system; and
25 (v) any other information necessary to enable such deployer or other
26 developer to comply with the provisions of this article;
27 (c) Documentation describing:
28 (i) how such high-risk artificial intelligence decision system was
29 evaluated for performance, and mitigation of algorithmic discrimination,
30 before such high-risk artificial intelligence decision system was
31 offered, sold, leased, licensed, given, or otherwise made available to
32 such deployer or other developer;
33 (ii) the data governance measures used to cover the training datasets
34 and examine the suitability of data sources, possible biases, and appro-
35 priate mitigation;
36 (iii) the intended outputs of such high-risk artificial intelligence
37 decision system;
38 (iv) the measures the developer has taken to miti-
39 gate any known or reasonably foreseeable risks of algorithmic discrimi-
40 nation that may arise from deployment of such high-risk artificial
41 intelligence decision system; and
42 (v) how such high-risk artificial intelligence decision system should
43 be used, not be used, and be monitored by an individual when such high-
44 risk artificial intelligence decision system is used to make, or as a
45 substantial factor in making, a consequential decision; and
46 (d) Any additional documentation that is reasonably necessary to
47 assist a deployer or other developer to:
48 (i) understand the outputs of such high-risk artificial intelligence
49 decision system; and
50 (ii) monitor the performance of such high-risk artificial intelligence
51 decision system for risks of algorithmic discrimination.
52 3. (a) Except as provided in subdivision five of this section, any
53 developer that, on or after January first, two thousand twenty-seven,
54 offers, sells, leases, licenses, gives, or otherwise makes available to
55 a deployer or other developer a high-risk artificial intelligence deci-
56 sion system shall, to the extent feasible, make available to such
1 deployers and other developers the documentation and information relat-
2 ing to such high-risk artificial intelligence decision system necessary
3 for a deployer, or the third party contracted by a deployer, to complete
4 an impact assessment pursuant to this article. The developer shall make
5 such documentation and information available through artifacts such as
6 model cards, dataset cards, or other impact assessments.
7 (b) A developer that also serves as a deployer for any high-risk arti-
8 ficial intelligence decision system shall not be required to generate
9 the documentation and information required pursuant to this section
10 unless such high-risk artificial intelligence decision system is
11 provided to an unaffiliated entity acting as a deployer.
12 4. (a) Beginning on January first, two thousand twenty-seven, each
13 developer shall publish, in a manner that is clear and readily avail-
14 able, on such developer's website or in a public use case inventory, a
15 statement summarizing:
16 (i) the types of high-risk artificial intelligence decision systems
17 that such developer:
18 (A) has developed or intentionally and substantially modified; and
19 (B) currently makes available to a deployer or other developer; and
20 (ii) how such developer manages any known or reasonably foreseeable
21 risks of algorithmic discrimination that may arise from the development
22 or intentional and substantial modification of the types of high-risk
23 artificial intelligence decision systems described in subparagraph (i)
24 of this paragraph.
25 (b) Each developer shall update the statement described in paragraph
26 (a) of this subdivision:
27 (i) as necessary to ensure that such statement remains accurate; and
28 (ii) no later than ninety days after the developer intentionally and
29 substantially modifies any high-risk artificial intelligence decision
30 system described in subparagraph (i) of paragraph (a) of this subdivi-
31 sion.
32 5. Nothing in subdivisions two or four of this section shall be
33 construed to require a developer to disclose any information:
34 (a) that is a trade secret or otherwise protected from disclosure
35 pursuant to state or federal law; or
36 (b) the disclosure of which would present a security risk to such
37 developer.
38 6. Beginning on January first, two thousand twenty-seven, the attorney
39 general may require that a developer disclose to the attorney general,
40 as part of an investigation conducted by the attorney general and in a
41 form and manner prescribed by the attorney general, the general state-
42 ment or documentation described in subdivision two of this section. The
43 attorney general may evaluate such general statement or documentation to
44 ensure compliance with the provisions of this section. In disclosing
45 such general statement or documentation to the attorney general pursuant
46 to this subdivision, the developer may designate such general statement
47 or documentation as including any information that is exempt from
48 disclosure pursuant to subdivision five of this section or article six
49 of the public officers law. To the extent such general statement or
50 documentation includes such information, such general statement or
51 documentation shall be exempt from disclosure. To the extent any infor-
52 mation contained in such general statement or documentation is subject
53 to the attorney-client privilege or work product protection, such
54 disclosure shall not constitute a waiver of such privilege or
55 protection.
1 § 1552. Risk management. 1. (a) Beginning on January first, two thou-
2 sand twenty-seven, each deployer of a high-risk artificial intelligence
3 decision system shall use reasonable care to protect consumers from any
4 known or reasonably foreseeable risks of algorithmic discrimination. In
5 any enforcement action brought on or after said date by the attorney
6 general pursuant to this article, there shall be a rebuttable presump-
7 tion that a deployer of a high-risk artificial intelligence decision
8 system used reasonable care as required pursuant to this subdivision if:
9 (i) the deployer complied with the provisions of this section; and
10 (ii) an independent third party identified by the attorney general
11 pursuant to paragraph (b) of this subdivision and retained by the
12 deployer completed bias and governance audits for the high-risk artifi-
13 cial intelligence decision system.
14 (b) No later than January first, two thousand twenty-seven, and at
15 least annually thereafter, the attorney general shall:
16 (i) identify the independent third parties who, in the attorney gener-
17 al's opinion, are qualified to complete bias and governance audits for
18 the purposes of subparagraph (ii) of paragraph (a) of this subdivision;
19 and
20 (ii) make a list of such independent third parties available on the
21 attorney general's website.
22 2. (a) Beginning on January first, two thousand twenty-seven, and
23 except as provided in subdivision seven of this section, each deployer
24 of a high-risk artificial intelligence decision system shall implement
25 and maintain a risk management policy and program to govern such
26 deployer's deployment of the high-risk artificial intelligence decision
27 system. The risk management policy and program shall specify and incor-
28 porate the principles, processes, and personnel that the deployer shall
29 use to identify, document, and mitigate any known or reasonably foresee-
30 able risks of algorithmic discrimination. The risk management policy
31 shall be the product of an iterative process, the risk management
32 program shall be an iterative process, and both the risk management poli-
33 cy and program shall be planned, implemented, and regularly and system-
34 atically reviewed and updated over the lifecycle of the high-risk arti-
35 ficial intelligence decision system. Each risk management policy and
36 program implemented and maintained pursuant to this subdivision shall be
37 reasonable, considering:
38 (i) the guidance and standards set forth in the latest version of:
39 (A) the "Artificial Intelligence Risk Management Framework" published
40 by the national institute of standards and technology;
41 (B) ISO/IEC 42001 of the international organization for standardiza-
42 tion and the international electrotechnical commission; or
43 (C) a nationally or internationally recognized risk management frame-
44 work for artificial intelligence decision systems, other than the guid-
45 ance and standards specified in clauses (A) and (B) of this subpara-
46 graph, that imposes requirements that are substantially equivalent to,
47 and at least as stringent as, the requirements established pursuant to
48 this section for risk management policies and programs;
49 (ii) the size and complexity of the deployer;
50 (iii) the nature and scope of the high-risk artificial intelligence
51 decision systems deployed by the deployer, including, but not limited
52 to, the intended uses of such high-risk artificial intelligence decision
53 systems; and
54 (iv) the sensitivity and volume of data processed in connection with
55 the high-risk artificial intelligence decision systems deployed by the
56 deployer.
1 (b) A risk management policy and program implemented and maintained
2 pursuant to paragraph (a) of this subdivision may cover multiple high-
3 risk artificial intelligence decision systems deployed by the deployer.
4 3. (a) Except as provided in paragraphs (c) and (d) of this subdivi-
5 sion and subdivision seven of this section:
6 (i) a deployer that deploys a high-risk artificial intelligence deci-
7 sion system on or after January first, two thousand twenty-seven, or a
8 third party contracted by the deployer, shall complete an impact assess-
9 ment of the high-risk artificial intelligence decision system; and
10 (ii) beginning on January first, two thousand twenty-seven, a deploy-
11 er, or a third party contracted by the deployer, shall complete an
12 impact assessment of a deployed high-risk artificial intelligence deci-
13 sion system:
14 (A) at least annually; and
15 (B) no later than ninety days after an intentional and substantial
16 modification to such high-risk artificial intelligence decision system
17 is made available.
18 (b) (i) Each impact assessment completed pursuant to this subdivision
19 shall include, at a minimum and to the extent reasonably known by, or
20 available to, the deployer:
21 (A) a statement by the deployer disclosing the purpose, intended use
22 cases and deployment context of, and benefits afforded by, the high-risk
23 artificial intelligence decision system;
24 (B) an analysis of whether the deployment of the high-risk artificial
25 intelligence decision system poses any known or reasonably foreseeable
26 risks of algorithmic discrimination and, if so, the nature of such algo-
27 rithmic discrimination and the steps that have been taken to mitigate
28 such risks;
29 (C) a description of:
30 (I) the categories of data the high-risk artificial intelligence deci-
31 sion system processes as inputs; and
32 (II) the outputs such high-risk artificial intelligence decision
33 system produces;
34 (D) if the deployer used data to customize the high-risk artificial
35 intelligence decision system, an overview of the categories of data the
36 deployer used to customize such high-risk artificial intelligence deci-
37 sion system;
38 (E) any metrics used to evaluate the performance and known limitations
39 of the high-risk artificial intelligence decision system;
40 (F) a description of any transparency measures taken concerning the
41 high-risk artificial intelligence decision system, including, but not
42 limited to, any measures taken to disclose to a consumer that such high-
43 risk artificial intelligence decision system is in use when such high-
44 risk artificial intelligence decision system is in use; and
45 (G) a description of the post-deployment monitoring and user safe-
46 guards provided concerning such high-risk artificial intelligence deci-
47 sion system, including, but not limited to, the oversight, use, and
48 learning process established by the deployer to address issues arising
49 from deployment of such high-risk artificial intelligence decision
50 system.
51 (ii) In addition to the statement, analysis, descriptions, overview,
52 and metrics required pursuant to subparagraph (i) of this paragraph, an
53 impact assessment completed pursuant to this subdivision following an
54 intentional and substantial modification made to a high-risk artificial
55 intelligence decision system on or after January first, two thousand
56 twenty-seven, shall include a statement disclosing the extent to which
1 the high-risk artificial intelligence decision system was used in a
2 manner that was consistent with, or varied from, the developer's
3 intended uses of such high-risk artificial intelligence decision system.
4 (c) A single impact assessment may address a comparable set of high-
5 risk artificial intelligence decision systems deployed by a deployer.
6 (d) If a deployer, or a third party contracted by the deployer,
7 completes an impact assessment for the purpose of complying with another
8 applicable law or regulation, such impact assessment shall be deemed to
9 satisfy the requirements established in this subdivision if such impact
10 assessment is reasonably similar in scope and effect to the impact
11 assessment that would otherwise be completed pursuant to this subdivi-
12 sion.
13 (e) A deployer shall maintain the most recently completed impact
14 assessment of a high-risk artificial intelligence decision system as
15 required pursuant to this subdivision, all records concerning each such
16 impact assessment and all prior impact assessments, if any, for a period
17 of at least three years following the final deployment of the high-risk
18 artificial intelligence decision system.
19 4. Except as provided in subdivision seven of this section, a deploy-
20 er, or a third party contracted by the deployer, shall review, no later
21 than January first, two thousand twenty-seven, and at least annually
22 thereafter, the deployment of each high-risk artificial intelligence
23 decision system deployed by the deployer to ensure that such high-risk
24 artificial intelligence decision system is not causing algorithmic
25 discrimination.
26 5. (a) Beginning on January first, two thousand twenty-seven, and
27 before a deployer deploys a high-risk artificial intelligence decision
28 system to make, or be a substantial factor in making, a consequential
29 decision concerning a consumer, the deployer shall:
30 (i) notify the consumer that the deployer has deployed a high-risk
31 artificial intelligence decision system to make, or be a substantial
32 factor in making, such consequential decision; and
33 (ii) provide to the consumer:
34 (A) a statement disclosing:
35 (I) the purpose of such high-risk artificial intelligence decision
36 system; and
37 (II) the nature of such consequential decision;
38 (B) contact information for such deployer;
39 (C) a description, in plain language, of such high-risk artificial
40 intelligence decision system; and
41 (D) instructions on how to access the statement made available pursu-
42 ant to paragraph (a) of subdivision six of this section.
43 (b) Beginning on January first, two thousand twenty-seven, a deployer
44 that has deployed a high-risk artificial intelligence decision system to
45 make, or as a substantial factor in making, a consequential decision
46 concerning a consumer shall, if such consequential decision is adverse
47 to the consumer, provide to such consumer:
48 (i) a statement disclosing the principal reason or reasons for such
49 adverse consequential decision, including, but not limited to:
50 (A) the degree to which, and manner in which, the high-risk artificial
51 intelligence decision system contributed to such adverse consequential
52 decision;
53 (B) the type of data that was processed by such high-risk artificial
54 intelligence decision system in making such adverse consequential deci-
55 sion; and
56 (C) the source of such data; and
1 (ii) an opportunity to:
2 (A) correct any incorrect personal data that the high-risk artificial
3 intelligence decision system processed in making, or as a substantial
4 factor in making, such adverse consequential decision; and
5 (B) appeal such adverse consequential decision, which shall, if tech-
6 nically feasible, allow for human review unless providing such opportu-
7 nity is not in the best interest of such consumer, including, but not
8 limited to, in instances in which any delay might pose a risk to the
9 life or safety of such consumer.
10 (c) The deployer shall provide the notice, statements, information,
11 description, and instructions required pursuant to paragraphs (a) and
12 (b) of this subdivision:
13 (i) directly to the consumer;
14 (ii) in plain language;
15 (iii) in all languages in which such deployer, in the ordinary course
16 of such deployer's business, provides contracts, disclaimers, sale
17 announcements, and other information to consumers; and
18 (iv) in a format that is accessible to consumers with disabilities.
19 6. (a) Beginning on January first, two thousand twenty-seven, and
20 except as provided in subdivision seven of this section, each deployer
21 shall make available, in a manner that is clear and readily available on
22 such deployer's website, a statement summarizing:
23 (i) the types of high-risk artificial intelligence decision systems
24 that are currently deployed by such deployer;
25 (ii) how such deployer manages any known or reasonably foreseeable
26 risks of algorithmic discrimination that may arise from deployment of
27 each high-risk artificial intelligence decision system described in
28 subparagraph (i) of this paragraph; and
29 (iii) in detail, the nature, source and extent of the information
30 collected and used by such deployer.
31 (b) Each deployer shall periodically update the statement required
32 pursuant to paragraph (a) of this subdivision.
33 7. The provisions of subdivisions two, three, four, and six of this
34 section shall not apply to a deployer if, at the time the deployer
35 deploys a high-risk artificial intelligence decision system, and at all
36 times while the high-risk artificial intelligence decision system is
37 deployed:
38 (a) the deployer:
39 (i) has entered into a contract with the developer in which the devel-
40 oper has agreed to assume the deployer's duties pursuant to subdivisions
41 two, three, four, or six of this section; and
42 (ii) does not exclusively use such deployer's own data to train such
43 high-risk artificial intelligence decision system;
44 (b) such high-risk artificial intelligence decision system:
45 (i) is used for the intended uses that are disclosed to such deployer
46 pursuant to subparagraph (iv) of paragraph (b) of subdivision two of
47 section one thousand five hundred fifty-one of this article; and
48 (ii) continues learning based on a broad range of data sources and not
49 solely based on the deployer's own data; and
50 (c) such deployer makes available to consumers any impact assessment
51 that:
52 (i) the developer of such high-risk artificial intelligence decision
53 system has completed and provided to such deployer; and
54 (ii) includes information that is substantially similar to the infor-
55 mation included in the statement, analysis, descriptions, overview, and
1 metrics required pursuant to subparagraph (i) of paragraph (b) of subdi-
2 vision three of this section.
3 8. Nothing in this subdivision or subdivisions two, three, four, five,
4 or six of this section shall be construed to require a deployer to
5 disclose any information that is a trade secret or otherwise protected
6 from disclosure pursuant to state or federal law. If a deployer with-
7 holds any information from a consumer pursuant to this subdivision, the
8 deployer shall send notice to such consumer disclosing:
9 (a) that the deployer is withholding such information from such
10 consumer; and
11 (b) the basis for the deployer's decision to withhold such information
12 from such consumer.
13 9. Beginning on January first, two thousand twenty-seven, the attorney
14 general may require that a deployer, or a third party contracted by the
15 deployer pursuant to subdivision three of this section, as applicable,
16 disclose to the attorney general, as part of an investigation conducted
17 by the attorney general, no later than ninety days after a request by
18 the attorney general, and in a form and manner prescribed by the attor-
19 ney general, the risk management policy implemented pursuant to subdivi-
20 sion two of this section, the impact assessment completed pursuant to
21 subdivision three of this section, or records maintained pursuant to
22 paragraph (e) of subdivision three of this section. The attorney general
23 may evaluate such risk management policy, impact assessment or records
24 to ensure compliance with the provisions of this section. In disclosing
25 such risk management policy, impact assessment or records to the attor-
26 ney general pursuant to this subdivision, the deployer or third-party
27 contractor, as applicable, may designate such risk management policy,
28 impact assessment or records as including any information that is exempt
29 from disclosure pursuant to subdivision eight of this section or article
30 six of the public officers law. To the extent such risk management poli-
31 cy, impact assessment, or records include such information, such risk
32 management policy, impact assessment, or records shall be exempt from
33 disclosure. To the extent any information contained in such risk manage-
34 ment policy, impact assessment, or record is subject to the attorney-
35 client privilege or work product protection, such disclosure shall not
36 constitute a waiver of such privilege or protection.
37 § 1553. Technical documentation. 1. Beginning on January first, two
38 thousand twenty-seven, each developer of a general-purpose artificial
39 intelligence model shall, except as provided in subdivision two of this
40 section:
41 (a) create and maintain technical documentation for the general-pur-
42 pose artificial intelligence model, which shall:
43 (i) include:
44 (A) the training and testing processes for such general-purpose arti-
45 ficial intelligence model; and
46 (B) the results of an evaluation of such general-purpose artificial
47 intelligence model performed to determine whether such general-purpose
48 artificial intelligence model is in compliance with the provisions of
49 this article;
50 (ii) include, as appropriate, considering the size and risk profile of
51 such general-purpose artificial intelligence model, at least:
52 (A) the tasks such general-purpose artificial intelligence model is
53 intended to perform;
54 (B) the type and nature of artificial intelligence decision systems in
55 which such general-purpose artificial intelligence model is intended to
56 be integrated;
1 (C) acceptable use policies for such general-purpose artificial intel-
2 ligence model;
3 (D) the date such general-purpose artificial intelligence model is
4 released;
5 (E) the methods by which such general-purpose artificial intelligence
6 model is distributed; and
7 (F) the modality and format of inputs and outputs for such general-
8 purpose artificial intelligence model; and
9 (iii) be reviewed and revised at least annually, or more frequently,
10 as necessary to maintain the accuracy of such technical documentation;
11 and
12 (b) create, implement, maintain and make available to persons that
13 intend to integrate such general-purpose artificial intelligence model
14 into such persons' artificial intelligence decision systems documenta-
15 tion and information that:
16 (i) enables such persons to:
17 (A) understand the capabilities and limitations of such general-pur-
18 pose artificial intelligence model; and
19 (B) comply with such persons' obligations pursuant to this article;
20 (ii) discloses, at a minimum:
21 (A) the technical means required for such general-purpose artificial
22 intelligence model to be integrated into such persons' artificial intel-
23 ligence decision systems;
24 (B) the information listed in subparagraph (ii) of paragraph (a) of
25 this subdivision; and
26 (iii) except as provided in subdivision two of this section, is
27 reviewed and revised at least annually, or more frequently, as necessary
28 to maintain the accuracy of such documentation and information.
29 2. (a) The provisions of paragraph (a) and subparagraph (iii) of para-
30 graph (b) of subdivision one of this section shall not apply to a devel-
31 oper that develops, or intentionally and substantially modifies, a
32 general-purpose artificial intelligence model on or after January first,
33 two thousand twenty-seven, if:
34 (i) (A) the developer releases such general-purpose artificial intel-
35 ligence model under a free and open-source license that allows for:
36 (I) access to, and modification, distribution, and usage of, such
37 general-purpose artificial intelligence model; and
38 (II) the parameters of such general-purpose artificial intelligence
39 model to be made publicly available pursuant to clause (B) of this
40 subparagraph; and
41 (B) unless such general-purpose artificial intelligence model is
42 deployed as a high-risk artificial intelligence decision system, the
43 parameters of such general-purpose artificial intelligence model,
44 including, but not limited to, the weights and information concerning
45 the model architecture and model usage for such general-purpose artifi-
46 cial intelligence model, are made publicly available; or
47 (ii) the general-purpose artificial intelligence model is:
48 (A) not offered for sale in the market;
49 (B) not intended to interact with consumers; and
50 (C) solely utilized:
51 (I) for an entity's internal purposes; or
52 (II) pursuant to an agreement between multiple entities for such enti-
53 ties' internal purposes.
54 (b) The provisions of this section shall not apply to a developer that
55 develops, or intentionally and substantially modifies, a general-purpose
56 artificial intelligence model on or after January first, two thousand
1 twenty-seven, if such general-purpose artificial intelligence model
2 performs tasks exclusively related to an entity's internal management
3 affairs, including, but not limited to, ordering office supplies or
4 processing payments.
5 (c) A developer that takes any action under an exemption pursuant to
6 paragraph (a) or (b) of this subdivision shall bear the burden of demon-
7 strating that such action qualifies for such exemption.
8 (d) A developer that is exempt pursuant to subparagraph (ii) of para-
9 graph (a) of this subdivision shall establish and maintain an artificial
10 intelligence risk management framework, which shall:
11 (i) be the product of an iterative process and ongoing efforts; and
12 (ii) include, at a minimum:
13 (A) an internal governance function;
14 (B) a map function that shall establish the context to frame risks;
15 (C) a risk management function; and
16 (D) a function to measure identified risks by assessing, analyzing and
17 tracking such risks.
18 3. Nothing in subdivision one of this section shall be construed to
19 require a developer to disclose any information that is a trade secret
20 or otherwise protected from disclosure pursuant to state or federal law.
21 4. Beginning on January first, two thousand twenty-seven, the attorney
22 general may require that a developer disclose to the attorney general,
23 as part of an investigation conducted by the attorney general, no later
24 than ninety days after a request by the attorney general and in a form
25 and manner prescribed by the attorney general, any documentation main-
26 tained pursuant to this section. The attorney general may evaluate such
27 documentation to ensure compliance with the provisions of this section.
28 In disclosing any documentation to the attorney general pursuant to this
29 subdivision, the developer may designate such documentation as including
30 any information that is exempt from disclosure pursuant to subdivision
31 three of this section or article six of the public officers law. To the
32 extent such documentation includes such information, such documentation
33 shall be exempt from disclosure. To the extent any information contained
34 in such documentation is subject to the attorney-client privilege or
35 work product protection, such disclosure shall not constitute a waiver
36 of such privilege or protection.
37 § 1554. Required disclosure. 1. Beginning on January first, two thou-
38 sand twenty-seven, and except as provided in subdivision two of this
39 section, each person doing business in this state, including, but not
40 limited to, each deployer that deploys, offers, sells, leases, licenses,
41 gives, or otherwise makes available, as applicable, any artificial
42 intelligence decision system that is intended to interact with consumers
43 shall ensure that it is disclosed to each consumer who interacts with
44 such artificial intelligence decision system that such consumer is
45 interacting with an artificial intelligence decision system.
46 2. No disclosure shall be required pursuant to subdivision one of this
47 section under circumstances in which a reasonable person would deem it
48 obvious that such person is interacting with an artificial intelligence
49 decision system.
50 § 1555. Preemption. 1. Nothing in this article shall be construed to
51 restrict a developer's, deployer's, or other person's ability to:
52 (a) comply with federal, state or municipal law;
53 (b) comply with a civil, criminal or regulatory inquiry, investi-
54 gation, subpoena, or summons by a federal, state, municipal, or other
55 governmental authority;
1 (c) cooperate with a law enforcement agency concerning conduct or
2 activity that the developer, deployer, or other person reasonably and in
3 good faith believes may violate federal, state, or municipal law;
4 (d) investigate, establish, exercise, prepare for, or defend a legal
5 claim;
6 (e) take immediate steps to protect an interest that is essential for
7 the life or physical safety of a consumer or another individual;
8 (f) (i) by any means other than facial recognition technology,
9 prevent, detect, protect against, or respond to:
10 (A) a security incident;
11 (B) a malicious or deceptive activity; or
12 (C) identity theft, fraud, harassment or any other illegal activity;
13 (ii) investigate, report, or prosecute the persons responsible for any
14 action described in subparagraph (i) of this paragraph; or
15 (iii) preserve the integrity or security of systems;
16 (g) engage in public or peer-reviewed scientific or statistical
17 research in the public interest that:
18 (i) adheres to all other applicable ethics and privacy laws; and
19 (ii) is conducted in accordance with:
20 (A) part forty-six of title forty-five of the code of federal regu-
21 lations, as amended; or
22 (B) relevant requirements established by the federal food and drug
23 administration;
24 (h) conduct research, testing, and development activities regarding an
25 artificial intelligence decision system or model, other than testing
26 conducted pursuant to real world conditions, before such artificial
27 intelligence decision system or model is placed on the market, deployed,
28 or put into service, as applicable;
29 (i) effectuate a product recall;
30 (j) identify and repair technical errors that impair existing or
31 intended functionality; or
32 (k) assist another developer, deployer, or person with any of the
33 obligations imposed pursuant to this article.
34 2. The obligations imposed on developers, deployers, or other persons
35 pursuant to this article shall not apply where compliance by the devel-
36 oper, deployer, or other person with the provisions of this article
37 would violate an evidentiary privilege pursuant to state law.
38 3. Nothing in this article shall be construed to impose any obligation
39 on a developer, deployer, or other person that adversely affects the
40 rights or freedoms of any person, including, but not limited to, the
41 rights of any person:
42 (a) to freedom of speech or freedom of the press guaranteed in:
43 (i) the first amendment to the United States constitution; and
44 (ii) section eight of the New York state constitution; or
45 (b) pursuant to section seventy-nine-h of the civil rights law.
46 4. Nothing in this article shall be construed to apply to any develop-
47 er, deployer, or other person:
48 (a) insofar as such developer, deployer or other person develops,
49 deploys, puts into service, or intentionally and substantially modifies,
50 as applicable, a high-risk artificial intelligence decision system:
51 (i) that has been approved, authorized, certified, cleared, developed,
52 or granted by:
53 (A) a federal agency, including, but not limited to, the federal food
54 and drug administration or the federal aviation administration, acting
55 within the scope of such federal agency's authority; or
1 (B) a regulated entity subject to supervision and regulation by the
2 federal housing finance agency; or
3 (ii) in compliance with standards that are:
4 (A) established by:
5 (I) any federal agency, including, but not limited to, the federal
6 office of the national coordinator for health information technology; or
7 (II) a regulated entity subject to supervision and regulation by the
8 federal housing finance agency; and
9 (B) substantially equivalent to, and at least as stringent as, the
10 standards established pursuant to this article;
11 (b) conducting research to support an application:
12 (i) for approval or certification from any federal agency, including,
13 but not limited to, the federal food and drug administration, the feder-
14 al aviation administration, or the federal communications commission; or
15 (ii) that is otherwise subject to review by any federal agency;
16 (c) performing work pursuant to, or in connection with, a contract
17 with the federal department of commerce, the federal department of
18 defense, or the national aeronautics and space administration, unless
19 such developer, deployer, or other person is performing such work on a
20 high-risk artificial intelligence decision system that is used to make,
21 or as a substantial factor in making, a decision concerning employment
22 or housing; or
23 (d) that is a covered entity, as defined by the health insurance
24 portability and accountability act of 1996 and the regulations promul-
25 gated thereunder, as amended, and providing health care recommendations
26 that:
27 (i) are generated by an artificial intelligence decision system;
28 (ii) require a health care provider to take action to implement such
29 recommendations; and
30 (iii) are not considered to be high risk.
31 5. Nothing in this article shall be construed to apply to any artifi-
32 cial intelligence decision system that is acquired by or for the federal
33 government or any federal agency or department, including, but not
34 limited to, the federal department of commerce, the federal department
35 of defense, or the national aeronautics and space administration, unless
36 such artificial intelligence decision system is a high-risk artificial
37 intelligence decision system that is used to make, or as a substantial
38 factor in making, a decision concerning employment or housing.
39 6. Any insurer, as defined by section five hundred one of the insur-
40 ance law, or fraternal benefit society, as defined by section four thou-
41 sand five hundred one of the insurance law, shall be deemed to be in
42 full compliance with the provisions of this article if such insurer or
43 fraternal benefit society has implemented and maintains a written arti-
44 ficial intelligence decision systems program in accordance with all
45 requirements established by the superintendent of financial services.
46 7. (a) Any bank, out-of-state bank, New York credit union, federal
47 credit union, or out-of-state credit union, or any affiliate or subsid-
48 iary thereof, shall be deemed to be in full compliance with the
49 provisions of this article if such bank, out-of-state bank, New York
50 credit union, federal credit union, out-of-state credit union, affil-
51 iate, or subsidiary is subject to examination by any state or federal
52 prudential regulator pursuant to any published guidance or regulations
53 that apply to the use of high-risk artificial intelligence decision
54 systems, and such guidance or regulations:
55 (i) impose requirements that are substantially equivalent to, and at
56 least as stringent as, the requirements of this article; and
1 (ii) at a minimum, require such bank, out-of-state bank, New York
2 credit union, federal credit union, out-of-state credit union, affil-
3 iate, or subsidiary to:
4 (A) regularly audit such bank's, out-of-state bank's, New York credit
5 union's, federal credit union's, out-of-state credit union's, affil-
6 iate's, or subsidiary's use of high-risk artificial intelligence deci-
7 sion systems for compliance with state and federal anti-discrimination
8 laws and regulations applicable to such bank, out-of-state bank, New
9 York credit union, federal credit union, out-of-state credit union,
10 affiliate, or subsidiary; and
11 (B) mitigate any algorithmic discrimination caused by the use of a
12 high-risk artificial intelligence decision system, or any risk of algo-
13 rithmic discrimination that is reasonably foreseeable as a result of the
14 use of a high-risk artificial intelligence decision system.
15 (b) For the purposes of this subdivision, the following terms shall
16 have the following meanings:
17 (i) "Affiliate" shall have the same meaning as set forth in section
18 nine hundred twelve of the business corporation law.
19 (ii) "Bank" shall have the same meaning as set forth in section two of
20 the banking law.
21 (iii) "Credit union" shall have the same meaning as set forth in
22 section two of the banking law.
23 (iv) "Out-of-state bank" shall have the same meaning as set forth in
24 section two hundred twenty-two of the banking law.
25 (v) "Subsidiary" shall have the same meaning as set forth in section
26 one hundred forty-one of the banking law.
27 8. If a developer, deployer, or other person engages in any action
28 under an exemption pursuant to subdivisions one, two, three, four, five,
29 six, or seven of this section, the developer, deployer, or other person
30 bears the burden of demonstrating that such action qualifies for such
31 exemption.
32 § 1556. Enforcement. 1. The attorney general shall have exclusive
33 authority to enforce the provisions of this article.
34 2. Except as provided in subdivision six of this section, during the
35 period beginning on January first, two thousand twenty-seven, and ending
36 on January first, two thousand twenty-eight, the attorney general shall,
37 prior to initiating any action for a violation of this section, issue a
38 notice of violation to the developer, deployer, or other person if the
39 attorney general determines that it is possible to cure such violation.
40 If the developer, deployer, or other person fails to cure such violation
41 within sixty days after receipt of such notice of violation, the attor-
42 ney general may bring an action pursuant to this section.
43 3. Except as provided in subdivision six of this section, beginning on
44 January first, two thousand twenty-eight, the attorney general may, in
45 determining whether to grant a developer, deployer, or other person the
46 opportunity to cure a violation described in subdivision two of this
47 section, consider:
48 (a) the number of violations;
49 (b) the size and complexity of the developer, deployer, or other
50 person;
51 (c) the nature and extent of the developer's, deployer's, or other
52 person's business;
53 (d) the substantial likelihood of injury to the public;
54 (e) the safety of persons or property; and
55 (f) whether such violation was likely caused by human or technical
56 error.
1 4. Nothing in this article shall be construed as providing the basis
2 for a private right of action for violations of the provisions of this
3 article.
4 5. Except as provided in subdivisions one, two, three, four, and six
5 of this section, a violation of the requirements established in this
6 article shall constitute an unfair trade practice for purposes of
7 section three hundred forty-nine of this chapter and shall be enforced
8 solely by the attorney general; provided, however, that subdivision (h)
9 of section three hundred forty-nine of this chapter shall not apply to
10 any such violation.
11 6. (a) In any action commenced by the attorney general for any
12 violation of this article, it shall be an affirmative defense that the
13 developer, deployer, or other person:
14 (i) discovers a violation of any provision of this article through
15 red-teaming;
16 (ii) no later than sixty days after discovering such violation through
17 red-teaming:
18 (A) cures such violation; and
19 (B) provides to the attorney general, in a form and manner prescribed
20 by the attorney general, notice that such violation has been cured and
21 evidence that any harm caused by such violation has been mitigated; and
22 (iii) is otherwise in compliance with the latest version of:
23 (A) the Artificial Intelligence Risk Management Framework published by
24 the national institute of standards and technology;
25 (B) ISO/IEC 42001 of the international organization for standardi-
26 zation and the international electrotechnical commission;
27 (C) a nationally or internationally recognized risk management frame-
28 work for artificial intelligence decision systems, other than the risk
29 management frameworks described in clauses (A) and (B) of this subpara-
30 graph, that imposes requirements that are substantially equivalent to,
31 and at least as stringent as, the requirements established pursuant to
32 this article; or
33 (D) any risk management framework for artificial intelligence decision
34 systems that is substantially equivalent to, and at least as stringent
35 as, the risk management frameworks described in clauses (A), (B), and
36 (C) of this subparagraph.
37 (b) The developer, deployer, or other person bears the burden of
38 demonstrating to the attorney general that the requirements established
39 pursuant to paragraph (a) of this subdivision have been satisfied.
40 (c) Nothing in this article, including, but not limited to, the
41 enforcement authority granted to the attorney general pursuant to this
42 section, shall be construed to preempt or otherwise affect any right,
43 claim, remedy, presumption, or defense available at law or in equity.
44 Any rebuttable presumption or affirmative defense established pursuant
45 to this article shall apply only to an enforcement action brought by the
46 attorney general pursuant to this section and shall not apply to any
47 right, claim, remedy, presumption, or defense available at law or in
48 equity.
49 § 3. This act shall take effect on the two hundred seventieth day
50 after it shall have become a law.