A08884 Summary:
| BILL NO | A08884 |
| SAME AS | S01169-A |
| SPONSOR | Solages |
| COSPNSR | Gallagher, Forrest, Mitaynes, Valdez, Torres, Shrestha, Gonzalez-Rojas, Bores, Gibbs, Clark, Kelles, Raga, Vanel, Epstein, Jacobson, Lee, Alvarez, Reilly, Hooks |
| MLTSPNSR | |
| Add Art 8-A §§85 - 89-d, Civ Rts L; amd §296, Exec L | |
| Regulates the development and use of certain artificial intelligence systems to prevent algorithmic discrimination; requires independent audits of high-risk AI systems; provides for enforcement by the attorney general as well as a private right of action. | |
A08884 Actions:
| BILL NO | A08884 |
| 06/09/2025 | referred to science and technology |
| 06/11/2025 | reference changed to ways and means |
| 01/07/2026 | referred to ways and means |
| 01/12/2026 | reference changed to science and technology |
A08884 Memo:
NEW YORK STATE ASSEMBLY
MEMORANDUM IN SUPPORT OF LEGISLATION
submitted in accordance with Assembly Rule III, Sec 1(f)

BILL NUMBER: A8884

SPONSOR: Solages

TITLE OF BILL:

An act to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems

PURPOSE:

The purpose of this bill is to promote safe and responsible development, deployment, and use of artificial intelligence systems.

SUMMARY:

Section 1 sets out the legislative findings and intent. AI already occupies a prominent role in New Yorkers' lives, often without their knowledge. A deep body of independent research shows that many AI systems are not fit for purpose. Of particular concern, AI errors can replicate and magnify existing bias. Entities that profit from development and deployment of AI tools should be held liable when they fail to adequately prepare these tools and when they cause unintended consequences.

Section 2 of the bill establishes protections regarding artificial intelligence in sections 85 through 89-d of a new article 8-A of the civil rights law:

Section 85 sets forth definitions for this article.

Section 86 prohibits discriminatory practices with respect to high-risk AI systems.

Section 86-a creates deployer and developer obligations with respect to high-risk AI systems.

Section 86-b enacts whistleblower protections for the employees of developers and deployers of high-risk AI systems.

Section 87 obligates developers and deployers to commission independent audits of high-risk AI systems.

Section 88 requires reporting on high-risk AI systems.

Section 89 sets requirements for internal risk management policies and programs for developers and deployers of high-risk AI systems.

Section 89-a prohibits the development, deployment, use, or sale of an AI system that assesses trustworthiness based on social behavior, or known or predicted personal characteristics, and leads to unjustified differential treatment.

Section 89-b creates a safe harbor for developers if deployers have agreed not to use an AI system as a high-risk AI system.

Section 89-c enables enforcement of this article by the attorney general or in a private right of action.

Section 89-d is the severability clause.

Section 3 amends section 296 of the executive law to provide that a violation of section 86 of this article shall be considered an unlawful discriminatory practice under the executive law.

JUSTIFICATION:

Artificial intelligence technology is already an integral part of New Yorkers' daily lives. In the private sector, AI is already being used in education, health care, employment, insurance, credit scoring, public safety, retail, banking and financial services, media, and more, yet few regulations have been adopted to protect New Yorkers from serious issues like bias and inaccuracy. It is the duty of the government to protect citizens and hold developers and deployers accountable for irresponsible practices and harm caused through errors of their systems. This bill creates a framework that is designed to address risks before they become harmful to citizens. Under this bill, if such harm occurs or an entity fails to proactively address risk, they may face liability.

LEGISLATIVE HISTORY:

New bill.

FISCAL IMPLICATIONS:

TBD.

EFFECTIVE DATE:

This act shall take effect one year after it shall have become a law; provided, however, that section 87 of article 8-A of the civil rights law as added by section three of this act shall take effect two years after it shall have become a law.
A08884 Text:
STATE OF NEW YORK
________________________________________________________________________

8884

2025-2026 Regular Sessions

IN ASSEMBLY

June 9, 2025
___________

Introduced by M. of A. SOLAGES -- read once and referred to the Committee on Science and Technology

AN ACT to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems

EXPLANATION--Matter in italics (underscored) is new; matter in brackets [ ] is old law to be omitted. LBD04409-03-5

The People of the State of New York, represented in Senate and Assembly, do enact as follows:

Section 1. This act shall be known and may be cited as the "New York artificial intelligence act (New York AI act)".

§ 2. Legislative findings and intent. The legislature finds and declares the following:

(a) A revolution in artificial intelligence (AI) has advanced to the point that comprehensive regulations must be enacted to protect New Yorkers.

(b) Artificial intelligence is already an integral part of New Yorkers' daily lives. In the private sector, AI is currently in use in areas such as education, health care, employment, insurance, credit scoring, public safety, retail, banking and financial services, media, and more with little transparency or oversight. A growing body of research shows that AI systems that are deployed without adequate testing, sufficient oversight and robust guardrails can harm consumers and deny historically disadvantaged groups the full measure of their civil rights and liberties, thereby further entrenching inequalities. The legislature must act to ensure that all uses of AI, especially those that affect important life chances, are free from harmful biases, protect our privacy, and work for the public good.

(c) Safe innovation must remain a priority for the state. New York state is home to thousands of technology start-ups, many of which experiment with new applications of AI and which have the potential to find new ways to employ technology at the service of New Yorkers. The goal of the legislature is to encourage safe innovation in the AI sector by providing clear guidance for AI development, testing, and validation both before a product is launched and throughout the product's life cycle.

(d) New York must establish that the burden of responsibility of proving that AI products do not cause harm to New Yorkers will be shouldered by the developers and deployers of AI. While government and civil society must act to audit and enforce human rights laws around the use of AI, the companies employing and profiting from the use of AI must lead in ensuring that their products are free from algorithmic discrimination.

(e) Close collaboration and communication between New York state and industry partners is key to ensuring that innovation can occur with safeguards to protect all New Yorkers. This legislation will ensure that lines of communication exist and that there is clear statutory authority to investigate and prosecute entities that break the law.

(f) As new forms of AI are developed beyond what is currently technologically feasible, the goal of the legislature is to use this section as a guiding light for future regulations.

(g) Lastly, it is in the interest of all New Yorkers that certain uses of AI that infringe on fundamental rights, deepen structural inequality, or that result in unequal access to services shall be banned.
§ 3. The civil rights law is amended by adding a new article 8-A to read as follows:

ARTICLE 8-A
PROTECTIONS REGARDING USE OF ARTIFICIAL INTELLIGENCE
Section 85. Definitions.
        86. Unlawful discriminatory practices.
        86-a. Deployer and developer obligations.
        86-b. Whistleblower protections.
        87. Audits.
        88. High-risk AI system reporting requirements.
        89. Risk management policy and program.
        89-a. Social scoring AI systems prohibited.
        89-b. Developer safe harbor.
        89-c. Enforcement.
        89-d. Severability.

§ 85. Definitions. The following terms shall have the following meanings:

1. "Algorithmic discrimination" means any condition in which the use of an AI system contributes to unjustified differential treatment or impacts, disfavoring people based on their actual or perceived age, race, ethnicity, creed, religion, color, national origin, citizenship or immigration status, sexual orientation, gender identity, gender expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, pregnancy, pregnancy outcomes, disability, height, weight, reproductive health care or autonomy, status as a victim of domestic violence or other classification protected under state or federal laws. Algorithmic discrimination shall not include:
(a) a developer's or deployer's testing of their own AI system to identify, mitigate, and prevent discriminatory bias;
(b) expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or
(c) an act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in Title II of the federal Civil Rights Act of 1964, 42 U.S.C. section 2000a(e), as amended.

2. "Artificial intelligence system" or "AI system" means a machine-based system or combination of systems, that for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence system shall not include:
(a) any system that (i) is used by a business entity solely for internal purposes and (ii) is not used as a substantial factor in a consequential decision; or
(b) any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.

3. "Auditor" shall refer to an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, association, academic institution, or group affiliated with an academic institution, commissioned to perform an audit.
"Consequential decision" means a decision or judgment that has a 24 material, legal or similarly significant effect on an individual's 25 access to, or the cost, terms, or availability of, any of the following: 26 (a) Employment, workers' management, or self-employment, including, 27 but not limited to, all of the following: 28 (i) Pay or promotion; and 29 (ii) Hiring or termination. 30 (b) Education and vocational training, including, but not limited to, 31 all of the following: 32 (i) Accreditation; 33 (ii) Certification; 34 (iii) Admissions; and 35 (iv) Financial aid or scholarships. 36 (c) Housing or lodging, including rental or short-term housing or 37 lodging. 38 (d) Family planning, including adoption services or reproductive 39 services, as well as assessments related to child protective services. 40 (e) Health care or health insurance, including mental health care, 41 dental, or vision. 42 (f) Financial services, including a financial service provided by a 43 mortgage company, mortgage broker, or creditor. 44 (g) Law enforcement activities, including the allocation of law 45 enforcement personnel or assets, the enforcement of laws, maintaining 46 public order, or managing public safety. 47 (h) Legal services. 48 5. "Deployer" means any person, partnership, association or corpo- 49 ration that offers or uses an AI system for commerce in the state of New 50 York, or provides an AI system for use by the general public in the 51 state of New York. A deployer shall not include any natural person 52 using an AI system for personal use. A developer may also be considered 53 a deployer if its actions satisfy this definition. 54 6. "Developer" means a person, partnership, or corporation that 55 designs, codes, or produces an AI system, or creates a substantial 56 change with respect to an AI system, whether for its own use in theA. 8884 4 1 state of New York or for use by a third party in the state of New York. 2 A deployer may also be considered a developer if its actions satisfy 3 this definition. 4 7. "Employee" means an individual who performs services for and under 5 the control and direction of an employer for wages or other remunera- 6 tion, including former employees, or natural persons employed as inde- 7 pendent contractors to carry out work in furtherance of an employer's 8 business enterprise who are not themselves employers. 9 8. "Employer" means any person, firm, partnership, institution, corpo- 10 ration, or association that employs one or more employees. 11 9. "End user" means any individual or group of individuals that: 12 (a) is the subject of a consequential decision made entirely by or 13 with the assistance of an AI system; or 14 (b) interacts, directly or indirectly, with the relevant AI system on 15 behalf of an individual or group that is the subject of a consequential 16 decision made entirely by or with the assistance of an AI system. 17 10. "High-risk AI system" means any AI system that, when deployed: 18 (a) is a substantial factor in making a consequential decision; or (b) 19 will have a material impact on the statutory or constitutional rights, 20 civil liberties, safety, or welfare of an individual in the state. 21 11. "Risk management policy and program" means the risk management 22 policy and program created pursuant to section eighty-nine of this arti- 23 cle. 24 12. 
"Substantial change" means any new version, new release, or any 25 other update to an AI system that results in significant changes to such 26 AI system's appropriate use cases, key functionality, or expected 27 outcomes. 28 13. "Substantial factor" means a factor that is (a) material in making 29 a consequential decision, or (b) is capable of altering the outcome of a 30 consequential decision. 31 § 86. Unlawful discriminatory practices. It shall be an unlawful 32 discriminatory practice for a developer or deployer to fail to comply 33 with the duties under this section. 34 1. A developer or deployer shall take reasonable care to prevent fore- 35 seeable risk of algorithmic discrimination that is a consequence of the 36 use, sale, or sharing of a high-risk AI system or a product featuring a 37 high-risk AI system. 38 2. Any developer or deployer that uses, sells, or shares a high-risk 39 AI system shall have completed an independent audit, pursuant to section 40 eighty-seven of this article, confirming that the developer or deployer 41 has taken reasonable care to prevent foreseeable risk of algorithmic 42 discrimination with respect to such high-risk AI system. 43 § 86-a. Deployer and developer obligations. 1. (a) Any deployer that 44 employs a high-risk AI system for a consequential decision shall comply 45 with the following requirements; provided, however, that where there is 46 an urgent necessity for a decision to be made to confer a benefit to the 47 end user, including, but not limited to, social benefits, housing 48 access, or dispensing of emergency funds, and compliance with this 49 section would cause imminent detriment to the welfare of the end user, 50 such obligation shall be considered waived; provided further, that noth- 51 ing in this section shall be construed to waive a natural person's 52 option to request human review of the decision: 53 (i) inform the end user at least five business days prior to the use 54 of such system for the making of a consequential decision in clear, 55 conspicuous, and consumer-friendly terms, made available in each of theA. 8884 5 1 languages in which the company offers its end services, that AI systems 2 will be used to make a decision or to assist in making a decision; and 3 (ii) allow sufficient time and opportunity in a clear, conspicuous, 4 and consumer-friendly manner for the consumer to opt-out of the auto- 5 mated consequential decision process and for the decision to be made by 6 a human representative. A consumer may not be punished or face any other 7 adverse action for opting out of a decision by an AI system and the 8 deployer shall render a decision to the consumer within forty-five days. 9 (b) If a deployer employs a high-risk AI system for a consequential 10 decision to determine whether to or on what terms to confer a benefit on 11 an end user, the deployer shall offer the end user the option to waive 12 their right to advance notice of five business days under this subdivi- 13 sion. 14 (c) If the end user clearly and affirmatively waives their right to 15 five business days' notice, the deployer shall then inform the end user 16 as early as practicable before the making of the consequential decision 17 in clear, conspicuous, and consumer-friendly terms, made available in 18 each of the languages in which the company offers its end services, that 19 AI systems will be used to make a decision or to assist in making a 20 decision. 
The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.

(d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.

2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay.

(b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.

3. The deployer or developer of a high-risk AI system is legally responsible for quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the AI system on their behalf.

4. The rights and obligations under this section may not be waived by any person, partnership, association or corporation.

5. With respect to a single consequential decision, an end user may not exercise both its right to opt-out of a consequential decision under subdivision one of this section and its right to appeal a consequential decision under subdivision two of this section.

§ 86-b. Whistleblower protections. 1. Developers and/or deployers of high-risk AI systems shall not:
(a) prevent any of their employees from disclosing information to the attorney general, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this article; or
(b) retaliate against an employee for disclosing information to the attorney general pursuant to this section.

2. An employee harmed by a violation of this article may petition a court for appropriate relief as provided in subdivision five of section seven hundred forty of the labor law.

3. Developers and deployers of high-risk AI systems shall provide a clear notice to all of their employees working on such AI systems of their rights and responsibilities under this article, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subdivision four of this section.
A developer or deployer is presumed to be in compliance with the requirements of this subdivision if the developer or deployer does either of the following:
(a) at all times post and display within all workplaces maintained by the developer or deployer a notice to all employees of their rights and responsibilities under this article, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice; or
(b) no less frequently than once every year, provide written notice to all employees of their rights and responsibilities under this article and ensure that the notice is received and acknowledged by all of those employees.

4. Each developer and deployer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer or deployer if the employee believes in good faith that the information indicates that the developer or deployer has violated any provision of this article or any other law, or has made false or materially misleading statements related to its risk management policy and program, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's or deployer's investigation of the disclosure and the actions taken by the developer or deployer in response to the disclosure.

5. This section does not limit protections provided to employees under section seven hundred forty of the labor law.

§ 87. Audits. 1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A developer of a high-risk AI system shall complete at least:
(i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and
(ii) one audit every one year following the submission of the first audit.
(b) A developer audit under this section shall include:
(i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and
(ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.

2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A deployer of a high-risk AI system shall complete at least:
(i) a first audit within six months after initial deployment;
(ii) a second audit within one year following the submission of the first audit; and
(iii) one audit every two years following the submission of the second audit.
(b) A deployer audit under this section shall include:
(i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system;
(ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and
(iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.

3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section.

4. At the attorney general's discretion, the attorney general may:
(a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and
(b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization.

5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article.

6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system.
(a) Acceptable auditor uses of an AI system include, but are not limited to:
(i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or
(ii) detecting patterns in the behavior of an audited AI system.
(b) An auditor shall not:
(i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or
(ii) use an AI system to draft an audit under this section without meaningful human review and oversight.

7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity:
(i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or
(ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit.
(c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited.

9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.

§ 88. High-risk AI system reporting requirements. 1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section.

2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article.

3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A developer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment;
(ii) one report annually following the submission of the first report; and
(iii) one report within six months of any substantial change to the high-risk AI system.
(b) A developer report under this section shall include:
(i) a description of the system including:
(A) the uses of the high-risk AI system that the developer intends; and
(B) any explicitly unintended or disallowed uses of the high-risk AI system;
(ii) an overview of how the high-risk AI system was developed;
(iii) an overview of the high-risk AI system's training data; and
(iv) any other information necessary to allow a deployer to:
(A) understand the outputs and monitor the system for compliance with this article; and
(B) fulfill its duties under this article.

4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A deployer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after initial deployment;
(ii) a second report within one year following the completion and filing of the first report;
(iii) one report every two years following the completion and filing of the second report; and
(iv) one report within six months of any substantial change to the high-risk AI system.
(b) A deployer report under this section shall include:
(i) a description of the system including:
(A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and
(B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and
(ii) an impact assessment including:
(A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination;
(B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and
(C) an evaluation of the costs and benefits to consumers and other end users.
(c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.

5. The attorney general shall:
(a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and
(b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.

6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article.
(a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision.
(b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.

§ 89. Risk management policy and program. 1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering:
(a) The guidance and standards set forth in:
(i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or
(ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology;
(b) The size and complexity of the developer or deployer;
(c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and
(d) The sensitivity and volume of data processed in connection with the high-risk AI system.

2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.

3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.

§ 89-a. Social scoring AI systems prohibited. No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following:
1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or
3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.

§ 89-b. Developer safe harbor. A developer may be exempt from its duties and obligations under sections eighty-six, eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, and eighty-nine of this article if such developer:
1. receives a written and signed contractual agreement from each deployer authorized to use the artificial intelligence system developed by such developer, including the developer if they are also a deployer, that such artificial intelligence system will not be used as a high-risk AI system;
2. implements reasonable technical safeguards designed to prevent or detect high-risk AI system use cases or otherwise demonstrates reasonable steps taken to ensure that any unauthorized deployments of its AI systems are not being used as a high-risk AI system;
3. prominently displays on its website, in marketing materials, and in all licensing agreements offered to prospective deployers of its AI system that the AI system cannot be used as a high-risk AI system; and
4. maintains records of deployer agreements for a period of not less than five years.

§ 89-c. Enforcement. 1. Whenever there shall be a violation of section eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, eighty-nine, or eighty-nine-a of this article, an application may be made by the attorney general in the name of the people of the state of New York, to the supreme court having jurisdiction to issue an injunction, and upon notice to the respondent of not less than ten days, to enjoin and restrain the continuance of such violation; and if it shall appear to the satisfaction of the court that the respondent has, in fact, violated this article, an injunction may be issued by the court, enjoining and restraining any further violations, without requiring proof that any person has, in fact, been injured or damaged thereby. In any such proceeding, the court may make allowances to the attorney general as provided in paragraph six of subdivision (a) of section eighty-three hundred three of the civil practice law and rules, and direct restitution. Whenever the court shall determine that a violation of this article has occurred, the court may impose a civil penalty of not more than twenty thousand dollars for each violation.

2. There shall be a private right of action by plenary proceeding for any person harmed by any violation of section eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, eighty-nine, or eighty-nine-a of this article by any natural person or entity. The court shall award compensatory damages and legal fees to the prevailing party.

3. In evaluating any motion to dismiss a plenary proceeding commenced pursuant to subdivision two of this section, the court shall presume the specified AI system was created and/or operated in violation of a specified law or laws and that such violation caused the harm or harms alleged.
(a) A defendant can rebut presumptions made pursuant to this subdivision through clear and convincing evidence that the specified AI system did not cause the harm or harms alleged and/or did not violate the alleged law or laws. An algorithmic audit can be considered as evidence in rebutting such presumptions, but the mere existence of such an audit, without additional evidence, shall not be considered clear and convincing evidence.
(b) With respect to a violation of section eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, or eighty-nine of this article, a developer can rebut presumptions made pursuant to this subdivision through clear and convincing evidence that it has complied with the duties under section eighty-nine-b of this article.
(c) Where such presumptions are not rebutted pursuant to this subdivision, the action shall not be dismissed.
(d) Where such presumptions are rebutted pursuant to this subdivision, a motion to dismiss an action shall be adjudicated without any consideration of this section.

4. The supreme court in the state shall have jurisdiction over any action, claim, or lawsuit to enforce the provisions of this article.
§ 89-d. Severability. If any clause, sentence, paragraph, subdivision, section or part of this article shall be adjudged by any court of competent jurisdiction to be invalid, such judgment shall not affect, impair, or invalidate the remainder thereof, but shall be confined in its operation to the clause, sentence, paragraph, subdivision, section, or part thereof directly involved in the controversy in which such judgment shall have been made.

§ 4. Section 296 of the executive law is amended by adding a new subdivision 23 to read as follows:

23. It shall be an unlawful discriminatory practice under this section for a deployer or a developer, as such terms are defined in section eighty-five of the civil rights law, to engage in an unlawful discriminatory practice under section eighty-six of the civil rights law.

§ 5. This act shall take effect one year after it shall have become a law; provided, however, that section 87 of article 8-A of the civil rights law as added by section three of this act shall take effect two years after it shall have become a law.