Article | Global News Briefs

United States: New York City regulating artificial intelligence in employment decisions

By Gary Chase | November 21, 2022

New York City has proposed rules to clarify its law regulating the use of artificial intelligence in hiring and promotion decisions, amid concerns over unequal treatment of job candidates.

Employer Action Code: Monitor

The increasing use of artificial intelligence (AI) in employment-related decisions has prompted the New York City government to regulate its use by employers, driven in particular by concerns over potential unequal treatment of job candidates due to the programming or functioning of the AI. New York City’s Local Law 144 (LL 144) is effective January 1, 2023, and will require employers using automated employment decision tools (AEDTs) in hiring and promotions to satisfy a bias audit requirement and provide notices and disclosures regarding the audit results and the use of the AEDT. Proposed rules were issued in September, and a hearing was held on November 4, 2022. It is unclear whether final regulations will be issued before the end of 2022 or if the effective date will be delayed. Other jurisdictions, within the U.S. and globally, are also in various stages of addressing the employment-related use of AI.

Key details

New York City’s LL 144 defines an AEDT as a "computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons" but excludes tools that do not impact the decision-making process (such as junk email filters and antivirus software). LL 144 prohibits the use of an AEDT unless:

  1. A bias audit of the tool has been conducted within one year prior to its use
  2. A summary of the audit results has been made publicly available
  3. Notice is provided to job candidates regarding the use of the AEDT
  4. Candidates or employees are allowed to request an alternative evaluation process as an accommodation

The proposed rules address several questions regarding compliance with LL 144, including clarifications regarding the definition of an AEDT, the focus of the bias audit, the data that must be made publicly available, and compliance with the notice and disclosure requirements. However, several questions remain unanswered, including (1) which entities are permitted to perform the bias audit, (2) whether the audit must be repeated annually, and (3) what constitutes an alternative evaluation process and which options must be made available.

Several U.S. states (e.g., Illinois and Maryland) and some cities have enacted or are considering legislation that could impact the use of AI in hiring and other employment decisions. In the European Union, the European Commission is drafting an Artificial Intelligence Act to regulate the use of AI in general. The act would divide the use of AI into four broad categories of risk (to the rights of citizens):

  1. Unacceptable risks, such as the use of AI in social scoring by governments.
  2. High-risk uses, such as in educational or vocational training, employment and management of workers, and remote biometric identification systems.
  3. Limited-risk applications with specific transparency obligations (e.g., a requirement to inform users when interacting with AI such as chatbots).
  4. Minimal-risk AI, such as spam filters. In the view of the commission, the vast majority of AI systems currently in use are in the minimal-risk category.

The U.S. federal government has also focused on the use of AI in employment decisions. The Equal Employment Opportunity Commission (EEOC) issued guidance in May 2022 outlining how certain employment-related uses of AI potentially could violate the Americans with Disabilities Act (ADA). In October, the Biden administration published a draft AI Bill of Rights intended to guide the design, use and deployment of automated systems. Brazil, Canada and the U.K. are working on the development of similar laws and frameworks (as are other governments).

Employer implications

The application of AI in employment is already far ahead of the development of regulatory regimes governing its use. The EEOC has estimated that more than 80% of U.S. employers use some form of AI in their work and employment decision making. Employers should monitor the development of legal restrictions and requirements on the use of AI in employment-related decisions. For employers with employees in New York City, LL 144 is currently set to take effect in 2023; it may serve as a useful test case for how regulation affects the use of AI in employment-related decision making.

Contact

Director, Retirement and Executive Compensation
