Just after the fireworks’ finale, New York City’s Department of Consumer and Worker Protection will begin enforcing its new ordinance regulating the use of automation and artificial intelligence in employment decisions. The DCWP recently issued a Notice of Adoption of Final Rule establishing that enforcement efforts will begin July 5, 2023.

Here are three reasons this matters

  1. The new law requires time-sensitive, significant actions (read: audits, notices, and public reporting) from employers using automated employment decision tools to avoid civil penalties;
  2. Company compliance will require an immediate, cross-functional response, so it’s time to get your ducks in a row; and
  3. Since the City’s law is (mostly) first-of-its-kind, it is likely a harbinger of things to come for employers across the country and could serve as a framework for other cities and states.

The law in a nutshell

Local Law 144 prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates. Violations of the provisions of the law are subject to a civil penalty.

What is an automated employment decision tool?

The law defines AEDTs as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” The only “employment decisions” the law covers are those involving hiring and promotion.

Significantly, the definition does not include “a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.”

The phrase “to substantially assist or replace discretionary decision making” means:

  • To rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered; or
  • To use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or
  • To use a simplified output to overrule conclusions derived from other factors including human decision-making.

Simply put, the law likely covers any automated tool or algorithm-based software program used to identify, select, evaluate, or recruit candidates for employment. Likewise, any data-driven tools used to review CVs, conduct skills testing, rank applicants, assess employee performance and productivity, or determine promotions likely fall within the scope of the law as well.

What are the new obligations for employers?

Essentially, the law requires three actions from employers before using an AEDT: (1) annual bias audits, (2) publication of the results, and (3) notice to employees or candidates.

  1. Bias Audits: The law requires a bias audit to calculate an AEDT’s selection rate for race/ethnicity and sex categories, and to compare those selection rates to determine an impact ratio (the arithmetic is sketched below this list). A “bias audit” is defined as “an impartial evaluation by an independent auditor” to assess the tool’s potential “disparate impact” on sex, race, and ethnicity. The final rules clarify the requisite calculations for a bias audit in detail.
  2. Published Results: Before using an AEDT, employers must make the following information publicly available on the employment section of their website in a clear and conspicuous manner:
    • The date of the most recent bias audit of the AEDT and a summary of the results, which must include the source and explanation of the data used to conduct the bias audit, the number of individuals the AEDT assessed that fall within an unknown category, the number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories; and,
    • The distribution date of the AEDT.
  3. Notice: Employers must provide candidates for employment or promotion with the following notice at least 10 business days before using the tool:
    • That an AEDT is being used in assessing and evaluating the candidate;
    • The job qualifications and characteristics the AEDT will use in its analysis;
    • If not disclosed elsewhere on its website, the AEDT’s data source, type, and the employer’s data retention policy; and
    • That a candidate may request an alternative selection process or accommodation.

Notice may be provided by a single clear and conspicuous notice on the employment section of the employer’s website, in a job posting, or by way of US mail or e-mail to the candidate.
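
To make the bias audit arithmetic concrete, here is a minimal Python sketch of the selection-rate and impact-ratio calculation described in the final rules for tools that select candidates (tools that score candidates use an analogous calculation). The function name, category labels, and counts here are hypothetical illustrations, and this sketch is no substitute for the independent audit the law requires.

```python
# Illustrative sketch only: mirrors the selection-rate / impact-ratio
# arithmetic described in the DCWP final rules. All names and numbers
# below are hypothetical; an actual bias audit must be performed by an
# independent auditor.

def impact_ratios(counts):
    """Map each category to its impact ratio.

    counts: dict of category -> (number_selected, number_of_applicants)
    Selection rate = number_selected / number_of_applicants
    Impact ratio   = category's selection rate / highest selection rate
    """
    rates = {cat: selected / total for cat, (selected, total) in counts.items()}
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}

# Hypothetical applicant pool, broken out by sex category:
example = {"male": (48, 120), "female": (33, 110)}
print(impact_ratios(example))
# Selection rates: male 0.40, female 0.30
# Impact ratios:   {'male': 1.0, 'female': 0.75}
```

In practice, an auditor would run this calculation for each required category and report the resulting impact ratios in the published summary described above.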

Penalties

Employers who violate the law are liable for a civil penalty of $375 for the first violation (and for each additional violation occurring on the same day as the first violation). Each subsequent violation (or default, if the violation is left uncorrected) occurring after that first day will result in a civil penalty of at least $500 and not more than $1,500. Each failure to meet the applicant notice requirements constitutes a separate violation, and failure to meet the bias audit requirements gives rise to separate, daily violations.

Next steps for NYC employers

The first (and immediate) step is to take inventory of HR tech tools. Legal should partner with HR and IT to determine whether the company uses AEDTs to make any employment decisions. This will require evaluating the technology that comprises the tool, as well as how and for what purposes the tool is used. 

Next, the company will need to commission an independent bias audit and publish a summary of the results. Thereafter, the company must notify applicants and employees of the tool’s use and functioning, including that affected individuals may request an accommodation or alternative selection process.

Beyond NYC, consider the broader implications

Just as employers are increasingly relying on AI to assist with screening candidates and with making hiring and promotion decisions, state legislators are growing wary of the potential for discriminatory adverse impact. Both Illinois and Maryland have enacted laws that require employers to disclose to job applicants whether their candidacies will be evaluated by AI tools and to obtain the applicants’ prior consent for such use.

In California, the Civil Rights Council has proposed modifications to the state’s employment regulations to incorporate the use of AI in connection with employment decision-making. Essentially, the regulations would make it unlawful for employers to use a selection criterion (e.g., an automated decision tool) that has an adverse impact on, or constitutes disparate treatment of, applicants or employees under California’s civil rights law. Additionally, California legislators are debating AB 331, which would require “deployers” of automated decision tools to conduct “impact assessments.” (Sounds familiar, right?)

The federal government is itching to jump into the fray as well. Last fall, the White House released a “Blueprint for an AI Bill of Rights,” outlining several considerations regarding the use of automated decision-making tools and AI in the employment context. And on May 18, the Equal Employment Opportunity Commission released new guidance for employers on the use of artificial intelligence in employment, focusing on adverse impact under Title VII. (Read more here.)

Needless to say, the regulatory landscape around the use of AI in employment decision-making is rapidly evolving. Stay tuned to the Employer Report as we continue to monitor developments in this space, or contact a member of our team for support drafting policies and requisite notices.