
New York may soon restrict employers and employment agencies from using fully automated decision-making tools to screen job candidates or make other employment decisions that impact the compensation, benefits, work schedule, performance evaluations, or other terms of employment of employees or independent contractors. Draft Senate Bill 7623, introduced August 4, aims to limit the use of such tools and requires human oversight of certain final decisions regarding hiring, promotion, termination, discipline, and compensation. Senate Bill 7623 also significantly regulates the use of certain workplace monitoring technologies, going beyond the notice requirements for workplace monitoring operative in New York since May 2022 and introducing data minimization and proportionality requirements that are becoming increasingly common in US state privacy laws.

While there is not yet a federal law focused on AI (the Biden administration and federal agencies have issued guidance documents on AI use and are actively studying the issue), a number of cities and states have introduced bills or resolutions relating to AI in the workplace. These state and local efforts are all at different stages of the legislative process, with some paving the path for others. For example, New York City's Local Law 144 took effect on July 5, prohibiting employers and employment agencies from using certain automated employment decision tools unless the tools have undergone a bias audit within one year of their use, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.

If enacted, Senate Bill 7623 would take things much further. Here are some of the most significant implications of the draft legislation:

Proposed Restrictions on Workplace AI Use

Like the NYC law, Senate Bill 7623 would prohibit employers from using an automated employment decision tool unless the tool has been the subject of a bias audit conducted no more than one year prior to its use. It is not clear how this requirement would work after an employer starts using the tool, or whether the bias audit would need to be repeated every year. A summary of the most recent bias audit would need to be made publicly available on the employer's website before the tool is used. There would also be extensive notice and disclosure requirements for employees and candidates who reside in the state.

In addition, employers would not be able to rely exclusively on output from an automated employment decision tool when making hiring, promotion, termination, discipline, or compensation decisions. The bill would impose specific requirements for establishing meaningful human oversight: a designated internal reviewer must corroborate the tool's output by other means, including supervisory or managerial documentation, personnel files, or consultation with coworkers. Detailed notice would be required whenever an automated employment decision tool is used to corroborate such a decision.

Proposed Restrictions on Workplace Monitoring Tools

Employers would be allowed to use electronic monitoring tools for permitted purposes such as monitoring production processes or quality, assessing worker performance, and protecting health and safety, but such tools must be the "least invasive means" of accomplishing the permitted purpose. Further, employers would have to notify employees who reside in New York that they will be subject to electronic monitoring.

The notice must, among other things, include:

  • A description of the specific employee data to be collected and the activities, locations, communications, and job roles that will be electronically monitored by the tool;
  • A description of the dates, times, and frequency that electronic monitoring will occur; and
  • A description of where any employee data collected by the electronic monitoring tool will be stored and how long it will be retained.

Notices also will have to be clear and conspicuous and provide the worker with actual notice of electronic monitoring activities. Notices that state electronic monitoring “may” take place or that the employer “reserves the right” to monitor will not suffice.

Impermissible Uses Under SB 7623

Monitoring tools would not be permitted to monitor employees who are off-duty and not performing work-related tasks; to obtain information about an employee's religious or political beliefs, health or disability status, or immigration status; or to identify, punish, or obtain information about employees engaging in protected activity. Employers also would be barred from using tools that incorporate facial recognition, gait recognition, or emotion recognition technology.

Potential Penalties and Enforcement

Violators could see fines starting at $500 for a first violation, with a maximum penalty of $1,500 for subsequent violations. Each day on which an electronic monitoring tool or automated employment decision tool is used in violation of the proposed new law would give rise to a separate violation.

According to the bill, the attorney general or such other persons designated by the New York Department of Labor would be authorized to initiate any action or proceeding necessary for correction, including mandating compliance with the provisions of the law or such other relief as may be appropriate. The legislation also would specifically preserve the right of employees and candidates for employment to bring civil court actions, and of the New York Division of Human Rights to bring enforcement actions.

Next Steps

We are tracking this legislation closely and will provide updates as they become available. Note that it is unlikely the bill will pass before next January when lawmakers return to the state Capitol.

For now, we recommend the following actions:

  • Inventory all AI-supported technologies used throughout the organization. This likely will entail fact-gathering across Legal, Finance, Marketing/Digital, IT/Security, Human Resources, and Corporate Development.
  • Analyze contracts and service agreements to determine whether workforce service providers are using AI or automated decision making as part of their offering, whether appropriate disclosures are being made and consents collected (to the extent required), and whether risk-shifting provisions are incorporated.
  • Pause and devote some time to examining routine recruiting processes, such as relying on AI tools to screen employment applications or resumes. Now is the right time to review, and consider revising, applicant and employee privacy notices and online privacy notices to account for information practices related to automated decision making and monitoring.
  • Train Human Resources, Procurement, Legal, and other internal stakeholders on the evolving and emerging laws governing the use of AI domestically and globally.
  • For US-based multinational organizations, consider how the recommendations above will impact global programs, policies, and internal AI adoption and deployment.

We are here to help, and are already leading AI assessments for our clients before performing a legal analysis and legal risk prioritization review to help map the road ahead. Please reach out to your Baker McKenzie attorney for more information.