In Raines v. U.S. Healthworks Medical Group, the California Supreme Court expanded the definition of an “employer” under the state’s discrimination statute, the Fair Employment and Housing Act (FEHA), to include certain third-party business entities that perform employment-related functions on behalf of employers. These agents may now be deemed “employers” and held directly liable for employment discrimination under FEHA for certain activities they carry out on an employer’s behalf.

Overview of Raines

The Raines plaintiffs were job applicants who received offers of employment conditioned on the successful completion of pre-employment medical screenings conducted by a third-party company that used automated decision-making. Plaintiffs alleged that the screening form contained intrusive questions regarding their medical history in violation of FEHA. They brought claims against their employers as well as against the third-party provider that conducted the medical screenings. The question for the Court was whether business entities acting as agents of an employer can be considered “employers” under FEHA and held directly liable for FEHA violations caused by their actions.

The Court examined the plain language of FEHA’s definition of “employer” and concluded that the definition does indeed encompass third-party corporate agents like the medical provider in this case. FEHA defines an employer as “any person regularly employing five or more persons, or any person acting as an agent of an employer, directly or indirectly.” Here, the Court reasoned, recognizing the medical provider as an agent of the employer extended liability to the company most directly responsible for the FEHA violation.

Continue Reading: Automated Decision-Making and AI: California Expands FEHA Liability to Include Third-Party Business Agents of Employers

New York may soon restrict employers and employment agencies from using fully automated decision-making tools to screen job candidates or make other employment decisions that affect the compensation, benefits, work schedule, performance evaluations, or other terms of employment of employees or independent contractors. Draft Senate Bill 7623, introduced August 4, aims to limit the use of such tools and would require human oversight of certain final decisions regarding hiring, promotion, termination, discipline, or compensation. Senate Bill 7623 would also significantly regulate the use of certain workplace monitoring technologies, going beyond the notice requirements for workplace monitoring in effect in New York since May 2022 and introducing data minimization and proportionality requirements that are becoming increasingly common in US state privacy laws.

While there is not yet a federal law focused on AI (the Biden administration and federal agencies have issued guidance documents on AI use and are actively studying the issue), a number of cities and states have introduced bills or resolutions relating to AI in the workplace. These state and local efforts are all at different stages of the legislative process, with some paving the way for others. For example, New York City’s Local Law 144 took effect on July 5, prohibiting employers and employment agencies from using certain automated employment decision tools unless the tools have undergone a bias audit within one year of their use, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.

If enacted, Senate Bill 7623 would take things much further. Here are some of the most significant implications of the draft legislation:

Continue Reading: Check Yourself Before You Wreck Yourself: New York and Other States Have Big Plans For Employer Use of AI and Other Workplace Monitoring Tools