Baker McKenzie is pleased to invite you to an afternoon exploring strategy and risk in the year of the dragon.

Joined by industry leaders from Meta and Dayforce, this interactive discussion will explore how businesses can harness the power and auspiciousness of the mythical dragon in building a comprehensive data, AI and cyber strategy.

Special thanks to co-presenters Teresa Michaud and Bradford Newman.

California’s CLE Compliance Deadline Is Approaching…    
We can help!

If your last name starts with H-M, you are probably well aware that your CLE compliance deadline is right around the corner – February 1, 2024. In addition to the general credit requirement, the state of California requires all attorneys to complete:

  • At least four hours of legal ethics
  • At least one hour on competence issues
  • At least two hours on the elimination of bias in the legal profession and society. Of the two hours, at least one hour must focus on implicit bias and the promotion of bias-reducing strategies.

Our lawyers will offer three virtual sessions, focused on key considerations for AI development and utilization, to help you meet your CLE requirements. These sessions will also offer CLE credit in the states of Illinois, Texas, and New York. Participants requesting CLE for other states will receive uniform CLE certificates. 

Please register and let us know which individual session(s) you plan to attend. We look forward to your participation!


Promoting Unity: Overcoming the Risks of Bias and Prejudice in the Workplace

Tuesday, January 16, 2024 | 1:00 – 2:00 pm Pacific
1 hour Elimination of Bias credit (pending approval)

Continue Reading California AI CLE Series

In Raines v. U.S. Healthworks Medical Group, the California Supreme Court expanded the definition of an “employer” under the state’s discrimination statute to include certain third-party business entities that perform employment-related functions on behalf of employers. These agents may now be deemed “employers” such that they can be directly liable for employment discrimination under the Fair Employment and Housing Act (FEHA) for certain activities that they carry out on behalf of employers.

Overview of Raines

The Raines plaintiffs were job applicants who received offers of employment conditioned on the successful completion of pre-employment medical screenings conducted by a third-party company that used automated decision-making. Plaintiffs alleged that the screening form contained intrusive questions regarding their medical history that violated FEHA. They brought claims against their employers, as well as against the third-party provider that conducted the medical screening. The question for the Court was whether business entities acting as agents of an employer can be considered “employers” under FEHA and held directly liable for FEHA violations caused by their actions.

The Court examined the plain language of FEHA’s definition of “employer” and concluded that the definition does indeed encompass third-party corporate agents like the medical provider in this case. FEHA defines an employer as “any person regularly employing five or more persons, or any person acting as an agent of an employer, directly or indirectly.” Here, the Court reasoned, recognizing the medical provider as an agent of the employer extended liability to the company most directly responsible for the FEHA violation.

Continue Reading Automated Decision-Making and AI: California Expands FEHA Liability to Include Third-Party Business Agents of Employers

New York may soon restrict employers and employment agencies from using fully automated decision-making tools to screen job candidates or make other employment decisions that impact the compensation, benefits, work schedule, performance evaluations, or other terms of employment of employees or independent contractors. Draft Senate Bill 7623, introduced August 4, aims to limit the use of such tools and would require human oversight of certain final decisions regarding hiring, promotion, termination, discipline, or compensation. Senate Bill 7623 would also significantly regulate the use of certain workplace monitoring technologies, going beyond the notice requirements for workplace monitoring operative in New York since May 2022 and introducing data minimization and proportionality requirements that are becoming increasingly common in US state privacy laws.

While there is not yet a federal law focused on AI (the Biden administration and federal agencies have issued guidance documents on AI use and are actively studying the issue), a number of cities and states have introduced bills or resolutions relating to AI in the workplace. These state and local efforts are at different stages of the legislative process, with some paving the path for others. For example, New York City’s Local Law 144 took effect on July 5, prohibiting employers and employment agencies from using certain automated employment decision tools unless the tools have undergone a bias audit within one year of their use, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.

If enacted, Senate Bill 7623 would take things much further. Here are some of the most significant implications of the draft legislation:

Continue Reading Check Yourself Before You Wreck Yourself: New York and Other States Have Big Plans For Employer Use of AI and Other Workplace Monitoring Tools