By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze data on applicants, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is drawing more attention and scrutiny from regulators and, in some cases, employees. At the same time, organizations are eager to adopt more AI internally to capitalize on productivity and efficiency gains, and in-house attorneys are often under pressure from internal clients to quickly review and sign off on new tools and on new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations deploy workforce AI tools for monitoring, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. The French data protection authority, for instance, recently imposed a fine of USD 34 million related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most jurisdictions compliance remains voluntary for now. Nevertheless, organizations should build their AI governance programs to address key privacy, data protection, intellectual property, anti-discrimination and other concepts – and a good place to start is with these HR tools, given their widespread use and the increased scrutiny. Legal Departments should consider these five key actions:

Continue Reading The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk

On May 17, 2024, Colorado Governor Jared Polis signed the landmark Colorado AI Act (Senate Bill 24-205) into law. Colorado is now the first US state with comprehensive AI regulation, adopting a classification system similar to the European Union’s recent AI Act. The law will take effect on February 1, 2026.

The law exempts small employers (fewer than fifty full-time employees) from some of its requirements but otherwise requires companies to take extensive measures to protect Colorado residents against harms such as algorithmic discrimination.

SB 205’s Details

SB 205 requires “developers” and “deployers” of “high-risk artificial intelligence systems” to use “reasonable care” to protect Colorado resident consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.” As written, the law most likely applies both to creators of high-risk AI systems and to employers adopting high-risk AI technologies within their organizations.

Continue Reading From Brussels to Boulder: Colorado Enacts Comprehensive AI Law with Significant Obligations for Employers on the Heels of the EU AI Act

Multinational companies with headcount in the UK will be keen to know how the legal landscape across the pond is shifting this spring. We’ve highlighted updates below in three key areas: employment law, immigration law and HR privacy.

First, there are a number of employment law changes coming into force in April impacting:

  • Rights to

Special thanks to co-authors Priscila Kirchhoff* and Tricia Oliveira*.

In July, Brazil passed a new Gender Pay Gap law (effective immediately) that, for the first time, requires companies with more than 100 employees to publish a report on salary transparency and compensation criteria (a ‘Salary Transparency Report’) every six months. The

On January 1, 2024, businesses must post updated Privacy Policies under the California Consumer Privacy Act (CCPA), which requires annual updates of disclosures and has applied fully in the job applicant and employment context since January 1, 2023.

With respect to job applicants and employees, businesses subject to the CCPA are required to:

  1. Issue detailed privacy notices with prescribed disclosures, terminology, and organization;
  2. Respond to data subject requests from employees and job candidates for copies of information about them, correction, and deletion;
  3. Offer opt-out rights regarding disclosures of information to service providers, vendors, or others, except to the extent they implement qualifying agreements that contain specifically prescribed clauses; and
  4. Offer opt-out rights regarding the use of sensitive information except to the extent they have determined they use sensitive personal information only within the scope of statutory exceptions.

If employers sell personal information, share it for cross-context behavioral advertising, or use or disclose sensitive personal information outside of limited purposes, numerous additional compliance obligations apply. For more, see our related previous post: Employers Must Prepare Now for New California Employee Privacy Rights.

Key recommendations to heed now

Continue Reading Looking ahead to 2024: California privacy law action items for employers

It is an unprecedented time for California companies’ privacy law obligations. The California Privacy Rights Act (CPRA) took effect on January 1, 2023, with a twelve-month look-back that also applies to the personal data of employees and business contacts. The California Privacy Protection Agency recently finalized regulations and has kicked off a new phase of rulemaking, including on

Does your holiday wish list include CLE credit and a quick tutorial on what to expect in California labor and employment law next year?

Excellent!

Join us for our virtual California 2023-2024 Employment Law Update on Wednesday, December 13 @ 1PM PT.

2023 has been a year of dramatic change for California employers, but have

Special thanks to our Baker McKenzie speakers Pamela Church, Teisha Johnson, Cyrus Vance, Elizabeth Roper, Laura Estrada Vasquez and Joshua Wolkoff, and to our industry experts Alexandra Lopez, Privacy Counsel, Calix; Una Kang, VP and Associate General Counsel, Wolters Kluwer; and Pamela Weinstock, Managing Counsel, Intellectual Property, Tiffany & Co.

On October 30, 2023, President Biden issued a 63-page Executive Order to define the trajectory of artificial intelligence adoption, governance and usage within the United States government. The Executive Order outlines eight guiding principles and priorities for US federal agencies to adhere to as they adopt, govern and use AI. While safety and security are predictably high on the list, so too is a desire to make America a leader in the AI industry, including through AI development by the federal government. While executive orders are neither statutes nor regulations and do not require confirmation by Congress, they are binding on the executive branch and can have the force of law, usually based on existing statutory powers.

Instruction to Federal Agencies and Impact on Non-Governmental Entities

The Order directs a majority of federal agencies to address AI’s specific implications for their sectors, setting varied timelines ranging from 30 to 365 days for each applicable agency to implement specific requirements set forth in the Order.

The actions required of the federal agencies will impact non-government entities in a number of ways, because agencies will seek to impose contractual obligations to implement provisions of the Order or will invoke statutory powers under the Defense Production Act for the national defense and the protection of critical infrastructure, including by: (i) introducing reporting and other obligations for technology providers (both foundational model providers and IaaS providers); (ii) adding requirements for entities that work with the federal government in a contracting capacity; and (iii) influencing overall AI policy development.

Continue Reading Biden’s Wide-Ranging Executive Order on Artificial Intelligence Sets Stage For Regulation, Investment, Oversight and Accountability

With special thanks to Danielle Benecke and Ben Allgrove for their contributions.

Baker McKenzie recently hosted industry leaders from Anthropic, Google Cloud and OpenAI in Palo Alto to discuss how in-house legal counsel can best reckon with the transformative power of GenAI.

Baker McKenzie partners joined the panel, sharing insights from their vantage point