As AI adoption accelerates across workplaces, labor organizations around the world are beginning to take notice—and action. The current regulatory focus in the US centers on state-specific laws like those in California, Illinois, Colorado and New York City, but the labor implications of AI are quickly becoming a front-line issue for unions, potentially signaling a new wave of collective bargaining considerations. Similarly, in Europe, deploying certain AI tools within an organization may trigger information, consultation and, in some countries, negotiation obligations; in those jurisdictions, the tools may be introduced only once that process is complete.

This marks an important inflection point for employers: engaging with employee representatives on AI strategy early can help anticipate employee concerns and reduce friction as new technologies are adopted. Here, we explore how AI is emerging as a key topic in labor relations in the US and Europe and offer practical guidance for employers navigating the evolving intersection of AI, employment law, and collective engagement.

Efforts in the US to Regulate AI’s Impact on Workers

There is no US federal law specifically regulating AI in the workplace. An emerging patchwork of state and local legislation (e.g., in Colorado, Illinois and New York City) addresses the potential for bias and discrimination in AI-based tools but does not focus on preventing the displacement of employees. In March, New York became the first state to require businesses to disclose AI-related mass layoffs, indicating a growing expectation that employers be transparent about AI’s impact on workers.[1]

Some unions have begun negotiating their own safeguards to address growing concerns about the impact AI may have on union jobs. For example, in 2023, the Las Vegas Culinary Workers negotiated a collective bargaining agreement with major casinos requiring that the union be given advance notice of, and the opportunity to bargain over, AI implementation. The CBA also provides workers displaced by AI with severance pay, continued benefits, and recall rights.

Similarly, in 2023 both the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) negotiated agreements with the Alliance of Motion Picture and Television Producers (AMPTP) that include safeguards against AI reducing or replacing writers and actors. The WGA contract requires studios to meet semi-annually with the union to discuss current and future uses of generative AI, giving writers a formal channel to influence how AI is deployed in their industry. The SAG-AFTRA contract requires consent and compensation for the use of AI-powered digital replicas.

Join our AI and Cyber CLE Series

If your last name starts with A-G, you are probably well aware that your (recently extended) MCLE compliance deadline is on March 30, 2025. In addition to the general credit requirement, the state of California requires all attorneys to complete:

  • At least four hours of legal ethics
  • At least two hours on competence issues
  • At least two hours on the elimination of bias in the legal profession and society, of which at least one hour must focus on implicit bias and the promotion of bias-reducing strategies
  • At least one hour on technology
  • At least one hour on civility

On May 17, 2024, Colorado Governor Polis signed the landmark Colorado AI Act (Senate Bill 24-205) into law. Colorado is now the first US state with comprehensive AI regulation, adopting a risk-based classification system similar to that of the European Union’s recent AI Act. The law will take effect February 1, 2026.

The law exempts small employers (fewer than fifty full-time employees) from some of its requirements but otherwise requires companies to take extensive measures to protect Colorado residents against harms such as algorithmic discrimination.

SB 205’s Details

SB 205 requires “developers” and “deployers” of “high-risk artificial intelligence systems” to use “reasonable care” to protect Colorado resident consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.” As written, the law most likely applies both to creators of high-risk AI systems and to employers adopting high-risk AI technologies within their organizations.

Effective September 17, 2023, employers with four or more employees in New York State must include a compensation range in all advertisements for new jobs, promotions and transfer opportunities. A pay transparency fact sheet and FAQ document are available on the NYSDOL website with additional information and guidance on the new law.

Overlap with New York City’s Pay Transparency Law