As AI adoption accelerates across workplaces, labor organizations around the world are beginning to take notice—and action. The current regulatory focus in the US centers on state-specific laws like those in California, Illinois, Colorado, and New York City, but the labor implications of AI are quickly becoming a front-line issue for unions, potentially signaling a new wave of collective bargaining considerations. Similarly, in Europe the deployment of certain AI tools within an organization may trigger information, consultation, and—in some European countries—negotiation obligations, and AI tools may only be introduced once that process is complete.

This marks an important inflection point for employers: engaging with employee representatives on AI strategy early can help anticipate employee concerns and reduce friction as new technologies are adopted. Here, we explore how AI is emerging as a key topic in labor relations in the US and Europe and offer practical guidance for employers navigating the evolving intersection of AI, employment law, and collective engagement.

Efforts in the US to Regulate AI’s Impact on Workers

There is no specific US federal law regulating AI in the workplace. An emerging patchwork of state and local legislation (e.g., in Colorado, Illinois, and New York City) addresses the potential for bias and discrimination in AI-based tools—but does not focus on preventing the displacement of employees. In March, New York became the first state to require businesses to disclose AI-related mass layoffs, indicating a growing expectation that employers be transparent about AI’s impact on workers.[1]

Some unions have begun negotiating their own safeguards to address growing concerns about the impact that AI may have on union jobs. For example, in 2023, the Las Vegas Culinary Workers negotiated a collective bargaining agreement with major casinos requiring that the union be provided advance notice of, and the opportunity to bargain over, AI implementation. The CBA also provides workers displaced by AI with severance pay, continued benefits, and recall rights.

Similarly, in 2023 both the Writers Guild of America (WGA) and Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) negotiated agreements with the Alliance of Motion Picture and Television Producers (AMPTP) that include safeguards against AI reducing or replacing writers and actors. WGA’s contract requires studios to meet semi-annually with the union to discuss current and future uses of generative AI—giving writers a formal channel to influence how AI is deployed in their industry. The SAG-AFTRA contract requires consent and compensation for use of digital replicas powered by AI.

Continue Reading Navigating Labor’s Response to AI: Proactive Strategies for Multinational Employers Across the Atlantic

Tune in to our annual Global Employment Law webinar series as we bring the world to you.

Our Global Employment Law Fastpass webinar series is here again! Every June, we offer four regionally focused webinars to help you stay up to speed on the latest employment law developments around the world. From tariffs and economic uncertainty to the use

Trade secrets give tech companies a competitive edge in a rapidly evolving landscape, where success depends on the ability to innovate. The unauthorized acquisition, use, or disclosure of trade secrets can result in significant loss and disruption, making it essential for organizations to have robust safeguards in place to protect their trade secrets. Here, we explore clear steps organizations can take to manage and mitigate risks with a focus on trade secret identification and the role of employees in trade secret protection.

Mission Critical: Protecting Tech Trade Secrets

Fast-paced developments

The technology sector continues to experience huge transformation driven by emerging technologies and advancements in AI. Companies are investing heavily in developing capabilities and new services and products. Rapid innovation and the desire to be first to market have made trade secrets an increasingly preferred method of protection over other intellectual property regimes, such as applying for a patent, which can be costly and raise complex timing considerations. Trade secrets can protect algorithms, processes, datasets, customer lists, and more. The trade secrets of companies at the forefront of AI and other tech innovation are highly valuable.

Expanding threat landscape 

Threat actors are leveraging tech advancements to steal vast amounts of company information through more sophisticated and efficient attacks. Heightened internet usage increases hacking risks from competitors, foreign governments and hacktivist groups.

AI developments currently require high computational power, concentrating progress in large tech companies. However, the competitive landscape is changing, with many start-ups and established companies building applications within the AI technology stack, alongside new players entering the AI frontier race. The demand for tech talent with the skills to drive innovation is at an all-time high, making trade secret protection critical.

In the past few years, tech companies have also been especially susceptible to the public disclosure of confidential internal documents by employee activists motivated by non-monetary factors.

Legal remedies

Legal frameworks for protecting trade secrets have become more robust and varied across jurisdictions. Injunctive relief (to prevent use of trade secrets and reclaim them) is an essential tool in trade secret breach cases. Victims may also pursue financial remedies, such as damages.

Continue Reading Top Strategies to Safeguard Tech Trade Secrets

On 2 February 2025, the first deadlines under the EU AI Act took effect. These included the AI literacy provisions (responsibility for which will likely sit with HR teams) and the ban on prohibited AI systems. What do these and other upcoming changes under the Act mean for in-scope employers?

In this webinar, our multijurisdictional

Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

Continue Reading Passage of Reintroduced California AI Bill Would Result In Onerous New Compliance Obligations For Covered Employers

  • Key laws and regulations, including recent changes and expected developments over the next year
  • Foundational data privacy obligations including information and notification requirements, data subject rights, accountability and governance measures, and responsibilities of data controllers and

From the groundbreaking mandate for paid prenatal leave to the upcoming requirement that employers disclose AI-related layoffs, 2025 is set to be a transformative year for New York employers. As you navigate the latest employment laws, keep this checklist close at hand. While it doesn’t cover every new regulation, it highlights the key changes our

On January 20, 2025, the first day of his second term, President Trump revoked Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Biden Order”), signed by President Biden in October 2023. In doing so, President Trump fulfilled a campaign pledge to roll back the Biden Order, which the 2024 Republican platform described as a “dangerous” measure. Then on January 23, 2025, President Trump issued his own Executive Order on AI, entitled Removing Barriers to American Leadership in Artificial Intelligence (the “Trump Order”). Here, we examine some of the practical implications of the repeal and replacement of executive orders by Trump and what it means for businesses.

Overview of the Executive Orders

Building on the White House’s 2022 Blueprint for an AI Bill of Rights, the Biden Order outlined a sweeping vision for the future of AI within the federal government, including eight high-level objectives: (1) Ensuring the Safety and Security of AI Technology; (2) Promoting Innovation and Competition; (3) Supporting Workers; (4) Advancing Equity and Civil Rights; (5) Protecting Consumers, Patients, Passengers, and Students; (6) Protecting Privacy; (7) Advancing Federal Government Use of AI; and (8) Strengthening American Leadership Abroad.

The Biden Order directed various measures across the federal apparatus, imposing 150 distinct requirements on more than 50 federal agencies and other government entities, representing a genuinely whole-of-government response.

Although the bulk of the Biden Order is addressed to federal agencies, some of its provisions had potentially significant impacts on private sector entities. For example, the Biden Order directed the Commerce Department to require developers to report on the development of higher-risk AI systems. Similarly, the Biden Order directed the Commerce Department to establish requirements for domestic Infrastructure as a Service (IaaS) providers to report to the government whenever they contract with foreign parties for the training of large AI models. The Biden Order also open-endedly instructed federal agencies to use existing consumer protection laws to enforce against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI—a directive various federal regulators actioned under the Biden administration.

Other than the definition of AI, the Trump Order and Biden Order share no similarities (both Orders point to the AI definition from 15 U.S.C. 9401(3), namely: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”). The Trump Order does not contain specific directives (such as those in the Biden Order), but instead articulates the national AI policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The Trump Order directs a few specific roles within the administration to develop an Artificial Intelligence Action Plan within 180 days (i.e., by July 22, 2025) to achieve the policy objective articulated in the Trump Order. The Trump Order directs these same roles within the administration to review the policies, directives, regulations, orders, and other actions taken pursuant to the Biden Order and to suspend, revise, or rescind any such actions that are inconsistent with the Trump Order’s stated policy. In cases where suspension, revision, or rescission of the prior action cannot be finalized immediately, the heads of agencies are instructed to “provide all available exemptions” in the interim.

Practical Impacts

The practical effect of the revocation of the Biden Order—and the options available under the Trump Order—will vary depending on the measure. Although there are widespread impacts from the revocation of the Biden Order’s mandates across multiple initiatives and institutions, below are those that are expected to have a significant impact on private sector entities engaged in the development or use of AI.

Reporting requirement for powerful AI models: As noted, the Biden Order directed the Department of Commerce to establish a requirement for developers to provide reports on “dual-use foundation models” (broadly, models that exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, or national public health or safety). Pursuant to the Biden Order, the Bureau of Industry and Security (BIS), a Commerce Department agency, published a proposed rule to establish reporting requirements on the development of advanced AI models and computing clusters under its Defense Production Act authority, but had not issued a final rule prior to the revocation of the Biden Order. It is likely that the new administration will closely scrutinize this reporting requirement and may take action to block the adoption of the final rule if it is found to be inconsistent with the policy statement in the Trump Order.

Continue Reading AI Tug-of-War: Trump Pulls Back Biden’s AI Plans

Join our AI and Cyber CLE Series

If your last name starts with A-G, you are probably well aware that your (recently extended) MCLE compliance deadline is on March 30, 2025. In addition to the general credit requirement, the state of California requires all attorneys to complete:

  • At least four hours of legal ethics
  • At least two hours on competence issues
  • At least two hours on the elimination of bias in the legal profession and society. Of the two hours, at least one hour must focus on implicit bias and the promotion of bias‑reducing strategies.
  • At least one hour on technology 
  • At least one hour on civility

Continue Reading California’s CLE Compliance Deadline Is Approaching – We can help!

2024 was a ‘super year’ for elections. Half of the world’s population – some 4.7 billion people – went to the polls in 72 countries. Political shifts often lead to significant changes in employment laws. We’re here to help you prepare for the changes ahead and to stay ahead of the curve on employment law developments.