
Caroline Burnett is a Knowledge Lawyer in Baker McKenzie’s North America Employment & Compensation Group. Caroline is passionate about analyzing trends in US and global employment law and developing innovative solutions to help multinationals stay ahead of the curve. Prior to joining Baker McKenzie in 2016, she had a broad employment law practice at a full-service, national firm. Caroline holds a J.D. from the University of San Francisco School of Law (2008) and a B.A. from Brown University (2002).

On March 14, 2025, the Court of Appeals for the Fourth Circuit lifted the preliminary injunction blocking key provisions of President Trump’s executive orders related to diversity, equity, and inclusion (our summary of the DEI EOs is here). This decision temporarily reinstates the enforcement of Executive Orders 14151 and 14173, pending further appellate review.

Background

As discussed here, on February 21, a Maryland district court issued a nationwide preliminary injunction, citing concerns that the EOs were likely to violate the First and Fifth Amendments by chilling free speech and undermining due process. The preliminary injunction had blocked the federal government from forcing contractors and grantees to certify that they aren’t promoting “illegal DEI.”

The government defendants immediately filed a notice of appeal with the Fourth Circuit, while also seeking a stay of the district court’s preliminary injunction. On March 3, the district court denied their request for a stay, with Judge Abelson concluding that the potential harm of the orders outweighed the administration’s policy priorities.

The Fourth Circuit’s Panel Decision

The three-judge appellate panel unanimously stayed the injunction on March 14, with all three judges writing separate concurrences. An undercurrent runs through each opinion that the injunction came too early (it is still unclear what types of programs the government will seek to eliminate) to determine whether the government’s actions will implicate the First and Fifth Amendment concerns raised by the plaintiffs. The court also takes as true the government defendants’ representations that the EOs are distinctly limited in scope and apply only to conduct that violates existing federal anti-discrimination law.

Continue Reading Fourth Circuit Allows Trump Administration to Enforce DEI EOs (For Now)

[UPDATE RE THE OMNIBUS PROPOSAL HERE]

The European Union’s Corporate Sustainability Reporting Directive (CSRD) requires covered companies to disclose information on what they see as the risks and opportunities arising from social and environmental issues, and on the impact of their activities on people and the environment.

The CSRD impacts not

** UPDATE ** On March 3, 2025, the federal judge in the Maryland lawsuit denied the Trump administration’s request to stay the preliminary injunction discussed below. The judge ruled that the administration failed to demonstrate a likelihood of success on the merits and that the injunction was necessary to prevent potential violations of free speech.

Shortly after taking office, President Trump rescinded President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

Continue Reading Passage of Reintroduced California AI Bill Would Result In Onerous New Compliance Obligations For Covered Employers

On January 20, 2025, the first day of his second term, President Trump revoked Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Biden Order”), signed by President Biden in October 2023. In doing so, President Trump fulfilled a campaign pledge to roll back the Biden Order, which the 2024 Republican platform described as a “dangerous” measure. Then on January 23, 2025, President Trump issued his own Executive Order on AI, entitled Removing Barriers to American Leadership in Artificial Intelligence (the “Trump Order”). Here, we examine some of the practical implications of President Trump’s repeal and replacement of the Biden Order and what it means for businesses.

Overview of the Executive Orders

Building on the White House’s 2022 Blueprint for an AI Bill of Rights, the Biden Order outlined a sweeping vision for the future of AI within the federal government, including eight high-level objectives: (1) Ensuring the Safety and Security of AI Technology; (2) Promoting Innovation and Competition; (3) Supporting Workers; (4) Advancing Equity and Civil Rights; (5) Protecting Consumers, Patients, Passengers, and Students; (6) Protecting Privacy; (7) Advancing Federal Government Use of AI; and (8) Strengthening American Leadership Abroad.

The Biden Order directed various measures across the federal apparatus, imposing 150 distinct requirements on more than 50 federal agencies and other government entities, representing a genuinely whole-of-government response.

Although the bulk of the Biden Order is addressed to federal agencies, some of its provisions had potentially significant impacts on private sector entities. For example, the Biden Order directed the Commerce Department to require developers to report on the development of higher-risk AI systems. Similarly, the Biden Order directed the Commerce Department to establish requirements for domestic Infrastructure as a Service (IaaS) providers to report to the government whenever they contract with foreign parties for the training of large AI models. The Biden Order also open-endedly instructed federal agencies to use existing consumer protection laws to enforce against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI, a directive that various federal regulators actioned under the Biden administration.

Other than the definition of AI, the Trump Order and Biden Order share no similarities (both Orders point to the AI definition from 15 U.S.C. 9401(3), namely: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”). The Trump Order does not contain specific directives (such as those in the Biden Order), but instead articulates the national AI policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The Trump Order directs a few specific roles within the administration to develop an Artificial Intelligence Action Plan within 180 days (i.e., by July 22, 2025) to achieve the policy objective articulated in the Trump Order. The Trump Order directs these same roles within the administration to review the policies, directives, regulations, orders, and other actions taken pursuant to the Biden Order and to suspend, revise, or rescind any such actions that are inconsistent with the Trump Order’s stated policy. In cases where suspension, revision, or rescission of the prior action cannot be finalized immediately, the heads of agencies are instructed to “provide all available exemptions” in the interim.

Practical Impacts

The practical effect of the revocation of the Biden Order (and the options available under the Trump Order) will vary depending on the measure. Although the revocation of the Biden Order’s mandates has widespread impacts across multiple initiatives and institutions, below are those expected to have a significant impact on private sector entities engaged in the development or use of AI.

Reporting requirement for powerful AI models: As noted, the Biden Order directed the Department of Commerce to establish a requirement for developers to provide reports on “dual-use foundation models” (broadly, models that exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, or national public health or safety). Pursuant to the Biden Order, the Bureau of Industry and Security (BIS), a Commerce Department agency, published a proposed rule to establish reporting requirements on the development of advanced AI models and computing clusters under its Defense Production Act authority, but had not issued a final rule prior to the revocation of the Biden Order. It is likely that the new administration will closely scrutinize this reporting requirement and may take action to block the adoption of the final rule if it is found to be inconsistent with the policy statement in the Trump Order.

Continue Reading AI Tug-of-War: Trump Pulls Back Biden’s AI Plans

As you plan your to-dos for the year ahead, our “2025 Top 10” will guide you through the material employment law changes ahead in the Golden State. While we have not included all new California employment laws effective 2025, we’ve highlighted the major changes our clients need to know.

Key California Change | Employer To-Dos
(1) Minimum

2024 was a ‘super year’ for elections. Half of the world’s population – some 4.7 billion people – went to the polls in 72 countries. Political shifts often lead to significant changes in employment laws. We’re here to help you prepare for the changes ahead and to stay ahead of the curve on employment law developments.

Companies with a US workforce can expect material changes to employment laws under the Trump administration, with impacts felt across their business operations. President-elect Trump’s first term, his campaign platform, and the typical shifts in a Democratic-to-Republican transition provide clues about what’s to come: federal agencies, policies and rules will become more business-centered, and many of the Biden-era worker-focused protections will be rolled back.

Below are four major shifts we anticipate:

(1) Significant shifts in US Department of Labor policy

The end of the DOL’s 2024 final overtime rule. On November 15, 2024, a federal judge in Texas blocked implementation of the DOL’s final rule in its entirety, thereby preventing the agency from instituting increases to the salary thresholds for the “white collar” overtime exemptions under the Fair Labor Standards Act. While the government may appeal the judge’s order before the change in administration, any such appeal is likely to be short-lived come January 2025.

Accordingly, employers can halt plans to change their compensation levels or exempt classifications in response to the now-blocked rule. If such changes have already been made, employers should consult with counsel on how best to unwind undesirable changes, if any.

A lower burden for employers to classify workers as independent contractors under federal law. Trump will likely reverse Biden’s worker-friendly contractor classification efforts, making it easier for businesses to classify workers as independent contractors, and pivoting away from the Biden administration’s 2024 DOL independent contractor rule.

Notwithstanding this easing at the federal level, employers must remember that, under US federal and state law, there is no single test for independent contractor classification. Many states have their own tests, which are often more stringent than federal law and apply to state wage and hour claims. Moreover, even within the same state, different tests will apply to unemployment claims, workers’ compensation, wage and hour, and taxation.

Continue Reading Back to Business: Trump’s Second Term and the Four Major Shifts Employers Should Expect

We are clearly (and thankfully) well past the pandemic, and yet demands for flexible and remote work press on. While the overall global trend of transforming the traditional 9-to-5 work model is consistent, laws governing flexible work arrangements can vary significantly by jurisdiction.

We monitor this space closely (see our previous update here) and advise multinational companies on a multitude of issues bearing on remote, hybrid and flexible arrangements, including health & safety rules, working time regulations, tax and employment benefit issues, cybersecurity and data privacy protections, workforce productivity monitoring and more.

Key recent updates around the globe (organized by region) include:

Asia Pacific

  • Australia: Right to disconnect – Working 9 to [to be determined…]?
    In August 2024, a Full Bench of the Fair Work Commission finalized the new “right to disconnect” model term, which will soon be inserted into all modern awards. Whilst we wait for the Fair Work Commission to issue its guidance on the new workplace right, here’s what you should know, and what we think you should do to prepare for the introduction of the right to disconnect.

Continue Reading HR Trend Watch: Maintaining compliance while unlocking the talent rewards of flexible work arrangements

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze applicant data, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is garnering more attention and scrutiny from regulators, and in some cases, employees. At the same time, organizations are anxious to adopt more AI internally to capitalize on productivity and efficiency gains, and often in-house attorneys are under pressure from internal clients to quickly review and sign off on new tools, and new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations are deploying workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. For example, the French data protection authority recently imposed a USD 34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination and other concepts – and a good place to start is with these HR tools given their widespread use and the increased scrutiny. Legal Departments should consider these five key actions:

Continue Reading The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk