Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

Continue Reading: Passage of Reintroduced California AI Bill Would Result In Onerous New Compliance Obligations For Covered Employers

In the first two days of his presidency, President Trump signed a series of executive orders aimed at dismantling diversity programs across the federal government, revoking longstanding DEI and affirmative action requirements for federal contractors, and directing public and private entities to end policies that constitute “illegal DEI discrimination.”

Suffice it to say, the orders have left federal contractors, corporations, nonprofits, and other employers in the private sector grappling with what to do next. The EOs’ reverberations will be felt for some time, and the DEI journey for federal agencies and the private sector is likely to be a circuitous one as challenges are raised in the courts, before Congress, and in the court of public opinion. Even so, employers need to gain some traction and start the trip. In this article, we present a roadmap to consider as employers work through the impacts of the EOs on their organizations.

At the starting line: what the EOs do and don’t do

Executive orders are a powerful tool through which the President issues formal directions to the executive branch, its agencies, and officials on how to carry out the work of the federal government. Historically, EOs mostly addressed administrative matters, but some have sought to drive substantial policy changes. Congressional approval is not required for an EO to take effect, but judicial review is commonplace, and EOs can be reversed by later administrations.

President Trump’s EOs addressing DEI do not change existing discrimination statutes, such as the bedrock prohibitions on discrimination in employment in Title VII of the Civil Rights Act of 1964. Nor do the orders ban or prohibit all private employer DEI programs. Rather, the orders direct federal agencies (and deputize private citizens) to root out “illegal discrimination and preferences” through investigations, enforcement actions, or False Claims Act litigation and, for government agencies, to take particular actions.

Similar to the situation following the US Supreme Court’s SFFA decision in June 2023, if your DEI programs were lawful before Trump’s inauguration, they still are. What is “illegal” under federal law today is the same as it was before Trump’s presidency. What is clearly different, however, is the ferocity of the federal government’s intent and the resources dedicated to scrutinizing alleged race- or sex-based preferences in the workplace, and the resulting level of scrutiny applied to DEI programs.

Continue Reading: A Roadmap to Trump’s DEI Executive Orders for US Employers

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze applicant data, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is garnering more attention and scrutiny from regulators, and in some cases, employees. At the same time, organizations are anxious to adopt more AI internally to capitalize on productivity and efficiency gains, and often in-house attorneys are under pressure from internal clients to quickly review and sign off on new tools, and new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations are deploying workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. In one recent case, the French data protection authority imposed a $34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination and other concepts – and a good place to start is with these HR tools, given their widespread use and the increased scrutiny. Legal Departments should consider these five key actions:

Continue Reading: The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk

SHRM reports that one in four organizations currently use AI to support HR-related activities, with adoption of the technology expanding rapidly. The compliance risks arising from generative AI use also are intensifying, with an increasing number of state and local laws restricting employer use of AI tools in the United States. And not to be outdone, substantial regulation impacting multinational employers’ use of AI is emerging in other parts of the world (e.g., the EU AI Act).

One rapidly growing use case is applicant recruiting and screening, a trend likely to continue given recent increases in remote hiring and hybrid work arrangements. AI tools can streamline talent acquisition tasks by automatically sorting, ranking, and eliminating candidates, as well as potentially drawing from a broader and more diverse pool of candidates.

Employers who use AI tools must comply with significant new (and existing) laws that focus on data protection, privacy, information security, wage and hour, and other issues. The focus of this blog, however, is the legislative efforts in the US to protect against algorithmic bias and discrimination in the workplace stemming from the use of AI tools to either replace or augment traditional HR tasks.

IL Becomes the Second State (After CO) to Target Workplace Algorithmic Discrimination

On August 9, 2024, Gov. Pritzker signed H.B. 3773, making it unlawful for employers to use AI that has the effect of discriminating against employees on the basis of protected class in recruitment, hiring, promotion, discipline, termination and other terms, privileges or conditions of employment. The law, effective January 1, 2026, also prohibits employers from using ZIP codes as a stand-in or proxy for protected classes.

Like Colorado, Illinois’ new law also contains a notice requirement: employers must notify applicants and employees when using AI with respect to “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.”

Continue Reading: Illinois Joins Colorado and NYC in Restricting Generative AI in HR (Plus a Quick Survey of the Legal Landscape Across the US and Globally)

In June, we offered our annual Global Employment Law webinar series sharing expert insights on the business climate in major markets around the world for US multinational employers. Baker McKenzie attorneys from over 20 jurisdictions outlined the key new employment law developments and trends that multinationals need to know in four 60-minute sessions.

ICYMI: click below to hear updates for the Americas, Asia Pacific, Europe, and the Middle East and Africa, and contact a member of our team for a deeper dive on any of the information discussed.


Session 1: The Americas 

Presenters: Andrew Shaw, Clarissa Lehmen*, Daniela Liévano Bahamón, Benjamin Ho, Liliana Hernandez-Salgado and Matías Gabriel Herrero

Click here to watch the video.

*Trench Rossi Watanabe and Baker McKenzie have executed a strategic cooperation agreement for consulting on foreign law.


Continue Reading: Summer Replay: Tune In To Our Global Employment Law Update Series (Recordings Linked!)

Last week, a unanimous US Supreme Court held that an employee need only show “some harm” from a change in the terms and conditions of employment, rather than a “significant” employment disadvantage, to assert a claim for discrimination under Title VII. The decision resolves a circuit split over the showing required for discrimination claims based on changes less drastic than demotions, terminations, or pay reductions, and underscores the continued importance of taking a thoughtful approach to any change in the terms and conditions of an employee’s employment.

Continue Reading: Less is More: SCOTUS Shifts Title VII Threshold to “Some” Harm (Though Plaintiffs Must Still Show Discriminatory Intent)

You’re not alone in wondering where the Equal Employment Opportunity Commission’s final regulations to implement the Pregnant Workers Fairness Act are. In fact, they are well past their due date.

How it started

The PWFA became effective on June 27, 2023. In August 2023, the EEOC published proposed regulations to implement the PWFA. (We outlined the proposed regulations in our blog here, and wrote about the PWFA here.) The public comment period for the proposed regulations closed October 10, 2023, and the proposed regulations were delivered to the Office of Information and Regulatory Affairs (“OIRA”) on December 27, 2023 for review.

How it is going

To date, however, no final regulations have been issued, despite the PWFA’s requirement that the EEOC issue regulations by December 29, 2023. The regulations, once finalized, will provide clarity for employers implementing policies and practices to comply with the PWFA. For instance, the proposed regulations outline a nonexhaustive list of what the EEOC considers potential accommodations under the PWFA, including job restructuring and part-time or modified work schedules.

However, even without final regulations in place, employers are required to meet the PWFA’s mandates. The proposed regulations can still be used to offer insight into how the EEOC believes the PWFA should be interpreted.

Continue Reading: Pregnant Pause: The EEOC’s Delay In Issuing Final Regs For The PWFA Should Not Delay Compliance

Special thanks to Celeste Ang and Stephen Ratcliffe.

We launched the seventh annual edition of The Year Ahead: Global Disputes Forecast, a research-based thought leadership report surveying 600 senior legal and risk leaders from large organizations around the world and highlighting the key issues we anticipate will be crucial for disputes this year.

Earlier this year, many of you tuned into our 2023 – 2024 Employer Update webinars to plant seeds for success for the year ahead.

Now, to ensure your compliance efforts are blooming, we’re sharing detailed checklists to help you ensure you’re ticking all the boxes!