
The European Union’s Corporate Sustainability Reporting Directive (CSRD) is a directive requiring covered companies to disclose information on what they see as the risks and opportunities arising from social and environmental issues, and on the impact of their activities on people and the environment.

The CSRD impacts not only EU companies but also certain non-EU companies with significant activity in the EU.

Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

Continue Reading Passage of Reintroduced California AI Bill Would Result In Onerous New Compliance Obligations For Covered Employers

  • Key laws and regulations, including recent changes and expected developments over the next year
  • Foundational data privacy obligations, including information and notification requirements, data subject rights, accountability and governance measures, and responsibilities of data controllers and processors

On January 20, 2025, the first day of his second term, President Trump revoked Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Biden Order”), signed by President Biden in October 2023. In doing so, President Trump fulfilled a campaign pledge to roll back the Biden Order, which the 2024 Republican platform described as a “dangerous” measure. Then on January 23, 2025, President Trump issued his own Executive Order on AI, entitled Removing Barriers to American Leadership in Artificial Intelligence (the “Trump Order”). Here, we examine some of the practical implications of the repeal of the Biden Order and its replacement with the Trump Order, and what this means for businesses.

Overview of the Executive Orders

Building on the White House’s 2022 Blueprint for an AI Bill of Rights, the Biden Order outlined a sweeping vision for the future of AI within the federal government, including eight high-level objectives: (1) Ensuring the Safety and Security of AI Technology; (2) Promoting Innovation and Competition; (3) Supporting Workers; (4) Advancing Equity and Civil Rights; (5) Protecting Consumers, Patients, Passengers, and Students; (6) Protecting Privacy; (7) Advancing Federal Government Use of AI; and (8) Strengthening American Leadership Abroad.

The Biden Order directed various measures across the federal apparatus, imposing 150 distinct requirements on more than 50 federal agencies and other government entities, representing a genuinely whole-of-government response.

Although the bulk of the Biden Order is addressed to federal agencies, some of its provisions had potentially significant impacts on private sector entities. For example, the Biden Order directed the Commerce Department to require developers to report on the development of higher-risk AI systems. Similarly, the Biden Order directed the Commerce Department to establish requirements for domestic Infrastructure as a Service (IaaS) providers to report to the government whenever they contract with foreign parties for the training of large AI models. The Biden Order also open-endedly instructed federal agencies to use existing consumer protection laws to enforce against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI, a directive that various federal regulators acted on under the Biden administration.

Other than the definition of AI, the Trump Order and the Biden Order share no similarities (both Orders point to the AI definition from 15 U.S.C. 9401(3), namely: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”). The Trump Order does not contain specific directives (such as those in the Biden Order), but instead articulates the national AI policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The Trump Order directs a few specific roles within the administration to develop an Artificial Intelligence Action Plan within 180 days (i.e., by July 22, 2025) to achieve the policy objective articulated in the Trump Order. The Trump Order directs these same roles to review the policies, directives, regulations, orders, and other actions taken pursuant to the Biden Order and to suspend, revise, or rescind any such actions that are inconsistent with the Trump Order’s stated policy. In cases where suspension, revision, or rescission of the prior action cannot be finalized immediately, the heads of agencies are instructed to “provide all available exemptions” in the interim.

Practical Impacts

The practical effect of the revocation of the Biden Order—and the options available under the Trump Order—will vary depending on the measure. Although the revocation of the Biden Order’s mandates has widespread impacts across multiple initiatives and institutions, below are those expected to have a significant impact on private sector entities engaged in the development or use of AI.

Reporting requirement for powerful AI models: As noted, the Biden Order directed the Department of Commerce to establish a requirement for developers to provide reports on “dual-use foundation models” (broadly, models that exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, or national public health or safety). Pursuant to the Biden Order, the Bureau of Industry and Security (BIS), a Commerce Department agency, published a proposed rule to establish reporting requirements on the development of advanced AI models and computing clusters under its Defense Production Act authority, but had not issued a final rule prior to the revocation of the Biden Order. It is likely that the new administration will closely scrutinize this reporting requirement and may take action to block the adoption of the final rule if it is found to be inconsistent with the policy statement in the Trump Order.

Continue Reading AI Tug-of-War: Trump Pulls Back Biden’s AI Plans

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze applicant data, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is garnering more attention and scrutiny from regulators, and in some cases, employees. At the same time, organizations are anxious to adopt more AI internally to capitalize on productivity and efficiency gains, and often in-house attorneys are under pressure from internal clients to quickly review and sign off on new tools, and new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations are deploying workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. In one recent case, the French data protection authority imposed a USD 34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination and other concepts – and a good place to start is with these HR tools given their widespread use and the increased scrutiny. Legal Departments should consider these five key actions:

Continue Reading The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk

On May 17, 2024, Colorado Governor Polis signed the landmark Colorado AI Act (Senate Bill 24-205) into law. Colorado is now the first US state with comprehensive AI regulation, adopting a classification system similar to the European Union’s recent AI Act. The law will take effect on February 1, 2026.

The law exempts small employers (fewer than fifty full-time employees) from some of its requirements but otherwise requires companies to take extensive measures to protect Colorado residents against harms such as algorithmic discrimination.

SB 205’s Details

SB 205 requires “developers” and “deployers” of “high-risk artificial intelligence systems” to use “reasonable care” to protect Colorado resident consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.” As written, the law most likely applies both to creators of high-risk AI systems and to employers adopting high-risk AI technologies within their organizations.

Continue Reading From Brussels to Boulder: Colorado Enacts Comprehensive AI Law with Significant Obligations for Employers on the Heels of the EU AI Act

Special thanks to co-authors Priscila Kirchhoff* and Tricia Oliveira*.

In July, Brazil passed a new Gender Pay Gap law (effective immediately) that requires companies with more than 100 employees — for the first time — to publish a report on salary transparency and compensation criteria (a ‘Salary Transparency Report’) every six months.

By January 1, 2024, businesses must post updated Privacy Policies under the California Consumer Privacy Act (CCPA), which requires annual updates of disclosures and has fully applied in the job applicant and employment context since January 1, 2023.

With respect to job applicants and employees, businesses subject to the CCPA are required to:

  1. Issue detailed privacy notices with prescribed disclosures, terminology, and organization;
  2. Respond to data subject requests from employees and job candidates for copies of information about them, correction, and deletion;
  3. Offer opt-out rights regarding disclosures of information to service providers, vendors, or others, except to the extent they implement qualified agreements that contain specifically prescribed clauses; and
  4. Offer opt-out rights regarding the use of sensitive information except to the extent they have determined they use sensitive personal information only within the scope of statutory exceptions.

If employers sell personal information, share it for cross-context behavioral advertising, or use or disclose sensitive personal information outside of limited purposes, numerous additional compliance obligations apply. For more, see our related previous post: Employers Must Prepare Now for New California Employee Privacy Rights.

Key recommendations to heed now

Continue Reading Looking ahead to 2024: California privacy law action items for employers

It is an unprecedented time for California companies’ privacy law obligations. The California Privacy Rights Act (CPRA) took effect on January 1, 2023, with a twelve-month look-back that also applies to the personal data of employees and business contacts. The California Privacy Protection Agency recently finalized regulations and has kicked off a new phase of rulemaking, including on cybersecurity audits, risk assessments, and automated decision-making technology.