On April 27, 2026, a federal court paused enforcement of Colorado’s Artificial Intelligence Act (SB 24-205), placing one of the country’s most comprehensive state AI laws on hold while lawmakers reconsider its timing and scope. The order prevents the state from initiating enforcement actions during the pendency of the litigation, effectively freezing the law just weeks before its anticipated June 30, 2026 effective date.

This development is neither a repeal nor a permanent delay. Instead, it leaves employers in a familiar position—navigating a period of legal uncertainty while continuing to operate against a rapidly evolving regulatory backdrop. Importantly, even if the Colorado law is ultimately blocked or significantly revised, employers should not view the pause as a signal to deprioritize AI governance. As discussed below, the legal and regulatory risks associated with AI in employment remain very much in force.

Background

With the statute’s effective date approaching, a leading AI developer filed suit in April seeking declaratory and injunctive relief, challenging the constitutionality of several provisions of the Act. Shortly thereafter, the US Department of Justice intervened, arguing that aspects of the law impermissibly compel AI systems to adopt state‑defined viewpoints. The DOJ’s intervention marks the administration’s first litigation effort aimed at limiting state‑level AI regulation.

Continue Reading AI Regulation on Hold in Colorado—But Employer Risk Isn’t

HR departments are proving to be ground zero for enterprise adoption of artificial intelligence. AI can be used to collect and analyze data on applicants, productivity, performance, engagement, and risk to company resources. But with the recent explosion of attention on AI and the avalanche of new AI technologies, these uses are drawing growing scrutiny from regulators and, in some cases, employees. At the same time, organizations are eager to adopt more AI internally to capture productivity and efficiency gains, and in-house attorneys are often under pressure from internal clients to quickly review and sign off on new tools and on new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. European data protection authorities, for example, have markedly increased investigations into how organizations deploy workforce AI tools for monitoring, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. The French data protection authority, for instance, recently imposed a US $34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is evolving rapidly, and in most jurisdictions compliance remains voluntary. Organizations should nonetheless build their AI governance programs around key privacy, data protection, intellectual property, anti-discrimination, and related concepts, and HR tools are a good place to start given their widespread use and the increased scrutiny they attract. Legal departments should consider these five key actions:

Continue Reading The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk