Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy, and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze applicant data, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of attention on AI and the avalanche of new AI technologies, the use of AI is garnering more attention and scrutiny from regulators and, in some cases, employees. At the same time, organizations are anxious to adopt more AI internally to capitalize on productivity and efficiency gains, and in-house attorneys are often under pressure from internal clients to quickly review and sign off on new tools and on new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations are deploying workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection. For example, the French data protection authority recently imposed a USD 34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination and other concepts – and a good place to start is with these HR tools given their widespread use and the increased scrutiny. Legal Departments should consider these five key actions.

Just after the fireworks’ finale, New York City’s Department of Consumer and Worker Protection will begin enforcing its new ordinance regulating the use of automation and artificial intelligence in employment decisions. The DCWP recently issued a Notice of Adoption of Final Rule establishing that enforcement efforts will begin July 5, 2023.

Here are three reasons this matters

  1. The new law requires time-sensitive, significant actions (read: audits, notices and public reporting) from employers using automated employment decision tools to avoid civil penalties;
  2. Company compliance will require a cross-functional response immediately, so it’s time to get your ducks in a row; and
  3. Since the City’s law is (mostly) first-of-its-kind, it is likely a harbinger of things to come for employers across the country and could be used as a framework by other cities and states.

The law in a nutshell

Local Law 144 prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates. Violations of the provisions of the law are subject to a civil penalty.