SHRM reports that one in four organizations currently uses AI to support HR-related activities, with adoption of the technology expanding rapidly. The compliance risks arising from generative AI use are also intensifying, as a growing number of state and local laws restrict employer use of AI tools in the United States. Meanwhile, substantial regulation affecting multinational employers’ use of AI is emerging in other parts of the world (e.g., the EU AI Act).
One rapidly growing use case is applicant recruiting and screening, a trend likely to continue given recent increases in remote hiring and hybrid work arrangements. AI tools can streamline talent acquisition tasks by automatically sorting, ranking, and eliminating candidates, as well as potentially drawing from a broader and more diverse pool of candidates.
Employers who use AI tools must comply with significant new (and existing) laws that focus on data protection, privacy, information security, wage and hour, and other issues. The focus of this blog, however, is the legislative efforts in the US to protect against algorithmic bias and discrimination in the workplace stemming from the use of AI tools to either replace or augment traditional HR tasks.
IL Becomes the Second State (After CO) to Target Workplace Algorithmic Discrimination
On August 9, 2024, Gov. Pritzker signed H.B. 3773, making it unlawful for employers to use AI that has the effect of discriminating against employees on the basis of protected class in recruitment, hiring, promotion, discipline, termination and other terms, privileges or conditions of employment. The law, effective January 1, 2026, also prohibits employers from using ZIP codes as a stand-in or proxy for protected classes.
Like Colorado, Illinois’ new law also contains a notice requirement: employers must notify applicants and employees when using AI with respect to “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.”
Unlike Colorado’s and New York City’s anti-discrimination AI laws, Illinois’ amendment to its Human Rights Act does not require bias or impact assessments. Another Illinois bill, H.B. 5116, would require employers who use automated decision-making tools to conduct annual impact assessments to identify potential adverse impacts and outline safeguards put in place to address any predictable risks. But that bill has not advanced toward enactment.
Surveying the US Landscape
Employers using AI in the workplace must be mindful of both existing employment-related laws and new guidance and legislation – from the Biden administration to city governments – addressing concerns about potential bias and discrimination.
Existing Anti-Discrimination Laws
Existing federal anti-discrimination laws like Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, and similar state and local laws, prohibit discrimination against individuals in the terms or conditions of employment based on their membership in one or more protected classes. These existing employment laws apply regardless of whether an employer’s recruiting and hiring tasks are performed by human employees or by AI-powered tools.
2023 Guidance from the EEOC on Using AI Tools in HR Decision-Making
Although no comprehensive federal legislation has passed on this topic, several federal agencies have weighed in. For instance, in May 2023, the federal Equal Employment Opportunity Commission (EEOC) issued technical assistance addressing AI and algorithmic decision-making tools and the potential for those tools to result in illegal discrimination under Title VII (see EEOC Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964). The guidance focuses on how use of AI tools can violate Title VII under a disparate impact analysis.
While not binding, the EEOC guidance signals that the agency is on alert for potential discrimination issues based on an employer’s use of AI. It encourages employers to do a self-audit of their employment practices (including the use of AI in recruiting) to look for adverse impacts. The release of the guidance coincided with a joint statement from the Department of Justice, the Federal Trade Commission, and the Consumer Financial Protection Bureau stating each agency’s view that its respective enforcement authority applies to AI systems.
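For illustration, here is a minimal Python sketch of the “four-fifths rule” of thumb the EEOC has long used to screen for adverse impact: a group whose selection rate falls below 80% of the highest group’s rate warrants closer review. The group names and applicant counts below are hypothetical; a real self-audit should be designed with counsel.

```python
# Illustrative sketch of the EEOC "four-fifths rule" of thumb for adverse
# impact. Group names and applicant counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 may indicate
    adverse impact and warrants closer review.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes from an AI screening tool
rates = {
    "Group A": selection_rate(48, 80),  # 0.60
    "Group B": selection_rate(12, 30),  # 0.40
}

ratios = four_fifths_check(rates)
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here, Group B’s selection rate is only two-thirds of Group A’s, which falls below the 0.8 threshold and would flag the screening practice for further scrutiny.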
2024 Joint Statement on Fairness, Justice, and Equality in AI Systems
In April 2024, ten federal agencies, including the EEOC and the DOL, issued a Joint Statement on the Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems. The statement recognizes the potential for discrimination, bias, and other harmful outcomes from the use of AI and clarifies that the agencies’ respective enforcement authorities apply to AI systems just as they do to other practices. The agencies “reiterate [their] resolve to monitor the development and use of automated systems and promote responsible innovation…[and] pledge to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
State and Local Regulation Fills the Void
We are seeing more fruitful attempts at regulation at the state and local level. Here are several key city and state employment laws to know (in order of effective date):
Illinois Artificial Intelligence Video Interview Act (effective January 1, 2020):
This “first-of-its-kind” law regulates employer use of AI to analyze video interviews. For example, before conducting an AI-based video interview, employers must:
- Notify the applicant that AI may be used to analyze the video interview and assess the applicant’s fitness for the position;
- Inform the applicant how the AI works and what general characteristics it uses to evaluate applicants; and
- Get the applicant’s consent to be evaluated by the AI program.
An amendment to the law (effective January 1, 2022) requires employers who rely exclusively on AI analyses of video interviews to determine which applicants to select for in-person interviews to collect and report the race and ethnicity of applicants who:
- Are offered an in-person interview;
- Are not offered an in-person interview; and
- Are hired.
Employers must report that demographic data to the Department of Commerce and Economic Opportunity annually by December 31.
Maryland Use of Facial Recognition Technology (effective October 1, 2020):
H.B. 1202 prohibits employers from using facial recognition technology during pre-employment job interviews without the applicant’s consent. To use facial recognition services when interviewing applicants, employers must obtain a written consent and waiver that states the applicant’s name, the date of the interview, that the applicant consents to the use of facial recognition during the interview, and that the applicant has read the waiver.
New York City Automated Employment Decision Tools Law (enforcement began July 5, 2023):
NYC’s AEDT law imposes strict requirements on employers that use automated employment decision tools to conduct or assist with hiring or promotion decisions in NYC. The law prohibits the use of automated decision tools (as defined) unless:
- The tool has been the subject of an independent bias audit (as defined) not more than one year before using the tool; and
- The employer or employment agency makes publicly available on its website a summary of the most recent bias audit results and the tool’s distribution date before using it.
Among other things, employers using an AEDT also must, at least ten business days before using the tool, notify any employee or candidate residing in NYC who has applied for employment of the following:
- That the employer or employment agency is using an AEDT to assess or evaluate the individual; and
- The job qualifications and characteristics that the tool uses to assess the candidate or employee.
Practice pointer: the threshold question for employers with respect to Local Law 144 is whether they are using an AEDT to make a covered employment decision such that they are subject to the law’s audit and notice requirements. This requires analyzing things like how the tool generates its output (e.g., is it derived from machine learning or another computational process?) and how the output affects a covered employment decision (e.g., does the output “substantially assist or replace discretionary decision making”?). Our team helps companies determine whether their tools may be covered by Local Law 144.
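To make the audit math concrete, below is a simplified sketch of the impact-ratio calculation contemplated by the published DCWP rules for AEDTs that output a score, where (as we understand the rules) the “scoring rate” is the rate at which individuals in a category score above the full sample’s median. The categories and scores here are hypothetical, and a compliant bias audit must be performed by an independent auditor.

```python
# Simplified sketch of the scoring-rate impact ratio used in NYC Local Law
# 144 bias audits for AEDTs that output a score. Categories and scores are
# hypothetical illustrations only.
from statistics import median

def scoring_rates(scores_by_category: dict[str, list[float]]) -> dict[str, float]:
    """Rate at which each category's individuals score above the sample median."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    cutoff = median(all_scores)
    return {
        cat: sum(1 for s in scores if s > cutoff) / len(scores)
        for cat, scores in scores_by_category.items()
    }

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each category's scoring rate relative to the highest-rated category."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical AEDT scores grouped by demographic category
scores = {
    "Category 1": [88, 75, 91, 69, 83],
    "Category 2": [72, 64, 80, 58, 70],
}
print(impact_ratios(scoring_rates(scores)))
```

In this hypothetical, four of five Category 1 candidates but only one of five Category 2 candidates score above the sample median, producing a large disparity in impact ratios of the kind an audit summary would have to disclose.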
Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (effective February 1, 2026):
In May 2024, Colorado became the first state to pass comprehensive AI legislation requiring developers and deployers of so-called “high-risk” AI systems “to use reasonable care to avoid algorithmic discrimination in the high-risk system.” The law creates a rebuttable presumption that a developer or deployer used reasonable care if they meet specific compliance obligations detailed in the Act.
The Colorado AI Act is similar to the EU AI Act, for example, in applying a risk-based approach to regulating AI. However, there are also several differences, such as the Colorado AI Act’s more limited territorial scope and more extensive requirements for deployers of high-risk AI systems. Further detail on Colorado’s Senate Bill 24-205 can be found here.
What’s Next Around the US (and the World!)
At least ten other states are currently considering legislation regulating the use of AI in HR decision-making, including California, Connecticut, Hawaii, Massachusetts, New Jersey, New York, Oklahoma, Rhode Island, Vermont, and Washington. We are monitoring closely and will report updates here.
As referenced above, the EU AI Act was published on July 12, 2024, and it will have a significant impact on employers that use, or plan to use, AI systems in their operations, recruitment, performance evaluation, talent management, and workforce monitoring.
A key point about the global reach of the EU AI Act: the new rules can apply to businesses outside the EU, for example where they place AI on the market or put it into service in the EU, or where the AI’s output is used in the EU. Employers that use AI systems the law considers “high risk” must meet strict obligations related to transparency, monitoring, training, and reporting, and the Act’s ban on AI systems considered harmful or discriminatory takes effect February 2, 2025. For more key takeaways for HR on the EU AI Act, click here.
The AI legal landscape in Asia Pacific is also evolving significantly. To review Baker McKenzie’s comparison of the AI regulatory approach across 12 APAC jurisdictions, click here.
Best Practices for US Employers
If your organization is deploying AI tools to augment HR decision-making, we recommend consulting with your Baker McKenzie attorney. In addition, here are several action items to prioritize:
- Review existing AI governance to confirm it conforms to a standardized risk management framework to identify and mitigate the risks of algorithmic discrimination. This should be an iterative process that is regularly reviewed and updated.
- Always keep a human in the loop. Be sure to train HR staff and managers on the proper use of AI when it comes to making hiring or employment-related decisions.
- Implement ongoing monitoring. Establish processes for detecting and mitigating algorithmic bias arising from use of AI systems.
- Prepare documentation as may be required by applicable law.
- Vet your AI vendors carefully. Work with counsel on third-party vendor risk management, including careful consideration of AI-specific risks to address in your vendor contracts.