The Equal Employment Opportunity Commission (EEOC) has released new guidance for employers on the use of artificial intelligence (AI) in employment, this time with a focus on adverse impact under Title VII. On May 18, 2023, the EEOC released “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “guidance”), expanding its Artificial Intelligence and Algorithmic Fairness Initiative and adding to its May 2022 guidance on disability discrimination in connection with the use of AI tools in employment.
The new (limited) guidance
The guidance’s scope is limited to the assessment of whether an employer’s “selection procedures”—the procedures it uses to make employment decisions such as hiring, promotion, and firing—have an adverse impact under Title VII. In the guidance, the EEOC acknowledges that while many employers routinely monitor more traditional decision-making procedures for adverse impact, employers may have questions about whether and how to monitor newer algorithmic decision-making tools, questions the guidance addresses in a Q&A format.
The guidance:
- Alerts employers to examples of software used in the employment process that may utilize the type of “algorithmic decision-making” employers may not yet routinely monitor, including resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; and “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
- Highlights issues typically involved in Title VII disparate impact cases arising from seemingly neutral tests or selection procedures.
- Provides information (via Q&As) to help employers determine whether and how to monitor newer algorithmic decision-making tools, including the following concerns, among others:
- Whether employers can assess their use of an algorithmic decision-making tool for adverse impact in the same way that they assess more traditional selection procedures (the EEOC says yes, but provides only a simple example that does not address all of the complexities that may be at play when sophisticated artificial intelligence tools are used);
- Whether an employer is responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor (the EEOC says in many cases, yes); and
- Whether an employer can adjust an algorithmic decision-making tool or use a different selection device if it discovers in the process of developing the tool that its use would have an adverse impact (the EEOC says yes, and suggests in some cases the employer may be required to take such steps).
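The simple assessment the EEOC describes is a selection-rate comparison of the kind long used under the Uniform Guidelines’ “four-fifths rule” of thumb: compare each group’s selection rate to the highest group’s rate, and treat a ratio below 80% as a first-pass indicator of possible adverse impact (not, by itself, a legal conclusion). A minimal sketch of that arithmetic, with hypothetical group labels and applicant counts:

```python
# Sketch of the four-fifths (80%) rule of thumb for adverse impact --
# the kind of simple selection-rate comparison the EEOC guidance describes.
# Group names and counts below are hypothetical illustrations, not data
# from the guidance, and a low ratio is a screening flag, not a finding.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the selection procedure selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is the conventional first-pass indicator of
    possible adverse impact under the four-fifths rule of thumb.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical outcomes from an algorithmic resume screen:
rates = {
    "group_a": selection_rate(48, 80),  # 0.60 selection rate
    "group_b": selection_rate(12, 40),  # 0.30 selection rate
}

for group, ratio in four_fifths_check(rates).items():
    flag = "below 4/5 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```

Here group_b’s rate (0.30) is only half of group_a’s (0.60), well under the four-fifths threshold, which is the kind of result that would prompt the further monitoring and adjustment the guidance discusses.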
Employer takeaways
- While the EEOC guidance is not binding, the EEOC encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have an adverse impact under Title VII.
- Employers should also consult with their employment counsel to proactively address any questions or concerns regarding the use of algorithmic decision-making tools used in employment decisions, and to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.
- And considering the EEOC’s recent joint statement with the Department of Justice, Federal Trade Commission, and Consumer Financial Protection Bureau on enforcement efforts against discrimination and bias in automated systems, employers should be on the lookout for possible forthcoming guidance from the EEOC and other federal agencies.