Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.
While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.
California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.
Even in the absence of new legislation expressly targeting algorithmic discrimination, existing federal anti-discrimination laws, such as Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act, as well as similar state and local laws, prohibit discrimination against individuals in the terms or conditions of employment based on their membership in one or more protected classes. These laws apply regardless of whether an employer’s recruiting and hiring tasks are performed by human employees or by AI-powered tools.
Anticipated California Bill on Algorithmic Discrimination
Assemblymember Bauer-Kahan recently signaled that in 2025 she intends to reintroduce legislation to protect against algorithmic discrimination. This will mark her third attempt to regulate automated decision tools, following bills previously introduced in January 2023 and February 2024.
Assembly Bill 2930 (introduced last February) sought to impose requirements on developers and deployers of automated decision systems that make, or play a “substantial” role in making, a “consequential decision.” In relevant part, the bill defined a “consequential decision” as one that has a “legal, material, or similarly significant effect” on an individual’s employment with respect to pay or promotion, hiring or termination, or that otherwise impacts material terms or conditions of employment. AB 2930 passed the California State Assembly and underwent several amendments in the Senate. Ultimately, Bauer-Kahan pulled the legislation before it could reach Governor Newsom’s desk, so it never received a final vote before becoming inactive.
AB 2930 included requirements for developers and deployers of automated decision systems to provide workers or applicants with advance notice of AI use, to regularly perform detailed impact assessments on the systems, and to create robust governance programs to mitigate risks of algorithmic discrimination. It also required deployers to give individuals opportunities to correct any incorrect personal data that the system processed in making or contributing to consequential decisions, and to accommodate opt-out requests.
It defined “algorithmic discrimination” broadly to mean “the condition in which an automated decision system contributes to unlawful discrimination, including differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state or federal law.”
Bauer-Kahan says that the 2025 version of the bill will address a lack of clarity in some of the key definitions of AB 2930. One of the chief concerns with AB 2930 was how exactly HR professionals were supposed to comply with some of its vague requirements. It remains to be seen whether this next attempt will be more practical.
Bauer-Kahan has also stated that the newest version will mainly rely on government enforcement. AB 2930 authorized the California Civil Rights Department to bring civil actions to enforce the act. We are waiting to see how the enforcement mechanism of the next anti-discrimination AI bill differs.
CPPA’s Proposed ADMT and Risk Assessment Regulations
In November 2024, the CPPA published proposed regulations that would, if enacted in their current form, require covered businesses to conduct a risk assessment before using automated decision-making technology (“ADMT”) for a significant decision concerning a California resident or for extensive profiling. “ADMT” would mean any technology (including AI) that processes personal information and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decision-making. A “significant decision” would mean, among other things, a decision that results in access to, or the provision or denial of, financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services. “Extensive profiling” would include profiling California residents in work and educational contexts, in public, or for behavioral advertising.
Under the proposed regulations, businesses conducting risk assessments would be required to weigh the benefits of the proposed activity against its risks, including whether and the extent to which the activity would involve “discrimination upon the basis of protected classes that would violate federal or state antidiscrimination law.” If a covered business determines that the proposed activity would involve unlawful discrimination, the business must refrain from the proposed activity until it can comply with applicable laws. If a business uses ADMT for a significant decision, for extensive profiling, or for certain ADMT training purposes concerning a California resident, the proposed regulations would impose various duties on the business, including giving California residents information about the proposed use and allowing them to opt out of such use, subject to certain exceptions.
Even if a business does not use ADMT, the proposed regulations would require businesses that process “sensitive personal information,” outside of certain defined employment functions and CCPA-exempt activities, to conduct risk assessments and document how they mitigate unlawful discrimination. The CCPA’s definition of “sensitive personal information” overlaps with some protected classifications under human rights laws, such as racial or ethnic origin, religious beliefs, and sexual orientation, and includes other data categories such as precise geolocation and social security numbers.
Notably, in the employment context, these proposed regulations both overlap with, and create additional obligations not found in, AB 2930. For example, both include onerous impact assessment requirements, while the extensive obligations regarding the processing of sensitive personal information appear only in the proposed CCPA regulations.
What Companies Should Do to Prepare
Companies that deploy AI systems in the HR/employment space should not wait for California to enact algorithmic discrimination laws: passage of some form of legislation is highly likely in 2025, and discrimination is independently unlawful under existing law. As a best practice, companies should, in a privileged setting led by the legal function, inventory and audit the AI systems they are developing or deploying, identify the systems that could result in different treatment of individuals based on protected classifications, and evaluate and mitigate the risks of unlawful discrimination. Examples of safeguards that can mitigate the risks of algorithmic discrimination include compiling balanced training datasets, adjusting algorithms to account for detected biases, performing thorough red-teaming and quality assurance testing, maintaining human oversight of decision-making processes, and leaving any final employment-related decisions to a human. Employers should also prepare privileged advisory memoranda to account for and remedy compliance gaps and take action to improve risk assessment and transparency. Baker McKenzie’s cross-functional AI Group will continue to monitor developments in both the legislative and regulatory arenas.
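To make the auditing step above more concrete, the following is a minimal, purely illustrative sketch of one common screening check: comparing selection rates across applicant groups and flagging impact ratios below the familiar four-fifths heuristic. The group labels, data, and function name are hypothetical and not drawn from any statute or proposed regulation; any real audit should be designed and reviewed with counsel under privilege and with appropriate statistical rigor.

```python
# Illustrative sketch: selection rates and impact ratios for an AI screening
# tool's outcomes, using the "four-fifths rule" heuristic as a rough flag.
# Group labels and data are hypothetical; real audits belong in a privileged,
# counsel-led process with proper statistical review.

from collections import Counter

def impact_ratios(records):
    """records: list of (group, selected) tuples, where selected is True/False."""
    totals = Counter(group for group, _ in records)
    selected = Counter(group for group, sel in records if sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    benchmark = max(rates.values())  # highest selection rate across groups
    if benchmark == 0:
        return {g: (0.0, 0.0) for g in rates}  # nobody selected; nothing to compare
    return {g: (rate, rate / benchmark) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes: group A selected 40 of 100, group B selected 25 of 100.
    sample = [("A", True)] * 40 + [("A", False)] * 60 \
           + [("B", True)] * 25 + [("B", False)] * 75
    for group, (rate, ratio) in impact_ratios(sample).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 is only a screening signal, not a legal conclusion; results should feed into the privileged risk assessment and remediation process described above.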