California lawmakers and regulatory bodies are taking significant steps to regulate the use of artificial intelligence (AI) in making consequential decisions, such as employment decisions, to prevent algorithmic discrimination. The California Civil Rights Council has proposed amendments to the Fair Employment and Housing Act (FEHA), specifically targeting employment discrimination due to automated decision systems. Concurrently, the California Legislature continues to debate AB 2930, a broad measure addressing the use of AI across various sectors. These efforts aim to ensure that technological advancements do not perpetuate existing biases or create new forms of discrimination.
California Civil Rights Council Proposed Rules
The California Civil Rights Council has proposed amendments to the FEHA in response to growing concerns about algorithmic bias in employment practices. These amendments aim to modernize employment practices and align with broader efforts, such as the White House’s Blueprint for an AI Bill of Rights and the EEOC’s guidelines on algorithmic fairness.
Definition and Scope of AI
Under the proposed amendments, an “automated decision system” is defined as a computational process that screens, evaluates, categorizes, recommends, makes, or facilitates decisions impacting applicants or employees. This includes systems using machine learning, algorithms, statistics, or other data processing or AI techniques. The amendments cover activities such as computer-based tests, targeted job advertisements, resume screening, and online interview analysis, among others.
Who Is Affected?
The proposed rules apply to organizations that regularly pay five or more individuals for work or services, including employers’ agents and employment agencies. This broad definition extends coverage, and accountability for discriminatory practices arising from the use of automated decision systems, to a wide range of entities.
Employer Impact
Employers using automated decision systems must ensure these systems do not result in adverse impacts or disparate treatment based on characteristics protected under FEHA. Employers may be held liable for discrimination resulting from these systems. However, they can defend their use by demonstrating that the selection criteria were job-related and consistent with business necessity, and that no less discriminatory alternatives were available. Employers must also conduct anti-bias testing and retain relevant records for at least four years.
Consideration of Criminal History
The amendments clarify the role of automated decision systems in considering an applicant’s criminal history. Employers must comply with the same rules as human-based inquiries, including not assessing criminal history until after a conditional job offer has been extended and providing applicants with the generated reports and assessment criteria.
Record-Keeping Obligations
The proposed rules require retaining records related to the training, operation, and outputs of automated decision systems for at least four years. This includes data used by third parties providing such systems, ensuring transparency and accountability throughout the employment process.
Public Hearing
On July 18, the California Civil Rights Council held a public hearing at the University of California Berkeley School of Law to discuss the proposed amendments to the Fair Employment and Housing Act regarding automated decision systems. During the hearing, testimony urged the Council to refine the definitions and scope of the proposed rules, particularly concerning employment agencies, the ambiguous concept of “screening,” and the distinction between simple automation that drives processes and automation that makes decisions. The Council assured attendees that all written and oral testimonies would be carefully considered as they revise the proposed rules. Additional information and updates will be posted on the Council’s website.
California AB 2930
California AB 2930 seeks to regulate the use of AI across various industries to combat “algorithmic discrimination.” The bill defines “algorithmic discrimination” as unjustified differential treatment or impacts disfavoring individuals based on protected characteristics.
Scope and Impact Assessments
AB 2930 targets “automated decision tools” that make “consequential decisions” impacting areas such as employment, education, housing, healthcare, and financial services. Beginning January 1, 2026, employers and developers must perform annual impact assessments analyzing potential adverse impacts and must implement safeguards to address risks of algorithmic discrimination.
Notice Requirements
Employers using automated decision tools must notify individuals subject to consequential decisions, providing a statement of the tool’s purpose, contact information, and a plain-language description of the tool. If a decision is based solely on the tool’s output, the employer must accommodate a request for an alternative selection process, if feasible.
Governance Programs
Employers are required to establish governance programs to address the risks of algorithmic discrimination. This includes designating responsible employees, implementing safeguards, conducting annual reviews, and maintaining impact assessment results for at least two years. Smaller employers with fewer than 25 employees are exempt unless their tools affect more than 999 people annually.
Policy Disclosure Requirements
Employers and developers must publicly disclose policies summarizing the types of automated decision tools used and how they manage risks of algorithmic discrimination.
Civil Liability
Individuals can bring civil actions against employers for violations of AB 2930, potentially recovering compensatory damages, declaratory relief, and attorneys’ fees. Public attorneys may also bring civil actions for violations.
Legislative History
California AB 2930 was introduced in February 2024. The bill successfully passed the Assembly in May and advanced through the Senate’s Judiciary Committee in July, subsequently being sent to the Appropriations Committee. The Legislature is currently on Summer Recess, scheduled to reconvene on August 5. Lawmakers have until August 31, the last day for each house to pass bills, to finalize AB 2930.
Parting Thoughts
California’s regulatory approach, through the California Civil Rights Council’s proposed rules and AB 2930, aims to prevent algorithmic discrimination in employment and other consequential decision-making areas. As both measures remain viable, employers should stay informed and assess their readiness to comply with these evolving regulations, particularly given the broad definitions of the tools that may be regarded as AI.