Can you hire faster without increasing our legal exposure?


Let's start with an example.

Derek Mobley applied for more than one hundred jobs at companies using Workday’s AI-powered hiring software. He was rejected every time.

In his lawsuit — Mobley v. Workday, filed in a California federal court — he argues that this wasn’t bad luck. He argues that Workday’s algorithmic screening system systematically deprioritised his applications because he is Black, over the age of 40, and has a disability. He claims the system relied on historical hiring data that encoded existing biases, and that it amplified those biases at scale, filtering out candidates like him before a human recruiter ever saw their name.

In January 2024, Workday moved to have the case dismissed. The court declined and allowed the case to proceed to the fact-finding stage. In early 2026, the case is heading toward significant rulings that employment law practitioners are describing as potentially landmark.

The reason it matters for every employer using AI in hiring has nothing to do with Workday specifically. It lies in a sentence from the court’s reasoning: employment bias laws can plausibly reach the software vendor.

That was new. And it raised a question that cuts both ways: if the vendor can be held liable, does that reduce the employer's exposure? The answer, from the EEOC and from the weight of ongoing litigation, is no.


The Liability That Doesn’t Transfer

The dominant assumption in enterprise HR technology procurement has been that using a third-party tool outsources some portion of the legal risk. The vendor built it, trained it, maintains it. If the tool does something discriminatory, the argument goes, the vendor is responsible.

This assumption is legally incorrect and has been for years. The EEOC made its position clear in guidance published in 2023: employers remain fully responsible under Title VII of the Civil Rights Act when AI-driven tools produce discriminatory outcomes, regardless of who designed the tool or how it was marketed. If an algorithm creates a disparate impact on a protected group — women, racial minorities, candidates over 40, people with disabilities — the employer who deployed it bears liability.

The legal concept here is disparate impact: a neutral-seeming policy or practice that disproportionately disadvantages a protected group. Disparate impact doesn’t require intent. An algorithm trained on historical hiring data from an organisation that, in the past, predominantly hired white men under 35 will, absent correction, tend to favour profiles that resemble those of previous successful hires. It will do this without anyone instructing it to discriminate. It will do it at speed and at scale.

The employer who buys the tool inherits that outcome.


The Growing Case File

Mobley v. Workday is the most prominent case in the current wave, but it is not alone. A review of active litigation and enforcement activity as of early 2026 makes the trend unmistakable.

The ACLU has filed a complaint with the Federal Trade Commission against Aon, challenging three of its AI hiring tools (ADEPT-15, vidAssess-AI, and gridChallenge) and alleging that they discriminate against people with disabilities and certain racial groups. The ACLU also alleges that Aon's marketing of these tools as “bias-free” was deceptive.

Harper v. Sirius XM Radio, filed in August 2025, alleges that Sirius XM’s AI hiring tool discriminated against a Black applicant by relying on historical data that perpetuated the company’s past biases. The plaintiff pursues both disparate treatment and disparate impact theories under Title VII and Section 1981.

The Eightfold AI class action, filed in January 2026, takes a different but related angle, arguing that the platform's data collection practices violate the Fair Credit Reporting Act regardless of any discriminatory outcome.

Law360 named AI hiring bias as one of its top employment law stories to watch in 2026. The EEOC has made algorithmic fairness an enforcement priority under its current leadership.

The direction of travel is clear. The legal frameworks are catching up to the technology, and they are doing so in ways that create direct exposure for employers — not just for the vendors selling the tools.


What “Disparate Impact” Means in Practice

Understanding disparate impact is essential to understanding your exposure.

Disparate impact occurs when an employment practice, neutral on its face, has a statistically significant adverse effect on members of a protected class, and the employer cannot demonstrate that the practice is job-related and consistent with business necessity. The classic example is a physical test that screens out women at higher rates than men for a role where the physical requirement isn’t actually necessary. The test isn’t designed to discriminate. The discrimination is the outcome.
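
To make the measurement concrete, here is a minimal sketch in Python of the “four-fifths rule” comparison that the EEOC's Uniform Guidelines treat as a rule of thumb for flagging adverse impact. The group labels and counts below are hypothetical, and a ratio under 0.80 is a trigger for scrutiny, not a legal conclusion.

```python
# Illustrative sketch only. The "four-fifths rule" is a common first screen
# for adverse impact; all groups and numbers here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the screen."""
    return selected / applicants

# Hypothetical outcomes from an AI resume screen
rate_group_a = selection_rate(selected=48, applicants=400)   # 12.0%
rate_group_b = selection_rate(selected=90, applicants=500)   # 18.0%

# Impact ratio: the lower group's rate divided by the highest group's rate
impact_ratio = rate_group_a / rate_group_b                    # ~0.67

# Below 0.80 is the conventional trigger for further review;
# it is evidence to investigate, not a finding of discrimination.
if impact_ratio < 0.80:
    print(f"Potential adverse impact: ratio = {impact_ratio:.2f}")
```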

AI hiring tools create disparate impact risk through several mechanisms:

Biased training data. If a model is trained on historical hiring data from an organisation where the successful hire population was not demographically representative, the model will learn to favour profiles that resemble past successful hires. Historical bias becomes algorithmic policy.

Proxy discrimination. Models can learn to use seemingly neutral inputs — zip code, educational institution, gap years in employment history, language patterns in applications — as proxies for protected characteristics. A model that penalises employment gaps, for example, may disproportionately disadvantage women who took career breaks for caregiving. A model that penalises certain university names may create racial disparities.

Inaccessible evaluation formats. Automated video interview tools that use facial expression and vocal analysis may systematically disadvantage candidates with certain disabilities or neurodivergent traits. Without accommodation mechanisms, these tools may violate the ADA.

Compounding effects. Most AI hiring pipelines involve multiple tools — an ATS for initial screening, an AI scorer for resume ranking, a predictive tool for likelihood-to-succeed assessment, perhaps a video interview analyser. Each introduces its own bias risk, and the effects compound across the pipeline.
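
A rough back-of-the-envelope sketch makes the compounding point visible. The stage names and ratios below are invented, but the arithmetic holds: stage-level disparities that each look tolerable on their own can multiply into an end-to-end gap that would fail the four-fifths screen.

```python
# Hypothetical sketch of how per-stage disparities compound across a
# multi-tool hiring pipeline. Stage names and ratios are invented.

stage_impact_ratios = {
    "ATS keyword screen": 0.92,
    "AI resume ranker": 0.90,
    "Predictive success score": 0.95,
    "Video interview analysis": 0.88,
}

cumulative = 1.0
for stage, ratio in stage_impact_ratios.items():
    cumulative *= ratio
    print(f"{stage}: stage ratio {ratio:.2f}, cumulative {cumulative:.2f}")

# Each stage clears 0.80 on its own, but the end-to-end ratio is
# roughly 0.69 -- below the four-fifths threshold.
```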


What the Law Now Requires

The regulatory framework for AI in employment decisions is a patchwork, but it is growing more demanding in every direction.

Federal level. The EEOC’s guidance on AI and algorithmic fairness establishes that employers must evaluate automated tools for adverse impact and cannot rely on vendor claims of bias-mitigation as a substitute for their own assessment. The Fair Credit Reporting Act, as the Eightfold case argues, may apply to AI tools that assemble candidate data for employment purposes. Where AI tools produce adverse action, FCRA adverse action procedures may be required.

New York City Local Law 144. Effective since July 2023, NYC requires employers and employment agencies that use automated employment decision tools to commission an annual independent bias audit, publish the results, and notify candidates that such a tool is being used. The law applies to hiring and promotion decisions for roles located in New York City. It is currently the most prescriptive AI hiring regulation in the US.

Colorado AI Act. The Colorado Artificial Intelligence Act, with full implementation delayed until 30 June 2026, classifies employment-related AI systems as “high risk” and requires developers and deployers to conduct impact assessments, implement risk management programmes, and notify applicants when AI is used in consequential decisions about them.

California. California’s Civil Rights Council regulations extend existing anti-discrimination law explicitly to AI systems. Employers must avoid deploying AI that screens out applicants based on protected characteristics, and must maintain records of automated decision-making data for four years.

Illinois. Illinois bans employers from using AI in ways that discriminate against job candidates and requires notification when AI is involved in hiring decisions.

The regulatory picture is moving in one direction: more disclosure, more auditability, more accountability, and the explicit rejection of the idea that an employer can deploy an algorithmic tool and remain ignorant of its outcomes.


Building a Defensible Process: The Practical Framework

For HR and compliance teams, the question is not whether to use technology in hiring — it is how to use it in a way that is legally defensible and genuinely fair. Here is the framework that matters.

Conduct a pre-deployment vendor due diligence audit. Before deploying any AI hiring tool, demand from the vendor a detailed explanation of what data the model uses, how it was trained, what bias-testing it has undergone, and what disparate impact analysis is available. Ask specifically for a demographic breakdown of outcomes in the vendor's own testing, not just claims of “bias mitigation.”

Run your own adverse impact analysis. Don't rely on the vendor's self-reporting. Once the tool is in use, track outcomes by demographic group: are candidates from particular racial groups or age brackets, or candidates with certain employment histories, advancing at lower rates than others? If the data shows a pattern, you have an obligation to investigate and address it (a minimal monitoring sketch follows this framework).

Build human review into consequential decisions. Every jurisdiction moving toward AI hiring regulation, and the EEOC’s own guidance, emphasises the importance of human oversight at key decision points. Automated tools should inform human decisions — they should not substitute for them entirely. Document that human review is occurring.

Establish an adverse action procedure that covers AI outputs. If a candidate’s application is not advanced and an AI tool’s assessment contributed to that outcome, your adverse action procedures should reflect that. In jurisdictions where FCRA applies, this means pre-adverse action notice, provision of the relevant information, and a dispute period.

Review and update vendor contracts. Your contract with an AI hiring tool vendor should address: what data the vendor collects and how it is handled; whether the vendor indemnifies the employer for claims arising from the tool’s outputs; what audit rights the employer has over the model; and the vendor’s obligations to maintain and improve bias testing over time. Most current contracts are inadequate on all of these points.

Stay ahead of state law deadlines. If you have employees or candidates in New York City, Colorado, California, or Illinois, specific legal obligations already exist or are coming into effect this year. A compliance calendar that tracks these deadlines is not optional infrastructure — it is a basic requirement.
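
As a starting point for the adverse impact analysis described above, here is a minimal monitoring sketch, assuming you can export advancement counts by self-reported demographic group from your ATS. The group labels, counts, and thresholds are hypothetical; a flagged result is a signal to investigate with counsel, not a legal finding.

```python
# Minimal monitoring sketch (assumed data layout): periodically compare
# advancement rates by self-reported demographic group. All figures are
# hypothetical; this is not legal analysis.
from math import erfc, sqrt

# group -> (candidates advanced past the AI screen, total applicants)
outcomes = {
    "Group A": (120, 800),
    "Group B": (54, 450),
    "Group C": (30, 300),
}

rates = {g: k / n for g, (k, n) in outcomes.items()}
best_group = max(rates, key=rates.get)

def two_prop_p_value(k1, n1, k2, n2):
    """Two-sided z-test for a difference in two proportions."""
    p_pool = (k1 + k2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (k1 / n1 - k2 / n2) / se
    return erfc(abs(z) / sqrt(2))

for group, (k, n) in outcomes.items():
    if group == best_group:
        continue
    ratio = rates[group] / rates[best_group]
    p = two_prop_p_value(k, n, *outcomes[best_group])
    flag = "REVIEW" if ratio < 0.80 and p < 0.05 else "ok"
    print(f"{group}: rate {rates[group]:.1%}, ratio {ratio:.2f}, p={p:.3f} [{flag}]")
```

The same comparison can be run per pipeline stage and per job family, which is typically where the compounding disparities described earlier first become visible.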


The Question Behind the Question

When companies ask whether they should use AI in hiring, they are often really asking: can we hire faster without increasing our legal exposure?

The honest answer in 2026 is: yes, but only if you treat the tool with the same compliance rigour you apply to every other component of your screening programme. The efficiency gains are real. The bias risks are real. The legal exposure is real. These things coexist.

The employers who are going to find themselves on the wrong side of litigation are not, in most cases, the ones who acted with discriminatory intent. They are the ones who bought a tool, trusted the vendor’s claims about fairness, never looked at their own demographic outcome data, and assumed that because the algorithm decided, they were somehow insulated from the consequences.

The algorithm is not a liability shield. It is a liability.

Understanding that clearly — and building your hiring process accordingly — is the work of 2026.
