On 20 January 2026, two job applicants filed a class-action lawsuit in a California state court against Eightfold AI Inc. — one of the world’s most widely deployed AI hiring platforms, used by enterprises across multiple continents to screen and score job candidates.
The allegation wasn’t discrimination, at least not primarily. It was something more structurally significant: that Eightfold is a consumer reporting agency under the Fair Credit Reporting Act, and that by compiling personal data on job applicants (their social media activity, location data, browsing behaviour, device data, LinkedIn profiles) and using that data to assess their “likelihood of success” for employment, it has been running background checks on millions of candidates without their knowledge or consent.
If that argument holds, it doesn’t just create problems for Eightfold. It creates problems for every employer who has been using the tool.
What the FCRA Actually Says
The Fair Credit Reporting Act has governed how background checks work in the US since 1970. Most HR professionals understand its practical requirements: get written consent before running a background check, provide pre-adverse action notice if you plan to reject someone based on the results, give them a copy of the report and a chance to dispute it.
What fewer people think carefully about is what triggers those obligations in the first place — specifically, the definition of a “consumer reporting agency.”
Under the FCRA, a consumer reporting agency is any person that, for monetary fees or on a cooperative nonprofit basis, regularly assembles or evaluates consumer information for the purpose of furnishing “consumer reports” to third parties. A consumer report, in turn, is any written, oral, or other communication that bears on a consumer’s creditworthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living — and is used, or expected to be used, for employment purposes.
Read that definition again with an AI hiring tool in mind.
A platform that collects data from LinkedIn profiles, infers location history, tracks browsing behaviour, analyses professional trajectory, and uses all of it to generate a score predicting how likely a candidate is to succeed in a given role is, on the face of it, assembling consumer information about personal characteristics and mode of living and delivering it for employment purposes. That is, arguably, exactly what the FCRA was designed to regulate.
Why This Case Is Different From Previous AI Hiring Litigation
The dominant narrative around AI in hiring has focused on discrimination — algorithms that disadvantage protected groups due to biased training data. The Mobley v. Workday case, the ACLU challenge to Aon’s hiring tools, the Harper v. Sirius XM lawsuit filed in 2025 — these are all discrimination cases, argued under Title VII, the ADEA, or the ADA.
The Eightfold lawsuit is arguing something different and, in some ways, more dangerous for the industry. It isn’t primarily about whether the algorithm discriminates. It’s about whether the data collection and scoring process itself violates federal consumer protection law — regardless of outcome.
This matters because FCRA violations don’t require proof of discriminatory intent or disparate impact. They require proof that an employer failed to follow procedural obligations: disclosure, authorisation, adverse action notice. Those are binary — either you did them or you didn’t. And the reality is that most employers using AI talent platforms haven’t done any of them, because they never thought they needed to.
The proposed class action seeks to represent all individuals whose data was collected and processed by Eightfold without proper FCRA disclosures. If adoption of AI hiring tools among US employers is as extensive as reported (the World Economic Forum estimated in 2025 that around 88% of companies already use some form of AI in candidate screening), the potential class size is enormous.
The Compliance Gap No One Is Talking About
Here is the uncomfortable practical reality: most AI hiring platforms operate in a legal grey zone that employers have been quietly happy to ignore.
When an employer instructs a traditional background screening company to run a criminal record check or verify employment history, the FCRA framework is well understood. The employer knows they’re running a background check. The candidate is informed and consents. Adverse action procedures are followed if the results are used to decline a hire.
AI talent platforms were not positioned this way. They were sold as recruitment efficiency tools — resume parsers, candidate ranking engines, talent intelligence platforms. The marketing framed them as technology that helps recruiters work faster, not as entities assembling consumer reports. Employers bought the framing.
The Eightfold lawsuit challenges whether that framing is legally accurate.
Eightfold’s platform, according to the complaint, builds profiles by aggregating candidates’ publicly available social media data, professional history from platforms like LinkedIn, location signals, and behavioural data. It uses this to generate a predictive score — the candidate’s “likelihood of success” — which influences whether they appear in searches, whether they’re surfaced to recruiters, and ultimately whether they advance in the process.
Candidates have no visibility into what data is being used. They have no ability to access their own profile. They have no mechanism to correct errors. They were never provided with a stand-alone written disclosure. They never gave written authorisation for the data collection.
Under a standard background check, all of those things are legally required.
What Employers Are Actually Exposed To
If courts determine that AI talent platforms constitute consumer reporting agencies — or that employers using them are required to treat their outputs as consumer reports — the compliance obligations that flow from that are substantial.
Disclosure and authorisation. Before any data is collected or any assessment is generated, the employer must provide the candidate with a stand-alone written disclosure explaining that a consumer report may be obtained for employment purposes. The candidate must give written authorisation. A clause buried in a terms-of-service agreement does not satisfy this requirement.
Accuracy obligations. Consumer reporting agencies have obligations to follow “reasonable procedures to assure maximum possible accuracy” in the information they provide. An AI model trained on biased historical data that generates inaccurate assessments of individuals would have a difficult time meeting this standard.
Adverse action procedures. If an employer decides not to proceed with a candidate based in any part on the AI platform’s output, they are required to provide a pre-adverse action notice, a copy of the consumer report, a description of the candidate’s rights, and a reasonable period for the candidate to dispute the information before the decision is finalised.
Candidate rights. Individuals have the right under the FCRA to know what a consumer reporting agency holds about them and to dispute inaccurate or incomplete information. If AI platforms are consumer reporting agencies, candidates whose profiles contain errors — and given the scale of automated data collection, errors are inevitable — have a right to correction that is currently not being honoured.
The litigation is in its early stages and the claims have not yet been tested at trial. But employers who use AI hiring tools should not be waiting for a court ruling to start asking these questions.
The Practical Audit HR Teams Should Run Now
The first step is understanding what your AI hiring tools actually do: not at the level of the marketing deck, but at the level of the data flows. The questions your legal and HR teams should be able to answer are below, with a rough sketch for tracking the answers after the list:
What data sources does the platform access? Public social media, professional databases, behavioural tracking, inferred attributes? If the platform is pulling information about candidates from sources beyond what they explicitly submitted, you need to understand that clearly.
On what basis are candidates scored? What factors feed into the scoring model? Is the model’s methodology documented and auditable? Can the vendor explain, in terms your legal team can evaluate, how a candidate’s score is generated?
Have candidates been informed and consented? Has your hiring process included a stand-alone disclosure explaining that this tool may be used? Has written authorisation been obtained? If the answer is no, that gap needs to be addressed immediately.
What are your adverse action procedures? If a candidate is not advanced in the process and the AI platform’s output contributed to that decision, are FCRA-compliant adverse action procedures being followed? This includes pre-adverse action notice, provision of the report or profile, and a dispute period.
What does your vendor contract say about liability? If the tool is later found to constitute a consumer reporting agency, who bears responsibility — the vendor, the employer, or both? The answer to this question almost certainly lies in a contract clause that nobody has read carefully enough.
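For teams that prefer to run this audit as a standing record rather than a one-off conversation, the five questions reduce naturally to a per-vendor checklist. Below is a minimal sketch in Python of one way to track the answers; the class, its field names, and the example vendor are illustrative assumptions, not anything drawn from the complaint, the statute, or any vendor’s documentation.

```python
# Illustrative compliance checklist, one record per AI hiring vendor.
# Every name here is a hypothetical stand-in for whatever your legal
# team actually documents; this is a tracking aid, not legal advice.
from dataclasses import dataclass, field


@dataclass
class VendorAudit:
    vendor: str
    # Q1: every data source the platform touches beyond the submitted application
    data_sources: list[str] = field(default_factory=list)
    # Q2: can the vendor explain the scoring model in auditable terms?
    scoring_documented: bool = False
    # Q3: stand-alone disclosure given, and written authorisation on file?
    disclosure_given: bool = False
    authorisation_obtained: bool = False
    # Q4: pre-adverse action notice, report copy, and dispute period in place?
    adverse_action_procedures: bool = False
    # Q5: has someone actually read the liability clause in the vendor contract?
    liability_clause_reviewed: bool = False

    def open_gaps(self) -> list[str]:
        """Return the audit items still unresolved for this vendor."""
        checks = {
            "scoring methodology documented": self.scoring_documented,
            "stand-alone disclosure": self.disclosure_given,
            "written authorisation": self.authorisation_obtained,
            "adverse action procedures": self.adverse_action_procedures,
            "vendor liability clause reviewed": self.liability_clause_reviewed,
        }
        return [item for item, done in checks.items() if not done]


# Hypothetical usage: a vendor whose data reach is wider than the CV.
audit = VendorAudit(
    vendor="ExampleTalentPlatform",
    data_sources=["submitted CV", "public social media", "inferred location"],
)
print(audit.open_gaps())  # each entry is a live gap from the list above
```

Nothing about the structure is sophisticated, and that is the point: each question above is binary in exactly the way FCRA procedural obligations are, so an honest per-vendor record makes the gaps hard to overlook.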
The companies that weather the current wave of AI hiring litigation will not be the ones that used the fewest AI tools. They will be the ones that treated their AI tools with the same compliance rigour they applied to their background screening programmes.
The Bigger Picture
The Eightfold lawsuit is one case, and it may not succeed on every claim. But it represents a broader, accelerating trend: the legal infrastructure around AI in employment is being built, case by case, in real time.
The EEOC has flagged AI as a top enforcement priority. Multiple states have passed or are passing laws requiring bias audits, candidate notifications, and human oversight of automated hiring decisions. New York City’s Local Law 144 has been in effect since 2023. Colorado’s AI Act takes effect June 2026. California is expanding its anti-discrimination framework to cover algorithmic decision-making explicitly.
The question facing HR and compliance teams is not whether regulation of AI hiring tools is coming. It has arrived. The question is whether your current processes are defensible in the legal environment that already exists today — not the one that existed when you first deployed the tool.