Mediator’s Minute: FCRA in the AI Era – The New Frontier in Employment Class Actions?
- Shireen Wetmore

- Mar 13
- 6 min read

The 90s are back in fashion and the Fair Credit Reporting Act is staging its own comeback as well. In what appears to be a first, plaintiffs in Contra Costa have filed a class action complaint alleging violations of the Fair Credit Reporting Act (“FCRA” or the “Act”) and the California Investigative Consumer Reporting Agencies Act (“ICRAA”) resulting from the use of artificial intelligence in potential employers’ applicant screening processes. The matter, entitled Erin Kistler, et al. v. Eightfold AI Inc., case no. 3:26-cv-01768-LB (2026), was originally filed on January 20, 2026 and removed to federal court on March 2, 2026. As of this writing, it remains pending in the Northern District of California and has yet to receive a response from the defendant. The hook? Plaintiffs allege that the AI company is acting as a consumer reporting agency as defined under both statutes when gathering, analyzing, and reporting data on individual applicants and their suitability for a given position.
A (Potential) New Frontier in AI Class Action Litigation
A lot has been written about the potential for bias when artificial intelligence (“AI”) is used in employment decisions. By now, many are aware that AI may be used in everything from resume screening to interviews to termination decisions. Cases are wending their way through the courts to determine whether and how different AI-enabled tools may discriminate against applicants and employees alike. To date, these cases and various legislatures have focused on anti-discrimination statutes, privacy regulations, and the burgeoning body of AI-specific laws intended to fill the gaps where existing laws lack the framework or context for an AI-enabled world.
Now, a new frontier in AI-related employment litigation may be emerging. In Kistler, et al. v. Eightfold AI Inc., the plaintiffs take a new approach to AI litigation. Plaintiffs allege that AI was used in evaluating applicants for employment purposes in a manner that violated both the FCRA and ICRAA because, they allege, the method through which the defendant gathered and used data on applicants made the company a Consumer Reporting Agency (“CRA”) under those statutes. Plaintiffs further allege that the information gathered and used constituted a “consumer report” under those statutes, triggering a host of long-standing obligations for the CRA. Left unaddressed in the complaint, but certainly hinted at, are the obligations of the companies using the defendant’s services.
Standalone Disclosures, Standing, and Statutory Damages
The FCRA broadly defines a “consumer report” as “any written, oral, or other communication of any information by a consumer reporting agency bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for
(A) credit or insurance to be used primarily for personal, family, or household purposes;
(B) employment purposes; or
(C) any other purpose authorized under section 604 [§ 1681b].” 15 U.S.C. § 1681a(d)(1).
Consumer reporting agencies have several duties under the Act to ensure that consumer reports are being used for proper purposes and to disclose certain information to consumers upon request. For example, CRAs are required to obtain certification from users (i.e., the companies using the reports) that the users have provided notice in a so-called standalone disclosure and obtained written authorization from the consumer (i.e., the applicant for employment) for the procurement of the consumer report. 15 U.S.C. § 1681b(b)(1). There are also requirements governing what information about the data used in preparing the consumer report must be produced, and when, upon request from the consumer. 15 U.S.C. § 1681g. Extensive litigation has produced detailed precedent outlining what constitutes a sufficient “standalone disclosure” and written authorization under these requirements.
Those who remember the 90s may have practiced long enough to also remember the fervor over the matter of Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). In Spokeo, the United States Supreme Court addressed the issue of standing in FCRA matters and the need for plaintiffs to show concrete injury even in the face of Congressionally defined harms, such as the failure to notify consumers that a consumer report may be used for an employment purpose or the failure to provide accurate information in a consumer report. In that case, the plaintiff alleged that inaccurate information had been included in a consumer report about him, including a statement that he held a graduate degree, when he did not. Id. at 1546. The Court was asked to consider the harm of an “informational injury.” Justice Alito, writing for the Court, mused that “not all inaccuracies cause harm or present any material risk of harm. An example that comes readily to mind is an incorrect zip code. It is difficult to imagine how the dissemination of an incorrect zip code, without more, could work any concrete harm.” Id. at 1550. Again, extensive litigation has created an entire body of law exploring the limits of this standing requirement in the FCRA context.
Now, these heavily litigated principles will be applied to AI models. How will that information—the scope of the information used and the algorithms that produced it—be disclosed in compliance with the FCRA or ICRAA? How will consumers be able to analyze the information to be able to dispute inaccurate information, let alone correct those errors and potentially reverse or avoid an adverse action? There are many, many tough questions that parties and courts will face in applying the law to these novel resources.
An important factor in these cases is that the statutes allow for actual or statutory damages, which in many cases can be well in excess of any seemingly de minimis actual harm, along with attorneys’ fees and costs. See 15 U.S.C. §§ 1681n, 1681o; Cal. Civ. Code § 1786.50. These statutes were intentionally designed to incentivize private litigation to ensure compliance. There is plenty of language within the statutes to debate what exactly a CRA must disclose or produce and whether full disclosure may actually be too much in this AI-enabled world. How might other parties’ privacy rights come into play in explaining the conclusions drawn by these models? Time, and lots of litigation, will tell.
Are Companies Next?
Interestingly, the plaintiffs in Kistler, et al. v. Eightfold AI Inc. did not elect to include Doe defendants, as permitted in California, indicating that they do not intend to bring into the case any of the many companies identified in the complaint as using Eightfold AI’s products. The plaintiffs elected to target only the alleged CRA. This leaves the door open for future, separate suits against the companies who utilized Eightfold AI in their applicant screening processes, should the court find for plaintiffs that the information does constitute a consumer report. More broadly, this also suggests a new area in which plaintiffs will seek to identify companies using AI to gather and evaluate data on applicants and potentially bring suit if and when those companies fail to provide the requisite standalone disclosure and other obligatory notices and procedures under applicable statutes like the FCRA and its California corollary, ICRAA.
In My FCRA Era (Again)
Spokeo and its progeny of FCRA cases are near and dear to my heart. Over a decade ago, these cases made up a significant portion of my work portfolio, and I thoroughly enjoyed them. Resolution of FCRA class actions can involve much more than monetary settlements: it provides an opportunity to get creative around employer hiring models and even to facilitate industry-wide change. This is a fascinating area of the law that delves deep into issues of standing, what it means to give a “clear and conspicuous” disclosure, and all the ways in which data collected and disseminated about individuals can both help and hurt employment prospects. Yet it is also a microcosm of the broader attempt by our lawmakers to address big questions: What information bears on a person’s character, reputation, or mode of living? How can these laws protect consumers while imposing clear, actionable obligations on the preparers and users of consumer reports? There are no simple answers to these questions. AI and the tools being generated daily to take advantage of its resources are expanding the type of information that is possible to obtain and the ways in which that ever-growing pool of data can be analyzed. It will be up to litigants and the courts to determine how AI’s contributions will be treated in the context of consumer reports, the FCRA, and similar state laws.
In the earlier, Spokeo-era FCRA cases, consumer reporting agencies were early targets for class action litigation. Over the years, as understanding of the implications of the reports themselves grew, and as more and more CRAs and the companies using their services found themselves on the defending end of lawsuits, CRAs and companies began to tailor their notices, sharpen their practices around adverse actions, and hone their background screening processes to comply with the increasingly exacting requirements of the FCRA and ICRAA. In this AI-enabled era, the FCRA and state law corollaries may once again be critical tools in defining when and how various types of information are used in evaluating individuals for employment purposes.
Shireen Wetmore is a mediator specializing in complex employment matters and can be reached for questions, comment, or booking at www.shireenwetmoremediation.com.
This article is for informational purposes only and does not constitute legal advice.



