Workday AI Hiring Class Action: What Employers Must Know

April 15, 2026 · Martin Bowling

The first nationwide AI hiring class action is real

A federal court has cleared the way for a nationwide collective action against Workday, the HR software vendor used by roughly half of the Fortune 500 and by many small and mid-sized employers that rely on its applicant tracking and AI screening tools. The lawsuit, Mobley v. Workday, alleges the company’s AI-powered hiring platform systematically filtered out applicants over the age of 40, in violation of the Age Discrimination in Employment Act (ADEA).

This is not a hypothetical case about what could go wrong with AI in hiring. It is the first major class action in which AI-driven screening is itself the defendant, and the fallout reaches every business that uses an outside vendor to decide which resumes a human ever sees.

If your business uses AI-assisted recruiting — even indirectly, through an ATS or a LinkedIn-style ranking feature — the Mobley v. Workday ruling is a warning shot. Here is what happened, what it means, and what to do before the next round of rulings lands.

What the Mobley v. Workday ruling actually said

The timeline

  • 2023: Derek Mobley, a Black man over 40 who self-identifies as having anxiety and depression, sued Workday after being rejected for more than 100 jobs at companies that used the vendor’s AI screening.
  • May 2025: Judge Rita Lin of the U.S. District Court for the Northern District of California granted preliminary certification under the ADEA, allowing Mobley to notify other applicants aged 40 and up who were rejected through Workday’s system since September 24, 2020.
  • March 2026: The court rejected Workday’s attempt to dismiss the disparate-impact claim, though it trimmed certain California state and individual disability claims, prompting plaintiffs to file an amended complaint. See the HR Dive coverage.
  • March 7, 2026: Deadline for eligible applicants to opt into the collective action.
  • April 2026: The case now proceeds as the first nationwide collective action alleging AI hiring discrimination, with an estimated class of tens of thousands of older applicants, per the Fisher Phillips summary.

The most important piece of the case is not the headline. It is the legal theory the court accepted.

Workday argued it was just a software vendor — the employers made the decisions, so the employers should face any discrimination claims. The court disagreed. In an earlier ruling summarized by Seyfarth Shaw, the judge held that an AI vendor can be treated as an agent of the employer — and therefore directly liable for employment discrimination under federal civil rights law.

That shifts the liability landscape for every employer using third-party AI hiring tools. The vendor is not a shield. Both the vendor and the employer can be on the hook.

Why this matters for small businesses

You are almost certainly using AI in hiring already

If you post a job on LinkedIn, Indeed, or ZipRecruiter, AI is ranking and filtering candidates for you. If you use an applicant tracking system — Workday, Greenhouse, Lever, BambooHR, Paradox, HireVue — AI is scoring candidates against your job description. Many of those tools now include personality assessments, video interview scoring, and predictive models that rank who is “most likely to succeed” in the role.

Most small businesses never audit these systems. They trust the vendor. That is exactly the trust Mobley v. Workday is now testing in court.

Disparate impact does not require intent

Federal employment law prohibits both intentional discrimination and practices that are neutral on their face but disproportionately harm a protected group — that is the “disparate impact” standard. An AI model does not need to know an applicant’s age to discriminate against older workers. It can learn proxies: graduation year, employment gaps, specific software systems in the resume, even writing style.

Once a court finds disparate impact, the employer has to prove the practice is job-related and consistent with business necessity — a tough standard. With AI, the employer often cannot even explain how the model reached a decision, much less defend it as job-related. The Society for Human Resource Management has warned employers that opaque models make this burden nearly impossible to meet without proactive bias testing.
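A common first-pass screen for disparate impact is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: if one group's selection rate is less than 80% of the highest group's rate, the tool's impact deserves scrutiny. Here is a minimal sketch of that arithmetic in Python, using entirely hypothetical pass-through numbers (a real analysis would also need statistical significance testing and legal review):

```python
# Illustrative four-fifths (80%) rule check for an AI resume screen.
# All counts below are hypothetical examples, not data from the case.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

# Hypothetical counts of applicants passed through by the AI screen
under_40 = selection_rate(selected=300, applicants=1000)  # 0.30
over_40 = selection_rate(selected=120, applicants=800)    # 0.15

# Compare the disadvantaged group's rate to the highest group's rate
impact_ratio = over_40 / under_40  # 0.50

print(f"Under-40 selection rate: {under_40:.0%}")
print(f"Over-40 selection rate:  {over_40:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")

if impact_ratio < 0.8:
    print("Below the four-fifths threshold: possible disparate impact.")
```

In this hypothetical, older applicants pass at half the rate of younger ones, well under the 0.8 threshold. The point is not that the math is hard; it is that most small employers never ask their vendor for these numbers.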

The regulatory wave is already building

Mobley is not happening in a vacuum. Several federal and state actions converge on the same point:

  • The EEOC has issued technical guidance making clear that Title VII and the ADEA apply to AI-driven selection procedures.
  • New York City requires employers to audit automated employment decision tools annually under Local Law 144.
  • Illinois, Maryland, Colorado, and California have passed or expanded laws regulating AI in hiring, including notice, consent, and bias audit requirements.
  • Federal enforcement is shifting from guidance to litigation posture — the EEOC has already settled its first AI discrimination case, iTutorGroup, for age-based rejection of older tutors.

For small employers — especially in Appalachian states like West Virginia, Tennessee, and Kentucky that have not yet passed their own AI hiring laws — the pattern is clear. Federal civil rights law already covers this, and state-level rules are tightening fast.

What small employers should do now

You do not need a seven-figure compliance budget to reduce your exposure. You need a documented process. Here is the practical short list.

1. Inventory your AI hiring tools

Write down every system that touches a candidate before a human does: job board algorithms, ATS resume parsers, skills assessments, video interview scoring tools, personality tests, predictive hire scores. If you do not know whether a tool uses AI, ask the vendor in writing.

2. Ask vendors for their bias testing documentation

Any serious vendor should be able to hand you a recent disparate impact analysis broken down by race, sex, age (40+), and disability status. If they cannot — or if they say it is proprietary — treat that as a red flag. The Mobley ruling means their failure to test is now your liability exposure.

3. Keep humans in the loop on every rejection

Fully automated rejections are the highest-risk pattern. Route AI recommendations to a human reviewer who can catch obvious bias signals (an older candidate with strong credentials filtered out, for example) and document the reasoning before the “no” is sent.

4. Document your job criteria

If an AI tool rejects a candidate and later gets challenged, you will need to show the rejection tied to legitimate, documented, job-related criteria — not a black-box score. Write clear requirements, tie them to actual job duties, and keep the paper trail.

5. Monitor outcomes quarterly

Track the demographic makeup of applicants, semi-finalists, and hires. If your AI-assisted pipeline rejects older workers, Black candidates, women, or people with disabilities at materially higher rates than the applicant pool, you have a disparate-impact problem whether you intended one or not.
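A quarterly check like this can be a simple script rather than an enterprise analytics project. The sketch below, with hypothetical stage counts and group labels, compares each group's stage-to-stage pass rate against the best-performing group and flags anything under the 80% adverse-impact threshold:

```python
# Minimal quarterly pipeline monitor (hypothetical data and group labels).
# For each stage transition, compare each group's pass rate to the highest
# group's rate; flag ratios under the four-fifths (0.8) threshold.

# Counts of candidates from each group reaching each stage
pipeline = {
    "applied":  {"under_40": 900, "over_40": 600},
    "screened": {"under_40": 450, "over_40": 180},
    "hired":    {"under_40": 45,  "over_40": 12},
}

def stage_rates(prev: dict, cur: dict) -> dict:
    """Pass rate per group from one stage to the next."""
    return {group: cur[group] / prev[group] for group in cur}

stages = list(pipeline)
flags = []
for prev_stage, cur_stage in zip(stages, stages[1:]):
    rates = stage_rates(pipeline[prev_stage], pipeline[cur_stage])
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        status = "FLAG" if ratio < 0.8 else "ok"
        print(f"{cur_stage:9s} {group:9s} rate={rate:.0%} ratio={ratio:.2f} {status}")
        if status == "FLAG":
            flags.append((cur_stage, group))
```

With these made-up numbers, over-40 candidates are flagged at both the screening and hiring stages. Running something like this each quarter, and keeping the output, is exactly the kind of paper trail that helps if a tool is ever challenged.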

6. Treat AI governance as an HR policy

Most small employers have anti-harassment policies, background check policies, and social media policies. Add one for AI-assisted hiring: which tools are approved, who audits them, how candidates are notified, how appeals work. SCORE and the U.S. Small Business Administration both publish small business HR policy templates that make a good starting point.

Our take

The Mobley v. Workday ruling closes the door on the most convenient excuse in AI hiring: “the vendor handles it.” For small businesses in particular — where HR is often one person wearing four hats — the temptation to outsource screening to a tool that “just works” is enormous. That tool is now a named defendant alongside its customers.

The right response is not to abandon AI in hiring. Used well, AI can surface candidates small employers would otherwise miss, speed up response times, and reduce some forms of human bias. The wrong response is to deploy it without guardrails and hope the vendor’s lawyers have done the work.

If your business runs on lean HR and you want a sanity check on the AI tools already in your stack — or help designing a simple governance policy before the next ruling lands — get in touch. We help small employers across Appalachia think through AI compliance without turning it into a legal science project. If you are rethinking whether AI belongs in your hiring workflow at all, our AI consulting service is a good place to start.
