The named plaintiff, Derek Mobley, is a Black man over the age of 40 who suffers from anxiety and depression. He alleges that since 2018 he has applied for 80 to 100 positions with employers that use Workday as a screening tool and has been denied every time despite his qualifications.
Mobley claims that Workday's artificial intelligence unlawfully favors applicants outside of protected classes through its reliance on algorithms and inputs created by humans with conscious and unconscious biases.
Bloomberg quotes a Workday spokesperson touting the company's commitment to "trustworthy AI," along with its "risk-based review process … to help mitigate any unintended consequences, as well as extensive legal reviews to help ensure compliance with regulations."
Mr. Mobley isn't the only one challenging the validity of AI under our workplace discrimination laws. Earlier this year the EEOC held a public hearing exploring the risks and benefits of using AI in employment decisions. The hearing was part of the EEOC's ongoing AI and Algorithmic Fairness Initiative, which is working to ensure that the use of software, including AI used in hiring and other employment decisions, complies with our workplace discrimination laws. Heck, the EEOC will even be at SXSW, presenting "Is AI the New HR? Protecting Civil Rights at Work."
If the plaintiffs' lawyers are watching, and the government regulators are watching, then you need to know the implications of the AI tools your business uses for hiring, firing, compensation, and other employee-facing decisions. If you blindly trust AI to "do the right thing" by your applicants and employees, to paraphrase Radio Raheem, you're taking a big risk that your company will get KOed by the left hand of litigation.