Can an HR software vendor be held liable for the alleged discriminatory hiring decisions of its customers? According to one federal court, the answer is yes.
Derek Mobley, a Black man over the age of 40 who suffers from anxiety and depression, alleges that since 2018 he has applied for 80 to 100 positions with companies that use Workday as a screening tool … and has been rejected every single time despite his qualifications.
Mobley claims that Workday's artificial intelligence unlawfully favors applicants outside of protected classes through its reliance on algorithms and inputs influenced by conscious and unconscious biases.
Last week, the federal judge hearing Mobley's claim rejected Workday's effort to dismiss the lawsuit on the basis that it was not Mobley's "employer" and thus the workplace anti-discrimination laws did not cover its actions in this context.
The court, however, held that Workday could be liable for its customers' discriminatory acts because those laws define "employer" to include "any agent of" an employer. Thus, Workday could be liable as an agent by recommending or rejecting candidates to its customers through its AI tools if those decisions were discriminatory.
The court expressed further concern that excluding third-party AI tools from agency liability would undermine the purpose of the workplace anti-discrimination laws. That exclusion would allow employers to escape discrimination liability merely by delegating hiring decisions to a third party. That lack of accountability, the court said, would lead to a "startling result."
In Ohio, this issue would not be open for debate, since our state workplace discrimination law makes "aiding and abetting" discrimination its own independent discriminatory act. The federal statutes, however, lack similar aiding-and-abetting language. This ruling, therefore, is a creative way to attack a novel issue, and it bears further monitoring as courts grapple with adapting 20th-century laws to the 21st-century issues AI now presents.