Consider this scenario. "You're a delivery driver and your employer's tracking algorithm determines you're not performing up to its standards — and then sends you an email to let you know you've been fired without any warning or opportunity to speak to a human being." According to Senator Bob Casey, it is this example, among others, that prompted him to draft the "No Robot Bosses Act."
If enacted, the bill would add protections for job applicants and employees related to automated decision systems and would require employers to disclose when and how those systems are being used.
Specifically, this bill would:
🤖 Prohibit employers from relying exclusively on AI in employment-related decisions.
🤖 Require pre-use testing and validation of AI for discrimination and biases.
🤖 Require employers to train individuals or entities on the proper operation of AI.
🤖 Mandate employers to provide independent, human oversight of AI outputs before using them to aid an employment-related decision.
🤖 Require disclosures to employees on the use of AI, its inputs and outputs, and employees' related rights.
🤖 Establish the Technology and Worker Protection Division at the Department of Labor to regulate the use of workplace automated decision systems.
"Right now, there is nothing stopping a corporation from using artificial intelligence to hire, manage, or even fire workers without the involvement of a human being," said Senator Casey in discussing this legislation. "As robot bosses become more prevalent in the workplace, we have an obligation to protect working families from the dangers of employers misusing and abusing these novel technologies."
He's not wrong. AI poses an existential threat to the workplace. Existing anti-discrimination laws already protect employees from the implicit biases baked into the data upon which AI systems rely to aid hiring and firing decisions. Nothing, however, regulates AI itself as a decision-making tool or requires a human presence in the process. Thus, even to this management-side employment lawyer, the No Robot Bosses Act makes a lot of practical sense.
It's also a baby step toward the regulation we need to protect our workplaces from emerging AI technologies. Generative AI (such as ChatGPT) is in its infancy, but it's maturing quickly. I do not for a minute trust the tech companies to develop this technology responsibly; there's too much money to be made. We've all seen the movies (hello, HAL and Skynet) and know how they end. I don't want to live those endings, and yet I have a legitimate fear that that's exactly what will happen if we don't legislate this technology before it's too late.