AI workers demand stronger whistleblower protections in open letter

A group of current and former employees from leading AI companies including OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to either sign an aggressive non-disparagement agreement or risk losing their vested equity in the company. After the report, OpenAI CEO Sam Altman said that he had been “genuinely embarrassed” by the provision and claimed it had been removed from recent exit documentation, though it was unclear whether it remained in force for some employees. After this story was published, an OpenAI spokesperson told Engadget that the company had removed the non-disparagement clause from its standard departure paperwork and released all former employees from their non-disparagement agreements.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders, and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter, which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, expresses grave concerns over the lack of effective government oversight for AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

In a statement shared with Engadget, an OpenAI spokesperson said: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society, and other communities around the world.” They added: “This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company.”

Google and Anthropic did not respond to requests for comment from Engadget.

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Fostering a culture of open criticism

  • Avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI’s practices, including the disbandment of its “superalignment” safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company’s prioritization of “shiny products” over safety.

Update, June 5, 2024, 11:51AM ET: This story has been updated to include statements from OpenAI.
