
OpenAI, Google DeepMind Employees Warn of AI Risks, Demand Better Whistleblower Protection Policies

The open letter is signed by 13 former and current employees of OpenAI, Google DeepMind, and Anthropic.


Photo Credit: Unsplash/Solen Feyissa

The open letter is endorsed by Yoshua Bengio and Geoffrey Hinton, two of the three 'godfathers' of AI

Highlights
  • The open letter says potential human extinction is one of the AI risks
  • OpenAI, Google DeepMind workers asked for a culture of open criticism
  • Current employees of OpenAI have signed the letter anonymously

OpenAI and Google DeepMind are among the top tech companies at the forefront of building artificial intelligence (AI) systems and capabilities. However, several current and former employees of these organisations have now signed an open letter claiming that there is little to no oversight in building these systems and that not enough attention is paid to the major risks this technology poses. The open letter, endorsed by two of the three 'godfathers' of AI, Geoffrey Hinton and Yoshua Bengio, seeks better whistleblower protection policies from the signatories' employers.

OpenAI, Google DeepMind Employees Demand Right to Warn about AI

The open letter states that it was written by current and former employees at major AI companies who believe in the potential of AI to deliver unprecedented benefits to humanity. It also points to the risks posed by the technology, which include deepening societal inequalities, spreading misinformation and manipulation, and even the loss of control over AI systems, which could lead to human extinction.

The open letter highlights that the self-governance structures implemented by these tech giants are ineffective at ensuring scrutiny of these risks. It also claims that “strong financial incentives” push companies to overlook the potential danger AI systems can cause.

Claiming that AI companies are already aware of AI's capabilities, limitations, and the risk levels of different kinds of harm, the open letter questions their willingness to take corrective measures. “They currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily,” it states.

The open letter makes four demands of the signatories' employers. First, the employees want companies not to enter into or enforce any agreement that prohibits criticism of the company over risk-related concerns. Second, they ask for a verifiably anonymous process through which current and former employees can raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organisation.

Third, the employees urge the organisations to support a culture of open criticism. Finally, the open letter demands that employers not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

A total of 13 current and former employees of OpenAI and Google DeepMind have signed the letter. Aside from the two 'godfathers' of AI, British computer scientist Stuart Russell has also endorsed the letter.

Former OpenAI Employee Speaks on AI Risks

Daniel Kokotajlo, one of the former OpenAI employees who signed the open letter, also made a series of posts on X (formerly known as Twitter) describing his experience at the company and the risks of AI. He claimed that when he resigned from the company, he was asked to sign a nondisparagement clause to prevent him from saying anything critical of the company, and that the company threatened to take away his vested equity if he refused to sign the agreement.

Kokotajlo claimed that the neural networks powering AI systems are growing rapidly as ever-larger datasets are fed to them, and added that there are no adequate measures in place to monitor the risks.

“There is a lot we don't understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all arenas,” he added.

Notably, OpenAI is building Model Spec, a document intended to guide the company in building ethical AI technology, and it recently created a Safety and Security Committee. Kokotajlo applauded these promises in one of his posts.



Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food. More


© Copyright Red Pixels Ventures Limited 2024. All rights reserved.