OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter

An AI-generated image of “AI taking over the world.” Credit: Stable Diffusion

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life’s work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of “a broad spectrum of important and urgent risks from AI,” reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.

The statement comes as Altman travels the globe, taking meetings with heads of state about AI and its potential dangers. Earlier in May, Altman argued for regulation of the AI industry before the US Senate.

Given its short length, the CAIS open letter is notable for what it doesn’t include. For example, it does not specify exactly what it means by “AI,” a term that can apply to anything from ghost movements in Pac-Man to language models that can write sonnets in the style of a 1940s wise-guy gangster. Nor does the letter suggest how the risk of extinction might be mitigated, only that doing so should be a “global priority.”

However, in a related press release, CAIS says it wants to “put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” and it likens its warning about AI to J. Robert Oppenheimer’s warnings about the potential effects of the atomic bomb.

AI ethics experts are not amused

An AI-generated image of a globe that has stopped spinning. Credit: Stable Diffusion

This isn’t the first open letter about hypothetical, world-ending AI dangers we’ve seen this year. In March, the Future of Life Institute released a more detailed statement, signed by Elon Musk among others, that advocated for a six-month pause on training AI models “more powerful than GPT-4.” The letter received wide press coverage but was also met with a skeptical response from some in the machine-learning community.

Experts who often focus on AI ethics aren’t amused by this emerging open-letter trend.

Dr. Sasha Luccioni, a machine-learning research scientist at Hugging Face, likens the new CAIS letter to sleight of hand: “First of all, mentioning the hypothetical existential risk of AI in the same breath as very tangible risks like pandemics and climate change, which are very fresh and visceral for the public, gives it more credibility,” she says. “It’s also misdirection, attracting public attention to one thing (future risks) so they don’t think of another (tangible current risks like bias, legal issues and consent).”

