Experts including the "godfather of AI" claim artificial intelligence will be as dangerous as pandemics and nuclear war, and that it must be strictly controlled.
The Future of Life Institute previously issued an open letter warning of the dangers of AI, signed by over 1,000 people including Steve Wozniak and Elon Musk. That letter concerned what it described as the "out of control" development of AI systems.
Now the Center for AI Safety (CAIS) has issued what it calls a "succinct statement" that aims to "open up discussion" about the risks of artificial intelligence.
The statement reads in full: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," says CAIS on its website. "Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks."
"[The statement] is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously," it continues.
At present, around 370 experts have signed the statement, including OpenAI CEO Sam Altman and several Google executives and managers working on AI. Altman has most recently objected to what he describes as potential "over-regulation" of AI by the EU.
CAIS is based in San Francisco. Based on LinkedIn posts and online newsletters, it appears to have been formed around 2021 and to have expanded in early 2023.
The organization says it aims to expand "the research field of AI safety by providing funding, research infrastructure and educational resources." Its efforts have included funding a $100,000 Machine Learning Safety Workshop at NeurIPS 2022.
The signatories on CAIS's statement include Dr. Geoffrey Hinton. Known as a "godfather of AI," Hinton recently resigned from Google, citing a need to raise awareness about the dangers of AI.