Tech Execs Warn AI Poses ‘Risk of Extinction’ on Par with Pandemics and Nuclear War

Tech executives and artificial-intelligence scientists are sounding the alarm about AI, saying in a joint statement Tuesday that the technology poses an extinction risk as great as pandemics and nuclear war.

Source: WSJ | Published on May 30, 2023

More than 350 people signed a statement released by the Center for AI Safety, an organization that said it works to reduce AI risks. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

The signatories said they wanted to open up discussion about the most severe risks of AI. Sam Altman, chief executive of OpenAI, the company that developed ChatGPT, and Mira Murati, the company’s chief technology officer, are among those who signed the statement.

Other signatories included Kevin Scott, Microsoft’s chief technology officer; Google AI executives Lila Ibrahim and Marian Rogers; and Angela Kane, the former United Nations high representative for disarmament affairs. Leaders from Skype and Quora also signed the statement.

The tech industry has expressed excitement about AI, but fears are mounting that the technology could grow out of control. Critics have stepped up their calls for AI regulation since OpenAI released ChatGPT last year, saying the technology poses untold threats to humanity.

AI stocks have soared in recent months as investors bet on what they see as a new computing era. Nvidia, whose semiconductors power much of today’s AI development, on Tuesday became the first chip maker to reach a $1 trillion valuation, joining Apple, Microsoft, Amazon and Google parent Alphabet on the list of the world’s trillion-dollar companies.

AI experts and tech executives including Elon Musk signed a letter in March calling for AI developers to pause their work on the technology. They said a moratorium of at least six months would give the industry time to set safety standards for AI design and to curb potential harms of the riskiest AI technologies.

The signatories said Tuesday that while AI risks are increasingly part of public discussion, the most severe ones can still be difficult to raise. They also wanted to “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”

The statement remains open online for additional AI scientists, professors, executives and other leaders to sign.

Artificial intelligence broadly refers to a computer’s ability to learn from large amounts of data and then mimic humanlike responses.

The tech industry has been developing AI for years, but the technology became widely accessible in November when OpenAI released ChatGPT. The free chatbot can quickly answer almost any question, letting users generate responses to problems at work and school, though its answers are sometimes wrong. Proponents say the technology has the potential to transform industries and reshape parts of the labor force.

ChatGPT’s release launched a race among tech companies like Google and Microsoft to come out with similar technologies.

Leaders and tech executives have warned about dangers from AI before, but Tuesday’s signatories said the most urgent risks still hadn’t been fully discussed.

The Biden administration in recent months has said AI poses threats to public safety, privacy and democracy, but the government has limited authority to regulate it. Bill Gates has said he believes AI should be properly regulated but has expressed excitement about the technology. He didn’t sign the Tuesday statement.

Cybersecurity chiefs have said AI offers clear benefits but that both the promises and the risks of early generative AI are overblown.