Experts have warned about extinction. What do they mean?

By extinction, AI experts mean human extinction – that everyone, everywhere on Earth, could die from the consequences of powerful AI systems that outsmart humanity.

The top experts in AI, including the three most-cited AI researchers and the CEOs of the biggest AI companies (Google DeepMind, OpenAI, and Anthropic), have warned about the risk of extinction from AI.

You can find the list of signatories here. Below is the full statement they signed:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The risks of extinction come from ASI. But what is ASI?

ASI means Artificial Superintelligence. It refers to artificial systems that can outsmart every human, every genius, every company, and every nation. It means that in a competition against these systems, we would lose. This is true whether the competition is economic, scientific, military, or political.

ASI goes by many different names: "Superhuman Machine Intelligence", "Superintelligence", and even "AGI" is sometimes used to refer to it (which is all a bit confusing).

Several companies have started explicitly working to build ASI. They believe that the benefits that could come if they can control these systems warrant the risk of extinction.

Here are some quotes from CEOs of the biggest AI companies talking about their ambitions, as well as the risks of ASI.

Sam Altman, CEO of OpenAI
Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.

How much time do we have?

There has been a big paradigm shift following the release of GPT-3 in 2020 and then ChatGPT in 2022. Before then, experts were either not thinking much about ASI, or thinking of it only as a problem for future generations.

Everything has changed since then. The increasingly fast pace of AI progress has altered the way experts think about ASI. Many experts now believe we are at risk of reaching ASI within just a few years.

Sadly, there is a big gap between how experts think about ASI and how civil society thinks about it. There has not been much public discourse about it, and politicians are largely neither aware of ASI nor planning for it.

Here are some quotes from experts on when they think we might reach ASI, and on how their past predictions turned out to be wrong.

Geoffrey Hinton, Nobel Prize-winning "Godfather of AI"
The idea that this stuff could actually get smarter than people — a few people believed that … But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

How should we prevent this?

To prevent the risks of extinction, we must ban the development of ASI. Specifically, we must ban the development of any systems that may subvert human control and oversight.

AI progress is accelerating, companies are racing each other to ASI, and countries are increasingly getting interested. Some experts are predicting (and some even recommending!) drastic measures, including military action, as we get closer to ASI.

Before we get to the point where such drastic measures are enacted, we must restrict the critical capabilities that enable the development of ASI. The policies and interventions are listed on the Take Action page.

Here are some quotes from experts on the geopolitical implications of ASI:

Eric Schmidt, ex-Google CEO
Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary. Finally, under dire circumstances, states may resort to broader hostilities by climbing up existing escalation ladders or threatening non-AI assets. We refer to attacks against rival AI projects as 'maiming attacks.'

FAQ

Does extinction mean "everyone on Earth dies"? How could AI lead to this?
What is AGI? How does it relate to ASI? What about autonomous AI research?
Can we prevent this? This seems hard.
But what about China, or other countries that might try to develop their own superintelligence?
Even if my own country does something, what can a single country change? What can a single person do?
Why would governments and companies allow this to happen? Why isn't anyone doing anything about it?

Additional Resources

Here are some resources that can help you learn more about AI risks and how to prevent them.

  • The Compendium, an explainer on the extinction risks from AI, where they come from, and how to address them.
  • A repository of quotes from experts on catastrophic risks from AI, up to and including extinction.
  • AI 2027, a scenario wargamed by experts that represents their best guess about what rapid AI progress might look like.