Experts have warned about extinction. What do they mean?
By extinction, AI experts mean human extinction – that everyone, everywhere on Earth, could die from the consequences of powerful AI systems that outsmart humanity.
The top experts in AI, including the three most-cited AI researchers and the CEOs of the biggest AI companies (Google DeepMind, OpenAI, and Anthropic), have warned about the risk of extinction from AI.
You can find the list of signatories here. Below is the full statement they signed:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The risks of extinction come from ASI. But what is ASI?
ASI means Artificial Superintelligence. It refers to artificial systems that can outsmart every human, every genius, every company, and every nation. It means that in a competition against these systems, we would lose. This is true whether the competition is economic, scientific, military, or political.
ASI goes by many different names: "Superhuman Machine Intelligence", "Superintelligence", and sometimes even "AGI" (which is all a bit confusing).
Several companies have started to explicitly work on building ASI. They believe the benefits that could come from controlling these systems justify the risk of extinction.
Here are some quotes from CEOs of the biggest AI companies talking about their ambitions, as well as the risks of ASI.


OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
How much time do we have?
There has been a big paradigm shift following the release of GPT-3 in 2020 and then ChatGPT in 2022. Before then, experts were either not thinking much about ASI, or thinking of it only as a problem for future generations.
Everything has changed since then. The increasingly fast pace of AI progress has altered the way experts think about ASI. Many experts now believe we are at risk of reaching ASI within just a few years.
Sadly, there is a big gap between how experts think about ASI and how civil society thinks about it. There has not been much public discourse about it, and politicians are by and large neither aware of ASI nor planning for it.
Here are some quotes from experts talking about when they think we might reach ASI, and how their earlier predictions turned out to be wrong.


How should we prevent this?
To avert the risk of extinction, we must ban the development of ASI. Specifically, we must ban the development of any systems that may subvert human control and oversight.
AI progress is accelerating, companies are racing each other to build ASI, and countries are taking an increasing interest. Some experts are predicting (and some even recommending!) drastic military actions as we get closer to ASI.
Before we get to the point where such drastic measures are enacted, we must restrict the critical capabilities that enable the development of ASI. The relevant policies and interventions are listed on the Take Action page.
Here are some quotes from experts on the geopolitical implications of ASI:


FAQ
Additional Resources
Here are some resources that can help you learn more about AI risks and how to prevent them.
- The Compendium, an explainer on the extinction risks from AI, where they come from, and how to address them.
- A repository of quotes from experts on catastrophic risks from AI, up to and including extinction.
- AI 2027, a scenario wargamed by experts that represents their best guess about what rapid AI progress might look like.