https://www.axios.com/2025/06/16/ai-doom-risk-anthropic-openai-google

Behind the Curtain: What if predictions of humanity-destroying AI are right?

Jim VandeHei and Mike Allen, 6/16/2025
[Illustration: the control, alt, and delete keys, with cracks forming under the delete button. Credit: Sarah Grillo/Axios]

During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can't shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: What if they're right?

Why it matters: We wanted to apply this question to what seems like the most outlandish AI claim — that in coming years, large language models could exceed human intelligence and operate beyond our control, threatening human existence.

That probably strikes you as science-fiction hype.

How it works: There's a term the critics and optimists share: p(doom). It means the probability that superintelligent AI destroys humanity. Elon Musk, for instance, puts p(doom) as high as 20%.

Amodei is on the record pegging p(doom) in the same neighborhood as Musk's: 10-25%.

Here, in everyday terms, is how this scenario would unfold:

Between the lines: For LLMs to be worth trillions of dollars, the companies need them to analyze and "think" better than the smartest humans, then work independently on big problems that require complex thought and decision-making. That's how so-called AI agents, or agentic AI, work.

What's coming: You'll hear more and more about artificial general intelligence (AGI), the forerunner to superintelligence. There's no strict definition of AGI, but independent thought and action at advanced human levels is a big part of it. The big companies think they're close to achieving it, if not in the next year or so, then soon thereafter.

Google CEO Sundar Pichai thinks it's "a bit longer" than five years off. Others say sooner. Pessimists and optimists alike agree that once AGI-level performance is unleashed, it'll be past time to snap to attention.

You'd need some mechanism to know the LLMs possess this capability before they're used or released in the wild — then a foolproof kill switch to stop them.

Right now, the companies voluntarily share their models' capabilities with a few people in government, but not with Congress or any other third party with teeth.

Even if U.S. companies do the right thing, or the U.S. government steps in to impose and use a kill switch, humanity would be reliant on China or other foreign actors doing the same.

That's why p(doom) demands we pay attention ... before it's too late.