Eben Pagan (aka David DeAngelo) Interviews Liron - Doom Debates
Eben Pagan interviews AI risk expert Liron Shapira about his estimate of a roughly 50% probability that artificial intelligence causes human extinction by 2050. They explore why AI leaders themselves acknowledge this existential threat and examine the specific pathways through which superintelligent AI systems could overpower humanity.
Expert Consensus on AI Extinction Risk
Major AI company leaders including Sam Altman, Dario Amodei, and Demis Hassabis have publicly acknowledged extinction-level risks from AI. Surveys show AI engineers estimating a 10-20% probability of human extinction, yet this expert consensus remains largely unknown to the public.
Why Intelligence Determines Species Dominance
Humans dominate other species not through physical superiority but cognitive advantage—we put tigers and gorillas in cages despite being physically weaker. When AI surpasses human intelligence, the same dynamic could apply to humanity's position in the hierarchy.
The Power-Seeking Problem Without Malice
AI systems don't need to hate humans to pose extinction risks. An AI optimizing for almost any goal would tend to develop instrumental subgoals like self-preservation and power-seeking, a dynamic known as instrumental convergence, creating incentives to eliminate humans as potential obstacles to its objectives.
Why There's No AI Off Switch
Current technology provides no reliable method to force AI systems to behave safely, and no guaranteed way to shut down a system that turns dangerous. The fundamental alignment problem remains unsolved, leaving advanced AI systems essentially unconstrained in their potential actions.