In this podcast, AI safety expert Dr. Roman Yampolskiy joins host Jack Neel to discuss his alarming thesis that humanity faces a 99.99% chance of extinction following the creation of Artificial Superintelligence (ASI). Yampolskiy argues that it is fundamentally impossible for a lower intelligence to indefinitely control or predict a system millions of times smarter than itself [00:08]. He critiques the current "arms race" between tech giants such as OpenAI and Google, arguing that they prioritize speed over safety and essentially "grow" dangerous models rather than engineering them with explicit guardrails [04:22]. The conversation also explores deeper philosophical and existential risks, including the simulation hypothesis (the idea that our reality is a digital construct) and the potential for "suffering risks," in which an uncontrolled AI could perpetually torture sentient beings [42:57]. Yampolskiy details how AI is already reshaping human behavior, making us more dependent and potentially less intelligent [13:50]. Despite the bleak outlook, he emphasizes the importance of halting the development of general superintelligence while continuing to benefit from narrow AI tools that solve specific human problems such as disease and aging [02:11:05]. (Summary assistance provided by Gemini 3.)