Will AI Destroy Humanity?
In this episode, I dive into a critical warning from renowned computer scientist Geoffrey Hinton, often called the 'Godfather of AI.' Hinton has raised serious concerns about the rapid advancement of artificial intelligence and its potential existential risks. He estimates there is as much as a 20% chance that AI could lead to humanity's extinction within the next three decades. That's not just a small warning; it's a wake-up call.
In a recent interview with BBC Radio 4, Hinton explained, “We’ve never had to deal with things more intelligent than ourselves before.” Comparing humans to toddlers in the presence of vastly superior AI, he underscored the difficulty of controlling a system far more intelligent than its creators.
Hinton, who was awarded the 2024 Nobel Prize in Physics for his contributions to AI, emphasized the urgency of government regulation to ensure the safe development of AI. He expressed skepticism that market forces alone would prioritize safety, stating, “The invisible hand is not going to keep us safe.”
Hinton’s concerns echo those of AI safety advocates, who warn that artificial general intelligence (AGI) could surpass human intelligence and evade human control. Reflecting on AI’s rapid progress, Hinton admitted, “I didn’t think it would be where we [are] now. The pace of development is much faster than I expected.”
While Hinton advocates caution, not all experts share his dire outlook. Yann LeCun, chief AI scientist at Meta and a fellow Turing Award laureate, has argued that fears of AI-driven extinction are overblown and that current systems remain far from human-level intelligence.