AI researcher warns of 99.9% human extinction risk | Science | News

Artificial intelligence researcher Roman Yampolskiy believes that civilization is in danger of extinction due to the development of artificial intelligence.
The computer scientist at the University of Louisville, who focuses on AI safety and cybersecurity, estimates a staggering 99.9% chance that AI will destroy humanity within the next century, according to an episode of Lex Fridman’s podcast released Sunday.
During the extensive two-hour interview, he argued that no AI system released to date has proven to be secure, and he expressed pessimism that future iterations will avoid critical flaws. He joins a select group of pioneering AI developers who have sounded such alarms.
Last year, Yampolskiy published a book titled “AI: Unexplainable, Unpredictable, Uncontrollable,” which has been described as “providing a broad introduction to key issues such as the unpredictability of AI outcomes or the difficulty of explaining AI decisions.”
“This book reaches into more complex questions of ownership and control with an in-depth analysis of potential dangers and unintended consequences,” he said.
“The book then concludes with philosophical and existential reflections that explore questions about AI personality, consciousness, and the distinction between human intelligence and artificial general intelligence (AGI).”
Technologists note that the original pioneers of artificial intelligence, along with researchers such as Yampolskiy, have issued some of the harshest warnings about the destructive, even apocalyptic, consequences this technology could bring.
However, some studies suggest the threat is significantly lower than Yampolskiy’s estimate. A survey of more than 2,700 AI researchers, conducted by researchers at the University of Oxford in England and the University of Bonn in Germany, put the chance of AI eliminating humanity at just 5%.
“People try to make it sound like expecting extinction risk is a minority view, but among AI experts it’s the mainstream,” warns Katja Grace, one of the paper’s authors. “The disagreement seems to be whether the risk is 1% or 20%.”
Many leading AI experts, including Google Brain co-founder Andrew Ng and AI pioneer Yann LeCun, have rejected outright the claim that AI is leading toward an apocalypse; LeCun has accused tech leaders such as OpenAI’s Sam Altman of harboring hidden agendas behind their alarmist rhetoric about catastrophic AI outcomes.
OpenAI’s Altman has made several disturbing statements about his industry. In comments that drew harsh criticism, he warned that AI would likely destroy countless jobs that he did not consider “real jobs.”
Echoing Altman’s predictions, many AI skeptics have similarly warned that the technology could trigger an economic disaster by displacing workers across virtually every industry.
Back in 2015, Altman ominously declared: “AI will likely end the world, but there will be great companies along the way.”
He also faced harsh reactions when he said earlier this year that widespread adoption of artificial intelligence would require “changes to the social contract.”




