AI ‘could be last technology humanity ever builds’, expert warns in ‘doom timeline’

Artificial intelligence (AI) could be “the last technology humanity has ever built”, an expert has warned, and the so-called “doomsday timeline” predicting the point at which AI could surpass humanity has been revised.
The AI 2027 research project, published in April 2025, presented a future scenario in which artificial intelligence develops a “superintelligence” capable of “fully autonomous coding” to make itself more powerful, and ultimately destroys humanity.
The AI Futures research group published the study, built around the prediction that 2027 will be the most likely year when artificial intelligence will be able to automate coding and take control of its own progress.
According to the model, this improvement could lead to better performance than humans on most cognitive tasks.
The scenario suggests that this will allow AI to develop “artificial superintelligence” by late 2027, accelerating its own development until it is advanced enough to defeat and dominate humanity.
However, AI Futures revised this timeline in a new update released at the end of December. The new study predicts that it will take longer for AI to reach major capability milestones, including autonomous coding and superintelligence.
Project leader Daniel Kokotajlo explained the revision in a social media post, saying his forecast had now shifted to “around 2030”, though he added that many uncertainties remain around the predicted timeline.
The initial 2027 project included a scenario in which by 2030 a very powerful AI model would be built around the goal of making the world safe for itself, not humans, and in doing so would “eliminate potential threats.”
In the model’s most serious possible outcome, humans become obsolete by the beginning of the next decade, with humanity destroyed by AI to make room for the infrastructure that powers and benefits it.
But the group’s updated AI Futures model now predicts that AI could develop the ability to code autonomously in the 2030s rather than 2027, and does not include a date for AI’s potential dominance over human beings.
Its new modeling predicts a delay of about three years in the process, with a revised estimate for the development of superintelligence in 2034.
The AI 2027 model initially sparked controversy among tech experts. Gary Marcus, professor emeritus of psychology and neural science at New York University, compared it to a Netflix thriller, describing its narrative as “pure sci-fi nonsense” in a post on Substack.
Dr Fazl Barez, a senior research fellow at the University of Oxford who specialises in AI safety, interpretability and governance, told The Independent that he disagreed with the timeline outlined in the project, but believed it had prompted important discussions about mitigating potential risks from artificial intelligence.
He said: “No expert disputes that if we can’t figure out how to align these systems and make them safe, this could potentially be the last piece of technology humanity ever builds.
“It’s an open question how far away we are from that and what the chances of that happening are.”

Dr Barez, who leads research initiatives within the Artificial Intelligence Governance Initiative, said that AI capabilities are currently advancing much faster than safety measures and mitigations, describing the technology’s acceleration as moving at the “speed of light.”
He added: “We have not fully figured out how to prevent the harmful consequences this brings, nor those that perpetuate and exacerbate existing problems in society.
“There are many problems, and the use of this technology can only accelerate the rate at which they are occurring.”
Although Dr Barez believes it is difficult to set a timetable for the development of artificial intelligence’s capabilities, he said research is needed to ensure that AI remains in the service of humans rather than replacing them.
He said: “The real problem with any technology, from my perspective, is that humanity is increasingly disempowered; as our dependence on this technology increases, we lose the ability to think for ourselves, to do things for ourselves.
“Today you can ask the system to draft an email for you, but tomorrow it might do everything on its own, from drafting and writing to sending and monitoring your inbox.
“The real question we need to ask ourselves is how do we develop this technology to have the economic impact we want, but always for the benefit of humanity?
“Like previous technologies, it should be there to serve our purposes and goals; it is not meant to be a technology that replaces us.”




