The future no one is ready for

As leaders fail to take climate action, the uncontrolled rise of super-intelligent AI could eclipse all threats as we know them, writes Mark Beeson.
WELL-MEANING friends often tell me that I shouldn’t be so pessimistic, especially about our collective future. For example, what is the point of writing alarmist articles about the environment when “ordinary” people can do almost nothing to influence the course of a global problem?
Good point, which might explain why so many people choose to tune out instead.
At a time when senseless, anachronistic conflicts in Ukraine and Sudan are killing tens of thousands of people, and a genocidal massacre in Gaza is proving difficult to stop, it is hard not to despair. Likewise, when the world’s most powerful countries and biggest polluters fail to even attend COP30 in Brazil, we could be forgiven for doubting the prospects for much-needed cooperation at the international level.
The good news is that such familiar threats to our security may not be as bad as we think. The bad news is that this is because there is a much bigger potential problem: the creation of a superintelligent form of artificial intelligence. As Yudkowsky and Soares, authors of one of the scariest and arguably most important books ever written, put it: If Anyone Builds It, Everyone Dies.
At this point I must admit that I am not qualified to comment on the technical aspects of this argument. Much the same can be said of my understanding of climate science, but despite the wilful ignorance of the most powerful man in the world, most informed observers have no doubt about the causes and potential consequences of climate change. I’m happy to take their word on this.
Climate change is happening even faster than some pessimists expect, but we are encouraged to believe that the worst effects can be avoided if market forces make renewable energy sources too cheap to resist, or if governments cooperate to ensure the rapid phase-out of fossil fuels. At least we know what we are facing and that possible solutions exist, even if many leaders studiously ignore them.
Unfortunately, it is not possible to say the same for “artificial superintelligence” (ASI). One of the issues that also shapes responses to the climate crisis is that some people are making huge amounts of money from business as usual. It is no coincidence that some of the richest and most powerful figures around U.S. President Donald Trump are tech bros who harbour delusional visions about the future and their place in it.
Luckily, not all of the leaders in the race to harness ASI’s potential are so self-interested. Nobel laureate Geoffrey Hinton, the so-called godfather of artificial intelligence, left his job at Google to warn of the imminent and poorly understood dangers that may arise from the accelerating, generously funded development of ASI. It is not only the private sector but also governments that are pouring huge amounts of money into this research. Everyone seems obsessed with not falling behind and allowing competitors to reap the financial or strategic rewards.
According to Yudkowsky and Soares, the most important danger arising from these efforts is the “alignment problem”, whereby it is almost impossible to align the preferences of a “mature” ASI with our own. In other words, while ASI will soon be much smarter than us, there is no reason to assume that it will think like us or put our interests ahead of its own.
Another scientific luminary, James Lovelock of Gaia hypothesis fame, speculated that in the future the Earth will be populated by cyborgs, inorganic life forms that will be much smarter than us. We’ll be lucky if they keep us as pets.
Given humanity’s propensity for violence and apparent inability to cooperate at the speed and scale necessary to find solutions to problems like climate change, cynics might argue that they could hardly do a worse job than we have.
According to Yudkowsky and Soares, the immediate existential danger is not just that AI will become smarter than us and develop its own alien preferences, but also that it will “escape” onto the internet to pursue its goals more easily, none of which is likely to involve looking after people. I’m not sure what “escaping onto the internet” actually means, but it doesn’t sound good.
Not to worry, though: “The problem is not that artificial intelligences will desire to dominate us; rather, we are made of atoms that they can use for something else.” That’s not entirely reassuring either. Given such imminent threats, Yudkowsky and Soares argue that AI research should be halted immediately, until we know exactly what we are doing and better understand the risks we face.
Given that the tech bros see technological advancement as a chance for personal enrichment, and Russian President Vladimir Putin thinks it is a potentially critical source of battlefield advantage, it’s hard to see that happening. But let’s look on the bright side: at least we might not have to worry about climate change for much longer.
Mark Beeson is an adjunct professor at the University of Technology Sydney and Griffith University. He was previously Professor of International Politics at the University of Western Australia.