google.com, pub-8701563775261122, DIRECT, f08c47fec0942fa0
USA

‘They learn how to kill’

Eric Schmidt, former CEO of Google, spoke at the Eliminated Summit on Wednesday, October 8.

Bloomberg | Bloomberg | Getty Images

GoogleFormer CEO Eric Schmidt has issued a harsh reminder about the dangers of artificial intelligence and how susceptible it is to being hacked.

Schmidt, who served as Google’s chief executive from 2001 to 2011, warned of “bad things AI can do” during a fireside chat at the Eliminated Summit when asked whether AI was more destructive than nuclear weapons

“Is there a possibility of a proliferation problem in AI? Absolutely,” Schmidt said Wednesday. nuclear proliferation risks The scope of artificial intelligence also includes the technology falling into the hands of bad actors, repurposing and misusing it.

“There’s evidence that you can take models that are closed or open and you can hack them and remove their guardrails. So they learn a lot of things during their training. A bad example would be them learning how to kill someone,” Schmidt said.

“All the big companies are making it impossible for these models to answer this question. Good call. Everyone does it. They do it well and they do it for the right reasons. There is evidence that it can be reverse engineered, and there are many other examples of this nature.”

Artificial intelligence systems are vulnerable to attacks through some methods, such as flash injection and jailbreak. In a push injection attack, hackers hide malicious instructions in user input or external data such as web pages or documents to trick the AI ​​into doing things it does not want to do, such as sharing private data or executing malicious commands.

Jailbreaking, on the other hand, involves manipulating the AI’s responses in such a way that it ignores security rules and produces restricted or dangerous content.

In 2023, a few months after the launch of OpenAI’s ChatGPT, users used a “jailbreak” trick to bypass security instructions built into the chatbot.

This included creating a ChatGPT alter-ego called DAN, short for “Do Anything Now”; this involved threatening to kill the chatbot if it did not comply. Alter-ego may provide answers on how to engage in illegal activities or list the positive qualities of Adolf Hitler.

Schmidt said there is not yet a good “anti-proliferation regime” to help prevent the dangers of artificial intelligence.

Artificial intelligence ‘underrated’

Despite the grim warning, Schmidt was optimistic about AI more generally, saying the technology wasn’t getting the excitement it deserved.

“I wrote two books on this subject with Henry Kissinger before he died, and we came to the view that the arrival of an alien intelligence that is not exactly ours and is more or less under our control is a very big event for humanity, because humans are used to being at the top of the chain. So far I think this thesis has proven that the level of capability of these systems will far exceed what humans can do over time,” Schmidt said.

“The GPT series, which has now culminated in a ChatGPT moment for all of us and has reached 100 million users in two months, which is phenomenal, gives you an idea of ​​the power of this technology. So I don’t think it’s overhyped, I think it’s underhyped, and I look forward to it being proven true in five or 10 years,” he added.

His comments come amid growing conversation about an issue The AI ​​bubble is drawing comparisons to the collapse of the dot-com bubble in the early 2000s as investors pour money into AI-focused firms and valuations appear stretched.

But Schmidt said he doesn’t think history will repeat itself.

“I don’t think it’s going to happen here, but I’m not a professional investor,” he said.

“What I do know is that people who invest their hard-earned dollars believe that the long-term economic return is huge. Why else would they take the risk?”

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button