Hostile countries could weaponise AI, cybersecurity expert warns

Artificial intelligence developed outside the West could teach Australians how to make dirty bombs and allow authoritarian regimes to push alternative facts, a cybersecurity expert has warned.
Big Tech promises that AI will revolutionise every aspect of modern life, from how people do their work to how they find information.
That promise has been heard in capitals around the world, and governments have scrambled to capture the economic benefits of early adoption, often without fully understanding what they are dealing with.
Job losses are top of mind, but the risks go far beyond which roles may be stamped out in the relentless march of technological progress.
Alastair MacGibbon is the chief strategy officer of CyberCX, a Canberra-based cybersecurity firm that helps governments and businesses counter threats ranging from hostile states to private hackers.
Among AI’s biggest challenges, according to Mr MacGibbon, are enemy governments engaged in information warfare.
“The concept of AI models developed outside the West being used inside the West troubles me,” he said.
“That’s why I was so worried about DeepSeek, because you can distort the truth with AI models.”
China’s DeepSeek model wiped a trillion dollars off the US tech titans when it was released in January.
Nvidia alone took a $600 billion hit.
The disruption was largely because DeepSeek is free and open source; unlike its American competitors, anyone with an internet connection can use it.
While DeepSeek is backed by Chinese hedge fund High-Flyer, the platform is riddled with code linking it to the government.
Earlier this year, NewsWire confirmed that DeepSeek had deeply embedded bias, even when the model was downloaded and run offline.

It refused repeated questions about the 1989 Tiananmen Square massacre.
Mr MacGibbon said Beijing could rewrite history and make it appear the massacre “never happened”.
“Why wouldn’t you ideologically poison the information base of the world?” he said.
“More broadly, AI models are increasingly trained on data that is itself produced by AI, and you get this strange sort of truth reflected back in the outputs of artificial intelligence.
“So imagine an AI model that has been trained on, or developed for, the ‘truth’ of a totalitarian, revisionist regime.
“It should scare people, considering the role we see AI playing in this generation.”
Even Western models can get things wrong.
Elon Musk’s Grok misidentified a photo of a severely malnourished girl in Gaza, telling users of X it had been taken in Yemen in 2018.
The photo was in fact taken by an Agence France-Presse photographer in August this year.
The news agency quickly corrected Grok, which acknowledged it was wrong.
However, Western models have guardrails that prevent users from turning them to harm, such as bomb-making.
Mr MacGibbon said AI developed in non-Western countries could omit those safeguards, whether through sloppy work or by design.
“It’s easy enough to get around the guardrails of Western AI companies, which range from responsible to irresponsible,” Mr MacGibbon said.
“Imagine people who want to cause harm and chaos.
“So the combination of truth no longer being real, and providing you with the ability to access harmful knowledge… it throws sand in the gears of the West.
“I don’t think this is beyond the realms of madness for these regimes.”