
How AI is helping criminals target more victims online

Online criminals are leveraging artificial intelligence tools to launch more attacks and improve their chances of stealing information and money from victims.

In some cases, inexperienced hackers ask AI chatbots to write malicious code on their behalf, with mixed results.

Cybersecurity firm CrowdStrike revealed the trend in a report published Wednesday, stating that artificial intelligence is helping to increase the speed of online intrusions, and ChatGPT has become a favorite on hacking forums.

The findings come after the World Economic Forum warned that AI has become the biggest driver of cybersecurity threats and that companies may need to deploy their own AI defenses to defeat them.

CrowdStrike’s annual Global Threat Report tracked the actions of 281 groups in 2025 and found that “adversaries of all stripes are leveraging AI to accelerate, optimize and troubleshoot online attacks.”

The company found that AI-enabled threats nearly doubled over the year, rising 89 percent, and played a role in convincing phishing emails, malicious websites and social engineering attacks.

Generative AI tools are used by many types of online criminals to make their attacks appear more legitimate, said Adam Meyers, a senior vice president at CrowdStrike.

“AI can impact many different aspects and functioning of the threat actor,” he told AAP.

“We’re seeing phishing emails and we’re seeing content created using (large language models) to allow the threat actor to have a more convincing story with the victim and have a little more success.”

AI tools have also cut the time it takes criminals to move from an initial foothold into other parts of a computer system, with the average breakout time dropping from 48 minutes in 2024 to 29 minutes, according to the report.

The fastest recorded breakout time fell from 51 seconds to 27 seconds.

The company found that exploitation of zero-day vulnerabilities, or previously unknown software flaws, increased by 42 percent in 2025, and that ChatGPT was the most discussed AI tool on hacking forums by a wide margin, followed by Gemini, Grok, DeepSeek and Claude.

But AI tools don’t always increase criminals’ success, Mr. Meyers said, because less sophisticated hackers rely on the technology to write malicious code for them.

In one case, CrowdStrike researchers were able to reverse engineer and subvert ransomware created using generative AI.

“(This) threat actor used AI to create the ransomware tool and they did not understand proper encryption,” he said.

“There were flaws in the creation of the malware that we were able to exploit, and that’s why they didn’t get paid.”

Future online attacks will likely use AI to target data stored in the cloud and to exploit agentic AI tools that have been given autonomy over business systems, Mr. Meyers said.
