Cybercriminals are Tired of AI Tools

GPT-based tools such as WormGPT and FraudGPT became popular on underground forums and were assumed to be helping deliver new strains of malware and automate cybercrime. GPTs are large language models trained on massive datasets; jailbroken versions have no restrictions on the content they generate, so they can be trained on the kind of information cybercriminals typically use.

Nevertheless, many cybercrime forum users and security experts have expressed skepticism, describing dark GPT versions as “overrated, overhyped, redundant, and unsuitable for generating malware.” Threat actors have also raised concerns about the security of the final product, wondering whether it could bypass antivirus and EDR detection, and concluding that real-world applications remain “aspirational.”

According to Cybernews, researchers claimed that they found “only a few examples of threat actors using LLMs to generate malware and attack tools, and that was only in a proof-of-concept context. However, others are using it effectively for other work, such as mundane coding tasks.”

The researchers found very few discussions suggesting that cybercriminals are actually using such AI tools. Of the discussions they did find, many focused on jailbreak tactics for legitimate AI models and on compromised ChatGPT accounts offered for sale. “Unsurprisingly, unskilled ‘script kiddies’ are interested in using GPTs to generate malware, but are – again unsurprisingly – often unable to bypass prompt restrictions, or to understand errors in the resulting code,” the report said.

In general, the researchers reported widespread skepticism, with hackers worrying about operational security and some even voicing ethical concerns about using AI: “We found little evidence of threat actors admitting to using AI in real-world attacks, which is not to say that that’s not happening. But most of the activity we observed on the forums was limited to sharing ideas, proof-of-concepts, and thoughts.”

The researchers concluded that none of the AI-generated malware they found on the forums was sophisticated, and saw no evidence of such sophistication in the posts they examined. They also believe that illegal clones of ChatGPT purposely built for malicious applications aren’t very useful for cybercriminals: while such clones have some uses, they aren’t up to the task of creating malware or finding new vulnerabilities.