WormGPT: A New Criminal Chatbot Emerges



ChatGPT has a new, criminally inclined sibling with no ethical boundaries or limitations. WormGPT is an AI-based tool that can automate phishing emails and facilitate business email compromise (BEC) attacks that are remarkably persuasive, strategically cunning, and grammatically impeccable in multiple languages.

According to the security firm SlashNext, this new cyber weapon will revolutionize phishing attacks by generating human-like text based on the input it receives.

This new tech can be used by cybercriminals to automate the creation of compelling fake emails personalized to recipients, and even hold conversations, which significantly increases the scope and chances of successful attacks.

According to Cybernews, WormGPT doesn't use OpenAI's tech. It is based on GPT-J, an open-source large language model released in 2021 with 6 billion parameters, and offers features including unlimited character support, chat memory retention, and code formatting. Its performance is described as similar to an older GPT-3 model.

The creator of WormGPT reportedly trained the bot on diverse data sources, concentrating mainly on malware-related data.

A representative working with SlashNext stated: “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors.”

WormGPT is subscription-based, costing 100 euros per month or 550 euros per year.

Even ChatGPT, as we’ve reported in the past, can be persuaded with carefully crafted prompts to “facilitate a significant number of criminal activities, ranging from helping criminals to stay anonymous to specific crimes including terrorism and child sexual exploitation,” Europol noted in a recent report.

So what can be done about this? According to researchers, companies should train employees to recognize AI-generated phishing, enforce strict email verification processes, and regularly test their security measures.
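As a small illustration of the email-verification step, the sketch below (a hypothetical helper, not something from the SlashNext report) parses the `Authentication-Results` header that mail gateways add per RFC 8601 and flags messages whose SPF, DKIM, or DMARC checks did not pass — one cheap signal for catching spoofed BEC mail, assuming the gateway populates that header.

```python
import email
from email import policy

def auth_failures(raw_message: str) -> list[str]:
    """Return the authentication mechanisms (spf/dkim/dmarc) that did not pass.

    Illustrative sketch only: reads the RFC 8601 Authentication-Results
    header added by the receiving mail gateway and checks each mechanism's
    verdict token (e.g. spf=pass, dkim=fail, dmarc=none).
    """
    msg = email.message_from_string(raw_message, policy=policy.default)
    results = (msg.get("Authentication-Results") or "").lower()
    failures = []
    for mechanism in ("spf", "dkim", "dmarc"):
        for part in results.split(";"):
            part = part.strip()
            if part.startswith(mechanism + "="):
                # The verdict is the first token after "mechanism="
                verdict = part.split("=", 1)[1].split()[0]
                if verdict != "pass":
                    failures.append(mechanism)
    return failures

# Example: a spoofed "CEO" message failing all three checks
suspicious = auth_failures(
    "From: ceo@example.com\r\n"
    "Authentication-Results: mx.example.net; spf=fail smtp.mailfrom=example.com; "
    "dkim=none; dmarc=fail header.from=example.com\r\n"
    "Subject: Urgent wire transfer\r\n"
    "\r\n"
    "Please transfer funds today."
)
# suspicious -> ["spf", "dkim", "dmarc"]
```

In practice a mail filter would quarantine or label such messages rather than just list failures, and would also verify that the header was added by a trusted gateway rather than spoofed by the sender.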