You’re Not Talking to Who You Think You Are



Next time you’re talking to a new person online, whether a new colleague or a potential partner, consider whether they are actually who you think they are.

Users of underground forums have started sharing malware coded with OpenAI’s viral sensation, and dating scammers are planning to create convincing fake personas with the tool.

Cybercriminals have started using OpenAI’s artificially intelligent chatbot ChatGPT to quickly build hacking tools, cybersecurity researchers warned. Scammers are also testing ChatGPT’s ability to build other chatbots designed to impersonate young females to ensnare targets, one expert monitoring criminal forums told Forbes.

Many early ChatGPT users raised the alarm that the app, which went viral in the days after its launch in December, could write malicious software capable of logging users’ keystrokes or creating ransomware.

Underground criminal forums have now caught on, according to a report from Israeli security company Check Point. In one forum post reviewed by Check Point, a hacker who’d previously shared Android malware showcased code written by ChatGPT that stole files of interest, compressed them and sent them across the web. The same user showed off another tool that installed a backdoor on a computer and could upload further malware to an infected PC.

One user also discussed “abusing” ChatGPT by having it help code up features of a dark web marketplace. As an example, the user showed how the chatbot could quickly build an app that monitored cryptocurrency prices for a theoretical payment system.
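To illustrate how little code such a price monitor requires, here is a minimal, benign sketch in Python. The forum post did not name a real exchange API, so the price feed, the threshold, and the function names here are all hypothetical stand-ins; a real tool would replace `get_price` with an HTTP call to an exchange.

```python
# Hypothetical price source: a real monitor would query an exchange API here.
# We mock it with a fixed sequence of quotes for illustration only.
PRICE_FEED = iter([16800.0, 16950.5, 17210.0, 16890.25])

def get_price(feed):
    """Return the next quoted price from the (mocked) feed."""
    return next(feed)

def monitor(threshold, polls, feed):
    """Poll the feed `polls` times; collect prices at or above `threshold`."""
    alerts = []
    for _ in range(polls):
        price = get_price(feed)
        if price >= threshold:
            alerts.append(price)
        # A real monitor would pause between polls, e.g. time.sleep(60).
    return alerts

if __name__ == "__main__":
    feed = iter([16800.0, 16950.5, 17210.0, 16890.25])
    print(monitor(threshold=17000.0, polls=4, feed=feed))
```

The point made by the Check Point researchers is not that this logic is sophisticated, but that the chatbot lowers the effort needed to produce it.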

Alex Holden, founder of cyber intelligence company Hold Security, said he’d seen dating scammers start using ChatGPT too, as they try to create convincing personas. “They are planning to create chatbots to impersonate mostly girls to go further in chats with their marks,” he said. “They’re trying to automate idle chatter.”