Cybercriminals Create Malicious AI Chatbots

Image provided by Pixabay

Publicly available artificial intelligence tools such as ChatGPT, Bard, and DALL-E are widely used for everyday tasks, but users with malicious intentions can exploit and subvert these technologies, and some are even creating their own AI chatbots to support hacking and scams.

There are many ways criminals can misuse generative AI systems. For example, ChatGPT’s ability to create tailored content from a few simple prompts can easily be exploited to craft convincing scam and phishing messages and distribute them widely. There have been many recorded cases in underground hacking communities of criminals using ChatGPT for fraud, for creating information-stealing software, and even for building ransomware.

According to Techxplore, hackers have taken this further and created malicious variants of LLMs. Two examples are WormGPT and FraudGPT, which can create malware, find security vulnerabilities in systems, advise on ways to scam people, support hacking, and compromise people’s electronic devices. A newer variant is Love-GPT, which is used for romance scams: it creates fake dating profiles capable of chatting with unsuspecting victims on Tinder, Bumble, and other apps.

A major risk with using chatbots and LLM-based tools is the issue of privacy and trust. The more people use AI tools, the higher the risk of personal and confidential corporate information being shared. This is especially problematic because LLM providers may reuse anything entered into a prompt as part of future training data, and a compromised model or service could expose that confidential information to others.

Researchers have already demonstrated that ChatGPT can leak a user’s conversations and expose the data used to train the model behind it, vulnerabilities that place a person’s privacy or a business’s confidential data at risk.

On a wider scale, this phenomenon could contribute to a lack of trust in AI, with many large companies (including Apple and Amazon) banning the use of ChatGPT as a precautionary measure.

Since ChatGPT and similar LLMs represent the latest advancements in AI, are freely available for anyone to use, and are clearly here to stay, it is important for users to be aware of the risks and know how they can use these technologies safely at home or at work.

When it comes to safety measures, experts recommend being cautious about what content we share with AI tools and avoiding sharing any sensitive or private information. They also remind us that AI tools are not perfect and sometimes provide inaccurate or fabricated responses, something to keep in mind when considering their use in professional settings.
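
As a practical illustration of that first recommendation, the sketch below shows one way to scrub obviously sensitive values from text before it is ever pasted into a chatbot. It is a minimal example rather than anything proposed by the experts cited here: the `redact` helper and its regex patterns are illustrative assumptions, and a real deployment would rely on a vetted PII-detection tool instead.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library rather than a handful of regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "CARD":  re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = (
    "Draft a reply to jane.doe@example.com confirming that card "
    "4111 1111 1111 1111 was charged; call me back at (555) 123-4567."
)

# Only the redacted version should ever leave the user's machine.
print(redact(prompt))
```

Running the example prints the message with the email address, card number, and phone number replaced by placeholders, so only the sanitized text would be sent on to an AI tool.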