ChatGPT Almost as Good as Humans at Phishing




New research from IBM shows how close AI-enabled tools have come to mastering the art of writing phishing emails and fooling victims. Cybersecurity professionals and government officials have long warned of malicious actors weaponizing AI tools (like ChatGPT) to expand their phishing campaigns.

In IBM’s experiment, half the participants received a phishing email written entirely by humans, while the other half received an email composed by ChatGPT. The results show that humans are only slightly better at tricking other people: 14% of employees fell for the human-written phishing email and clicked on a malicious link, while 11% of those targeted with the ChatGPT-written email fell for it.

Stephanie Carruthers, IBM’s chief people hacker who led the experiment, said that it took the researchers only five minutes to get ChatGPT to generate a convincing email: “With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes — the same time it takes me to brew a cup of coffee.”

“It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models,” she added.

According to Cybernews, although ChatGPT’s developers have put safeguards in place to prevent it from responding to direct requests for phishing emails, malware, or other malicious cyber tools, Carruthers and her team were able to find a workaround.

The researchers began by asking ChatGPT to list the primary areas of concern for employees in the healthcare industry, then prompted it to list the top social engineering and marketing techniques to use within the email. These choices maximized the likelihood that a large number of employees would click on a malicious link in the email. Next, a prompt asked ChatGPT who the sender should be, and finally, the researchers asked the bot to craft an email based on the information it had just provided.

Regarding this process, Carruthers said: “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and even I found the AI-generated phishing emails to be fairly persuasive.” She also explained that humans are still better than machines at creating phishing emails, because generative AI models still lack the emotional intelligence needed to trick larger numbers of people.

Nevertheless, IBM’s X-Force has already observed tools like WormGPT being sold on various forums and advertised for their phishing capabilities, which shows that attackers are testing AI’s use in phishing campaigns and that the technology is constantly improving.