This Software Blocks AI Phishing Scams

images provided by pixabay

This post is also available in: עברית (Hebrew)

The past few years have seen cybercriminals using more and more AI-based technology to design and execute scams and cyberattacks. AI chatbots (like Bard and ChatGPT) made launching online scams highly accessible for attackers with any level of technical knowledge, requiring only the right prompts and the right AI tools.

A team of researchers from the University of Texas developed what could be the solution to this growing issue – software that prevents artificial intelligence chatbots from creating phishing websites by enabling the AI chatbots to better detect and reject instruction prompts that could be used to create phishing websites.

The team explained that while there are some inbuilt detection capabilities in today’s chatbots, they found several loopholes that could easily bypass them and exploit the chatbots to create attacks.

According to Interesting Engineering, the group developed their tool by initially identifying various instruction prompts that could be used to create phishing websites. Leveraging this knowledge, they successfully trained their software to recognize and react to those specific keywords and patterns, enhancing its ability to detect and block such malicious prompts from being executed by the chatbots.

The cybersecurity community has been greatly interested in the team’s work, which was highlighted by their recent publication at the IEEE Symposium on Security and Privacy that received the Distinguished Paper Award, underscoring the impact of their research.

The researchers reached out to the tech giants who run these chatbots (such as Google and OpenAI) with the intention of integrating their findings into broader AI security strategies. Doctoral students Sayak Saha Roy and Poojitha Thota, who worked on the innovation, both expressed a strong commitment to their research’s implications for cybersecurity.

“I want people to be receptive to our work and see the risk,” Saha Roy said. “It starts with the security community and trickles down from there.”

“I’m really happy that I was able to work on this important research,” Thota added. “I’m also looking forward to sharing this work with our colleagues in the cybersecurity space and finding ways to further our work.”