How Hackers Use AI for Cybercrime

Cybercriminals are increasingly turning to artificial intelligence (AI) models to enhance their productivity, according to a recent warning from Google’s Threat Intelligence Group (GTIG). While these attempts so far have been largely unsuccessful, hackers continue to explore the potential of AI-powered systems for tasks such as code generation, phishing, and content localization.

According to GTIG, hackers have tried to use Google’s Gemini AI assistant to build more efficient workflows for their malicious activities, turning to the model for tasks such as drafting phishing emails, creating malicious content, and researching vulnerabilities. GTIG researchers found that many of these attempts relied on simple, publicly available prompts, with no major breakthroughs yet. However, the continuous evolution of AI tools is fueling growing concern in the cybersecurity community.

North Korean-backed hacking groups, for instance, have used Gemini to draft cover letters and research job opportunities at foreign companies. In addition, they have used the tool to explore topics of strategic interest to the North Korean government, such as South Korean nuclear technology and cryptocurrency. These groups have also employed other AI tools, such as image generators for creating fake profiles or assistive writing tools to craft phishing lures.

Iranian hackers, spanning more than 10 threat groups, have been heavy users of Gemini for a range of purposes, including reconnaissance on defense organizations and technology, vulnerability research, and the creation of content for malicious campaigns. They have also used the model to research Israel’s defense systems and “topics related to the Iran-Israel proxy conflict”.

Chinese threat groups have primarily used Gemini for research and scripting tasks, focusing on hacking techniques such as lateral movement, privilege escalation, and detection evasion. Similarly, Russian government-backed actors have mainly used the model for malware coding tasks and for research related to the Russia-Ukraine war.

While current AI models like Gemini are not yet capable of enabling significant advancements in cybercrime, Google researchers caution that new AI technologies are rapidly emerging. As these systems evolve, threat actors may find new ways to exploit them. At the same time, Google also sees AI’s potential to enhance digital defense, with LLMs helping to sift through complex data, discover vulnerabilities, and streamline cybersecurity operations.
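To make that defensive use concrete, the sketch below shows roughly what LLM-assisted log triage could look like. This is a minimal illustration under stated assumptions, not anything Google describes in detail: the call_llm function is a hypothetical stand-in for a real model API, and the log format, prompt, and verdict scheme are invented for the example.

```python
"""Minimal sketch: LLM-assisted triage of authentication logs.

Hypothetical throughout -- call_llm stands in for a real model API,
and the log lines and prompt format are invented for illustration.
"""

TRIAGE_PROMPT = (
    "You are a security analyst. For each log line below, answer on its own "
    "line with SUSPICIOUS or BENIGN followed by a one-sentence reason.\n\n{logs}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.

    A real pipeline would query a hosted or local model here; this stub
    just flags repeated failed logins so the example runs end to end.
    """
    verdicts = []
    for line in prompt.splitlines():
        if "FAILED" in line:
            verdicts.append("SUSPICIOUS - repeated failed login attempt.")
        elif line.startswith("20"):  # crude check for a timestamped log line
            verdicts.append("BENIGN - routine activity.")
    return "\n".join(verdicts)


def triage(log_lines: list[str], batch_size: int = 50) -> list[tuple[str, str]]:
    """Send logs to the model in batches and pair each line with a verdict."""
    results = []
    for start in range(0, len(log_lines), batch_size):
        batch = log_lines[start:start + batch_size]
        reply = call_llm(TRIAGE_PROMPT.format(logs="\n".join(batch)))
        results.extend(zip(batch, reply.splitlines()))
    return results


if __name__ == "__main__":
    sample = [
        "2025-01-07 09:14:02 login FAILED user=admin src=203.0.113.5",
        "2025-01-07 09:14:05 login FAILED user=admin src=203.0.113.5",
        "2025-01-07 09:15:11 login OK user=jsmith src=198.51.100.23",
    ]
    for line, verdict in triage(sample):
        print(f"{verdict:60s} | {line}")
```

In a real deployment, the stub would be replaced by a call to an actual model, and the verdicts would feed an analyst queue for human review rather than being printed directly.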

The increasing use of AI by hackers highlights the importance of staying ahead of technological advancements in the cybersecurity landscape. As AI continues to evolve, cybercriminals and security professionals alike will find new ways to leverage this powerful tool.