Microsoft Claims Adversary Countries are Using Generative AI For Cyber Operations


Microsoft claims US adversaries like Iran, North Korea, Russia, and China are beginning to use generative AI to organize offensive cyber operations.

The tech giant has collaborated with OpenAI to detect and disrupt threats that used or attempted to exploit AI technology developed by the company. In a blog post, they announced that the techniques were at an early stage of development and were neither particularly novel nor unique.

According to Interesting Engineering, cybersecurity firms have long used machine learning for defense (mainly to detect abnormal behavior in networks), but the technology has also been adopted by criminals and offensive hackers, and the introduction of LLMs like ChatGPT has upped the stakes of the game.
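To illustrate the defensive use mentioned above, here is a minimal, hypothetical sketch (not any vendor's actual tooling) of statistical anomaly detection: it flags hosts whose request volume deviates sharply from the observed baseline, the basic idea behind detecting abnormal network behavior.

```python
# Minimal anomaly-detection sketch: flag samples whose request count
# deviates from the baseline by more than `threshold` standard deviations.
# Function name and threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of samples more than `threshold` standard
    deviations above the mean request count."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Example: a steady baseline with one burst a defender would review.
traffic = [100, 98, 102, 101, 99, 97, 103, 100, 980]
print(flag_anomalies(traffic))  # the burst at index 8 is flagged
```

Real systems use far richer features and models, but the principle is the same: learn what normal looks like, then surface deviations for analysts to review.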

Microsoft also recently reported that generative AI is expected to enhance malicious social engineering, which will lead to more sophisticated deepfakes and voice cloning – a threat to democracy since over 50 countries are expected to conduct elections this year.

Following are several examples provided by Microsoft:

  • The “Kimsuky” North Korean cyberespionage group used the models to research foreign think tanks that study the country and to generate content that was likely meant to be used in spear-phishing hacking campaigns.
  • Iran’s Revolutionary Guard has used large language models to assist in social engineering and to study how intruders might evade detection in a compromised network. That reportedly includes generating phishing emails, with the AI helping to accelerate and scale email production.
  • The “Fancy Bear” Russian GRU military intelligence unit has used AI models to research satellite and radar technologies that may relate to the war in Ukraine.
  • The “Aquatic Panda” Chinese cyberespionage group, which is known to target a broad range of industries, higher education, and governments worldwide, has interacted with the AI models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”
  • “Maverick Panda,” a Chinese group that has targeted U.S. defense contractors among other sectors for more than a decade, had interactions with LLMs that suggest it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”