A recent cyber operation linked to North Korea has revealed a new layer of sophistication in the use of artificial intelligence for malicious purposes. According to findings from Genians Security Center (GSC), the North Korean threat group known as Kimsuky used AI tools to generate deepfake images of South Korean military ID cards as part of a targeted spear-phishing campaign.
The attack, which began in mid-July, aimed to compromise individuals with ties to defense, human rights, and North Korean policy. Victims received emails that appeared to request feedback on draft versions of military identification documents. These messages were accompanied by content on topics such as North Korean economic conditions and political investigations, likely crafted to enhance the message’s credibility.
Attached to the emails was a ZIP archive containing a shortcut file. When opened, the shortcut executed concealed commands that downloaded and installed malware in several stages: decoding obfuscated PowerShell commands, connecting to command-and-control servers hosted in South Korea and France, and retrieving additional malicious files. Among the files was an AI-generated military ID image, created using ChatGPT.
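To illustrate the "obfuscated PowerShell" stage described above: attackers commonly hide commands behind PowerShell's `-EncodedCommand` flag, which takes a Base64-encoded UTF-16LE string, and analysts reverse that encoding to recover the script. The sketch below is a minimal, hypothetical example of such decoding using a harmless payload, not the actual commands from this campaign.

```python
import base64

def decode_powershell(encoded: str) -> str:
    """Decode a PowerShell -EncodedCommand payload (Base64 over UTF-16LE)."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Hypothetical benign payload, encoded the same way an attacker would
# encode a real command for -EncodedCommand.
sample = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode("ascii")

print(decode_powershell(sample))  # → Write-Output 'hello'
```

Decoding the payload this way is often the first step an analyst takes when triaging a suspicious shortcut file, since the recovered script typically reveals the download URLs and follow-on stages.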
The use of deepfake ID cards marks an evolution in Kimsuky’s tactics. While the group has previously used phishing and document lures, the integration of AI-generated visuals adds a layer of realism that may increase the success rate of attacks.
GSC researchers linked the campaign to previously observed Kimsuky techniques, such as the “ClickFix” method, which mimics CAPTCHA pop-ups to deliver malware. Analysis showed that the same malware variants used in earlier campaigns were reused in this operation.
The incident is part of a broader trend in which North Korean actors leverage AI for cyber operations. Recent reports have also highlighted the use of AI to create fake job applications and video interviews as part of efforts to infiltrate foreign companies. South Korean authorities have issued warnings to local businesses about the risks of inadvertently employing these operatives.
The campaign underscores how AI is becoming a key enabler of more convincing, targeted, and persistent cyber threats—particularly in state-sponsored espionage activities.