Cybercriminals Use Fake AI Tools to Deploy Malware at Scale

As the global fascination with generative AI continues to grow, threat actors are seizing the opportunity—this time by weaponizing fake AI video generation tools. A sophisticated phishing campaign has emerged, using counterfeit websites designed to mimic legitimate platforms like Kling AI, Luma AI, and Canva Dream Lab to distribute malware.

Victims are lured by the promise of advanced AI video creation. After submitting a prompt, instead of receiving the expected media file, users unknowingly download a ZIP archive whose payload is a dropper named STARKVEIL, the first stage of the infection chain.
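
As a purely defensive illustration, an archive like this can be screened before anything inside it is opened. The sketch below is a minimal Python example rather than a real scanner: the extension list and the suspicious_members helper are illustrative, and it assumes the common trick of hiding a dropper behind an executable or padded double extension (e.g., a name ending in ".mp4        .exe").

```python
import sys
import zipfile

# Extensions that should never appear in a "video" download (illustrative list).
EXECUTABLE_EXTS = (".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs", ".msi")

def suspicious_members(path: str) -> list[str]:
    """Return archive members whose true extension marks them as executable.

    Droppers are often hidden behind padded double extensions, so trailing
    whitespace is stripped before checking what the file actually is.
    """
    with zipfile.ZipFile(path) as zf:
        return [
            name for name in zf.namelist()
            if name.lower().rstrip().endswith(EXECUTABLE_EXTS)
        ]

if __name__ == "__main__":
    hits = suspicious_members(sys.argv[1])
    for name in hits:
        print(f"suspicious member: {name}")
    if not hits:
        print("no executable members found (not proof the archive is safe)")
```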

According to Mandiant, the Google-owned cybersecurity firm tracking the operation, once activated, STARKVEIL delivers a trio of malware strains: GRIMPULL, XWORM, and FROSTRIFT. These tools enable attackers to log keystrokes, extract saved browser passwords and session cookies, and harvest login credentials, particularly for social media and email accounts. The stolen data is then exfiltrated through the Telegram API; because this traffic is encrypted HTTPS to a legitimate, widely used service, it blends in and slips past traditional security controls. Mandiant attributes the activity to a threat cluster it tracks as UNC6032, which has been operational since mid-2024 and is believed to be based in Vietnam.
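
For defenders, that exfiltration channel still leaves a footprint: Telegram Bot API calls all go to api.telegram.org, a host most managed endpoints have no reason to contact. Below is a minimal hunting sketch, assuming proxy logs exported as CSV; the timestamp, src_host, and url column names are assumptions to adapt to your proxy's actual export format.

```python
import csv

# The Telegram Bot API lives under this host; bot URLs embed a token,
# e.g. https://api.telegram.org/bot<TOKEN>/sendMessage.
TELEGRAM_API_HOST = "api.telegram.org"

def telegram_beacons(log_path: str) -> list[dict]:
    """Flag proxy-log rows whose destination is the Telegram Bot API.

    Assumes a CSV with 'timestamp', 'src_host', and 'url' columns;
    rename these to match whatever your proxy actually exports.
    """
    with open(log_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if TELEGRAM_API_HOST in row.get("url", "")]

for hit in telegram_beacons("proxy_logs.csv"):
    print(hit["timestamp"], hit["src_host"], hit["url"])
```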

At least 30 separate phishing domains have been identified, with operators relying heavily on Facebook and LinkedIn ads to drive traffic. These platforms, widely used by both professionals and creatives interested in AI tools, serve as ideal hunting grounds. Ads are regularly refreshed and rotated to avoid detection and takedown, with many disappearing within hours of launch.

Estimates suggest the Facebook-based campaign alone has reached over 2.3 million users in the European Union. LinkedIn, though less saturated, has recorded between 50,000 and 250,000 impressions.

The combination of realistic branding, high-quality websites, and precise ad targeting has made this campaign particularly dangerous. Users are urged to verify the legitimacy of an AI service before downloading any files and to treat social media ads promoting new AI tools with caution.
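
Part of that verification can even be automated. Below is a minimal sketch of a lookalike-domain check built on Python's difflib; the allowlist contents and the example input are illustrative assumptions, not domains taken from the campaign.

```python
import difflib

# Known-good domains of the impersonated brands (illustrative allowlist).
LEGITIMATE_DOMAINS = ["klingai.com", "lumalabs.ai", "canva.com"]

def lookalike_warning(domain: str, cutoff: float = 0.75) -> str | None:
    """Warn when a domain closely resembles, but is not, a known brand."""
    domain = domain.lower()
    if domain in LEGITIMATE_DOMAINS:
        return None  # exact allowlist match: nothing to flag
    match = difflib.get_close_matches(domain, LEGITIMATE_DOMAINS, n=1, cutoff=cutoff)
    if match:
        return f"'{domain}' resembles '{match[0]}' but is not the real site"
    return None

# Hypothetical lookalike; real campaign domains rotate constantly.
print(lookalike_warning("klingai-video.com"))
```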

The incident underscores a growing trend of cybercriminals embedding their operations within the booming generative AI ecosystem, turning user curiosity into an attack vector.