The rapid rise of AI-powered web development tools has made it easier than ever to create functional websites with minimal technical expertise. But recent research suggests that these same tools are being repurposed by cybercriminals to run large-scale phishing operations.
According to a report from cybersecurity firm Proofpoint, attackers are leveraging an AI platform called Lovable to build and deploy phishing websites at scale. Lovable, which allows users to generate websites from simple text prompts, has seen a surge in abuse, with tens of thousands of malicious URLs detected each month since early 2025.
The platform’s ease of use is a major factor. Even free-tier users can replicate public sites, swap in a new logo, and quickly launch phishing pages. This allows one phishing template to multiply into hundreds of copycat campaigns with little effort.
Proofpoint tracked several attack campaigns using Lovable-hosted websites. These included fake Microsoft login pages for credential theft, counterfeit CAPTCHA challenges, shipping company impersonations targeting payment data, and bogus DeFi platforms used to trick victims into connecting crypto wallets. Other examples involved malware disguised as legitimate software installers, which then infected users with remote access trojans.
This trend falls under a growing practice called "vibe coding," a term for building software by feeding an AI broad natural-language prompts and accepting the generated code without close review of its maintainability or security. While the approach lowers the barrier to web development, it also tends to produce poorly structured or vulnerable code that attackers can exploit.
Researchers warn that AI-generated websites often lack proper security controls and can easily be repurposed by malicious actors. Because these tools automate much of the work traditionally required to create a phishing campaign, attackers can now devote more attention to social engineering and payload delivery.
While Lovable has acknowledged the issue and says it is working on mitigation strategies, the broader concern remains: as AI tools continue to democratize coding, they also risk making cyberattacks more accessible. Security experts urge the developers of such platforms to implement stronger safeguards against misuse.