AI Models Are Now Generating Phishing Links



A troubling development in AI-driven technology is drawing attention from cybersecurity researchers: large language models (LLMs) are confidently recommending phishing pages as legitimate login portals.

According to findings from cybersecurity firm Netcraft, when users ask AI bots, such as those embedded in search tools or browsers, for login pages for well-known platforms, the models sometimes return malicious or spoofed URLs. The AI is not being tricked; it simply generates the wrong link, bypassing the verification and reputation checks that traditional search engines apply to results.

Netcraft tested this behavior across 50 major brands and found that over 34% of AI-generated responses directed users to domains not controlled by the actual companies. Some of these links led to inactive or unrelated websites, while others posed genuine phishing threats.

Unlike traditional search engines, LLMs typically present information as plain answers—with no visible context or security cues. This creates a dangerous illusion of trustworthiness, allowing malicious content to slip through undetected.

A growing trend is the mass production of fake support pages, software guides, and login flows designed specifically to be recognized—and amplified—by AI systems. Many of these sites mimic official documentation or troubleshooting pages for banking services, cryptocurrency wallets, and travel platforms.

Even more concerning, cybercriminals are generating fake tools and guides tailored to deceive AI models directly, poisoning the content that the models train on or retrieve when answering.

While some brands preemptively register deceptive domain names to block scammers, AI models can easily generate limitless variations. This makes proactive defense especially challenging.
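To make the scale of that problem concrete, here is a rough, hedged sketch of how quickly lookalike domains multiply for a single brand. The brand name, variant classes, and TLD list are all hypothetical assumptions for illustration; real typosquatting covers far more patterns than the three counted here.

```python
# Illustrative estimate of how many lookalike domains a defender would need to
# pre-register for ONE hypothetical brand. The variant classes and TLD list
# below are assumptions for illustration, not a complete threat model.
import itertools
import string

BRAND = "examplebank"  # hypothetical brand name
TLDS = [".com", ".net", ".org", ".io", ".co", ".app", ".support", ".help"]

def single_char_substitutions(name: str) -> set[str]:
    """Every variant that replaces one character with another letter or digit."""
    variants = set()
    alphabet = string.ascii_lowercase + string.digits
    for i, original in enumerate(name):
        for c in alphabet:
            if c != original:
                variants.add(name[:i] + c + name[i + 1:])
    return variants

def hyphen_insertions(name: str) -> set[str]:
    """Every variant that inserts a single hyphen inside the name."""
    return {name[:i] + "-" + name[i:] for i in range(1, len(name))}

def keyword_suffixes(name: str) -> set[str]:
    """Common phishing-style suffixes appended to the brand name."""
    return {name + "-" + s for s in ["login", "secure", "support", "verify", "account"]}

labels = single_char_substitutions(BRAND) | hyphen_insertions(BRAND) | keyword_suffixes(BRAND)
domains = {label + tld for label, tld in itertools.product(labels, TLDS)}

print(f"Variant labels for '{BRAND}': {len(labels)}")
print(f"Candidate domains across {len(TLDS)} TLDs: {len(domains)}")
```

Even these three narrow variant classes yield thousands of candidate domains for one brand, which is why pre-registration alone cannot keep pace.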

Security researchers emphasize that this is not just a technical error but a systemic risk. When AI presents a malicious link with full confidence, it compromises user safety and trust. The findings highlight an urgent need for LLMs to integrate URL verification, threat intelligence, and robust content filtering before they can serve as trusted digital advisors.
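As a minimal sketch of the kind of URL-verification guardrail described above, the snippet below checks a model-suggested link against a hand-maintained allowlist of domains each brand actually controls before the link is shown to a user. The brand names, domains, and function are hypothetical assumptions; a production system would also consult threat-intelligence feeds and real-time domain reputation, which are not modeled here.

```python
# Minimal pre-display URL check for an AI assistant, assuming a hand-maintained
# allowlist of official domains per brand. Only the allowlist step is shown.
from urllib.parse import urlparse

# Hypothetical allowlist: brand -> registrable domains the brand controls.
OFFICIAL_DOMAINS = {
    "examplebank": {"examplebank.com"},
    "exampleair": {"exampleair.com", "exampleair.co.uk"},
}

def is_official_login_url(brand: str, url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host belongs to the brand."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS.get(brand, set())
    )

# A suggested link is surfaced only if the check passes.
print(is_official_login_url("examplebank", "https://login.examplebank.com/signin"))   # True
print(is_official_login_url("examplebank", "https://examplebank-login.help/signin"))  # False
```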