Cybercrime’s New Wave: Has AI Made Attacks Unstoppable?



Cybercrime is entering a phase where speed and scale are no longer limited by human skill. For years, defenders have tracked the evolution of online crime from simple malware to organized supply-chain attacks. Today, that progression has reached a new inflection point. Artificial intelligence is now being used not just as a helper, but as a core engine that automates and industrializes cybercrime.

Recent research describes this shift as a “fifth wave” of cybercrime, defined by the weaponization of AI. Instead of relying on experienced hackers, attackers can now buy ready-made tools that turn complex techniques into push-button services. Tasks that once required technical expertise—writing phishing emails, cloning voices, or building malware—can now be handled by inexpensive AI-driven kits available on underground markets.

According to Infosecurity Magazine, one of the most visible impacts is the rise of synthetic impersonation. Deepfake video, voice cloning, and even biometric spoofing have become commodities. Investigators report that complete “synthetic identity kits” can be purchased for just a few dollars, while subscription-based deepfake services cost little more than a streaming subscription. These tools are increasingly used to bypass identity checks, trick employees into authorizing transactions, or manipulate victims in real time through live video or audio calls.

Phishing has also been transformed. AI is no longer just helping criminals draft convincing messages. New phishing platforms use autonomous agents to select targets, generate personalized lures, distribute messages, and adjust campaigns based on victim responses. This feedback loop allows attacks to evolve continuously, making them harder to detect with static defenses. Even if only a small percentage of attempts succeed, the economics remain attractive due to automation and low overhead.

More concerning is the emergence of so-called “dark” large language models. These are custom-built, self-hosted AI systems with no ethical safeguards, trained specifically on malicious code, scam language, and exploit techniques. Offered via subscription, they assist with everything from fraud scripts to vulnerability discovery, lowering the barrier to entry for serious cybercrime.

From a defense and homeland security perspective, this trend has direct implications. Critical infrastructure, government networks, and defense supply chains are increasingly targeted by campaigns that blend social engineering, automation, and AI-generated deception. The ability to impersonate trusted individuals or generate adaptive attacks at scale challenges traditional detection methods and response timelines.

The core problem is not that AI created criminal intent, but that it removed friction. What once required teams of skilled operators can now be scaled globally by a handful of individuals. For defenders, this means cybersecurity can no longer rely on manual analysis alone. Automated, AI-driven defense, stronger identity verification, and faster response mechanisms are becoming essential.

The fifth wave of cybercrime signals a structural change. As AI continues to lower costs and accelerate attacks, the balance between offense and defense will depend on how quickly protective systems adapt to match the same speed and sophistication now available to attackers.