Cyber threats are evolving rapidly as attackers increasingly harness generative AI to automate and enhance their operations. A recent report from Google’s Threat Intelligence Group highlights how advanced AI tools, particularly large language models (LLMs), are now being used to generate malware, obfuscate code, and carry out attacks with unprecedented speed and sophistication.
Researchers found that threat actors employ AI to create malicious scripts on demand and adapt them in real time to evade detection. Some attackers even pose as researchers or students to bypass LLM safeguards, enabling them to extract restricted information and develop exploit code. The rise of AI-powered malware means that traditional signature-based defenses are becoming less effective, since these attacks can continuously change and adapt.
Social engineering, amplified by AI, remains a central concern. Generative AI can produce flawless phishing messages, voice recordings, and other deceptive content, making scams much harder to identify. According to cybersecurity experts, the most effective defense is not more AI-driven protection, but improving human awareness and training. Users are advised to verify unusual requests through separate channels, implement family or workplace “safe words,” and limit the personal information publicly available online.
Some malware families, such as PROMPTLOCK, combine cross-platform ransomware with AI-generated code that executes dynamically, performing tasks like file reconnaissance, data exfiltration, and encryption on both Windows and Linux systems. These developments signal a trend toward autonomous, adaptive attacks capable of scaling across targets at machine speed.
According to Cybernews, underground marketplaces are also offering AI tools that allow less skilled threat actors to launch complex operations, further increasing the frequency and sophistication of attacks. As attackers leverage AI for reconnaissance, lure creation, and bypassing security controls, cybercrime is shifting from manual operations to automated, highly efficient campaigns.
Experts emphasize that the game has changed: AI has become a force multiplier for traditional cyberattacks, especially social engineering. While technology continues to evolve, strengthening human vigilance, awareness, and response remains the most reliable line of defense.