New Malicious “AI Worm” Threatens Generative AI Tools

New Malicious “AI Worm” Threatens Generative AI Tools

image provided by pixabay

This post is also available in: heעברית (Hebrew)

Many companies are integrating generative AI tools into their products to reduce the human workload and make hard tasks easier, which is both transformative and risky.

Many experts and researchers are currently exploring the potential threats targeting generative AI-powered models, one of which is a “worm” that could potentially steal data and deploy malware in generative AI-powered tools and applications.

The name of this “worm” is Morris II, and it was designed by researchers to exploit generative AI ecosystems, demonstrating its capacity to propagate and orchestrate malicious activities in order to show the extent of the havoc such tools could cause.

According to Interesting Engineering, the researchers named the worm Morris-II after “Morris Worm,” one of the first computer bugs ever. They explained in their study that “a computer worm is a type of malware that operates by independently spreading across computer networks, often without requiring any user interaction.” It also does not require a host program in order to spread (as opposed to computer viruses). Instead, worms exploit weaknesses in operating systems, network protocols, or applications to copy themselves and spread from one computer to another autonomously.

This new research identifies vulnerable generative AI-powered tools and shows how Morris II can manipulate them, emphasizing the importance of understanding and mitigating security risks in the evolving AI landscape.

When it comes to the capabilities of Morris II, it can spread itself through generative AI, trick the systems, send spam messages, spread lies, or take people’s personal information. The worm replicates itself by injecting malicious prompts into the generative AI models and then spreads to other agents within the ecosystem.

This research claims Morris II could affect two types of apps powered by generative AI: the apps relying on the results produced by the GenAI service to function properly (which are vulnerable to manipulation), and the apps that utilize RAG (Recurrent Aggregation of Generative Models) to enhance their GenAI queries.

The researchers concluded: “While we hope this paper’s findings will prevent the appearance of GenAI worms in the wild, we believe that GenAI worms will appear in the next few years in real products and will trigger significant and undesired outcomes.”