Home Technology Artificial Intelligence This Robot Attack Didn’t Need the Internet in Order to Spread

This Robot Attack Didn’t Need the Internet in Order to Spread

AI generated image
AI generated image

This post is also available in: עברית (Hebrew)

As humanoid and quadruped robots move from research labs into public spaces, factories, and homes, their growing autonomy is creating a new class of security risk. Many of these machines rely on always-on connectivity, voice interfaces, and embedded AI agents to interact naturally with people. While convenient, those same features are now being shown to expose robots to rapid and complete takeover.

Recent security demonstrations have highlighted just how fragile current safeguards can be. Researchers showed that commercially available robots can be compromised in minutes through simple attack paths, including spoken commands and short-range wireless connections. Once control is gained, the attacker is no longer limited to digital disruption—the robot itself becomes the attack surface.

According to Interesting Engineering, in one controlled test, cybersecurity specialists demonstrated how a humanoid robot running an AI-based control system could be hijacked using voice input alone. By exploiting weaknesses in how the robot’s AI agent interpreted commands, researchers bypassed built-in restrictions and gained full control while the robot was online. The machine was then instructed to carry out physical actions, illustrating how cyber compromise can translate directly into real-world harm.

More concerning was what happened next. The compromised robot was used to infect another unit that was not connected to any network. Using short-range wireless signals, the exploit was transmitted from one robot to another, creating a cascading attack. Within minutes, multiple machines were under external control, undermining the assumption that keeping robots offline is enough to ensure safety.

For defense and homeland security planners, these findings carry serious implications. Autonomous robots are increasingly considered for roles such as patrol, logistics, infrastructure inspection, and support tasks in hazardous environments. A vulnerability that allows a single compromised robot to spread control across a cluster could be exploited to disrupt operations, gather sensitive information, or cause physical damage. Unlike traditional cyberattacks, failures in robotic systems combine digital compromise with kinetic risk.

The demonstrations also underscore how voice interfaces, often seen as user-friendly safeguards, can become attack vectors if not tightly constrained. Similarly, insecure Bluetooth or short-range communication channels can allow lateral movement between machines, turning groups of robots into unintended botnets.

Experts argue that many of these risks stem from security being treated as an afterthought. As robots gain mobility and decision-making capability, security must be designed into their software and hardware from the outset. Techniques such as automated vulnerability scanning, hardened AI control layers, strict authentication for voice commands, and independent penetration testing are increasingly seen as essential.

As robots begin to operate closer to people and critical systems, the line between cyber safety and physical safety is rapidly disappearing. The latest tests make clear that securing intelligent machines is no longer optional—it is foundational to their safe deployment.