Running This AI on Your Work PC? Think Again

Representational image of Microsoft

This post is also available in: עברית (Hebrew)

As autonomous AI agents become more capable, they are also gaining deeper access to users’ systems. Tools designed to act on behalf of individuals—sending emails, accessing files or interacting with online services—often require extensive permissions to function effectively. That level of integration, however, introduces significant security considerations.

Microsoft has warned that its AI assistant OpenClaw should not be operated on standard personal or enterprise workstations. The system is designed to execute tasks autonomously, which requires full access to a user’s computer environment, including email accounts, local files, online services and login credentials. According to the company’s Defender Security Research Team, this means sensitive data and authentication details could potentially be exposed or exfiltrated.

Beyond data access, the assistant maintains a persistent internal state, or “memory”, that influences its ongoing behavior. According to Cyber News, researchers caution that this state can be manipulated if the agent processes malicious input, potentially causing it to follow attacker-supplied instructions over time. In addition, if the assistant is prompted to retrieve and execute external code, the host environment itself could be compromised.

The company characterizes the assistant as equivalent to running untrusted code with persistent credentials. As a result, it recommends that organizations evaluating the tool deploy it only within a fully isolated environment, such as a dedicated virtual machine or separate physical system. The runtime should use non-privileged credentials and limit access strictly to non-sensitive data. Continuous monitoring and a defined rebuild strategy are also advised.

Two primary risk categories have been identified. The first is indirect prompt injection, in which malicious instructions are embedded in content the agent reads, subtly steering its actions or altering its memory. The second is “skill malware,” referring to harmful code acquired when the agent downloads and executes external capabilities.

Recent threat intelligence findings identified tens of thousands of exposed instances of assistants online, with many potentially vulnerable to remote code execution. For defense organizations and critical infrastructure operators, the warning underscores the need to treat autonomous AI agents as high-risk components, particularly when they are granted broad system privileges.