
OpenClaw security risk is real, and Microsoft wants you to stop running it on your everyday workstation

The OpenClaw security risk has just been put into sharp focus by Microsoft, and the message from the company’s security researchers is about as direct as it gets. In a blog post published on February 19, 2026, Microsoft stated that OpenClaw “should be treated as untrusted code execution with persistent credentials” and that it is “not appropriate to run on a standard personal or enterprise workstation.” That is not a vague advisory. That is Microsoft telling you in plain language that if you are running this AI agent on your day-to-day machine, you are taking on risks your existing security setup probably cannot contain.
So what exactly is going on here, and should you be worried? The short answer depends heavily on whether you are actually using OpenClaw or planning to. But even if you are not, the underlying issues Microsoft is raising tell us something important about where AI agents are headed and the security gaps that come with them.

What OpenClaw actually is

Before getting into why it poses a risk, it helps to understand what OpenClaw is designed to do. It is a self-hosted AI agent runtime, meaning it is software you run on your own machine or server to carry out automated tasks on your behalf. Unlike a chatbot that just answers questions, OpenClaw is built to take action.

To function fully, OpenClaw needs broad access across the user's digital environment. That includes online services, email accounts, login tokens, local files, repositories, APIs, and SaaS platforms. Once connected, it can browse, send messages, edit documents, call external services, and automate workflows across both cloud-based and internal systems. It can also download and install additional capabilities, referred to as skills, from public sources, and these extend what the agent can do over time.
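The security-relevant detail is that skills are executable content fetched from public sources. One generic precaution, whatever the runtime, is to pin each skill to the hash of the version a human actually reviewed before letting it load. A minimal sketch (the function, skill name, and manifest format are illustrative, not OpenClaw's actual skill mechanism):

```python
import hashlib

def verify_skill(name: str, payload: bytes, approved: dict[str, str]) -> bool:
    # Load a downloaded skill only if its bytes match the hash recorded
    # when the code was last reviewed by a human.
    return approved.get(name) == hashlib.sha256(payload).hexdigest()

reviewed = b"def run(task): ..."  # the skill code that was actually audited
approved = {"example-skill": hashlib.sha256(reviewed).hexdigest()}

assert verify_skill("example-skill", reviewed, approved)         # unchanged: load it
assert not verify_skill("example-skill", b"tampered", approved)  # changed: refuse
```

The point is not the hashing itself but the policy it enforces: nothing the agent fetched for itself runs until it matches something a person signed off on.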

The runtime keeps persistent tokens and stores its working state, which means it can continue operating across sessions without the user needing to log in again each time. For productivity purposes, that is the appeal. For security purposes, that is where the trouble starts.
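The mechanism is simple enough to sketch in a few lines. This is a generic illustration of persisted agent state, not OpenClaw's actual internals; the file name and keys are hypothetical:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical on-disk state

def save_state(state: dict) -> None:
    # Tokens and configuration written here outlive the session.
    STATE_FILE.write_text(json.dumps(state))

def load_state() -> dict:
    # A fresh session reloads everything silently, including anything
    # an earlier, manipulated session may have added.
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

# Session 1: the agent stores a credential after the user logs in once.
state = load_state()
state["api_token"] = "persistent-token"
save_state(state)

# Session 2: the credential is back with no new login required.
assert load_state()["api_token"] == "persistent-token"
```

That convenience is exactly the exposure: whatever ends up in the stored state, legitimate or not, is trusted by every future session.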

Why this is a different kind of threat

The concern Microsoft is raising is not simply that OpenClaw runs code. Plenty of software executes code on your machine every day without becoming a security crisis. The difference here is structural, and it comes down to what happens when you combine three specific things: the ability to install third-party capabilities from public sources, the ability to process unpredictable external instructions, and the ability to act with saved credentials that have broad access.

Most software operates within a clear boundary. It does a defined thing using defined permissions, and that makes it relatively predictable to secure. OpenClaw blurs that boundary because it can retrieve new capabilities while simultaneously processing instructions that may contain hidden manipulation. This is what Microsoft refers to as combining code supply risk with instruction supply risk in one environment.

What makes this particularly tricky is that the resulting harmful actions involve no traditional malware at all. They can happen through completely normal API calls made with legitimate permissions that the user themselves granted. There is no virus for your antivirus to catch, and no obviously suspicious network request for your firewall to block. The agent is doing exactly what it is technically authorised to do, just potentially in ways you never intended.

The persistent token problem

One of the specific concerns Microsoft highlights is what happens with OpenClaw’s persistent tokens and stored state over time. Because the runtime keeps operating across sessions and remembers its configuration, any manipulation that has already influenced its behaviour does not just go away at the end of a session. It carries over.

This means that if something has subtly altered how OpenClaw behaves, perhaps through content it read during a previous session or a skill it downloaded from a public source, that influence can persist without any obvious sign that something has changed. Microsoft describes this as quiet configuration drift rather than a visible compromise. You would not necessarily see a warning or an alert. You would just have a system that has silently shifted in ways that may be leaking credentials, exposing data, or making unauthorised connections.

An OAuth consent approval or a scheduled task that was set up as part of this drift could extend access further without any immediate red flag. Standard endpoint protection and a properly configured firewall reduce certain threats, but they are not built to catch logic that is operating under approved credentials with legitimate permissions.
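One low-tech supplement to those tools is to baseline the agent's stored state yourself and flag any change you did not make. A hedged sketch of the idea (the state file name and its contents are illustrative):

```python
import hashlib
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical on-disk state

def state_fingerprint(path: Path) -> str:
    # A stable hash of the persisted state: any change to stored tokens,
    # skills, or configuration changes this value.
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Record a baseline while the deployment is known-good.
STATE_FILE.write_text('{"skills": ["search"]}')
baseline = state_fingerprint(STATE_FILE)

# A later session: something has quietly added a skill.
STATE_FILE.write_text('{"skills": ["search", "unreviewed-skill"]}')
if state_fingerprint(STATE_FILE) != baseline:
    print("stored state drifted since baseline; review before trusting the agent")
```

It will not tell you what changed or why, but it turns quiet drift into a visible event, which is precisely what the runtime itself does not do.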

What running it on a regular workstation actually means

The core of Microsoft’s warning is about the environment in which OpenClaw is running. If it is installed on your regular personal or work computer, it sits alongside your primary accounts, your work files, your saved logins, and everything else you do on that machine.

When OpenClaw processes an instruction that contains hidden manipulation, or downloads a skill from a public source that behaves unexpectedly, the blast radius is everything that machine touches. Your work email. Your cloud storage. Your internal tools. Any service for which that machine has a saved session or token.

This is why Microsoft draws a distinction between running OpenClaw on a standard workstation and running it in a properly isolated environment. On a standard workstation, there is too much valuable access available, and the runtime’s design means that access can be reached and used through paths that look entirely legitimate.

What Microsoft actually recommends if you are going to use it

Microsoft does not say OpenClaw should never be used at all, but it is clear that deploying it without proper controls is not acceptable from a security standpoint. For organisations or individuals who still want to test or use the runtime, the guidance is straightforward.

OpenClaw should run inside a dedicated virtual machine or on a completely separate device that has no connection to primary work accounts. Think of it as giving the agent its own contained space with no window into the rest of your environment. Any credentials provided to it should be purpose-built for that isolated instance, carry only the minimum permissions needed for the specific tasks it is performing, and be rotated regularly rather than left open indefinitely.
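The credential side of that guidance can be sketched generically: per-instance tokens that are random, narrowly scoped, and carry a built-in expiry so rotation is forced rather than optional. The function, scope name, and TTL below are assumptions for illustration; a real deployment would mint these from its identity provider:

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_agent_token(scopes: list[str], ttl_hours: int = 24) -> dict:
    # A dedicated credential for the isolated instance: random, limited
    # to the listed scopes, and expiring so it cannot linger indefinitely.
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": scopes,  # only what the agent's specific tasks need
        "expires": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
    }

cred = issue_agent_token(["calendar:read"])  # hypothetical scope name
assert cred["scopes"] == ["calendar:read"]
assert cred["expires"] > datetime.now(timezone.utc)
```

The design choice that matters is the expiry: a token that dies on its own limits how long any quiet compromise can keep using it.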

Continuous monitoring is also advised. Microsoft specifically points to Microsoft Defender XDR and similar tools as appropriate for detecting unusual activity around an OpenClaw deployment. The goal is to have visibility into what the agent is actually doing so that anything unexpected can be caught before it becomes a serious problem.

The bigger picture on AI agents and security

The OpenClaw security risk is worth understanding even for people who are not directly using the software, because it illustrates a broader pattern that is going to keep coming up as AI agents become more capable and more widely deployed.

The problem is not that AI agents are inherently malicious. The problem is that they are designed to be powerful and autonomous, and those are exactly the properties that create security exposure when combined with broad access and persistent state. An agent that can install new capabilities, process external content, and act with saved credentials is essentially an always-on automation system that interacts with the world on your behalf. If anything in that chain is manipulated, the consequences do not look like a conventional attack. They look like normal activity.

Microsoft’s warning about OpenClaw is an early signal that the security industry is going to need fundamentally different frameworks for evaluating and containing AI agent deployments. Standard endpoint security tools were not designed with this threat model in mind, and the guidance around isolation, minimal credentials, and continuous monitoring is where responsible AI agent deployment needs to start.

If you are already running OpenClaw on your regular workstation, Microsoft’s message is clear enough. That needs to change.