Your Users Already Installed It
Over the weekend of January 24, 2026, an open-source AI assistant called ClawdBot went viral. Within days, thousands of users had it running on their machines. By the following Wednesday, infostealers had added it to their target lists, fake VS Code extensions were distributing remote access trojans, and security researchers had found three CVEs allowing remote code execution.
The tool has since been renamed twice — first to Moltbot after a trademark dispute, then to OpenClaw — but the security problems followed every rename. If you manage a Microsoft 365 environment, this is not an abstract AI ethics debate. This is a concrete identity and access risk sitting on your users' endpoints right now.
What Is ClawdBot (and Why Should You Care)?
ClawdBot is a local-first AI agent that runs on a user's machine and interacts with their files, email, calendar, and cloud services. Think of it as an AI assistant with the same permissions as the person running it — including access to saved credentials, browser sessions, and API tokens for services like Microsoft 365.
The appeal is obvious: it automates tasks, reads your email, manages your calendar, and writes code. The security model, however, was not built for corporate environments.
Here is what researchers found:
Plaintext Credential Storage
ClawdBot stores everything — memory files, conversation history, API tokens, and user configurations — in plaintext Markdown and JSON files in the user's home directory. VPN configurations, corporate credentials, OAuth tokens for Microsoft 365 and other cloud services — all sitting unencrypted on disk.
For any infostealer malware already on the system, this is a gift. No need to hook browser processes or intercept authentication flows. Just read the files.
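You can audit your own endpoints for this exposure before an infostealer does. Below is a minimal sketch that walks a directory tree and flags Markdown/JSON files containing credential-like field names; the specific key names ClawdBot uses are an assumption here, so tune the pattern for what you actually find on disk.

```python
import re
from pathlib import Path

# Field names that commonly indicate stored secrets. The exact keys
# ClawdBot writes are NOT documented here -- this list is an assumption.
SECRET_KEYS = re.compile(
    r"(access_token|refresh_token|api_key|client_secret)", re.IGNORECASE
)

def find_plaintext_secrets(root: Path) -> list[Path]:
    """Flag Markdown/JSON files under `root` that appear to hold credentials."""
    hits = []
    for path in root.rglob("*"):
        if path.suffix not in {".json", ".md"} or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the sweep
        if SECRET_KEYS.search(text):
            hits.append(path)
    return hits
```

Pointed at a user's home directory (or the `~/.clawdbot/`-style data directories discussed below), anything this flags is material an infostealer can exfiltrate with a single file read.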
Remote Code Execution
Security researchers identified CVE-2026-25253, a remote code execution vulnerability in the gateway component. Attackers could execute commands on the host system with the same permissions as the user. Combined with CVE-2026-24763 and CVE-2026-25157 (both command injection flaws), the attack surface was substantial.
Exposed Control Interfaces
Many users deployed ClawdBot's web interface without password protection, leaving it publicly accessible. A separate flaw in localhost connection handling allowed attackers to bypass authentication when the tool was deployed behind common reverse proxies like Nginx.
Malicious Extensions
Researchers found hundreds of malicious skills in the ClawHub repository and fake "ClawdBot Agent" VS Code extensions that installed ScreenConnect RAT on victim machines.
Why This Is a Microsoft 365 Problem
You might be thinking: "We didn't approve this tool." That is exactly the point. Shadow IT does not wait for approval.
Your Users' Tokens Are the Target
When ClawdBot connects to Microsoft 365 services — reading email, managing calendars, accessing OneDrive — it uses OAuth tokens. Those tokens are stored locally on the user's machine. If any of the vulnerabilities above are exploited, attackers get:
- Access tokens for Microsoft Graph API — read email, enumerate users, access SharePoint
- Refresh tokens — persistent access that survives password changes
- Service principal credentials — if developers configured app registrations for the agent
This is not theoretical. The RedLine, Lumma, and Vidar infostealer families updated their payloads within 48 hours of ClawdBot's viral moment, specifically targeting the directories where it stores credentials.
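If you recover a token from one of these directories during an investigation, you can triage what it unlocks by decoding its payload. A sketch, assuming a standard JWT-format Microsoft Graph access token carrying the delegated-permissions `scp` claim:

```python
import base64
import json

def token_scopes(jwt: str) -> list[str]:
    """Return the delegated scopes (`scp` claim) from a JWT access token.

    Decodes the payload WITHOUT signature verification -- this is for
    triaging a token found on disk, not for trusting one.
    """
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("scp", "").split()
```

A token scoped to `Mail.Read Files.ReadWrite` tells you immediately which workloads the attacker could reach; pair the `oid` and `tid` claims with your sign-in logs to scope the blast radius.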
Prompt Injection Reaches Your Tenant
ClawdBot reads email, chat messages, and web content to perform its tasks. This creates a prompt injection attack surface: an attacker sends a carefully crafted email, and the AI agent — running with the user's full permissions — follows the embedded instructions. That could mean exfiltrating files, forwarding emails, or modifying settings.
Consent Grant Sprawl
Users connecting ClawdBot to Microsoft 365 may have granted OAuth consent to app registrations you don't control. These consent grants persist even if the user stops using the tool, creating dormant access paths into your tenant.
What You Should Do This Week
1. Discover What Is Running
Check for ClawdBot, Moltbot, or OpenClaw processes across your managed endpoints. If you have an EDR solution, query for:
- Processes named `clawdbot`, `moltbot`, or `openclaw`
- Files in `~/.clawdbot/`, `~/.moltbot/`, or `~/clawd/` directories
- Network connections to ClawdBot gateway ports (typically 3000-3100)
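If you lack an EDR query language for this, the same indicator check is easy to script. A minimal sketch that matches a process list and a home directory against the indicators above (the names come from this advisory; extend them as the tool is renamed again):

```python
from pathlib import Path

# Indicators drawn from the advisory above -- adjust for your environment.
PROCESS_NAMES = {"clawdbot", "moltbot", "openclaw"}
DATA_DIRS = [".clawdbot", ".moltbot", "clawd"]

def check_endpoint(running_processes: list[str], home: Path) -> dict[str, list[str]]:
    """Return discovery hits: matching process names and data directories."""
    procs = [p for p in running_processes if p.lower() in PROCESS_NAMES]
    dirs = [d for d in DATA_DIRS if (home / d).is_dir()]
    return {"processes": procs, "directories": dirs}
```

Feed it the output of your inventory tooling per host; any non-empty result is a host that needs the consent-grant and token reviews below.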
2. Audit OAuth Consent Grants
In the Entra ID portal, review enterprise applications and user consent grants. Look for unfamiliar app registrations that request Mail.Read, Files.ReadWrite, or other sensitive Microsoft Graph permissions. Revoke anything you did not explicitly authorize.
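At scale, clicking through the portal does not work; you would pull the grants via Microsoft Graph (the `oauth2PermissionGrants` resource) and filter programmatically. A sketch of the filtering step over already-fetched grants, with an illustrative sensitive-scope list you should expand for your tenant:

```python
# Illustrative starting set -- expand with the Graph permissions
# you consider sensitive in your tenant.
SENSITIVE_SCOPES = {
    "Mail.Read", "Mail.ReadWrite", "Files.ReadWrite",
    "Files.ReadWrite.All", "Directory.Read.All",
}

def risky_grants(grants: list[dict]) -> list[dict]:
    """Return consent grants whose space-delimited scope string includes
    a sensitive Graph permission. Each grant dict mimics the Graph
    oauth2PermissionGrants shape: {"clientId": ..., "scope": "A B C"}."""
    flagged = []
    for grant in grants:
        scopes = set(grant.get("scope", "").split())
        if scopes & SENSITIVE_SCOPES:
            flagged.append(grant)
    return flagged
```

Anything flagged that you cannot map to an approved application is a revocation candidate.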
3. Review Conditional Access Coverage
Ensure your Conditional Access policies cover token-based access, not just interactive sign-ins. Specifically:
- Require compliant devices for token access
- Block legacy authentication protocols
- Set token lifetime policies to limit the window of exposure
4. Restrict User Consent
If you have not already, restrict user consent for applications in Entra ID to admin-approved apps only. This is the single most effective control against unauthorized AI tool integrations.
5. Monitor for Privileged Access Changes
The real danger is not the tool itself — it is what happens after credentials are stolen. Watch for:
- New Global Admin or Privileged Role Admin assignments
- Unexpected service principal creations
- Changes to Conditional Access policies
- New OAuth app registrations with high-privilege scopes
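The first bullet above reduces to a set difference: current assignments minus an approved baseline, restricted to high-privilege roles. A minimal sketch, assuming `(principal, role)` tuples; in practice both sets would be populated from the Graph role-management APIs:

```python
# Roles to alert on -- extend with any role you treat as privileged.
PRIVILEGED_ROLES = {"Global Administrator", "Privileged Role Administrator"}

def new_privileged_assignments(
    baseline: set[tuple[str, str]],
    current: set[tuple[str, str]],
) -> set[tuple[str, str]]:
    """Return (principal, role) pairs present now but absent from the
    approved baseline, restricted to high-privilege roles."""
    return {(p, r) for (p, r) in current - baseline if r in PRIVILEGED_ROLES}
```

Run on a schedule, every non-empty result is either an undocumented change to investigate or a baseline update to record deliberately.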
The Bigger Pattern: AI Agents Are the New Shadow IT
ClawdBot is not unique. It is the first viral example of a pattern that will repeat throughout 2026 and beyond. AI agents that need broad permissions to be useful will keep appearing, and your users will keep installing them — because they make people dramatically more productive.
The old approach of blocking every unapproved tool does not scale against this. You need:
Visibility into what is actually happening in your tenant. Not what your policies say should happen, but what is actually configured right now. Are there consent grants you did not authorize? Role assignments that should not exist? Conditional Access policies that were quietly modified?
Continuous monitoring that catches drift from your intended state. A user granting OAuth consent to an AI agent creates a configuration change. An infostealer using a stolen token to add a service principal creates another one. If you are only checking your tenant configuration quarterly — or worse, only when something goes wrong — these changes compound silently.
This is exactly what Desired State Configuration is built for. You define how your Microsoft 365 tenant should be configured — which roles should exist, which consent grants are authorized, which Conditional Access policies should be active — and continuously monitor for deviations. When an AI agent introduces an unauthorized consent grant or a stolen token leads to a new role assignment, you see it immediately.
TrueConfig monitors your Microsoft 365 privileged access configuration against your defined baseline and alerts you when something changes. In a world where your users are installing AI agents faster than your policies can keep up, knowing your actual state versus your desired state is the difference between catching a breach early and discovering it months later.
Sources
- Security Boulevard, "From ClawdBot to Moltbot to OpenClaw: 6 Immediate Hardening Steps"
- The Register, "ClawdBot becomes Moltbot, but can't shed security concerns"
- VentureBeat, "Infostealers added ClawdBot to their target lists before most security teams knew it was running"
- InfoStealers, "ClawdBot: The New Primary Target for Infostealers in the AI Era"
- Aikido, "Fake ClawdBot VS Code Extension Installs ScreenConnect RAT"
- Tenable, "Agentic AI Security: How to Mitigate ClawdBot Vulnerabilities"