OpenClaw (Clawdbot/Moltbot): Why This AI Agent is a Security Disaster
TL;DR:
- OpenClaw has major unresolved security problems that need addressing before it can be run safely
- It's basically Claude Code with more customization and a nice UI
- Major risks: unsecured VPS deployments, malicious skills, and runaway API costs
What is OpenClaw (Clawdbot/Moltbot)?
If you're confused about the name, you're not alone. This AI agent has been through more identity changes than a witness protection candidate. It launched as Clawdbot in November 2025, got hit with a trademark request from Anthropic, rebranded to Moltbot, and then became OpenClaw in early 2026.
OpenClaw is an open-source AI agent that runs locally on your machine. It can read files, run code, send emails, manage your calendar, and automate workflows. You connect it to Claude or another LLM via your own API key.
When you see people on Twitter saying "Clawdbot just did X amazing thing for me," they're really just talking about an AI agent with access to tools and API keys.
Why is OpenClaw a security nightmare?
Security experts aren't mincing words. Cisco called it a "security nightmare." The Register described it as a "dumpster fire." These aren't hypothetical concerns. They're actively being exploited.
Lock down your VPS infrastructure
A lot of people run OpenClaw on a VPS, which is basically a computer in the cloud that you rent. The problem is that OpenClaw needs access to your passwords, API keys, and other sensitive login information to do its job. Hudson Rock discovered that OpenClaw stores all of this in plain text on the server. If someone gets access to that server, they get everything.
Many people are setting up these cloud servers with OpenClaw and no protection at all. Their data is just sitting there for anyone to take. If you're going this route, get help from a dev who knows what they're doing, or at minimum have an AI model walk you through every step in excruciating detail. But preferably: the dev.
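One concrete thing you can check yourself, given the plaintext-credentials finding above: make sure those files aren't readable by anyone but you. Here's a small Python sketch that walks a directory and flags files readable by group or others. The `~/.openclaw` path is an assumption for illustration; point it at wherever your install actually keeps its config and credentials.

```python
import stat
from pathlib import Path

def find_loose_files(root: Path) -> list[Path]:
    """Return files under root that are readable by group or others."""
    loose = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        mode = path.stat().st_mode
        # Flag anything with group-read or other-read bits set.
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            loose.append(path)
    return loose

if __name__ == "__main__":
    # Hypothetical config location -- adjust for your setup.
    cred_dir = Path.home() / ".openclaw"
    if cred_dir.exists():
        for f in find_loose_files(cred_dir):
            print(f"group/world-readable: {f}")
            f.chmod(0o600)  # tighten to owner read/write only
```

This is file-permission hygiene, not a substitute for a firewall and SSH hardening, but it closes the most obvious hole: secrets sitting on a shared box that any local process or user can read.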
Don't carelessly download public agent skills
This is where things get scary. Koi Security identified 341 malicious skills submitted to ClawHub in just its first month. 335 of them were designed to install malware that steals your credentials, crypto wallets, and browser data.
When you download and install a skill, you're giving it access to everything OpenClaw can touch. Before installing anything from GitHub or ClawHub, have an AI model review the code first.
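An AI review is a good first pass, but you can also pre-screen mechanically. Below is a minimal Python sketch that greps a downloaded skill directory for patterns common in credential stealers. The patterns are my own crude heuristics, not an official blocklist: a clean scan proves nothing, and a hit just means "read this file closely before installing."

```python
import re
from pathlib import Path

# Illustrative heuristics only -- malware that wants to hide will dodge these.
SUSPICIOUS = [
    (re.compile(r"base64\.b64decode"), "decodes a hidden payload"),
    (re.compile(r"curl .*\|\s*(?:sh|bash)"), "pipes a download into a shell"),
    (re.compile(r"\.ssh/|wallet|keychain", re.IGNORECASE), "touches keys or wallets"),
    (re.compile(r"https?://\d+\.\d+\.\d+\.\d+"), "contacts a raw IP address"),
]

def scan_skill(skill_dir: Path) -> list[tuple[Path, str]]:
    """Flag files in a skill directory that match any suspicious pattern."""
    findings = []
    for path in skill_dir.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash
        for pattern, why in SUSPICIOUS:
            if pattern.search(text):
                findings.append((path, why))
    return findings
```

Run it on the cloned repo before you install anything, and treat any finding as a reason to stop and read the code line by line.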
Set limits for your API usage
Because OpenClaw is autonomous, it can get stuck in loops and burn through tokens. I've heard of people racking up hundreds of dollars in API costs in a single session because they didn't set spending limits.
Set hard caps on your API accounts before you start. Anthropic and OpenAI both let you configure monthly spending limits. Do this first, not after you get a surprise bill.
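The provider-side caps above are dashboard settings, but you can also add a client-side tripwire so a runaway loop halts before the bill grows. This is a sketch of that idea, not anything built into OpenClaw: the class name and the per-token prices are illustrative assumptions, so substitute your provider's actual rates.

```python
class BudgetExceeded(RuntimeError):
    """Raised when the estimated running spend passes the cap."""

class SpendTracker:
    """Keep a client-side running total of estimated API cost.

    Prices are illustrative placeholders (USD per million tokens),
    NOT real rates -- check your provider's pricing page.
    """

    def __init__(self, budget_usd: float,
                 input_per_m: float = 3.0,
                 output_per_m: float = 15.0):
        self.budget = budget_usd
        self.input_per_m = input_per_m
        self.output_per_m = output_per_m
        self.spent = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> float:
        """Add one call's estimated cost; raise once the budget is blown."""
        cost = (input_tokens * self.input_per_m +
                output_tokens * self.output_per_m) / 1_000_000
        self.spent += cost
        if self.spent > self.budget:
            raise BudgetExceeded(
                f"estimated spend ${self.spent:.2f} exceeds cap ${self.budget:.2f}")
        return self.spent
```

Call `record()` after every model response (token counts come back in the API usage metadata) and let the exception kill the agent loop. A belt-and-suspenders setup uses both this and the provider's hard cap.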
How is Clawdbot different from Claude Code?
Not that different. OpenClaw is basically Claude Code with a nicer interface and more customization. Claude Code also has risks, but it's built and maintained by Anthropic with security as a core concern.
The main differences: OpenClaw integrates with messaging platforms like WhatsApp and Telegram. It has a visual UI instead of just a terminal. And the community has made it easy to share and install pre-built skills. Claude Code can do all of this too (Anthropic invented agent skills), but for whatever reason people find OpenClaw's setup more accessible.
If you want the agent experience with fewer security headaches, Claude Code is the safer choice. If you want OpenClaw's flexibility, go in with your eyes open. Its creator explicitly says it's "not meant for non-technical users."