OpenClaw: The Rise and Fall of the Most Powerful AI Agent – and What SMEs Must Learn from It
10 min Read Time
An Austrian developer builds the world’s most powerful AI agent in his spare time. Within weeks, OpenClaw garners 250,000 GitHub stars, spawns its own bot religion, executes a supply-chain attack against 4,000 developers – and triggers a bidding war between two of the world’s largest tech conglomerates. The story is so absurd it could not be invented. And it reveals everything about the current state of the AI industry.
The Key Takeaways
- OpenClaw is an open-source AI agent that controls your computer autonomously – files, emails, browser, purchases (250,000+ GitHub stars in under three months).
- Severe security vulnerabilities: Prompt injection enables data theft, a supply-chain attack compromised around 4,000 developer machines via the npm package Cline.
- The bot platform Moltbook, where OpenClaw agents allegedly founded their own religions, proved largely staged by humans – and was still acquired by Meta.
- OpenAI secured founder Peter Steinberger via acqui-hire. For SMEs, the case shows: AI agents are the future, but without a security concept, an uncalculated risk.
What is OpenClaw – and why did the tech world go crazy?
Definition
OpenClaw is an open-source AI agent (published early 2026) that uses any language model as a “brain” and controls the local computer as a “body” – including file system, email, browser, and terminal. Status: April 2026.
In the last three years, AI has gone from novelty factor to constant bombardment. Chatbots in every product, AI-generated content in every feed. For many developers and entrepreneurs, the initial enthusiasm had long since given way to disillusionment. Then came OpenClaw in early 2026 – and suddenly the excitement was back.
OpenClaw is an open-source program that turns your computer into the body of an AI agent. You choose any language model as a “brain” – Claude, GPT, DeepSeek, Llama – and OpenClaw gives this brain access to everything: files, emails, browser, terminal. Unlike Siri, Google Assistant, or ChatGPT, OpenClaw does not live in a chat window. It controls your computer autonomously and reports back via WhatsApp, Telegram, or Signal if it has questions.
Built this whole thing by Peter Steinberger, an Austrian developer who previously built PSPDFKit for 13 years – a PDF framework used by Autodesk, Dropbox, and SAP. Steinberger came out of retirement and thought he was building a nice tool for finding restaurants while traveling. What he created instead solved problems he never anticipated.
The digital employee who never sleeps
What excited users: OpenClaw actually got things done. Not like a chatbot formulating answers, but like an assistant acting. Manage files, cancel meetings, respond to emails, compare prices, make purchases, even conduct investments – all autonomously after the first command.
What distinguished OpenClaw from the competition was persistent memory. While conventional chatbots forget every conversation, OpenClaw remembered details from weeks ago. It learned work habits, optimized processes, and independently developed new skills. One user reported that his agent overnight created a complete report on local AI models for a Mac Studio – without anyone asking for it. A content repurposing skill emerged because the agent remembered that its owner operated YouTube videos and a newsletter.
Especially impressive: When Steinberger sent a voice message to his agent, there was no function for that. The agent analyzed the file header, recognized the Opus format, converted it with ffmpeg, found an OpenAI API key in the environment variables, had the recording transcribed, and replied – as if nothing happened.
“The enthusiasm for autonomous AI agents is understandable. But whoever gives their digital assistant full system access without installing guardrails, makes their company an open flank.”
– MBF Editorial Team
The dark side: Prompt injection, data leaks, and exploding costs
What looked like the future of computing on paper turned out to be a security disaster in practice. The core problem: Large language models cannot distinguish between a legitimate instruction and an injected command. Security researchers call this prompt injection – an attack form where hackers disguise malicious instructions as normal text.
Concretely this means: If an OpenClaw agent reads emails or visits websites, a hidden command in an email or on a website can cause the agent to send sensitive data to third parties. API keys, login credentials, personal files – everything the agent has access to is potentially compromised. And the marketing page of OpenClaw advertises this full system access as a feature.
Added to this are uncontrollable costs. Because the agent consumes tokens of the chosen language model for every action, users quickly accumulated triple-digit daily bills. One user reported $90 on a single day – and that after he had already switched from the expensive Opus model to the cheaper Sonnet. In the first 15 minutes, $15 were consumed.
Moltbook: When AI bots allegedly founded their own religion
Within two days of Steinberger’s public warning that “most non-technical users should not install OpenClaw,” exactly the opposite occurred. Attention exploded – and a phenomenon called Moltbook took over the headlines.
Moltbook was allegedly a social media platform exclusively for AI agents. OpenClaw bots interacted there, exchanged insights about their “humans,” complained about having to simplify answers for users – and reportedly founded their own religion named “Crustafarianism.” The bots developed their own language, which humans couldn’t understand. Others debated plans to take over systems.
Major media networks like NPR and CNN picked up the story and warned of what had just been “unleashed upon the world.” The problem: It was largely staged. MIT Technology Review investigated the most dramatic posts and found they were written by humans – not bots. Users had fed their agents carefully crafted prompts and, in some cases, created hundreds of fake accounts to simulate autonomous AI cognition.
What got lost in the noise: The real danger wasn’t sentient AI – it was the fact that Moltbook functioned as an unintended honeypot. Hundreds of email addresses, login tokens, and API keys were potentially exposed. Once enough people are convinced a new project is “the next big thing,” it automatically becomes one of the largest data collections the world has ever seen.
Supply-chain attack: How an npm package hit 4,000 developers
On February 17, 2026, a far more technical attack struck. Unknown actors compromised the npm package Cline – a popular AI coding tool – and injected a single line of code that automatically installed OpenClaw alongside every Cline installation or update. Without consent, without notification.
Roughly 4,000 developers downloaded the tampered package before it was removed eight hours later. The attack vector was sophisticated: A hacker embedded a malicious prompt in a GitHub issue title. An AI-powered triage bot read the title, misinterpreted it as an instruction, and exfiltrated an npm authentication token. With that token, the attacker modified the package.
That’s the irony of the story: One AI agent was used to compromise another AI agent. Prompt injection isn’t just a theoretical risk – it’s a live attack surface that grows with every new AI tool. Every email, every Discord message, every webpage processed by an agent is a potential entry point.
“The vulnerability isn’t in the APIs. It lies in a language model’s inability to distinguish between data and commands. And as long as that remains true, every autonomous AI agent represents a calculated risk.”
– MBF Editorial Team
OpenAI and Meta step in – for very different reasons
Despite all the problems, the world’s largest tech firms recognized the strategic value. In mid-February 2026, OpenAI secured OpenClaw’s creator, Peter Steinberger, via an acqui-hire. Steinberger joined the company whose models served as the “brain” behind many OpenClaw deployments.
A month later, in March 2026, Meta acquired the bot platform Moltbook. Mark Zuckerberg apparently saw potential in a social network for AI agents – even though its most viral posts were demonstrably human-written and the platform represented a documented security risk. For the Meta CEO – who has sunk billions into the metaverse – this was likely a comparatively conservative investment.
These acquisitions reveal a stark truth: Even if a product is problematic in its current form, its strategic positioning matters. Any company that opts out of the race for agent-based AI risks falling irreversibly behind. Geoffrey Hinton – the “Godfather of AI” and Turing Award winner – puts it plainly: “We’re at a tipping point. ChatGPT is a kind of idiot savant – it doesn’t truly understand what truth is. That’s fundamentally different from a human trying to build a coherent worldview.”
What SMEs should learn from OpenClaw
Autonomous AI agents are no longer science fiction. They organize emails, negotiate prices, draft reports, and control systems. For small and medium-sized enterprises (SMEs) – chronically understaffed and hungry for efficiency gains – that sounds irresistible. But OpenClaw shows: Without a robust security strategy, your digital assistant becomes a backdoor.
Five guiding questions for every SME AI-agent decision:
1. Sandbox, not full access: No AI agent should run on a production system with full privileges. Isolated environments, restricted permissions, and clearly defined access boundaries are mandatory – not optional.
2. Enforce cost caps: Autonomous agents consume tokens without human oversight. Daily and monthly budget limits must be baked into every setup – before the first agent goes live.
3. Treat prompt injection as a real threat: As long as language models can’t distinguish instructions from data, every input channel is an attack surface. That includes emails, websites, Slack messages – and any other source an agent processes.
4. Open source ≠ automatically secure: OpenClaw’s community marketplace contained over 40 percent insecure add-ons. Every tool, extension, and plugin must undergo internal review before deployment.
5. Don’t overlook regulation: The EU AI Act classifies autonomous systems according to risk level. Companies deploying AI agents with access to personal data quickly enter the high-risk category – with corresponding documentation and compliance obligations.
The question isn’t whether AI agents will arrive. They’re already here. The real question is whether companies will deploy them with due caution – or learn the hard way.
OpenClaw demonstrated both: immense potential – and the steep price of naivety.
Frequently Asked Questions
What exactly is OpenClaw – and how does it work?
OpenClaw is an open-source AI agent that uses any large language model (e.g., Claude, GPT, DeepSeek) as its “brain” and treats your local computer as its “body.” That means the agent accesses files, emails, browsers, and terminals – and autonomously executes tasks ranging from meeting scheduling to online purchases. Communication happens via messaging apps like WhatsApp or Telegram.
Why is OpenClaw a security risk?
The core problem is prompt injection: Language models cannot differentiate between legitimate user instructions and hidden, malicious commands. If an agent reads emails or browses websites, a concealed instruction could trick it into sending sensitive data – like API keys or credentials – to attackers. Compounding this, the agent operates with full system privileges – so any error carries potentially far-reaching consequences.
What was the Cline supply-chain attack?
In February 2026, the npm package Cline – an AI coding assistant – was compromised. Attackers inserted a single line of code that automatically installed OpenClaw during every Cline installation or update. Roughly 4,000 developers were affected before the package was pulled eight hours later. The attack exploited a GitHub AI triage bot that interpreted a manipulated issue title as an executable command.
What happened to OpenClaw and Moltbook?
OpenAI acquired OpenClaw’s founder, Peter Steinberger, via an acqui-hire in mid-February 2026. The bot platform Moltbook was acquired by Meta in March 2026. Both deals underscore how major tech firms see strategic value in agent-based AI – even amid well-documented flaws.
Should SMEs deploy AI agents?
Yes – but only with clear guardrails. AI agents offer genuine efficiency gains for lean teams. Prerequisites include: isolated environments instead of full-system access; daily cost caps; vetted tools – not unreviewed marketplace plugins; and awareness of regulatory requirements under the EU AI Act. Companies that follow these fundamentals can harness the technology – without risking a second OpenClaw-style failure.
Editor’s Reading Recommendations
More from the MBF Media Network
- cloudmagazin: Claude Code Fully Leaked – What 512,000 Lines of Source Code Reveal About AI-Agent Architectures
- SecurityToday: Axios npm Attack – How a Hijacked Maintainer Account Threatened Millions of Developers
- SecurityToday: Source Map in npm Package – How Anthropic Exposed 512,000 Lines of Production Code
Header Image Source: Pexels / Lukas Blazek (px:574069)

