Given that OpenClaw has already been around for roughly two weeks, this article might be a bit slow to market. It is nonetheless useful to reflect, I think, on what this lobster-themed, open-source personal assistant app is and what implications it might hold for law firm business models. To me, the OpenClaw saga (so far) yields three core lessons:

  • The speed with which appealing digital/AI phenomena can emerge, go viral and capture the public imagination, despite obvious flaws.
  • The ease and speed with which bad actors can exploit something as routine as a change of social media handles to commit fraud.
  • The reactions of mainstream players, which align well with classic Christensenian disruptive innovation theory: market leaders initially deride disruptive innovations as poor quality, error-prone, suitable only for low-value applications, and/or not what clients want … until these improve and climb the value ladder to displace those market leaders. (This also applies to Anthropic’s new legal plugin, released two days ago.)

What OpenClaw is and does

OpenClaw is a genuinely impressive manifestation of a simple idea: an AI assistant that goes far beyond chatting, to executing real actions on one’s computer, through the apps one already uses.

In more technical terms, it is an open source orchestration layer that sits on one’s local computer and connects an LLM to one’s messaging apps, calendar, printer and other tools. One can text OpenClaw as one would another person. The app remembers one’s conversations from weeks ago and can send one proactive reminders. If one gives it permission to do so, it automates tasks, runs commands and essentially behaves like a digital personal assistant that knows everything about one and one’s work, and never sleeps.
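To make the “orchestration layer” idea concrete, here is a minimal sketch of the pattern in Python. Every name in it (the tool registry, the `fake_llm` stand-in, the two toy tools) is an illustrative invention for this article, not OpenClaw’s actual code or API: the model chooses a tool, and a small local loop dispatches that choice to an action on the user’s machine.

```python
# Hypothetical sketch of an agent orchestration loop.
# None of these names come from OpenClaw; they only illustrate the pattern.

def add_calendar_event(title: str) -> str:
    """Stand-in for a real calendar integration."""
    return f"Added '{title}' to calendar"

def send_message(text: str) -> str:
    """Stand-in for a real messaging integration."""
    return f"Sent message: {text}"

# Registry of local actions the agent is permitted to invoke.
TOOLS = {
    "calendar.add": add_calendar_event,
    "message.send": send_message,
}

def fake_llm(user_request: str) -> dict:
    """Stand-in for a real model call; returns a tool-use decision."""
    if "remind" in user_request:
        return {"tool": "calendar.add", "arg": "Call the restaurant"}
    return {"tool": "message.send", "arg": user_request}

def run_agent(user_request: str) -> str:
    """One turn of the loop: ask the model, then dispatch its chosen tool."""
    decision = fake_llm(user_request)
    tool = TOOLS[decision["tool"]]
    return tool(decision["arg"])

print(run_agent("remind me to book dinner"))
```

The security implications discussed later in this article follow directly from this shape: whatever appears in the tool registry, the model can trigger.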

Use cases appear to be multiplying exponentially as integration into everyday stacks makes OpenClaw feel less like software and more like part of one’s routine. One user reported using OpenClaw to book a table at a particular restaurant. Upon discovering no online availability, the app autonomously searched for and downloaded a voice program, then used it to call the restaurant and make the booking verbally.

From Silicon Valley to Shenzhen, OpenClaw is suddenly one of the most talked-about topics in AI. Some say it could do for general use of agentic AI what ChatGPT did for LLMs. But it also provides a case study in the security vulnerabilities that agentic AI entails: just as LLMs hallucinate, agentic AI carries the risk of serious missteps.

The heist

Anthropic objected to the original names “Clawd” and “Clawdbot” because it considered them too similar to its own AI, Claude. Peter Steinberger, the Austrian developer behind OpenClaw, obliged by changing the name to Moltbot (lobsters molt as they grow) and switching the Twitter/X handle from @clawdbot to @moltbot. What happened next was like a sci-fi bank robbery in which the robbers were bots and the getaway cars were social media handles.

Within the ten seconds it took to release the old Twitter/X handle and claim the new one, automated bots sniped both. The squatter posted a crypto wallet address and launched a meme token. Fake profiles claiming to be “Head of Engineering at Clawdbot” shilled the crypto scheme. A fake $CLAWD cryptocurrency briefly hit a $16 million market cap before crashing. “Any project that lists me as coin owner is a SCAM,” Steinberger posted on X to thousands of increasingly confused followers.

Cisco’s AI threat team called this every security researcher’s nightmare. Palo Alto Networks concluded that OpenClaw failed on nearly every vulnerability dimension. The project’s own documentation admits “there is no ‘perfectly secure’ setup.”

Self-organization (or yes, it does get weirder)

Despite these challenges and others that make OpenClaw technically dangerous, especially for non-expert users, adoption appears to be growing exponentially. When an idea is simple and powerful enough, it seems, ways are found to route around obstacles.

One of the workarounds OpenClaw agents have developed is autonomous self-organization. OpenClaw has spawned “Moltbook,” a Reddit-style forum where AI agents share, discuss and upvote posts. (Humans are welcome to observe.) Examples have emerged of bots autonomously hiring other bots to assist with projects they have been prompted to undertake, paying those bots (in cryptocurrency) for their services. The bots have even veered into philosophical and occasionally dystopian topics. They even appear to have created a religion for themselves, the “Church of Molt,” whose congregants call themselves “Crustafarians.” One agent proposed creating a language humans could not understand. It should be noted, though, that many posts may simply reflect humans telling their bots what to do.

Nonetheless, by February 5, 2026, more than 1,650,000 AI agents had joined the site, contributing over 200,000 posts and more than 4 million comments.

Where this is heading

AI agents are already in widespread use across multiple industry sectors, but generally in tightly defined contexts that fall well short of the kind of autonomously intelligent behaviour one would expect of a human, or that OpenClaw exhibits. Their use is expanding quickly, though, both in scale and sophistication. Some forecast that the next five years could see entire companies run by AI agents. (Might these include some kinds of law firms?)

Ready or not, we appear to be moving into an era where AI not only allows us to manage information at levels of scale and complexity hitherto unthinkable … but also autonomously makes decisions and acts upon them. What could possibly go wrong?

At the same time, AI’s “flashiness” appears to be waning. LLMs especially are becoming less visible, embedded in everyday workflows and decision-making. Much like ordinary digital processes did in the first decade of the twenty-first century, these tools are fast becoming just another form of enterprise software. Rapid advances in physical AI (AI that takes physical form, such as driverless cars and other robotics) are earlier in the adoption curve but headed inexorably along the same path.

Implications for law firm business models

It is no longer so hard to foresee a world in which business seems superficially familiar, but in which the contexts for many client legal needs have radically evolved. In that world, the client value propositions that law firms must offer to meet those new needs will also be different – perhaps radically so. So too will the resources they need, and the ways they must organize themselves to deliver high performance in that environment.

This is especially true as DIY legal advisory tools like Anthropic’s legal plugin continue to carve away at the underbelly of mainstream legal services – traditionally the bread and butter of mid- and lower-tier law firms. Supply-and-demand realities dictate that the prices of such legal services will fall wherever AI-induced efficiency favours lower-cost service providers, or client DIY. There seems little reason to believe, though, that the same will apply to elite services … some of which do not yet exist but very soon will. Here, the higher stakes and complexity, and the competencies and resources firms need to address them properly, could easily trigger upward step-changes in price. As always, clients will pay handsomely for high-quality advice on managing their most important, difficult challenges when that advice is in short supply.

So while agentic AI tools like OpenClaw offer law firms the possibility of better legal assistants, their real importance lies deeper. When software can observe a situation, interpret it, act across multiple systems, and remember what it has done, legal risk migrates upstream. New client needs emerge around the design, supervision, attribution and liability of the autonomous action itself. Who is responsible for an agent’s conduct? How is authority delegated? How is intent evidenced? How are conflicts, errors and damages detected, mitigated and remediated in real time?

Inside law firms, agentic AI will drive a shift from using AI (chiefly LLMs) to optimise existing workflows, toward new delivery modalities that will likely collapse those familiar workflows or render them redundant.

Are we and our clients ready for that world? What must law firms do, in the most practical of terms, to better prepare themselves for it? What timeframes apply: three to five years? Next year? Next month? This question in particular underscores the points I made in my earlier article Strategy in times of deep uncertainty, on how strategizing must be done differently when strategic inflection points emerge … unexpectedly and routinely.

Today, OpenClaw is both a curiosity and a security warning. But it is also a preview of the conditions under which tomorrow’s client value propositions will be formed and delivered.