
Anthropic Just Showed You Exactly Who They Are



There is a pattern in the technology industry that repeats itself with uncomfortable regularity. A platform builds an ecosystem. Developers build on top of it. The platform watches which ideas work, absorbs them, then quietly closes the door on the builders who proved the concept. It happened with Facebook and third-party apps. It happened with Apple and the App Store. This week, it happened in real time at Anthropic — and the speed of the betrayal was almost impressive. Let me tell you the full story, because there are several moving parts and they all point in the same direction.


The OpenClaw Chapter

Not long ago, a developer named Peter Steinberger built an open-source AI agent framework originally called Clawdbot — later renamed OpenClaw after Anthropic's legal team objected to the similarity to "Claude." The irony of that name dispute will become relevant shortly. OpenClaw was extraordinary by any measure. At its peak: 190,000 GitHub stars. 1.5 million active AI agents running on the platform. Two million weekly visitors. It functioned as a personal agent framework — handling email, bookings, messaging, and complex multi-step workflows across Telegram, Discord, WhatsApp, and more. And it defaulted to Claude. It was, essentially, a massive free distribution channel for Anthropic's models, driving developer adoption without costing Anthropic a dollar in sales.


In January 2026, without warning, Anthropic cut off OpenClaw's API access. Cease-and-desist included. The chaos was immediate — scammers hijacked the project's GitHub repository and related accounts within hours, launching a $16 million pump-and-dump scheme in the confusion. The community that had built around Claude, because of OpenClaw, began to scatter.


Sam Altman moved fast. Within weeks, OpenAI hired Steinberger to "drive the next generation of personal agents." OpenClaw transitioned to an OpenAI-supported independent foundation — still open source, now optimizing for GPT models instead of Claude. Anthropic handed OpenAI the developer ecosystem that had been their single greatest organic growth driver. For free.


That was chapter one.

April 5th: The Subscription Shakedown

On April 4th, Anthropic announced that effective April 5th, Claude Pro and Max subscribers would no longer be able to use third-party AI agent frameworks — including OpenClaw — under their flat-rate subscription plans. The stated reason: "unsustainable demand." The framing: capacity management.


The math for affected users is not subtle. A Claude Max subscription runs $200 per month. Users who built workflows on top of Claude through third-party frameworks are now looking at $1,000 to $5,000 per month in API costs. That is a 5x to 25x price increase, effective overnight, with a one-time credit equal to one month's subscription as a goodwill gesture.

135,000 active OpenClaw instances were running at the time of the announcement.


Peter Steinberger — now at OpenAI — called it "sad for the ecosystem." He also noted that the timing aligned suspiciously with Anthropic's rollout of its own competing agentic products. "Funny how timings match up," he wrote. The accusation is specific: Anthropic watched which features developers were building, absorbed them into Claude Code and its own agent tooling, and then locked out the open-source alternatives that had pioneered those same features.


Boris Cherny, the Claude Code creator who made the announcement, framed it as strategic: "We want to be intentional in managing our growth to continue to serve our customers sustainably long-term." Read that sentence carefully. "Our customers." Not the developer ecosystem. Not the third-party builders. Their customers. The ones using Anthropic's own products.


The $30 Billion Backstory

Anthropic just announced that its annualized run-rate revenue has surpassed $30 billion. That is up from $9 billion at the end of 2025. In less than two months since their Series G announcement, the number of business customers each spending over $1 million annually has doubled — from 500 to more than 1,000.


Today — the same day this article is being written — Anthropic announced a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, expected to come online in 2027. Broadcom has committed to a long-term chip supply arrangement valued at approximately $21 billion. Amazon remains Anthropic's primary cloud provider. Claude is now the only frontier AI model available on all three of the world's largest cloud platforms simultaneously: AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.


This is not a company struggling with capacity. This is a company with $30 billion in run-rate revenue, a $21 billion chip commitment, multiple gigawatts of incoming compute, and cloud relationships with all three hyperscalers — announcing that it cannot afford to support the developers who built their user base.


The capacity argument is a fig leaf. What is actually happening is vertical integration. Anthropic is moving down the stack, building its own agentic products, and systematically removing the oxygen from the third-party ecosystem that proved those products had a market.


The Pattern Has a Name

This is called "embrace, extend, extinguish" — and it is as old as the software industry. You embrace the ecosystem, extend it with your own features, then extinguish the third-party players once the market is validated. What makes the Anthropic version particularly sharp is the sequence. They forced a name change on the tool that was their biggest organic growth driver. Then they absorbed its most popular features into their own closed product. Then they cut off API access for the original. Then they cut off subscription access for the rebuilt version. Then they announced a $30 billion revenue run rate and a multi-gigawatt compute partnership on the same day.


The developer community is not stupid. They can read the timeline.

What This Means for Anyone Building on Someone Else's Foundation

Every developer, every startup, every business that has built a workflow, a product, or a revenue stream on top of a platform they do not control is running this risk. Anthropic today. Google tomorrow. OpenAI the day after. The history of technology platforms is the history of this exact dynamic playing out over and over, at increasing speed and scale.


The only durable answer is the same one it has always been: own your layer. Build on open models when you can. Gemma 4. Llama. Mistral. Models you can run locally, on your own infrastructure, with no API call leaving your perimeter. When you build on a foundation you control, no announcement at 9pm Pacific time can multiply your operating costs 25-fold before breakfast.
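"Own your layer" can be made concrete with one architectural habit: route every model call through a single seam you control. The sketch below is hypothetical — the class and backend names are illustrative, not from any real framework — but it shows the shape of the idea: callers never touch a provider directly, so swapping a hosted API for a locally run open model is a config change, not a rewrite.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# One seam between your product and any model provider.
# Every caller goes through ModelRouter.complete(); nothing
# in your codebase depends on a specific vendor's SDK.
CompletionFn = Callable[[str], str]

@dataclass
class ModelRouter:
    backends: Dict[str, CompletionFn]
    default: str

    def complete(self, prompt: str, backend: Optional[str] = None) -> str:
        """Dispatch a prompt to the named backend, or the default."""
        name = backend or self.default
        return self.backends[name](prompt)

# Hypothetical backends. In practice local_model() would call an
# inference server running an open model on your own hardware,
# and hosted_api() would call a vendor endpoint.
def local_model(prompt: str) -> str:
    return f"[local] {prompt}"

def hosted_api(prompt: str) -> str:
    # The call whose price a platform can reprice overnight.
    return f"[hosted] {prompt}"

router = ModelRouter(
    backends={"local": local_model, "hosted": hosted_api},
    default="hosted",
)

# Migrating off a provider is a one-line config change:
router.default = "local"
```

The point is not the toy functions; it is that the migration path exists before you need it. Builders who routed everything through a vendor SDK directly are the ones facing a rewrite when the terms change.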


Anthropic's compute announcement today is, in a perverse way, the most honest thing they have published this week. They are building an infrastructure empire — gigawatts of compute, three cloud partnerships, a $30 billion revenue machine. That empire has a natural gravity, and third-party ecosystems that make it more valuable without paying rent will eventually be brought inside the walls or shut out. This is not a criticism of their business strategy. It is a description of it. And it is useful information for anyone who has been treating Claude as infrastructure.


The Footnote That Deserves to Be the Headline

Peter Steinberger, whose open-source framework with 1.5 million active agents was killed by Anthropic, now works at OpenAI building the next generation of personal agents. The 190,000 developers who starred his repository are watching where he points. That is the most efficient competitive intelligence operation in the history of the AI industry, and it cost OpenAI exactly one job offer.


Anthropic had the most viral AI agent project in the world. They had the developer ecosystem. They had the community. They had the momentum.


They chose the walls over the garden. The garden is now OpenAI's.


© 2018 Rich Washburn
