We are witnessing the end of the ‘All You Can Eat’ era for AI. For a year, we’ve been the competitive eaters at the subscription buffet, running loops and agentic sub-tasks on a flat $20-a-month fee. Anthropic just pulled the tray away, and Ben Thompson thinks it’s an obvious move.

A lot of people are understandably upset about this change, but it seems like an obvious one for Anthropic to make: yes, there are competitive concerns, particularly now that OpenAI owns OpenClaw (an acquisition that I haven’t written about that does make a bit more sense), but the bigger issue is the exponential increase in tokens that are caused by agents, particularly agents that aren’t tuned to the model. Subscription pricing for a good with meaningful marginal costs was always questionable; it’s completely untenable if you remove human friction from usage and replace it with an agent that never sleeps and has no incentive to increase efficiency. More generally, the reason to mention these three stories together is to note that AI generally and agents specifically are going to break a lot of things, above and beyond the security concerns I wrote about last week. The Internet’s removal of distribution friction broke a whole host of industries and business models; AI’s removal of substantiation friction is set to do the same thing, and tech companies and services are likely to be the first set of victims.

Source: OpenAI Buys TBPN, Tech and the Token Tsunami – Stratechery by Ben Thompson

Ben notes the ‘exponential increase in tokens,’ but the true second-order effect is what those tokens actually represent. It’s not just that LLMs require more compute; what they produce (the logs, the loops, the agentic exhaust) requires even more. From that perspective, when Anthropic decides that its LLMs cannot be used in non-Claude harnesses, the decision has major downstream productivity impacts.
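To make the token burn concrete, here is a back-of-envelope sketch of why agent loops consume so much input: each turn typically re-sends the entire accumulated context, so cumulative input tokens grow roughly quadratically with the number of turns. The per-turn figure below is an illustrative assumption, not a measured number.

```python
# Sketch: an agent loop that re-sends all prior context every turn.
# tokens_per_turn is a hypothetical average; real workloads vary widely.

def cumulative_input_tokens(turns: int, tokens_per_turn: int = 2_000) -> int:
    """Total input tokens after `turns` turns, if every turn re-sends
    all prior context plus one new message of tokens_per_turn tokens."""
    total = 0
    context = 0
    for _ in range(turns):
        context += tokens_per_turn   # new message appended to the context
        total += context             # the whole context is sent to the model
    return total

for n in (10, 50, 100):
    print(f"{n:>3} turns -> {cumulative_input_tokens(n):,} input tokens")
```

A 100-turn loop at these assumed sizes sends two orders of magnitude more input than a 10-turn one, which is the asymmetry a human chat session never hits.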

I am a high-burn statistical outlier: I run constant agentic loops on a flat-fee subscription. I am exactly the kind of user these changes at Anthropic target.

Let’s revisit what happened last week:

Anthropic cut off flat-rate Pro and Max subscription usage in third-party harnesses like OpenClaw. Those users now have to use discounted extra-usage bundles or API keys instead. Anthropic’s stated reasons were capacity, sustainability, and the claim that these tools create usage patterns the subscriptions were not built for. It also offered a one-time credit equal to the monthly plan cost and said refunds were available.

The rest of this article examines the move from several viewpoints to arrive at a reasoned conclusion.

To understand why this feels like a blueprint for user resentment, we have to look at the math Anthropic is using to justify the cage.

The logic holds up in the boardroom but falls apart at the terminal. Even if the math is sound, the optics are a masterclass in how to alienate the very power users who built Anthropic’s momentum.
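That boardroom math can be sketched in a few lines: compare a flat monthly fee against metered API pricing for an always-on agent. Every number below is an illustrative assumption, not Anthropic’s actual prices or limits.

```python
# Hypothetical comparison: flat subscription vs metered API cost for an
# agent loop. All prices and token counts are illustrative assumptions.

FLAT_FEE_PER_MONTH = 20.00    # assumed flat subscription price (USD)
PRICE_PER_MTOK_IN = 3.00      # assumed input price per million tokens
PRICE_PER_MTOK_OUT = 15.00    # assumed output price per million tokens

# An agent loop re-sends growing context each step, so input dominates.
# Assume one step sends 20k tokens of context and emits 1k tokens.
TOKENS_IN_PER_STEP = 20_000
TOKENS_OUT_PER_STEP = 1_000

def monthly_cost(steps_per_day: int, days: int = 30) -> float:
    """Metered cost of running the loop `steps_per_day` times a day."""
    tok_in = steps_per_day * days * TOKENS_IN_PER_STEP
    tok_out = steps_per_day * days * TOKENS_OUT_PER_STEP
    return (tok_in / 1e6) * PRICE_PER_MTOK_IN + (tok_out / 1e6) * PRICE_PER_MTOK_OUT

for steps in (10, 100, 1_000):
    cost = monthly_cost(steps)
    print(f"{steps:>5} steps/day -> ${cost:,.2f}/month "
          f"(vs ${FLAT_FEE_PER_MONTH:.2f} flat)")
```

Under these assumed numbers, a moderately busy agent already costs more at metered rates than the entire flat fee, and a heavy one costs orders of magnitude more, which is exactly the gap the subscription was silently absorbing.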

Benefits of the move

An opportunity to understand value

From a game-theory standpoint, Anthropic made this decision because it could afford to, and it may well have made the right call.

This move creates a clean test. If users keep paying and migrate to Anthropic’s native surfaces or paid usage bundles, Anthropic learns that Claude itself and Claude Code have standalone pricing power. If users churn, downgrade, or shift to rivals and bring-your-own-key setups, Anthropic learns that a lot of the old value sat in open orchestration rather than in the subscription bundle. It also stops hiding heavy token burn inside a flat fee and makes unit economics easier to see. That is a plausible inference, not something Anthropic has publicly admitted. The facts that support the inference are the new metered path for third-party use and Anthropic’s existing separation between consumer bundles and usage-based enterprise pricing.

The IPO angle

This is obviously speculative: both Anthropic and its main competitor lab, OpenAI, are scheduled to go public in 2026. Anthropic last announced a $30B round at a post-money valuation of $380B, while OpenAI announced a $122B round at a post-money valuation of $850B. Anthropic has an incentive to front-run OpenAI’s IPO: doing so forces OpenAI to defend its (likely) messier consumer token economics against Anthropic’s now cleaner narrative. Anthropic already has the advantage with enterprises, so its monetization is more straightforward: customers pay per token, and as harnesses drive token explosion, metered pricing lets Anthropic present a healthy story about unit economics and long-term valuation. Front-running also calls OpenAI’s narrative into question, a narrative that is far more complex given its consumer bent.

While Anthropic polishes its narrative for Wall Street, the individual developer is left holding a more expensive, more restrictive bag. When the bundle breaks, the only thing left on the table is the bill. For the power user, the choice is no longer between ‘Pro’ or ‘Max’; it’s between agency and convenience.

Consumer options

As someone attempting to analyze the situation, I understand why Anthropic made this move. As a consumer, the picture is grimmer.

And if I cannot keep running my workflows at a reasonable cost, then maybe Anthropic’s offering is not the right one for me.

Holding two thoughts at once

Anthropic is showing strong management: it found a leak in its subscription pricing and is plugging it, improving Claude Code while aligning its token economics. The optimistic read is that the company is committed to operating a sustainable business and is being honest in aligning pricing with costs. Taken seriously, this is also likely the eventual outcome for Codex and Gemini.

However, as a consumer, this only makes me more resolute in building for the exit. I want a stack where the provider is a utility, not a cage.