What happens when an AI company decides you’re using their product a little too well? You get banned, apparently.
Anthropic temporarily suspended Peter Steinberger from accessing Claude in April 2026. Steinberger, the creator of OpenClaw, found himself locked out after what Anthropic described as “suspicious activity” following a pricing dispute. The ban was brief, but the message was clear: build on our platform, but don’t build too much.
The Pricing Problem
The suspension came right after Anthropic changed its pricing structure for OpenClaw users. Steinberger’s tool had apparently become popular enough to trigger alarm bells at Anthropic headquarters. When your success becomes someone else’s problem, you know you’ve hit a nerve.
This isn’t just about one developer getting temporarily kicked off a platform. This is about the fundamental tension in the AI industry right now. Companies want developers to build amazing things on their APIs. They want the ecosystem. They want the buzz. But when those amazing things start consuming resources at scale, suddenly the relationship gets complicated.
Suspicious Activity or Successful Product?
Anthropic flagged “suspicious activity” on Steinberger’s account. But what does that actually mean? Was he doing something genuinely problematic, or was his tool just working exactly as intended and proving more popular than expected?
The timing tells us everything. A pricing change followed immediately by a ban suggests this wasn’t about security or abuse. This was about economics. OpenClaw was probably costing Anthropic more than they anticipated, and rather than having a conversation about it, they hit the suspend button.
I’ve reviewed dozens of AI tools, and this pattern keeps repeating. A developer builds something useful on top of an AI platform. Users love it. Usage scales. The platform provider freaks out about costs. Developer gets penalized. It’s becoming predictable.
The Real Cost of Building on AI Platforms
Here’s what nobody wants to admit: building on someone else’s AI platform is building on quicksand. The rules can change overnight. Pricing can shift without warning. Your access can vanish because an algorithm decided you looked suspicious.
Steinberger got his access back, which is good. But the damage is done. Every developer watching this situation now knows that success on Claude comes with an asterisk. Scale too fast, use too many tokens, become too popular, and you might find yourself explaining your business model to Anthropic’s risk team.
This isn’t unique to Anthropic, by the way. OpenAI has done similar things. Google has done similar things. Every AI platform provider walks this tightrope between encouraging development and protecting their margins. Developers are just the ones who fall off when the rope moves.
What This Means for AI Tool Builders
If you’re building on Claude, or any other AI API, you need a backup plan. You need multiple providers. You need to architect your product so that a temporary ban doesn’t kill your entire business.
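One practical shape for that backup plan is a thin routing layer that owns the provider interface, so a failover is a config change rather than a rewrite. Here’s a minimal Python sketch of the idea; the `Provider` and `FailoverRouter` names are hypothetical stand-ins, and the stub `complete` method is where real SDK calls (Anthropic, OpenAI, or anyone else) would actually go.

```python
# A minimal failover sketch, not production code. Provider here is a
# hypothetical stand-in: wrap each real vendor SDK behind this same
# interface so the rest of your product never talks to a vendor directly.

from dataclasses import dataclass, field


class ProviderError(Exception):
    """Raised when a provider rejects or fails a request."""


@dataclass
class Provider:
    name: str

    def complete(self, prompt: str) -> str:
        # Replace with a real API call. Raise ProviderError on
        # suspensions, rate limits, or auth failures so the router
        # can fall through to the next vendor.
        raise ProviderError(f"{self.name}: not configured")


@dataclass
class FailoverRouter:
    providers: list[Provider] = field(default_factory=list)

    def complete(self, prompt: str) -> str:
        errors: list[str] = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except ProviderError as exc:
                # One banned account shouldn't take the product down:
                # record the failure and try the next provider in line.
                errors.append(str(exc))
        raise ProviderError("all providers failed: " + "; ".join(errors))


router = FailoverRouter([Provider("claude"), Provider("fallback-llm")])
```

The specifics don’t matter; the point is that the day a provider flags your account, recovery means reordering a list, not rearchitecting your product.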
Steinberger’s suspension was brief, but imagine if it had lasted a week. Or a month. How many users would OpenClaw have lost? How much trust would have evaporated?
The AI industry loves to talk about democratizing access and enabling developers. But actions speak louder than marketing copy. When push comes to shove, platform providers will protect their interests first. Your viral tool is someone else’s cost center.
That Anthropic backed down suggests they realized the optics were bad. But the precedent is set. Build something too successful on our platform, and we might just turn off the lights until we figure out how to price you properly.
That’s not a foundation anyone should want to build on.