Remember when tech companies used to pretend they were above politics? That they were just building cool stuff in garages and didn’t need to worry about the messy business of campaign contributions and PACs? Yeah, those days are dead.
Anthropic, the AI startup that’s spent the last few years positioning itself as the “responsible” alternative to OpenAI, just launched AnthroPAC. That’s right—the company that won’t shut up about AI safety and constitutional AI principles now has its own political action committee. And it’s planning bipartisan contributions during the midterms, backing both current lawmakers and rising political candidates.
Look, I’m not shocked. I’m just disappointed in how predictable this all is.
The Safety Company Gets Political
AnthroPAC will be funded exclusively by employee donations, which is the standard corporate framing for making this look grassroots. But let’s be real about what’s happening here. Anthropic has grown up. It’s got serious revenue, serious competition, and serious regulatory threats on the horizon. Of course it’s going to start playing the Washington game.
This isn’t their first rodeo either. Back in February, Anthropic dropped $20 million on Public First Action, a group launched last year to support AI safeguard efforts. Twenty million dollars. That’s not pocket change, even for a well-funded AI startup. That’s a statement of intent.
What This Actually Means
Here’s what bothers me about this whole situation. Anthropic has built its entire brand on being the thoughtful, safety-conscious AI company. They publish papers about constitutional AI. They talk endlessly about responsible scaling policies. They position themselves as the adults in the room.
And now they’re doing exactly what every other tech giant does when they get big enough: buying political influence.
Is that hypocritical? Maybe. Is it necessary? Probably. The AI regulatory environment is a mess right now, and if you’re not at the table, you’re on the menu. But it does make all those high-minded safety principles feel a bit more like marketing copy and a bit less like genuine conviction.
The Bipartisan Playbook
The bipartisan angle is particularly telling. Anthropic isn’t picking sides—they’re hedging their bets. They want friends on both sides of the aisle because they know AI regulation is coming, and they want to shape it rather than have it shaped for them.
Smart? Absolutely. Cynical? Also absolutely.
This is the same playbook every major tech company has run. Start out idealistic, grow fast, realize the government is paying attention, and suddenly discover that political contributions are just part of doing business. Google did it. Meta did it. Amazon did it. Now Anthropic is doing it.
The Real Question
The question isn’t whether Anthropic should be involved in politics. They should. AI regulation is too important to leave to people who don’t understand the technology. The question is whether their political activities will align with their stated safety principles, or whether those principles will quietly take a backseat to business interests.
Will AnthroPAC support candidates who actually care about AI safety, or will it support candidates who are friendly to Anthropic’s business model? Will that $20 million to Public First Action translate into meaningful safeguards, or will it be used to water down regulations that might hurt the bottom line?
I don’t have answers yet. Nobody does. But I’m watching, and I’m skeptical.
Anthropic has spent years telling us they’re different. That they care more about safety than speed, more about principles than profits. Now they’re playing the same political games as everyone else. Maybe that’s just growing up in the tech industry. Maybe it’s necessary to have a seat at the table.
But it sure does make those safety sermons ring a little hollow.