Anthropic talked to Trump about Mythos.
That’s the confirmation from Anthropic co-founder Jack Clark at the Semafor World Economy Summit this week. The AI company briefed the Trump administration on its latest model before the president declared their relationship over. The timing? Classic tech-meets-politics mess.
What Actually Happened
Here’s what we know for certain: Anthropic gave the Trump administration a briefing on Mythos. The company claims this model has powerful capabilities, though they’ve been characteristically vague about specifics. This briefing happened before Trump announced he was ending the relationship with Anthropic.
The exact date remains unknown, which is frustrating but typical for these situations. What we do know is that a new court filing revealed something interesting: the Pentagon told Anthropic the two sides were “nearly aligned” just a week after Trump declared things kaput. That’s either terrible timing or someone wasn’t reading the room.
The Obvious Questions
Why brief an administration on your latest AI model if the relationship is already strained? Either Anthropic thought they could salvage things, or they didn’t see the breakup coming. Neither option looks great.
And what does “powerful capabilities” actually mean? This is the kind of vague language that drives me nuts when reviewing AI tools. Give us specifics. What can Mythos do that Anthropic’s existing models can’t? What makes it worth briefing government officials about?
The Pentagon’s “nearly aligned” comment raises more questions than it answers. Nearly aligned on what? Security protocols? Use cases? Pricing? The gap between “nearly aligned” and “relationship over” suggests either miscommunication or someone changed their mind fast.
Why This Matters
Government relationships with AI companies aren’t just about contracts and revenue. They shape policy, influence regulation, and set precedents for how these tools get deployed in sensitive contexts. When an AI company briefs an administration on new capabilities, they’re not just showing off tech—they’re positioning themselves for future opportunities.
Anthropic has positioned itself as the “responsible AI” company, emphasizing safety and alignment. Briefing government officials fits that narrative. But the messy aftermath shows how quickly these relationships can sour, regardless of good intentions or technical merit.
The Real Story
Strip away the corporate speak and political theater, and you’re left with a straightforward situation: a tech company tried to work with a government administration, gave them a preview of new technology, and then watched the relationship fall apart anyway.
Clark’s willingness to discuss this publicly at a summit suggests Anthropic isn’t trying to hide the briefing. That’s refreshing in an industry that often treats government relationships like state secrets. But transparency about the briefing doesn’t answer the more important questions about what went wrong afterward.
The court filing detail about Pentagon alignment adds another layer. If the Pentagon thought things were nearly aligned a week after Trump declared the relationship over, someone wasn’t communicating effectively. That’s either an internal government disconnect or Anthropic reading signals that weren’t there—or both.
What We’re Left With
We have confirmation of a briefing, claims about powerful capabilities, and a relationship that ended despite apparent progress. We don’t have dates, specifics about Mythos, or clear explanations for why “nearly aligned” became “relationship over” so quickly.
For those of us who review AI tools and track this industry, this episode is a reminder that technical capabilities matter less than relationships and timing. Anthropic could have the most impressive model in the world, but if the political winds shift, none of that matters.
The Mythos briefing happened. The relationship ended. The Pentagon thought things were nearly aligned. Those are the facts. Everything else is speculation, and I’m not in the business of filling gaps with guesses.
đź•’ Published: