$100 billion. That’s the number Broadcom CEO Hock Tan just threw at Wall Street for AI chip revenue by fiscal 2027. Not total revenue. Just chips. Just AI.
If you’re not paying attention to Broadcom right now, you’re missing one of the most aggressive plays in the AI hardware space. This isn’t some vague “we’re investing in AI” press release. Tan has already secured supply through 2028 and the company’s AI semiconductor revenue has more than doubled. These aren’t projections built on hope—they’re backed by actual contracts and manufacturing capacity.
The Custom Silicon Bet
What makes this projection interesting isn’t just the size. It’s the strategy. Broadcom is betting big on custom accelerators, not general-purpose chips. That means they’re building specific silicon for specific customers with specific workloads. This is the opposite of Nvidia’s merchant-silicon model, where the same general-purpose GPUs are sold to everyone.
Custom silicon is harder. It requires deep partnerships, longer development cycles, and customers willing to commit years in advance. But when it works, it creates moats. Once a hyperscaler builds their infrastructure around your custom chip, switching costs become astronomical.
Tan clearly believes the biggest AI players want control over their hardware stack. He’s probably right. Google has TPUs. Amazon has Trainium. Meta is building custom chips. The pattern is clear: if you’re spending billions on AI infrastructure, you want chips designed for your exact use case, not generic solutions.
The Numbers Don’t Lie
Broadcom’s fiscal Q1 2026 results showed AI chip revenue surging 106% to $8.4 billion. Total revenue hit $19.31 billion, up 29%. These aren’t rounding errors. The company is already executing on this strategy at scale.
But here’s what matters: an $8.4 billion quarter is roughly a $33.6 billion annual run rate, and getting from there to $100 billion annually by fiscal 2027 requires sustained triple-digit growth. That’s not just “business is good” territory. That’s “we’re capturing a massive market shift” territory.
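A quick back-of-envelope check of that claim. This is a sketch, not guidance: it assumes the Q1 figure annualizes cleanly and that roughly a year and a half separates the fiscal Q1 2026 run rate from the midpoint of fiscal 2027.

```python
# Rough sanity check on the growth rate implied by the $100B target.
# Assumptions (mine, not Broadcom's): the Q1 figure annualizes cleanly,
# and ~1.5 years separate the Q1 FY2026 run rate from the midpoint of FY2027.
q1_ai_revenue_b = 8.4                 # $B, fiscal Q1 2026 AI chip revenue
run_rate_b = q1_ai_revenue_b * 4      # ~$33.6B annualized
target_b = 100.0                      # $B, fiscal 2027 target
years = 1.5                           # assumed growth window

implied_cagr = (target_b / run_rate_b) ** (1 / years) - 1
print(f"Annualized run rate: ${run_rate_b:.1f}B")
print(f"Implied compound growth: {implied_cagr:.0%} per year")
```

Under those assumptions the implied rate lands near 107% a year, which is essentially the 106% pace just reported, sustained for another year and a half with no deceleration.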
Either Tan knows something the market doesn’t, or he’s setting himself up for one of the most spectacular misses in semiconductor history. Given that he’s secured supply through 2028, I’m betting on the former.
What This Means for AI Infrastructure
If Broadcom hits anywhere near this target, it confirms something important: the AI chip market isn’t winner-take-all. There’s room for both general-purpose accelerators and custom silicon. Nvidia won’t own everything.
This matters for anyone building AI products. Right now, most startups assume they’ll run on Nvidia hardware because that’s what’s available. But if hyperscalers are increasingly using custom chips, the performance characteristics and cost structures of AI inference could look very different in three years.
For enterprises, this means the cloud providers you’re using today might have fundamentally different AI capabilities tomorrow based on their chip strategies. AWS with custom Trainium chips will have different economics than a provider running pure Nvidia.
The Risk Nobody’s Mentioning
Here’s the uncomfortable question: what happens if AI demand plateaus before 2027? Tan is betting that training runs keep getting bigger, that inference volumes keep exploding, and that customers keep needing more specialized silicon.
That’s probably the right bet. But it’s still a bet. The company has locked in supply, which means it has committed to manufacturing capacity. And Broadcom is fabless: if demand softens, it’s stuck paying for reserved foundry capacity it can’t fill.
The stock market seems to believe the story—shares have been strong despite broader tech volatility. But investors should understand this is a high-conviction play on AI infrastructure spending continuing to accelerate for the next three years straight.
Tan isn’t hedging. He’s going all-in on custom AI silicon. By 2027, we’ll know if he was prescient or just early to a party that ended before he arrived.