Meta Goes DIY On Chips

Plus: CoreWeave Chaos; Waymo Expansion

Welcome back to Forests Over Trees, your weekly tech strategy newsletter. It’s time to zoom out, connect dots, and (try to) predict the future.

Here’s the plan:

  • Tech News Takes — super-short analysis and commentary

  • Tool of the Week — tools you’ll find useful

  • Strategy Tips — strategy nuggets (for business and life)

Tech News Takes

  • What’s up: CoreWeave, an AI cloud provider, is all over the news. First, they’ve filed for an IPO targeting a ~$35B valuation, and they could be public as early as next week. Second, they announced a $12B contract to sell compute to OpenAI for model training and research. Third, they announced they’re planning to acquire Weights and Biases, an AI developer platform. Lastly, they’ve been fighting rumors (including a Financial Times report) that they are seeing a shrinking customer base. There are plenty of other CoreWeave stories, but we’ll stop there.

  • So what: All of this is IPO-related. Like any other company going public, they want the IPO to go well. So there are huge incentives to share good news in the run-up to that. Acquiring Weights and Biases — and being able to tell a better story about end-to-end AI model development — is good. Getting OpenAI to pay $12B to be a customer and validate that story is really good. Having rumors circulating that they are losing customers… is bad. They can take action to generate good news (M&A, contracts), but being highly scrutinized on the way to an IPO seems likely to unearth both good and bad news in the eyes of analysts. None of this is investment advice, but I’m reminded of the rollercoaster of news Reddit had before their IPO (which we covered here)… and they turned out just fine. It’ll be interesting to see if it goes well for CoreWeave, and whether other private companies come out of the woodwork for their own IPOs.

  • What’s up: Earlier this week, Waymo announced they’re expanding in Northern California. The robotaxi service will soon be available 24/7 across Silicon Valley, rather than being limited to San Francisco the way it is today. Waymo also recently launched a partnership to make robotaxis available on Uber in Austin, which will expand to Atlanta later in 2025. They’ll continue operating their own app for booking robotaxis in San Francisco, Phoenix, and Los Angeles.

  • So what: A few threads to pull on here. First, their expansion is a positive for self-driving — both inside and outside the Bay Area. The Cruise pedestrian accident was in San Francisco in October 2023, and GM eventually shut Cruise down and sent those teams to work on driver assist features. Compared to fully autonomous robotaxis, that’s like glorified cruise control! Point being… good news on the self-driving front has been hard to come by, so Waymo is a breath of fresh air. Second, it seems like their stubbornness around maintaining a separate app (and constraining demand) is partially an effort to create negotiating leverage with Uber. If they keep expanding service coverage for their own app, they can negotiate better terms to be listed in Uber’s app in other new cities.

  • What’s up: Meta has started testing its second custom AI chip, this one focused on AI training (building models) and fabricated by TSMC. The first chip was focused on AI inference (running models). According to Reuters, sources at Meta have said the custom chips are an effort to reduce reliance on Nvidia and cut costs. AI infrastructure is a large portion of Meta’s capex, which is projected to reach $65B in 2025.

  • So what: This is smart and not at all surprising. First, Meta had success rolling its first AI inference chip into internal recommendation systems (the ones that generate your social media feeds, etc.). So even if its genAI efforts don’t make money, these investments benefit Meta’s core products. Second, model training is more compute-intensive — and expensive — than inference. So they’ll unlock more of those delicious cost savings with an AI training chip. Lastly, on a completely different note, it’s interesting to see AI chip performance follow the conventional wisdom that specialization beats generalization (specialized training or inference chips beat general-purpose GPUs). I bet we’ll see the same splintering of chip types happen even within those broad “training” and “inference” categories!

🛠️ Tool of the Week 🛠️

You found global talent. Deel’s here to help you onboard them

Deel’s simplified a whole planet’s worth of information. It’s time you got your hands on our international compliance handbook where you’ll learn about:

  • Attracting global talent

  • Labor laws to consider when hiring

  • Processing international payroll on time

  • Staying compliant with employment & tax laws abroad

With 150+ countries right at your fingertips, growing your team with Deel is easier than ever.

🧭 Strategy Tips 🧭

Meta Goes DIY

Today's strategy tip is all about focusing internally rather than externally.

We’ll dive deeper into the story about Meta’s AI chips, using the Resource-Based View framework to do it.

Let’s get acquainted with the framework.

Resource-Based View 101

The Resource-Based View (RBV) argues that a company’s long-term success depends on its internal resources.

So instead of focusing on external factors (e.g., competitive positioning, Porter’s Five Forces), RBV looks at the resources and capabilities inside the company.

But not all resources are created equal! To lead to a sustainable advantage, a resource needs to meet these criteria:

💰 Valuable – Does it drive revenue or reduce costs?
💎 Rare – Is it something competitors lack?
📚 Inimitable – Is it difficult for others to copy?
🗂️ Organized – Can the company effectively use it to its advantage?

By applying this lens to internal resources, you can filter out the noise and see which resources (teams, tech, processes) really make a difference.

Makes sense, right?

Ok – now let’s bring Meta back in here.

Meta’s AI Chip + RBV

We can talk through the criteria one-by-one.

Valuable – Yes, highly valuable.
Like we talked about before, AI compute is one of Meta’s biggest cost drivers, with capex projected to hit $65B in 2025. This could unlock huge cost savings.

Rare – Yes, pretty rare.
To be fair, Meta isn’t the only one making AI chips, but it’s not a long list (Google, Microsoft, Amazon). Most companies lack the scale/expertise required, so they buy chips instead.

Inimitable – Yes, very hard to copy.  
Even if a flood of competitors started designing their own chips, they’d face other barriers to entry that make scaling the resource hard. Meta has invested billions crafting the right software and supporting infrastructure around those chips.

Organized – Yes, they’re ready to take full advantage.
As we covered before, they plan to immediately put these chips to work in their core business (recommendation algos, advertising tech, etc.). It’s not just random R&D – Meta is ready to capture the value these chips create.

Wrapping up

So, all in all, Meta building custom AI chips is likely to be a competitive advantage – even if we nitpick them on the rarity point.

And for the founders and leaders reading, here’s the core lesson:

Advantages are built inside – not outside – your company.

When you look externally and see a competitive, chaotic market, it’s calming and clarifying to look internally instead.

The forest is growing.
Feel free to share this post.

To advertise or share feedback, just reply to this email or send me a note!