Elon is right

Imagine you work at a car company. At 6am, the alarm goes off. It’s jarring, but you’re already awake and have been for hours. BIG presentation at work today. Well… it’s basically a presentation. Today, your bosses will sit on either side of you, a VP you’ve never met will sit in the passenger seat, and Elon Musk will jump in the driver’s seat to test your new self-driving software.
That’s a version of the scene that Walter Isaacson laid out in a CNBC article a few days ago – one of many teasers for his new book about Elon. Ordinarily, this new software release would be no biggie. This is Full Self-Driving (FSD) version 12 we’re talking about… not Tesla’s first rodeo. But the article got my attention for two reasons – 1/ the FSD approach is entirely different, and 2/ it made me realize I wrote something stupid a few months ago.
Let’s take those in order, shall we?
The Approach to FSD v12
Like I said, FSD v12 does things differently. Instead of defining rules for each situation a driver might encounter on the road (e.g. green means go, red means stop, yellow means “you better make this damned light!”), the new approach is more hands-off. It uses millions of clips of video footage to train the driving models on what a good driver does in certain situations. This still requires human judgment to classify video clips as either “good” or “bad” handling of driving situations, but it’s much less labor-intensive (and much more effective) than maintaining a list of rules for every possible situation on the road. Why? Because tons of crazy shit happens on our roads, making it really difficult to list everything out. If you’ve heard the stories of Waymo cars stopping in traffic or Teslas hitting fire trucks, you can imagine why a new approach like this would be enticing. It’s the same reason why the best bosses explain why certain things need to get done, but don’t spell out exactly how. Learning to reason through things on your own is better than following a list of rules. Here’s a great old video to drive the point home, featuring an ingredient near and dear to my heart – peanut butter:
But there are a few downsides to this new approach:
Human drivers break the rules, so training Tesla’s FSD on human driving examples means that FSD v12 will want to break the rules too. Isaacson shares the example that 95% of humans roll through stop signs. Will we let self-driving cars do that? That’s a question regulators are debating right now while they review Tesla’s new FSD.
Hallucinations and traceability – These classic AI problems could affect FSD. A hallucination is the term AI researchers use for a model confidently generating nonsense. For example – you ask ChatGPT, “who is the greatest basketball player of all time?”, and it tells you JJ Redick instead of Michael Jordan. In the driving department, this could mean making really bad decisions on the road. As for traceability, you lose that when you abandon training on rules in favor of training on outcomes (good vs. bad driver): it becomes hard to know how FSD made a given decision, and without that, it’s harder to identify and fix specific performance issues.
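To make the contrast concrete, here’s a toy sketch in Python – purely illustrative, and nothing like Tesla’s actual stack – of the two philosophies: hand-written rules that must anticipate every situation in advance, versus a policy that just imitates the most similar labeled example of good driving. All the names and “clips” here are made up for illustration.

```python
# Toy contrast of the two self-driving philosophies. Not real FSD code.

# --- Approach 1: explicit rules for every situation ---
def rule_based_action(light: str) -> str:
    """Hand-coded rules: every situation must be anticipated by an engineer."""
    rules = {"green": "go", "red": "stop", "yellow": "decide_fast"}
    # Anything outside the rulebook is a failure mode
    # (the "Waymo stuck in traffic" problem).
    return rules.get(light, "unknown_situation")

# --- Approach 2: learn from labeled examples of good driving ---
# Each "clip" is a simplified feature dict plus the action a good driver took.
labeled_clips = [
    ({"light": "green", "pedestrian": False}, "go"),
    ({"light": "red", "pedestrian": False}, "stop"),
    ({"light": "green", "pedestrian": True}, "stop"),  # no rule needed; the data covers it
]

def learned_action(situation: dict) -> str:
    """1-nearest-neighbor 'policy': imitate the most similar labeled clip."""
    def similarity(clip_features: dict) -> int:
        # Count how many features match the current situation.
        return sum(situation.get(k) == v for k, v in clip_features.items())
    _, best_action = max(labeled_clips, key=lambda clip: similarity(clip[0]))
    return best_action

print(rule_based_action("green"))                              # go
print(rule_based_action("flashing_arrow"))                     # unknown_situation
print(learned_action({"light": "green", "pedestrian": True}))  # stop
```

Note the trade-off the post describes: the learned policy handles situations nobody wrote a rule for, but if you ask *why* it chose “stop,” the only honest answer is “it resembled the training data” – that’s the traceability problem.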
And now onto the humbling part – admitting I was wrong.
What I got wrong about self-driving
In February, I wrote a spectacular piece (if I do say so myself!) about Nuro, an autonomous delivery company with these cool little self-driving pods.

But the thing I got wrong – which hit me while reading the Isaacson piece – was Elon’s approach to sensors. Let’s read that blurb for context:
There’s been a debate in the AV community since it began about which sensors are most important. Many AV companies, including Nuro, use a broad range of sensors, including: optical cameras, thermal cameras, lidar, and radar. They’re all good at different things. Optical cameras are cheap and help cars recognize and classify objects. Thermal cameras augment detection of pedestrians. Lidar uses lasers to measure distances to objects. Radar does the same but with radio waves. So while Nuro smashes together data from these sensors to chart a course on the road, Tesla is the spokesperson for the other side of the argument. They don’t believe in using lidar on AVs, and part of the reason is cost-focused. Cameras and radar are cheap, but a new Velodyne lidar sensor costs a jaw-dropping $75K. As another piece of the argument, Elon doesn’t think they’re necessary to achieve self-driving. He advocates for trusting cameras and advances in AI image recognition to solve for depth perception and obstacle detection.
Intuitively, I believe having more “senses” makes you more aware of your surroundings, so I just can’t side with Elon. Data suggests that depth perception is much better with lidar. And yes, it costs more, but those costs will come down, just as they have for other critical digital components.
Ok, did you spot my mistake? It’s true that costs are coming down, and it’s true that depth perception is better with lidar… we evolved to have five senses, so I’m sticking to my guns that more sensor types is better. But the part I messed up was how I defined “better”. My comments above basically explain which sensors might lead to the best-performing self-driving cars, but “best-performing” is super vague and unhelpful. I need to adjust my definition of best to mean: meets safety standards and dominates in market share. Because that’s the game that full self-driving cars are playing, Tesla included.
Clay Christensen’s “disruptive innovation” is a widely used term that’s helpful to define and apply here. It says that incumbents win until newcomers introduce products at lower prices (and sometimes lower quality). The newcomers attract price-sensitive buyers and raise their quality over time, ultimately disrupting the incumbents, who only take them seriously when it’s too late.
Tesla’s FSD is a great example. Elon is maniacally focused on using cameras as the only sensor type. Because they’re cheaper and don’t work as well (yet), many in the self-driving community dismissed Tesla’s approach. And maybe Waymo’s array of sensors will perform better (eventually), but for now, what’s winning? Tesla’s cheaper, more widely adopted tech. It’s giving Tesla a lead on self-driving and a steady stream of new training data for its models.
Wrapping up
I hate to say it, because Elon has been such an annoying twat recently, but he was right about his bet on sensors. I think he will also be right about his bet on rules-less FSD. Because he’s bringing lower cost self-driving to life and he’s meeting the minimum safety bar, I predict regulators will eventually let him and FSD v12 through.
Bonus Bullets
Quote of the Week:
Saying you want to "learn to code" to get into tech/start a startup is like saying "I want to learn how to lay bricks" when you're interested in becoming a real estate developer. You don't need to learn to code. You need to understand how it all fits together.
— Andrew Wilkinson, Co-Founder of Tiny
Quick News Reactions:
Apple did something: They held their much-anticipated “Wonderlust” event earlier this week, announcing new phones and watches. I love my iPhone… but this was basically a nothing-burger.
Text to music AI tools have arrived: Stability AI’s tool is free for short audio clips. Since ChatGPT prompt engineering has taken off, I bet we’ll see audio-prompt-engineering-DJs soon, or whatever you want to call that!
TikTok launches shopping: Instagram has tried this with mixed results, so I’m thinking this is harder than it sounds. Disrupting “the infinite scroll” to put in your shipping info sounds like it could be a lose-lose.
Tech Jobs Update:
Here are a few things I’m paying attention to this week:
Big Tech Job Posts: LinkedIn has 7,886 (-7.2% WoW) US-based jobs for a group of 20 large firms (the ones I typically write about — Google, Apple, Netflix, etc.).
Graph: Layoffs since covid (Source: Layoffs.FYI)
