The first car woke Jennifer King at 2 a.m. with a loud, high‑pitched hum. “It sounded like a hovercraft,” she says, and that wasn’t the weird part. King lives on a dead-end street at the edge of the Presidio, a 1,500-acre park in San Francisco where through traffic isn’t a thing. Outside she saw a white Jaguar SUV backing out of her driveway. It had what looked like a giant fan on its roof — a laser sensor — and bore the logo of Google’s driverless car division, Waymo.
She was observing what looked like a glitch in the self-driving software: The car seemed to be using her property to execute a three-point turn. This would’ve been no biggie, she says, if it had happened once. But dozens of Google cars began doing the exact same thing, many times, every single day.
King complained to Google that the cars were driving her nuts, but the K-turns kept coming. Sometimes a few of the SUVs would show up at the same time and form a little line, like an army of zombie driver’s-ed students. The whole thing went on for weeks until last October, when King called the local CBS affiliate and a news crew broadcast the scene. “It is kind of funny when you watch it,” the report began. “And the neighbors are certainly noticing.” Soon after, King’s driveway was hers again.
Waymo disputes that its tech failed, saying in a statement that its vehicles had been “obeying the same road rules that any car is required to follow.” The company, like its peers in Silicon Valley and Detroit, has characterized incidents like this as isolated, potholes on the road to a steering-wheel-free future. Over the course of more than a decade, flashy demos from companies including Google, GM, Ford, Tesla, and Zoox have promised cars capable of piloting themselves through chaotic urban landscapes, on highways, and in extreme weather without any human input or oversight. The companies have suggested they’re on the verge of eliminating road fatalities, rush-hour traffic, and parking lots, and of upending the $2 trillion global automotive industry.
It all sounds great until you encounter an actual robo-taxi in the wild. Which is rare: Six years after companies started offering rides in what they’ve called autonomous cars and almost 20 years after the first self-driving demos, there are vanishingly few such vehicles on the road. And they tend to be confined to a handful of places in the Sun Belt, because they still can’t handle weather patterns trickier than Partly Cloudy. State-of-the-art robot cars also struggle with construction, animals, traffic cones, crossing guards, and what the industry calls “unprotected left turns,” which most of us would call “left turns.”
The industry says its Derek Zoolander problem applies only to lefts that require navigating oncoming traffic. (Great.) It’s devoted enormous resources to figuring out left turns, but the work continues. Earlier this year, Cruise LLC — majority-owned by General Motors Co. — recalled all of its self-driving vehicles after one car’s inability to turn left contributed to a crash in San Francisco that injured two people. Aaron McLear, a Cruise spokesman, says the recall “does not impact or change our current on-road operations.” Cruise is planning to expand to Austin and Phoenix this year. “We’ve moved the timeline to the left for what might be the first time in AV history,” McLear says.
Cruise didn’t release the video of that accident, but there’s an entire social media genre featuring self-driving cars that become hopelessly confused. When the results are less serious, they can be funny as hell. In one example, a Waymo car gets so flummoxed by a traffic cone that it drives away from the technician sent out to rescue it. In another, an entire fleet of modified Chevrolet Bolts shows up at an intersection and simply stops, blocking traffic with a whiff of Maximum Overdrive. In a third, a Tesla drives, at very slow speed, straight into the tail of a private jet.
This, it seems, is the best the field can do after investors have bet something like $100 billion, according to a McKinsey & Co. report. While the industry’s biggest names continue to project optimism, the emerging consensus is that the world of robo-taxis isn’t just around the next unprotected left — that we might have to wait decades longer, or an eternity.
“It’s a scam,” says George Hotz, whose company Comma.ai Inc. makes a driver-assistance system similar to Tesla Inc.’s Autopilot. “These companies have squandered tens of billions of dollars.” In 2018 analysts put the market value of Waymo LLC, then a subsidiary of Alphabet Inc., at $175 billion. Its most recent funding round gave the company an estimated valuation of $30 billion, roughly the same as Cruise. Aurora Innovation Inc., a startup co-founded by Chris Urmson, Google’s former autonomous-vehicle chief, has lost more than 85% of its value since last year and is now worth less than $3 billion. This September a leaked memo from Urmson summed up Aurora’s cash-flow struggles and suggested it might have to sell out to a larger company. Many of the industry’s most promising efforts have met the same fate in recent years, including Drive.ai, Voyage, Zoox, and Uber’s self-driving division. “Long term, I think we will have autonomous vehicles that you and I can buy,” says Mike Ramsey, an analyst at market researcher Gartner Inc. “But we’re going to be old.”
Our driverless future is starting to look so distant that even some of its most fervent believers have turned apostate. Chief among them is Anthony Levandowski, the engineer who more or less created the model for self-driving research and was, for more than a decade, the field’s biggest star. Now he’s running a startup that’s developing autonomous trucks for industrial sites, and he says that for the foreseeable future, that’s about as much complexity as any driverless vehicle will be able to handle. “You’d be hard-pressed to find another industry that’s invested so many dollars in R&D and that has delivered so little,” Levandowski says in an interview. “Forget about profits—what’s the combined revenue of all the robo-taxi, robo-truck, robo-whatever companies? Is it a million dollars? Maybe. I think it’s more like zero.”
In some ways, Levandowski is about as biased a party as anyone could be. His ride on top of the driverless wave ended in ignominy, after he moved from Google to Uber Technologies Inc. and his old bosses sued the crap out of his new ones for, they said, taking proprietary research along with him. The multibillion-dollar lawsuit and federal criminal case got Levandowski fired, forced him into bankruptcy, and ended with his conviction for stealing trade secrets. He avoided prison only thanks to a presidential pardon from Donald Trump.
On the other hand, Levandowski is also acknowledged, even by his detractors, as a pioneer in the industry and the person most responsible for turning driverless cars from a science project into something approaching a business. Eighteen years ago he wowed the Pentagon with a kinda-sorta-driverless motorcycle. That project turned into Google’s driverless Prius, which pushed dozens of others to start self-driving car programs. In 2017, Levandowski founded a religion called the Way of the Future, centered on the idea that AI was becoming downright godlike.
What shattered his faith? He says that in the years after his defenestration from Uber, he began to compare the industry’s wild claims to what seemed like an obvious lack of progress with no obvious path forward. “It wasn’t a business, it was a hobby,” he says. Levandowski maintains that somebody, eventually, will figure out how to reliably get robots to turn left, and all the rest of it. “We’re going to get there at some point. But we have such a long way to go.”
For the companies that invested billions in the driverless future that was supposed to be around the next corner, “We’ll get there when we get there” isn’t an acceptable answer. The industry that grew up around Levandowski’s ideas can’t just reverse course like all those Google cars outside Jennifer King’s bedroom. And the companies that bet it all on those ideas might very well be stuck in a dead end.
All self-driving car demos are more or less the same. You ride in the back seat and watch the steering wheel move on its own while a screen shows you what the computer is “seeing.” On the display, little red or green boxes hover perfectly over every car, bike, jaywalker, stoplight, etc. you pass. All this input feels subliminal when you’re driving your own car, but on a readout that looks like a mix between the POVs of the Terminator and the Predator, it’s overwhelming. It makes driving feel a lot more dangerous, like something that might well be better left to machines. The car companies know this, which is why they do it. Amping up the baseline tension of a drive makes their software’s screw-ups seem like less of an outlier, and the successes all the more remarkable.
One of the industry’s favorite maxims is that humans are terrible drivers. This may seem intuitive to anyone who’s taken the Cross Bronx Expressway home during rush hour, but it’s not even close to true. Throw a top-of-the-line robot at any difficult driving task, and you’ll be lucky if the robot lasts a few seconds before crapping out.
“Humans are really, really good drivers — absurdly good,” Hotz says. Traffic deaths are rare, amounting to one person for every 100 million miles or so driven in the US, according to the National Highway Traffic Safety Administration. Even that number makes people seem less capable than they actually are. Fatal accidents are largely caused by reckless behavior — speeding, drunks, texters, and people who fall asleep at the wheel. As a group, school bus drivers are involved in…
