Self-Driving Cars: The Road to Nowhere?

Self-driving cars were meant to be the future. However, prominent critics, including industry pioneer Anthony Levandowski, are becoming more vocal as the losses mount.

Source: Bloomberg | Published on October 6, 2022

The first car woke Jennifer King at 2 a.m. with a loud, high-pitched hum. "It sounded like a hovercraft," she says, but that wasn't the strange part. King lives on a dead-end street on the outskirts of the Presidio, a 1,500-acre park in San Francisco with no through traffic. A white Jaguar SUV was backing out of her driveway. It had a laser sensor on its roof that looked like a giant fan and bore the logo of Google's driverless car division, Waymo.

What she was watching appeared to be a bug in the self-driving software: the car was using her property to make a three-point turn. If it had happened only once, she says, it wouldn't have been a big deal. But dozens of Google cars soon began doing the same thing, over and over, every day.

King complained to Google about the cars driving her insane, but the K-turns persisted. Occasionally, a few of the SUVs would arrive at the same time and form a small line, like an army of zombie driver's education students. The whole thing went on for weeks until King called the local CBS affiliate and a news crew came to the scene last October. "When you watch it, it's kind of funny," the report began. "And the neighbors are taking notice." King's driveway was soon hers again.

Waymo denies that its technology failed, claiming in a statement that its vehicles "obeyed the same road rules that any car is required to follow." The company, like its peers in Silicon Valley and Detroit, has described incidents like this as isolated, potholes on the road to a future without a steering wheel. Over the past decade, flashy demos from companies such as Google, GM, Ford, Tesla, and Zoox have promised self-driving cars capable of navigating chaotic urban landscapes, highways, and extreme weather without human intervention or oversight. The companies claim to be on the verge of eliminating traffic fatalities, rush-hour congestion, and parking lots, as well as upending the $2 trillion global automotive industry.

It all sounds great until you come across a real-life robo-taxi. Which is rare: Six years after companies began offering rides in self-driving cars, and nearly 20 years after the first self-driving demos, there are vanishingly few such vehicles on the road. And they're limited to a few locations in the Sun Belt, because they can't handle weather patterns more complicated than Partly Cloudy. Robot cars are also challenged by construction, animals, traffic cones, crossing guards, and what the industry refers to as "unprotected left turns," which most of us refer to as "left turns."

According to the industry, the Derek Zoolander problem only applies to left turns that require navigating oncoming traffic. (Great.) It has devoted significant resources to solving left turns, but the work continues. Earlier this year, Cruise LLC—majority-owned by General Motors Co.—recalled all of its self-driving vehicles after one car’s inability to turn left contributed to a crash in San Francisco that injured two people. According to Cruise spokesman Aaron McLear, the recall "does not impact or change our current on-road operations." This year, Cruise intends to expand to Austin and Phoenix. "We may have moved the timeline to the left for the first time in AV history," McLear says.

Cruise didn’t release the video of that accident, but there’s an entire social media genre featuring self-driving cars that become hopelessly confused. When the results are less serious, they can be funny as hell. In one example, a Waymo car gets so flummoxed by a traffic cone that it drives away from the technician sent out to rescue it. In another, an entire fleet of modified Chevrolet Bolts shows up at an intersection and simply stops, blocking traffic with a whiff of Maximum Overdrive. In a third, a Tesla drives, at very slow speed, straight into the tail of a private jet.

This, it seems, is the best the field can do after investors have bet something like $100 billion, according to a McKinsey & Co. report. While the industry’s biggest names continue to project optimism, the emerging consensus is that the world of robo-taxis isn’t just around the next unprotected left—that we might have to wait decades longer, or an eternity.

“It’s a scam,” says George Hotz, whose company Comma.ai Inc. makes a driver-assistance system similar to Tesla Inc.’s Autopilot. “These companies have squandered tens of billions of dollars.” In 2018 analysts put the market value of Waymo LLC, then a subsidiary of Alphabet Inc., at $175 billion. Its most recent funding round gave the company an estimated valuation of $30 billion, roughly the same as Cruise. Aurora Innovation Inc., a startup co-founded by Chris Urmson, Google’s former autonomous-vehicle chief, has lost more than 85% of its market value since going public last year and is now worth less than $3 billion. This September a leaked memo from Urmson summed up Aurora’s cash-flow struggles and suggested it might have to sell out to a larger company. Many of the industry’s most promising efforts have met the same fate in recent years, including Drive.ai, Voyage, Zoox, and Uber’s self-driving division. “Long term, I think we will have autonomous vehicles that you and I can buy,” says Mike Ramsey, an analyst at market researcher Gartner Inc. “But we’re going to be old.”

Our driverless future is starting to look so distant that even some of its most fervent believers have turned apostate. Chief among them is Anthony Levandowski, the engineer who more or less created the model for self-driving research and was, for more than a decade, the field’s biggest star. Now he’s running a startup that’s developing autonomous trucks for industrial sites, and he says that for the foreseeable future, that’s about as much complexity as any driverless vehicle will be able to handle. “You’d be hard-pressed to find another industry that’s invested so many dollars in R&D and that has delivered so little,” Levandowski says in an interview. “Forget about profits—what’s the combined revenue of all the robo-taxi, robo-truck, robo-whatever companies? Is it a million dollars? Maybe. I think it’s more like zero.”

In some ways, Levandowski is about as biased a party as anyone could be. His ride on top of the driverless wave ended in ignominy, after he moved from Google to Uber Technologies Inc. and his old bosses sued the crap out of his new ones for, they said, taking proprietary research along with him. The multibillion-dollar lawsuit and federal criminal case got Levandowski fired, forced him into bankruptcy, and ended with his conviction for stealing trade secrets. He only avoided prison thanks to a presidential pardon from Donald Trump.

On the other hand, Levandowski is also acknowledged, even by his detractors, as a pioneer in the industry and the person most responsible for turning driverless cars from a science project into something approaching a business. Eighteen years ago he wowed the Pentagon with a kinda-sorta-driverless motorcycle. That project turned into Google’s driverless Prius, which pushed dozens of others to start self-driving car programs. In 2017, Levandowski founded a religion called the Way of the Future, centered on the idea that AI was becoming downright godlike.

What shattered his faith? He says that in the years after his defenestration from Uber, he began to compare the industry’s wild claims to what seemed like an obvious lack of progress with no obvious path forward. “It wasn’t a business, it was a hobby,” he says. Levandowski maintains that somebody, eventually, will figure out how to reliably get robots to turn left, and all the rest of it. “We’re going to get there at some point. But we have such a long way to go.”

For the companies that invested billions in the driverless future that was supposed to be around the next corner, “We’ll get there when we get there” isn’t an acceptable answer. The industry that grew up around Levandowski’s ideas can’t just reverse course like all those Google cars outside Jennifer King’s bedroom. And the companies that bet it all on those ideas might very well be stuck in a dead end.

All self-driving car demos are more or less the same. You ride in the back seat and watch the steering wheel move on its own while a screen shows you what the computer is “seeing.” On the display, little red or green boxes hover perfectly over every car, bike, jaywalker, stoplight, etc. you pass. All this input feels subliminal when you’re driving your own car, but on a readout that looks like a mix between the POVs of the Terminator and the Predator, it’s overwhelming. It makes driving feel a lot more dangerous, like something that might well be better left to machines. The car companies know this, which is why they do it. Amping up the baseline tension of a drive makes their software’s screw-ups seem like less of an outlier, and the successes all the more remarkable.

One of the industry’s favorite maxims is that humans are terrible drivers. This may seem intuitive to anyone who’s taken the Cross Bronx Expressway home during rush hour, but it’s not even close to true. Throw a top-of-the-line robot at any difficult driving task, and you’ll be lucky if the robot lasts a few seconds before crapping out.

“Humans are really, really good drivers—absurdly good,” Hotz says. Traffic deaths are rare, amounting to one person for every 100 million miles or so driven in the US, according to the National Highway Traffic Safety Administration. Even that number makes people seem less capable than they actually are. Fatal accidents are largely caused by reckless behavior—speeding, drunks, texters, and people who fall asleep at the wheel. As a group, school bus drivers are involved in one fatal crash roughly every 500 million miles. Although most of the accidents reported by self-driving cars have been minor, the data suggest that autonomous cars have been involved in accidents more frequently than human-driven ones, with rear-end collisions being especially common. “The problem is that there isn’t any test to know if a driverless car is safe to operate,” says Ramsey, the Gartner analyst. “It’s mostly just anecdotal.”

Waymo, the market leader, said last year that it had driven more than 20 million miles over about a decade. That means its cars would have to drive 25 times their current total before we’d be able to say, with even a vague sense of certainty, that they cause fewer deaths than bus drivers. The comparison is likely skewed further because the company has done much of its testing in sunny California and Arizona.
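
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python using only the figures cited above; the framing and variable names are ours, and the comparison is illustrative rather than a rigorous safety analysis:

```python
# Back-of-the-envelope math for the safety comparison above.
# All inputs are figures cited in the article; nothing here is new data.

US_FATALITY_RATE = 1 / 100_000_000      # ~1 death per 100M miles (NHTSA)
BUS_FATAL_CRASH_RATE = 1 / 500_000_000  # ~1 fatal crash per 500M miles
WAYMO_MILES = 20_000_000                # Waymo's reported total, ~a decade

# Even at the overall US rate, 20M miles implies only ~0.2 expected
# deaths -- far too few events to estimate a fatality rate with any
# statistical confidence.
expected_deaths = WAYMO_MILES * US_FATALITY_RATE

# To be measured against the school-bus benchmark, a fleet needs enough
# miles for one expected fatal crash at that rate: 500 million miles.
benchmark_miles = 1 / BUS_FATAL_CRASH_RATE
multiple = benchmark_miles / WAYMO_MILES

print(f"Expected deaths at the US average over 20M miles: {expected_deaths:.1f}")
print(f"Benchmark mileage: {benchmark_miles:,.0f} ({multiple:.0f}x Waymo's total)")
```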

For now, here’s what we know: Computers can run calculations a lot faster than we can, but they still have no idea how to process many common roadway variables. People driving down a city street with a few pigeons pecking away near the median know (a) that the pigeons will fly away as the car approaches and (b) that drivers behind them also know the pigeons will scatter. Drivers know, without having to think about it, that slamming the brakes wouldn’t just be unnecessary—it would be dangerous. So they maintain their speed.

What the smartest self-driving car “sees,” on the other hand, is a small obstacle. It doesn’t know where the obstacle came from or where it may go, only that the car is supposed to safely avoid obstacles, so it might respond by hitting the brakes. The best-case scenario is a small traffic jam, but braking suddenly could cause the next car coming down the road to rear-end it. Computers deal with their shortcomings through repetition, meaning that if you showed the same pigeon scenario to a self-driving car enough times, it might figure out how to handle it reliably. But it would likely have no idea how to deal with slightly different pigeons flying a slightly different way.
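
As a rough illustration of the gap described above, consider a deliberately naive planner sketch in Python. It assumes a hypothetical perception layer that reports only an obstacle's distance; the class, function, and threshold values are invented for illustration and don't correspond to any real company's stack:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float  # distance ahead of the vehicle, in meters
    # Note what's missing: no species, no intent, no prediction that a
    # pigeon will scatter. To this planner, everything is "an obstacle."

def plan_action(obstacles: list[Obstacle], speed_mps: float) -> str:
    """Naive rule: brake for anything within stopping range plus a buffer."""
    HARD_BRAKE_DECEL = 7.0  # m/s^2, roughly a hard stop on dry pavement
    BUFFER_M = 5.0
    stopping_distance = speed_mps ** 2 / (2 * HARD_BRAKE_DECEL)
    for obstacle in obstacles:
        if obstacle.distance_m < stopping_distance + BUFFER_M:
            return "BRAKE"       # the pigeons trigger a hard stop
    return "MAINTAIN_SPEED"      # what the human drivers around it expect

# Pigeons pecking 12 m ahead at ~30 mph (13.4 m/s): this planner brakes,
# while a human would correctly predict the birds will scatter.
print(plan_action([Obstacle(distance_m=12.0)], speed_mps=13.4))  # BRAKE
```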

The industry uses the phrase “deep learning” to describe this process, but that makes it sound more sophisticated than it is. “What deep learning is doing is something similar to memorization,” says Gary Marcus, a New York University psychology professor who studies artificial intelligence and the limits of self-driving vehicles. “It only works if the situations are sufficiently akin.”
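
One way to see Marcus's point is a nearest-neighbor toy model, which is memorization in close to its purest form. The analogy is ours, not his, and is vastly simpler than any production system; the scene encoding and distance threshold are invented for illustration:

```python
import math

# Toy "memory" of handled scenes: (pigeon_count, pigeon_speed) -> action.
memory = {
    (3, 0.5): "MAINTAIN_SPEED",  # pigeons ambling near the median
    (3, 0.6): "MAINTAIN_SPEED",
    (0, 0.0): "MAINTAIN_SPEED",  # empty road
}

def act(scene: tuple[float, float], max_distance: float = 0.5) -> str:
    """Answer from the nearest memorized scene, if one is close enough."""
    nearest = min(memory, key=lambda known: math.dist(known, scene))
    if math.dist(nearest, scene) <= max_distance:
        return memory[nearest]
    return "NO_IDEA"  # slightly different pigeons, no usable precedent

print(act((3, 0.55)))  # near the memorized scenes -> "MAINTAIN_SPEED"
print(act((7, 2.0)))   # a novel flock -> "NO_IDEA"
```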

And the range of these “edge cases,” as AI experts call them, is virtually infinite. Think: cars cutting across three lanes of traffic without signaling, or bicyclists doing the same, or a deer ambling alongside the shoulder, or a low-flying plane, or an eagle, or a drone. Even relatively easy driving problems turn out to contain an untold number of variations depending on weather, road conditions, and human behavior. “You think roads are pretty similar from one place to the next,” Marcus says. “But the world is a complicated place. Every unprotected left is a little different.”

Self-driving companies have fallen back on shortcuts. In lieu of putting more cars on the road for longer, they run simulations inside giant data centers, add those “drives” to their total mile counts, and use them to make claims about safety. Simulations might help with some well-defined scenarios such as left turns, but they can’t manufacture edge cases. In the meantime the companies are relying on pesky humans for help navigating higher-order problems. All use remote operators to help vehicles that run into trouble, as well as safety drivers—“autonomous specialists,” Waymo calls them—who ride inside some cars to take over if there’s a problem.

To Levandowski, who rigged up his first self-driving vehicle in 2004, the most advanced driverless-car companies are all still running what amount to very sophisticated demos. And demos, as he well knows, are misleading by design. “It’s an illusion,” he says: For every successful demo, there might be dozens of failed ones. And whereas you only need to see a person behind the wheel for a few minutes to judge if they can drive or not, computers don’t work that way. If a self-driving car successfully navigates a route, there’s no guarantee it can do so the 20th time, or even the second.

In 2008, Levandowski kludged together his first self-driving Prius, which conducted what the industry widely recognizes as the first successful test of an autonomous vehicle on public streets. (The event was recorded for posterity on a Discovery Channel show called Prototype This!) Levandowski was aware of how controlled the environment was: the car got an extremely wide berth as it made its way from downtown San Francisco across the Bay Bridge and onto Treasure Island, thanks to a 16-vehicle motorcade that protected it from other cars and vice versa. The car did scrape a wall on its way off the bridge, yet he says he couldn’t help but feel amazed that it had all basically worked. “You saw that, and you were like, ‘OK, it’s a demo and there are a lot of things to work on,’ ” he recalls. “But, like, we were almost there. We just needed to make it a little better.”

For most of the years since he built his first “Pribot,” Levandowski says, it’s felt as though he and his competitors were 90% of the way to full-blown robot cars. Executives he later worked with at Google and Uber were all too happy to insist that the science was already there, that his prototypes could already handle any challenge, that all that was left was going commercial. They threw around wild claims that investors, including the Tesla bull Cathie Wood, built into models to calculate that the industry would be worth trillions.

Once again, this was a bit of self-hypnosis, Levandowski says. The demos with the sci-fi computer vision led him and his colleagues to believe they and their computers were thinking more similarly than they really were. “You see these amazing representations of the 3D world, and you think the computer can see everything and can understand what’s going to happen next,” he says. “But computers are still really dumb.”

In the view of Levandowski and many of the brightest minds in AI, the underlying technology isn’t just a few years’ worth of refinements away from a resolution. Autonomous driving, they say, needs a fundamental breakthrough that allows computers to quickly use humanlike intuition rather than learning solely by rote. That is to say, Google engineers might spend the rest of their lives puttering around San Francisco and Phoenix without showing that their technology is safer than driving the old-fashioned way.

In some ways the self-driving future seemed closest and most assured in 2017, after Levandowski went to Uber and Google sued his new employer. Google accused Levandowski of taking a work laptop home, downloading its contents, and using that information to jump-start his work at Uber. (Although he doesn’t deny the laptop part, he’s long disputed that its contents found their way into anything Uber built.) The lawsuit was destabilizing but also validating in a way. Google’s $1.8 billion claim for damages suggested it had done the math on just how imminent the fortunes to be made from driverless technology were. “People were playing for this trillion-dollar prize of automating all transportation,” Levandowski says. “And if you think it’s really just a year away, you take the gloves off.”

Uber had promised to defend Levandowski if he was sued, but it fired him in May 2017, and he faced an arbitration claim in which Google sought to recoup hundreds of millions of dollars. During the 2018 trial, with Google struggling to prove that Uber had used its trade secrets, the two companies settled. Google got about $250 million in Uber stock, a fraction of what it had initially sought, plus a promise that the ride-hailing company wouldn’t use Google’s driverless technology.

The fallout continued for Levandowski in 2019, when federal prosecutors announced that a grand jury had indicted him on 33 counts of trade secrets theft. Soon after, the deal his new company, Pronto.ai, had been negotiating with a truck manufacturer—to try out Pronto’s more modest driver-assist feature for trucks—fell apart. “It turns out a federal indictment does cramp your style,” he says. An arbitration panel also ordered him to pay Google $179 million. He stepped down as Pronto’s chief executive officer, turned the company over to its chief safety officer, Robbie Miller, and declared bankruptcy. As part of a deal with prosecutors, in exchange for the dismissal of the other 32 counts, Levandowski pleaded guilty to one and was sentenced to 18 months in federal prison in August 2020. Because of the pandemic, the sentence was delayed long enough that he never served a day before his pardon, which came on the last day of the Trump presidency.

According to a White House press release at the time, the pardon’s advocates included Trump megadonor Peter Thiel and a half-dozen Thiel allies, including Arizona Senate candidate Blake Masters and Oculus founder Palmer Luckey. Levandowski says that he and Thiel have some mutual friends who spoke up for him but that they never talked until after the pardon was announced. He says he doesn’t know why Thiel took up his cause, but Thiel’s antipathy for Google is legendary, and pardoning Levandowski would’ve been an opportunity to stick a thumb in the company’s eye. Earlier this year, Levandowski reached a settlement with Uber and Google over the $179 million judgment that will allow him to emerge from bankruptcy.

The idea that the secret to self-driving was hidden on Levandowski’s laptop has come to seem less credible over time. A year after Uber fired him, one of its self-driving cars struck and killed a pedestrian in Tempe, Ariz. (The safety driver was charged with negligent homicide and has pleaded not guilty; Uber suspended testing on public roads and added safety measures before resuming. The company itself was never charged.) Uber sold its self-driving unit to Aurora, the now-struggling startup, in 2020, when times were better. In September, Waymo claimed, based on the results of a simulation, that its vehicles are safer in some circumstances than humans. Back in the real world, the safety figures are much less conclusive, and Waymo is basically where it was five years ago. (Waymo disputes this.)

Levandowski says his skepticism of the industry started around 2018. It was a little more than a year after Elon Musk unveiled a demo of a Tesla driving itself to the tune of Paint It Black. Levandowski checked the official road-test data that Tesla submitted to California regulators. The figures showed that, in that time, the number of autonomous miles Tesla had driven on public roads in the state totaled—wait for it—zero. (Tesla hasn’t reported any autonomous miles traveled in California since 2019. The company didn’t respond to a request for comment.) Although Levandowski says he admires Tesla, is impressed by its driver-assistance technology, and believes it may one day produce a truly self-driving car, he says the lack of progress by Musk and his peers forced him to question the point of his own years in the field. “Why are we driving around, testing technology and creating additional risks, without actually delivering anything of value?” he asks.

While Tesla has argued that its current system represents a working prototype, Musk has continued to blur the lines between demos and reality. On Sept. 30 he unveiled what looked like a barely functional robot, promising it would unleash “a fundamental transformation of civilization as we know it.” Six years after it began selling “full self-driving” capabilities, Tesla has yet to deliver a driverless car. Levandowski, for his part, has been spending time in gravel pits.

For more than 100 years, mining companies have been blasting rocks out of the hills near Santa Rosa, Calif., and crushing them into gravel bound for driveways, roads, and drains. Levandowski sometimes refers to Mark West Quarry, where Pronto has been operating its driverless trucks since last December, as a “sandbox,” and it’s easy to see why. The dusty mine features life-size versions of the Tonka toys you’d find in a child’s playroom. Yellow excavators knock enormous boulders down from a terraced cliffside into the mining pit, where front-end loaders pick up the stones and place them in 50-ton dump trucks to be carried to the crusher. “An 8-year-old boy’s dream,” Levandowski says as the boulders rattle through the crusher, which spits the smaller pieces out onto piles.

The mine work started as a sort of backup plan—a way to bring in revenue while Pronto got trucking companies comfortable with using its driver-assistance technology in their long-haul semis. Now, Levandowski says, construction sites are Plan A. Pronto took the same basic system it had used on the semis and built it into a self-driving dump truck, adding cameras, radar, and an onboard computer. Because connectivity is spotty at mine sites, the company created its own networking technology, which it spun off as a separate company, Pollen Mobile LLC. “With mining we’re doing driverless, but controlling the environment,” says Pronto Chief Technology Officer Cat Culkin. BoDean Co., the company that owns Mark West Quarry, is one of a half-dozen clients that pay installation fees to retrofit dump trucks with sensors, plus hourly fees for use. Neither Levandowski nor BoDean will say how much Pronto charges or how much it’s taking in.

Here’s his new vision of the self-driving future: For nine-ish hours each day, two modified Bell articulated end-dumps take turns driving the 200 yards from the pit to the crusher. The road is rutted, steep, and narrow, requiring the trucks to nearly scrape the cliff wall as they rattle down the roller-coaster-like grade. But it’s the same exact trip every time, with no edge cases—no rush hour, no school crossings, no daredevil scooter drivers—and instead of executing an awkward multipoint turn before dumping their loads, the robot trucks back up the hill in reverse, speeding each truck’s reloading. Anthony Boyle, BoDean’s director of production, says the Pronto trucks save four to five hours of labor a day, freeing up drivers to take over loaders and excavators. Otherwise, he says, nothing has changed. “It’s just yellow equipment doing its thing, and you stay out of its way.”

Levandowski recognizes that making rock quarries a little more efficient is a bit of a comedown from his dreams of giant fleets of robotic cars. His company plans to start selling its software for long-haul trucks in 2023. And hopefully, in a few decades, all his old boasts will come true: driverless cities with cushy commutes, zero road fatalities, and totally safe road naps. But for now: “I want to do something that’s real, even if that means scaling back the grandiose visions.”