Self-Driving Cars: Not Worth the Trouble, Won’t Replace Transit

By Michael D. Setty, CRN Editor

Major media in the United States incessantly tell us that the widespread adoption of self-driving cars is just around the corner, and that this technological wonder is going to “disrupt” transportation on a broad scale. It is routinely asserted that most taxi and truck drivers will soon be out of jobs.

The claim that rail transit and rail passenger service will be rendered obsolete is of more interest to rail advocates, despite almost non-existent evidence for it. Opportunistic rail opponents already claim that there should be no more investment in new rail lines and service because "self-driving cars are coming."

For example, The Antiplanner, a blog by Randal O'Toole, perhaps the most outspoken rail opponent in the U.S., constantly claims that rail is obsolete because self-driving cars are "inevitable," among other things. (For a sampling of O'Toole's vast collection of blog posts on this topic, search his site for "self driving cars.")

Well, no. As a 2014 Fortune article put it:

Political ideology, as it tends to, may rush into the vacuum of facts. Florida offers a preview where driverless cars have become part of right-wing pushback to mass transit plans. It was [state senator Jeff] Brandes who introduced legislation that made Florida one of only four states to allow monitored testing of driverless vehicles on public roads. Republican governor Rick Scott, who once was strongly associated with the populist Tea Party movement, has made public appearances to support driverless car development in Florida even as he has rejected federal funding for a Tampa-to-Orlando high-speed rail line.

Florida transit advocates have pointed out that claims of the imminent arrival of self-driving cars are simply a stalling tactic:

Others in the fight see the autonomous vehicles argument as little more than a political stalling tactic, deployed by those who oppose mass transit for ideological reasons. “We are the last metropolitan area in the United States to develop a regional transit system,” says Phil Compton, a national Sierra Club staffer who has been tasked with supporting Greenlight [a transit plan for Pinellas County, Florida] for the past three years. “That is an objective fact. How many more decades do we have to wait for an alternative to what we have now?”


A website catering to driving instructors gives five reasons why self-driving cars will never catch on the way their apologists claim (paraphrased):

  1. The technology is too expensive and offers vehicle purchasers limited benefits compared to the extra cost of self-driving technology.
  2. The technology is still mostly untested and imperfect, “outside the Google hothouse.”
  3. Self-driving cars are a legal minefield, and it will take decades to work out liability and a host of other sticky issues.
  4. Society’s tolerance for malfunctions (“machine error”) is extraordinarily low, illustrated by the fact that plane and train crashes are generally headline news, mainly because they are so rare. A few more incidents like the driver death caused by a semi-automated Tesla that failed to see a tractor-trailer will slow down self-driving car deployment and adoption to a crawl.
  5. Self-driving cars are too disruptive. Paid drivers will not sit idly by while their jobs disappear. Political action will delay, if not stop, complete automation in its tracks.

The British political magazine New Statesman recently ran an article by transport expert Christopher Wolmar, "Transport's Favourite Myths: Why We Will Never Own Driverless Cars." Wolmar points out that there are far more urgent transportation problems being obscured by "all the hype."

For example, Elon Musk claims that "fully autonomous" cars will be on the road by 2018. But the article points out the numerous hardware problems that must be overcome if self-driving cars are to live up to the hype. For example, sensors work well on sunny, clear days but very poorly in bad weather. While not mentioned in the article, designing electronic components and sensors with "military grade" reliability results in very high unit costs. Wolmar points out:

The driverless car does not stand up to scrutiny. When pressed, Musk conceded that the “fully autonomous” car that he said would be ready by 2018 would not be completely automatic, nor would it go on general sale. There is a pattern. Whenever I ask people in the field what we can expect by a certain date, it never amounts to anything like a fully autonomous vehicle but rather a set of aids for drivers.

This is a crucial distinction. For this technology to be transformational, the cars have to be 100 per cent autonomous. [emphasis added] It is worse than useless if the "driver" has to watch over the controls, ready to take over if an incident seems likely to occur. Such a future would be more dangerous than the present, as our driving skills will have diminished, leaving us less able to react. [emphasis added] Google notes that it can take up to 17 seconds for a person to respond to alerts of a situation requiring him or her to assume control of the vehicle.


This point is also reinforced by an April 2015 Washington Post article:

“This notion, fall back to a human, in part it’s kind of a fallacy,” Eustice said. “To fall back on a human the car has to be able to have enough predictive capability to know that 30 seconds from now, or whatever, it’s in a situation it can’t handle. The human, they’re not going to pay attention in the car. You’re going to be on your cell phone, you’re going to totally tune out, whatever. To take on that cognitive load, you can’t just kick out and say oh ‘take over.’ ”


Another potential Achilles' heel, often ignored in the self-driving car propaganda haze, is that government regulators will probably not allow the sale of a self-driving design unless it has been programmed to follow established traffic laws, or at least most of them:

The project at Stanford is considering more minor ethical issues, which may have less severe consequences but will crop up more often. For example, if an autonomous car approaches an obstacle that takes up half a lane, and there's a double line in the middle of the road, what should the vehicle do? A human driver might not think twice about momentarily breaking the law and passing over the lines to get past – assuming there's no oncoming traffic, obviously – but is it right for autonomous cars to be programmed to plan ahead of time to break the law? And if so, under what circumstances, and to what extent?

Despite the efforts of some very smart people at Stanford, the history of artificial intelligence is not reassuring on this matter. The issue will also inevitably attract the attention of law enforcement, attorneys, and politicians, guaranteeing that arguments over when and under what circumstances self-driving cars may legitimately "break the law" will drag on for many years, if not decades.

In his New Statesman article, Wolmar identifies the political agenda obscured by all the self-driving hype: the anti-transit agenda of Randal O'Toole and others of his ilk:

The danger of all the hype is that politicians will assume that the driverless revolution obviates the need to search for solutions to more urgent problems, such as congestion and pollution. Why bother to build infrastructure, such as new Tube lines or tram systems, or to push for road pricing, if we’ll all end up in autonomous pods? Google all but confesses that its autonomous cars are intended to be an alternative to public transport – the opposite of a rational solution to the problems that we face.