One morning this past spring, Baruch Fischhoff, a professor in the department of engineering and public policy at Carnegie Mellon University, was walking to campus on a quiet tree-lined Pittsburgh street when a prototype computer-driven Uber, a gray Volvo XC90, motored slowly past.

Pittsburghers have grown accustomed to seeing the vehicles prowling the streets of the company’s de facto outdoor test bed. But as Fischhoff approached the corner, he noticed a road crew and a cement mixer in the middle of the intersection. “I thought, ‘Gee, I wonder if the computer can figure out if this is something in the road that’s not moving.’”

He waited for the car, which was idling to his left, to make a right turn—the only plausible move. But it did nothing. “It was there first, it had the right-of-way, and I was waiting for it to turn,” he said. Still, the car did nothing. “The longer I waited, the less willing I was to trust that the computer would make the right decision.” Finally, the human stationed inside the car tapped on the window and silently motioned him across.

Fischhoff, who studied math and psychology—and who trained with both Daniel Kahneman and the late Amos Tversky, giants in the field of the psychology of decision making—is one of the world’s leading scholars on the psychology of risk. There is a cosmic irony in the idea of Fischhoff standing on a corner, trying to calculate what this machine was going to do, what he should do, and how he felt about the whole thing.

We are all, in a sense, standing on the corner, caught between the whirlwind of breathless news about the driverless future and our own uncertain sense of what that looks like and, more important, what our comfort level with that future is.

In terms of saving lives, driverless vehicles, on paper anyway, make sense: Simply remove the possibility of fatigue or alcohol impairment in a driver, and you have just knocked 45.5 percent off the fatality rate in the U.S.—and that is merely the lowest-hanging fruit in a forest of human-factor hazards.

But we don’t tend to think on paper. We think in our heads, which Fischhoff has spent his working life trying to get into. In a 1978 paper titled “How Safe Is Safe Enough?” he and his coauthors noted that in modern industrial societies, “the benefits from technology must be paid for not only with money, but with lives.” From nuclear energy to aerosol cans—and, of course, cars—“every technological advance carries some risks of adverse side effects.” The question was: How much were people willing to pay in convenience, efficiency, and money to lessen that risk? The answer depends not simply on the amount of benefit, but on how we feel about the risk itself—and not all risks are felt with the same force.

Let us now flip the equation somewhat. Imagine that fully automated vehicles come to market with a promise of being safer than conventional automobiles and at a comparable price. What level of additional safety—rather than “acceptable risk,” think “perceived benefit”—would prompt drivers (who have rarely warmed immediately to new safety technologies) to give up the wheel for good? Last year, Mark Rosekind, head of the National Highway Traffic Safety Administration at the time, declared that a driverless-vehicle fleet, to earn government approval, should increase safety at least twofold. In other words, it should at a minimum cut in half the current toll of roughly 40,200 deaths annually. Even an improvement that large might not be enough to entice Americans to give up the steering wheel, though, since computer-driven cars push the very same psychological buttons that most strongly affect our feelings of risk.

Before considering the safety of driverless vehicles, it is worth determining how safe human driving is. The chance of any one trip in the U.S. ending in a fatality is remarkably small. In a 1978 study, Fischhoff estimated the odds at 1 in 3.5 million; after several decades of safety improvements, he roughly estimates the risk has halved, to 1 in 7 million. Over a lifetime, however, according to the National Safety Council, the odds shorten to a mere 1 in 114 (with the proviso that one’s individual risk may vary greatly). The disparity seems massive, but according to Fischhoff, people have trouble seeing “how small risks mount up through repeated exposure.”
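The compounding Fischhoff describes is easy to check with back-of-the-envelope arithmetic. Here is a minimal sketch in Python; the per-trip odds are his rough estimate quoted above, while the trip counts (two trips a day over a 70-year driving life) are illustrative assumptions of ours, not figures from Fischhoff or the NSC:

```python
# Back-of-the-envelope: how a tiny per-trip risk compounds over a lifetime.
# The per-trip odds (1 in 7 million) are Fischhoff's rough estimate; the
# trip counts below are illustrative assumptions, not sourced figures.

per_trip_risk = 1 / 7_000_000                # rough per-trip fatality odds

trips_per_day = 2                            # assumed: one daily round trip
years_of_driving = 70                        # assumed driving lifetime
lifetime_trips = trips_per_day * 365 * years_of_driving   # ~51,000 trips

# Chance of surviving every trip, treating trips as independent events
p_survive_all = (1 - per_trip_risk) ** lifetime_trips
lifetime_risk = 1 - p_survive_all

print(f"Lifetime trips: {lifetime_trips:,}")
print(f"Lifetime fatality odds: about 1 in {round(1 / lifetime_risk):,}")
# Under these assumptions: about 1 in 137. The NSC's 1 in 114 is computed
# differently (from actual death records), but the order of magnitude matches.
```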

And so, even though driving has been called “the most important contributor to the danger of leaving home,” few of us would openly admit to needing a car that drives for us. Simply being a driver changes our risk assessment of driving. When the driver moves to the passenger seat, research has found, they become markedly less optimistic about their chances of avoiding a crash. Suddenly, their risk has become more “involuntary.” For us to become full-time passengers in a car, to accept that we are not in control, we will probably need commercial-aviation levels of safety, where the lifetime odds of dying are a mere 1 in 9,821. No one much wonders anymore—given the extraordinary gains in aviation safety—whether man or machine is at the controls of their flight at any given moment. But to get us to that point, driverless cars will need an extraordinarily robust safety net.

Feelings of risk decline when we feel in control. We so enjoy that feeling, laboratory studies have shown, that people will pay more to make their own choices—even when doing so comes with a greater risk of a bad outcome.

Gill Pratt, the CEO of the Toyota Research Institute (which investigates service robots as well as robots for the road), uses the example of motion sickness to describe that connection, that feeling of dominion the driver has over a car. “When a person is behind the wheel, they tend to get much less carsick than if someone is driving for them,” he says. “At a very basic neurological level, you know that you get to control acceleration and how the car turns—when I turn left, I know that pretty soon my inner ear is going to feel me turning left.” Passengers, lacking that direct interface, can feel at sea, in more ways than one.

Fully automated cars not only make us passengers (and hence more fearful); they also put us under machine control. That turns a voluntary activity into something that feels less voluntary, making us more risk-averse.

Anuj Pradhan, a research scientist at the University of Michigan’s Transportation Research Institute, was riding in a Navya Arma, a French-made driverless bus, in Mcity, the institute’s test facility, when the vehicle made a small series of juddering braking maneuvers. Out of the corner of his eye, Pradhan had seen “a tiny bird, a sparrow,” fly in front of the vehicle, and its motion was picked up by the vehicle’s lidar sensors. If he had not seen the bird, Pradhan says, he might have been tempted to yank the emergency shutoff—or what the researchers call the “Oh shit! bar.” A human driver might have explained the sudden braking; the machine simply resumed its course. “This is where the human-machine interface becomes critical,” he says. “You want to be able to inform the passenger or operator what the state of the system is.” Automation researchers have noted that where human relationships tend to build toward trust, our trust in machines begins high and then erodes with errors. We are more likely to give people second chances than machines.

“The rational utilitarian view is that we should trust the machines versus us as long as they’re even the slightest bit better,” says Toyota’s Pratt. But how good will be good enough? “Our hypothesis is that, because it’s hard to empathize with a machine compared with a person, society is going to hold machines to a higher standard,” Pratt says. This uneven playing field could have legal ramifications, the authors of an article in the Santa Clara Law Review contend. Cost/benefit arguments, they suggest, would “not depend on whether the at-fault autonomous vehicle is better overall than a traditional vehicle, but whether the autonomous-vehicle technology could have been tweaked to make it safer.” Computer-driven cars, in other words, will be judged not so much against human drivers as against the platonic ideal of a driverless vehicle itself.

A machine-caused crash looms larger than a human-caused crash. As the English risk expert John Adams puts it, “What kills you matters.” It matters in myriad ways. In one study, subjects were presented with two fatal crash scenarios: In the first, death was the result of “inhalation of toxic fumes from a damaged engine”; in the second, it came from “trauma caused by the force of the airbag deployment.” Both were arguably product flaws, but under a dynamic termed “betrayal aversion,” subjects felt much more betrayed by the manufacturer in the airbag scenario, because their trust was violated by the very thing that was supposed to save them. Driverless vehicles, marketed as a correction to human error, will have to overcome our low threshold for betrayal.

In “How Safe Is Safe Enough?” the authors wrote that “reduction of risk typically entails reduction of benefit.” Traffic fatalities could be massively cut, virtually overnight, if speed governors set to 35 mph were installed on every car. We have decided that the benefits of fast, unfettered mobility are worth the cost in human lives.

Ian Walker, a psychologist specializing in traffic safety and travel choices at the University of Bath, says there is something else, besides the perceived benefit and the voluntary risk, that leads us to accept highway fatality numbers that far exceed workplace—even combat—deaths. And that is randomness. “Implicitly, people are buying into this idea that it’s okay to have a system in which millions of people die as long as there’s no predictability or forethought about it.” It is millions of drivers, driving millions of miles, making their own risk/reward decisions, having the occasional “accident”—a word we turn to for comfort, even though we know most accidents are preventable. More than a century after the invention of the automobile, with the true “auto” mobile shimmering on the edge of feasibility, the whole calculus of risk and safety will be taken out of the hands of individuals and brought into the system-wide decisions of software engineers, vehicle manufacturers, and regulators. Whether those parties favor absolute safety at the cost of convenience (with slower, less aggressive cars) or embrace the gains of machine optimality with higher risks (platoons of tightly packed, fast-moving vehicles) remains to be seen, but for the first time, the decision to assign odds will be made consciously. We’ll no longer be gamblers but the casino.