Will Your Driverless Car Kill You So Others May Live?

By Eric Schwitzgebel, Los Angeles Times (TNS)

It’s 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, there’s a 90 percent chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?

This isn’t an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?

The California Department of Motor Vehicles is now trying to draw up safety regulations for autonomous vehicles. These regulations might or might not specify when it is acceptable for collision-avoidance programs to expose passengers to risk to avoid harming others — for example, by crossing the double-yellow line or attempting an uncertain maneuver on ice.

Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.

That’s far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.

Regulatory agencies will need to set some boundaries. For example, some rules should presumably be excluded as too selfish. Consider the over-simple rule of protecting the car’s occupants at all costs. This would imply that if the car calculates that the only way to avoid killing a pedestrian would involve sideswiping a parked truck, with a 5 percent chance of injury to the car’s passengers, then the car should instead kill the pedestrian.

Other possible rules might be too sacrificial of the passengers. The equally over-simple rule of maximizing lives saved without any special regard for the car’s occupants would unfairly disregard personal accountability. What if other drivers — human drivers — have knowingly put themselves in danger? Should your autonomous vehicle risk your safety, perhaps even your life, because a reckless motorcyclist chose to speed around a sharp curve?
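The difference between these two over-simple rules can be made concrete with a sketch. The following is purely illustrative, assuming a made-up decision framework: the maneuver names, probabilities, and scoring functions are hypothetical, drawn only from the scenarios described above, and do not reflect any manufacturer's actual software.

```python
# Hypothetical sketch of the two over-simple collision-avoidance rules
# discussed above. Maneuvers and probabilities mirror the parked-truck
# example in the text; nothing here is a real vehicle's algorithm.

maneuvers = [
    # (name, probability of occupant injury, expected pedestrian deaths)
    ("brake_straight", 0.00, 1.0),   # pedestrian is killed; occupants safe
    ("sideswipe_truck", 0.05, 0.0),  # 5 percent chance of injuring passengers
]

def protect_occupants_at_all_costs(options):
    """Rule 1: minimize risk to the car's own passengers, ignoring others."""
    return min(options, key=lambda m: m[1])[0]

def maximize_lives_saved(options):
    """Rule 2: minimize total expected harm, occupants counted equally."""
    return min(options, key=lambda m: m[1] + m[2])[0]

print(protect_occupants_at_all_costs(maneuvers))  # brake_straight
print(maximize_lives_saved(maneuvers))            # sideswipe_truck
```

On these numbers the two rules choose opposite maneuvers, which is the article's point: the choice of scoring function is itself a moral decision.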

A Mountain View lab must not be allowed to resolve these difficult questions on our behalf.

That said, a good regulatory framework ought to allow some manufacturer variation and consumer choice, within ethical limits. Manufacturers or fleet operators could offer passengers a range of options. “When your child is in the car, our onboard systems will detect it and prioritize the protection of rear-seat passengers!” Cars might have aggressive modes (maximum allowable speed and aggressiveness), safety modes, ethical utilitarian modes (perhaps visibly advertised so that others can admire your benevolence) and so forth.

Some consumer freedom seems ethically desirable. To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values. Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.

There will also always be trade-offs between speed and safety, and different passengers might legitimately weigh them differently, as we now do in our manual driving choices.

Furthermore, although we might expect computers to have faster reaction times than people, our best computer programs still lag far behind normal human vision at detecting objects in novel, cluttered environments. Suppose your car happens upon a woman pushing a rack of coats in a windy swirl of leaves. Vehicle owners may insist on some sort of preemptive override, some way of telling their car not to employ its usual algorithm, lest it sacrifice them for a mirage.

There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death. A well-designed machine will probably do better in the long run.

That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat. It will, however, have power over life and death. We need to decide — publicly — how it will exert that power.

ABOUT THE WRITER

Eric Schwitzgebel is a professor of philosophy at UC Riverside and the author of “Perplexities of Consciousness.” He blogs at the Splintered Mind. He wrote this for the Los Angeles Times.

©2015 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Shorter Road Ahead For Driverless Cars

Friends keep insisting that driverless cars belong to the distant future. Apparently, they don’t. That future is near.

Exhibit A is Google’s decision to get into the auto insurance business. Now Google wants to be in everyone’s business. But its foray into insuring drivers is highly plausible for a company deep into developing driverless cars.

In a driverless car, you the “operator” may be texting, drinking, or having sex, but if you’re in an accident, the fault presumably lies with the software. Another nice thing about driverless cars is that we’re far less likely to be in accidents. That’s because humans and their frailties are taken out of the equation.

Obviously, we’re still in the driver-driven reality, so the new Google auto insurance site will let car owners compare the rates on conventional coverage offered by various companies. Google gets two things out of this. One, it drains more personal data from users. Two, it takes a cut whenever someone picks a policy on its site.

But come the driverless future, the current model of auto insurance goes out the window. Accidents will be the fault of the software or the car’s mechanics. And software will decide whether it or the metal is to blame.

“After an accident, the onboard computer and sensors will be able to determine whether it was caused by a poorly designed algorithm or a parts failure,” Xerox insurance executive Valerie Raburn explains in The Wall Street Journal.

Fewer accidents and fewer humans mean that car insurance payments will go way down. In fact, they could be made to disappear. Raburn thinks that Google will eventually roll the cost of insurance into the price of its driverless cars. Thus, Google gets a third thing out of entering the auto insurance business: a monopoly on coverage.

Some predict that the whole notion of owning a car will dissolve. After all, one could easily summon a robotic chauffeur via an app, the way we can call a cab or an Uber car. (Surge pricing probably will be a lot harder to pull off.)

Two flies in this ointment. For starters, where will you keep the sunscreen, the extra pair of glasses, and the dog pillow, not to mention the yoga mat and cooler stored in the trunk? This is a semi-public space in which you have no hoarding rights.

The other concern is this: Suppose the occupants of the driverless car before you were slobs. You are picked up by a vehicle littered with empty soda cans and discarded tissues, pizza leavings on the seat. Human drivers can police the passengers in back for anti-social behavior.

One supposes there will be sensors and algorithms to detect littering, in addition to reading your mind. Perhaps an ejection seat mechanism can be attached. Just thinking.

That’s down the road. What’s in the now?

Britain has already permitted trial runs of driverless cars in several cities. The thingies on the road are being called not cars but “pods.” They look like lopped-off Fiat 500s, if you can imagine.

In the United States, driverless cars will be tested in a make-believe town — Mcity — to be built by the University of Michigan and the Michigan Department of Transportation. Thrown in the cars’ path will be mechanized pedestrians and bicycles, traffic circles, junk in the right lane, changing road widths, curb cuts, and an enormous collection of signs and traffic signals. If the cars can handle this, they may someday be able to master Shreveport, Louisiana, and Pawtucket, Rhode Island.

Driverless cars are clearly here, almost. Speed the day.

Follow Froma Harrop on Twitter @FromaHarrop. She can be reached at fharrop@gmail.com. To find out more about Froma Harrop and read features by other Creators writers and cartoonists, visit the Creators Web page at www.creators.com. 
