Will Your Driverless Car Kill You So Others May Live?

By Eric Schwitzgebel, Los Angeles Times (TNS)

It’s 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, there’s a 90 percent chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?

This isn’t an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?

The California Department of Motor Vehicles is now trying to draw up safety regulations for autonomous vehicles. These regulations might or might not specify when it is acceptable for collision-avoidance programs to expose passengers to risk to avoid harming others — for example, by crossing the double-yellow line or attempting an uncertain maneuver on ice.

Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.

That’s far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.

Regulatory agencies will need to set some boundaries. For example, some rules should presumably be excluded as too selfish. Consider the over-simple rule of protecting the car’s occupants at all costs. This would imply that if the car calculates that the only way to avoid killing a pedestrian would involve sideswiping a parked truck, with a 5 percent chance of injury to the car’s passengers, then the car should instead kill the pedestrian.

Other possible rules might be too sacrificial of the passengers. The equally over-simple rule of maximizing lives saved without any special regard for the car’s occupants would unfairly disregard personal accountability. What if other drivers — human drivers — have knowingly put themselves in danger? Should your autonomous vehicle risk your safety, perhaps even your life, because a reckless motorcyclist chose to speed around a sharp curve?
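The contrast between these two over-simple rules can be made concrete. The sketch below is purely illustrative: the maneuver names, harm estimates, and scoring functions are hypothetical inventions for this example, not any manufacturer's actual collision-avoidance algorithm.

```python
# Toy comparison of the two over-simple rules discussed above.
# All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float  # expected injuries to the car's passengers
    other_harm: float     # expected injuries to everyone else

def protect_occupants_at_all_costs(options):
    # The "too selfish" rule: minimize harm to passengers only,
    # ignoring harm to pedestrians and other drivers entirely.
    return min(options, key=lambda m: m.occupant_harm)

def maximize_lives_saved(options):
    # The "too sacrificial" rule: minimize total harm, with no
    # special regard for the car's own occupants.
    return min(options, key=lambda m: m.occupant_harm + m.other_harm)

options = [
    Maneuver("brake in lane", occupant_harm=0.0, other_harm=2.7),
    Maneuver("swerve into parked truck", occupant_harm=0.05, other_harm=0.0),
]

print(protect_occupants_at_all_costs(options).name)  # brake in lane
print(maximize_lives_saved(options).name)            # swerve into parked truck
```

On the same facts, the two rules choose opposite maneuvers, which is exactly why the weighting between occupant harm and harm to others is a moral choice rather than a purely engineering one.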

A Mountain View lab must not be allowed to resolve these difficult questions on our behalf.

That said, a good regulatory framework ought to allow some manufacturer variation and consumer choice, within ethical limits. Manufacturers or fleet operators could offer passengers a range of options. “When your child is in the car, our onboard systems will detect it and prioritize the protection of rear-seat passengers!” Cars might have aggressive modes (maximum allowable speed and aggressiveness), safety modes, ethical utilitarian modes (perhaps visibly advertised so that others can admire your benevolence) and so forth.

Some consumer freedom seems ethically desirable. To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values. Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.

There will also always be trade-offs between speed and safety, and different passengers might legitimately weigh them differently, as we now do in our manual driving choices.

Furthermore, although we might expect computers to have faster reaction times than people, our best computer programs still lag far behind normal human vision at detecting objects in novel, cluttered environments. Suppose your car happens upon a woman pushing a rack of coats in a windy swirl of leaves. Vehicle owners may insist on some sort of preemptive override, some way of telling their car not to employ its usual algorithm, lest it sacrifice them for a mirage.

There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death. A well-designed machine will probably do better in the long run.

That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat. It will, however, have power over life and death. We need to decide — publicly — how it will exert that power.

ABOUT THE WRITER

Eric Schwitzgebel is a professor of philosophy at UC Riverside and the author of “Perplexities of Consciousness.” He blogs at the Splintered Mind. He wrote this for the Los Angeles Times.

©2015 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Photo: Ali Eminov via Flickr
