Quirks and Quarks | Bob McDonald's blog

Teaching machines to make life and death decisions on the road

Asking humans what decisions they would make is the foundation for 'moral machines'


MIT has been using its "Moral Machine" game to help program computers in the kind of emergency decision making humans would like autonomous vehicles to make. (MIT)

Self-driving cars are on the way. Autonomous vehicles are already prowling streets and navigating through busy traffic while avoiding other vehicles, objects and, especially, pedestrians. But driving can also involve life-and-death decisions. Are machines capable of making moral judgments about who lives and who dies in an accident?

Scientists at the Massachusetts Institute of Technology are trying to solve that problem, and the core of their approach is teaching computers how humans would want those decisions made when things go wrong.

To do that, they devised a computer game called the Moral Machine. It's meant to generate a huge database of how humans choose who lives and who dies in no-win situations.

The game sets up scenarios in which the brakes on a driverless car fail as it approaches a group of pedestrians crossing the street. If the car continues in a straight line, the pedestrians are killed; if it swerves to avoid them, it strikes a barrier, killing the passengers. In either case, people die.

Google's self-driving car is shown during a demonstration at the Google campus in Mountain View, Calif., in May 2015. (Associated Press)

To complicate the problem further, a variety of people are involved in the scenarios: seniors, babies, young athletes, overweight people, executives, the homeless, and health-care workers. Different combinations of these people are either the pedestrians or the passengers in the car.

An important aspect of the game is that players are never asked to place themselves in the car, because self-preservation would take over almost every time. It is meant to be a purely objective exercise in deciding whose lives are best to save.
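Those mechanics map naturally onto a simple data model. Here is a minimal Python sketch of how one such dilemma could be represented; the class names, character categories and helper method are hypothetical illustrations, not MIT's actual implementation:

```python
from dataclasses import dataclass

# Illustrative character categories drawn from the article;
# not the Moral Machine's exact taxonomy.
CHARACTER_TYPES = [
    "senior", "baby", "young_athlete", "overweight_person",
    "executive", "homeless_person", "health_care_worker",
]

@dataclass
class Outcome:
    """One possible result of the brake-failure dilemma."""
    action: str           # "continue_straight" or "swerve"
    group_killed: str     # "pedestrians" or "passengers"
    victims: list[str]    # character types of the people who die

@dataclass
class Scenario:
    """A single no-win choice presented to the player."""
    straight: Outcome
    swerve: Outcome

    def victims_of(self, action: str) -> list[str]:
        """Return the victims implied by the player's chosen action."""
        outcome = self.straight if action == "continue_straight" else self.swerve
        return outcome.victims

# Example: going straight kills two pedestrians; swerving into the
# barrier kills the two passengers instead.
dilemma = Scenario(
    straight=Outcome("continue_straight", "pedestrians", ["senior", "baby"]),
    swerve=Outcome("swerve", "passengers", ["executive", "health_care_worker"]),
)
print(dilemma.victims_of("swerve"))  # ['executive', 'health_care_worker']
```

Recording millions of such choices, each tagged with who was spared, is what turns a simple game into a dataset.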

The game received an overwhelming response from more than two million online participants in 233 countries, producing 40 million moral decisions. When the scientists analyzed the results, they found some interesting cultural differences. People in Asia, for example, had a greater tendency to protect the elderly, while those in southern countries favoured the young.
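In aggregate, that kind of cultural comparison amounts to grouping millions of individual choices by region and comparing preference rates. A sketch of the idea, using a tiny invented dataset in place of the real 40 million responses (the field names and values are made up for illustration):

```python
from collections import defaultdict

# Hypothetical records of (region, group_spared); stand-ins for
# the real, far richer dataset.
responses = [
    ("eastern", "elderly"), ("eastern", "elderly"), ("eastern", "young"),
    ("southern", "young"), ("southern", "young"), ("southern", "elderly"),
]

# Tally how often each region chose to spare the elderly vs. the young.
counts = defaultdict(lambda: {"elderly": 0, "young": 0})
for region, spared in responses:
    counts[region][spared] += 1

for region, tally in sorted(counts.items()):
    total = tally["elderly"] + tally["young"]
    print(f"{region}: spared the elderly in {tally['elderly'] / total:.0%} of dilemmas")
```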

This raises some interesting questions. Will self-driving cars have to be programmed differently in some parts of the world to reflect cultural preferences? Will the cars themselves have to keep track of what types of passengers they carry, whether there is a doctor, a teenager or a baby on board, and compare their worth to pedestrians in the line of sight?

Of course, it's doubtful a human could make that kind of moral judgment in the few moments available before a collision. But these are important things to consider if we are going to allow computers to take over completely as drivers.

Currently, self-driving cars require a human behind the wheel, ready to take over in case of emergency. But that may not be the case for long. Proponents of autonomous vehicles cite statistics showing that most vehicle accidents are caused by human error, so taking humans out of the driver's seat will save lives.

The car is driven by a computer that steers, starts and stops itself. It navigates using a laser scanner on top of the car, a GPS system and other sensors. A driver sits behind the steering wheel only as a safety precaution. (Michael Sohn/Associated Press)

But the move to fully automated vehicles with no human input and no steering wheel is a giant leap. As long as autonomous vehicles operate in complex environments with human drivers and pedestrians, there will be situations where the computer has to take an action that could result in death or injury. Who will be responsible if it is determined that the computer made a bad decision?

A possible scenario for the future is self-driving cars restricted to their own lanes, perhaps with barriers to keep pedestrians and other vehicles out of the equation, much like driverless trains at airports. But as their popularity grows and their safety record improves, autonomous and unsupervised vehicles will gradually move onto shared streets.

Of course, the improved decision-making of computers may ultimately lead to humans being restricted from the roads. Those who wish to drive themselves will have to go to special parks and pilot vintage cars, with their old-fashioned steering wheels, around circular tracks where they're only putting themselves at risk.