Professional Car Driving

@toor2421998
The Moral Dilemma of Autonomous Vehicles

Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can't avoid accidents altogether. How should a car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars. Part of the problem is that the car thinks and makes decisions in a way unlike humans. It has yet to integrate smoothly with pedestrians and with cars driven by humans, who sometimes break the rules and communicate through gestures and eye contact at crossings.

For some time now, we've known that the future of mobility is self-driving cars. From Google to Tesla, from startups to automakers, it's the technology that everyone is working on, and the first iterations are already out there.

Until now, the market has been focused on the technology—how to create the level of artificial intelligence (AI) that's necessary to drive a car and give drivers the confidence to be mere passengers in their everyday vehicle.

But the focus has changed with MIT's Moral Machine survey. The survey took the old trolley problem and presented it to people as a problem for autonomous vehicles. The original ethical brainteaser proposed that you were in the cab of a trolley about to run over five innocent people tied to the tracks. If you flipped a lever, the trolley would divert to a siding on which one innocent person was standing. What's the ethical thing to do?

Researchers from MIT, France's CNRS and the University of Oregon revamped the trolley problem for the modern age. If you were in a self-driving car that was about to hit 10 pedestrians, would you want that car to swerve and kill you but save them, or not? When presented with problems like this, most people say they want the altruistic outcome: in this case, that the car swerve.
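
Reduced to its bare logic, the survey's dilemma is a choice rule over estimated casualties. Below is a minimal, hypothetical Python sketch of a purely utilitarian rule that always minimizes total expected harm; the class, function names, and numbers are invented for illustration and do not reflect how any real vehicle is programmed.

```python
# A deliberately simplified, hypothetical sketch of a utilitarian
# "minimize total expected casualties" rule. All names and numbers
# are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_casualties: int  # estimated pedestrians harmed
    occupant_casualties: int    # estimated car occupants harmed

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the fewest total expected casualties.

    This treats every life identically, which is exactly the assumption
    the Moral Machine survey probes: should occupants count the same as
    pedestrians?
    """
    return min(options, key=lambda m: m.pedestrian_casualties + m.occupant_casualties)

if __name__ == "__main__":
    # The revamped trolley problem: stay on course and hit 10 pedestrians,
    # or swerve and sacrifice the single occupant.
    options = [
        Maneuver("stay_on_course", pedestrian_casualties=10, occupant_casualties=0),
        Maneuver("swerve_off_road", pedestrian_casualties=0, occupant_casualties=1),
    ]
    print(choose_maneuver(options).name)  # -> swerve_off_road
```

Run on the scenario above, such a rule always chooses to swerve, the altruistic outcome most respondents say they want.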