The Ethical Algorithm


We’ve reached a defining moment in our collective human history.

It’s time to decide whether we should give machines the option of purposefully killing us – if the alternative results in more human casualties.

The test is simple: if a self-driving car had to choose between ramming into a wall, killing its sole passenger, or plowing into a crowd of 10 people, which should the computer choose?

The logical answer, assuming the car has performed optimally, is that this is an unavoidable, tragic accident in which someone is going to die. Generally we can agree that, when these are all nameless, faceless strangers, 10 lives are more valuable than 1. But this ‘utilitarian’ approach to human lives only looks clear-cut and simple when we pose the question to survey groups with little stick figures representing the victims. As soon as this becomes a real situation, it’s hard to find anyone advocating a car that will kamikaze them into a wall at a moment’s notice.

People crave agency, especially in life-or-death situations. It’s the reason people can happily and recklessly drive a car down a highway but sit white-knuckled during an airplane flight – even though the plane is statistically much safer than the car. On the plane, if something goes wrong, you can scream, you can put on your life vest or oxygen mask, but you can’t change the outcome. Your life is in someone else’s hands. With self-driving cars, we’ve taken this a step further – your life is in a computer’s hands.

We indirectly put our lives in computers’ hands all the time, sometimes without realizing it. Algorithms run medical systems, university admissions, and life support. Robots perform surgery, and your position on the kidney donor recipient list is determined by an algorithm. Computers are working right now to cure diseases that affect millions of people, and military drones are flying autonomous missions, taking human lives without direct human input. None of this is as immediate to us, as relatable, as the example at the top. The car does not have to kill you. It can easily keep you alive, at the expense of the pedestrians’ lives.

Would you buy a car that would trade your life for someone else’s? I certainly would never get in a car that would trade my life for a stranger’s life.  If one of us has to die, I’d rather it be someone else. I’m not buying a car as a moral statement. I want to live through an accident.

Maybe you consider ten human lives ultimately worth more than one human life. That’s a stance a lot of people already disagree with, at least when theirs is the life in question. Human self-preservation is a powerful instinct. While you may be fine with a nameless stranger crashing into a wall to protect 10 nameless strangers, how would you react to hearing that your loved ones had been killed by their own vehicle? That they were the only casualties in the accident, quite literally executed by a piece of code?

Personally, I think an ethical car would be fine. The backlash against implementing such a system would depend on how good the system was, and on whether or not we had ‘smart’ cars on the road (more on that later).

The utilitarian argument may still catch on. I suppose a happy medium would be giving the driver control of the car at such critical moments, but people panic in an emergency and can make things worse than they have to be. It’s a losing situation: you want either a perfect computer or no computer at all, and you’re never going to have either in your autonomous car.

Uncaught Exceptions

When an algorithm kills someone, who is to blame? The programmer? The manufacturer?

Or is nobody to blame, the passenger assuming all risk by entering the vehicle? In that case, should a child waive their life in the same way? What should the computer do if there are children in the car and the ‘pedestrians’ to be hit are adults?

Is 1 child > 1 adult a true statement?

If a child runs onto the road, should the car risk the lives of the passengers to try to avoid the child, or is it not worth the risk because the child is ‘at fault’?

What if it’s an adult recklessly crossing a highway?

What lengths should the car go to to find the ‘mathematically’ correct option? Say the car decides it can avoid hitting two people by swerving out of the way, causing a rollover crash that ends with you in a coma. The alternative would have been to hit them at a reduced speed and probably kill no one, but because 2 > 1, you got fucked. Is it your moral duty to pay for other people’s mistakes on the road?

If it were a human driver, most would not hesitate in choosing the option that kept them alive. Should self-driving cars try to emulate human behavior, or aim for a collectivist approach? Or should you just have a choice of ethics modules when you purchase your car?
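
Just to make the ‘ethics module’ idea concrete, here’s a minimal, purely hypothetical sketch in Python. The maneuver names, casualty estimates, and weights are invented for this post and imply nothing about any manufacturer’s actual decision logic; the only point is that swapping the scoring function swaps the car’s ethics.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    expected_passenger_deaths: float   # invented estimates, for illustration only
    expected_pedestrian_deaths: float

def utilitarian_score(o):
    # Count every expected death the same, no matter whose it is.
    return o.expected_passenger_deaths + o.expected_pedestrian_deaths

def self_preservation_score(o):
    # Weight the passengers' lives far above everyone else's.
    return 100 * o.expected_passenger_deaths + o.expected_pedestrian_deaths

def choose_maneuver(outcomes, score):
    return min(outcomes, key=score)

outcomes = [
    Outcome("swerve into the wall", 0.9, 0.0),
    Outcome("continue into the crowd", 0.05, 8.0),
]

print(choose_maneuver(outcomes, utilitarian_score).maneuver)        # swerve into the wall
print(choose_maneuver(outcomes, self_preservation_score).maneuver)  # continue into the crowd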

The answer is that we, as a society, need to be mature enough to address these issues sooner rather than later. Nobody is going to buy a self-driving car that’s programmed to kill you – unless every self-driving car is programmed to kill you. It’s an all-or-nothing situation – the first self-driving car that advertises ‘self-preservation’ technology is going to popularize the selfish algorithm over the ethical one. Given the choice between a boilerplate collectivist AI and a ‘safety-conscious’ AI that will preserve the lives of the passengers at all costs, everyone is going to choose the one that keeps them alive, while simultaneously wishing everyone else had the collectivist AI. Is it short-sighted? Maybe. Can you blame us? Not really.

New Software, Same Old Problems

The fact is that a self-driving car, in 2015, with our current technology, is better at (simple) driving than a human driver. The majority of accidents involving these vehicles are caused by human drivers. The importance of reaction time in driving cannot be overstated, and a human will never react as fast as a car’s CPU. The lack of control over the vehicle is what is keeping us from a massive shift to self-driving vehicles. Our current options let you take control of the vehicle at any time, which is all well and good until people start sleeping in their cars as they drive.

There is also a plethora of issues that plague this primitive AI. This article discusses ways self-driving cars lag behind humans in driving skill. For example, if a ball comes bouncing onto the road, a human driver might slow down in anticipation of a child chasing it. A computer needs to be explicitly taught this scenario to react the same way. Because of the sheer number of scenarios that can’t be hardcoded into the car’s operating system, it needs to drive extremely defensively. As soon as it sees the ball, a self-driving car will likely come to a stop until the obstruction is gone, or attempt to go around it at low speed if the road is clear. Human drivers can also exploit the naivety of autonomous vehicles, cutting them off and generally driving like assholes around them because they know they will always yield.
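
A toy sketch of the distinction: anything the system was explicitly taught (the ball, say) gets an anticipatory rule, and everything else falls through to a maximally defensive stop-or-creep fallback. The rules and names here are made up for illustration, not taken from any real driving stack.

ANTICIPATION_RULES = {
    "ball": "brake hard: a child may be chasing it",
    "deer": "brake and hold: more animals may follow",
}

def react(detected_object, lane_is_blocked):
    if detected_object in ANTICIPATION_RULES:
        return ANTICIPATION_RULES[detected_object]
    # No hand-coded rule for this obstacle, so drive as defensively as possible.
    if lane_is_blocked:
        return "stop and wait until the obstruction is gone"
    return "pass the obstruction at low speed"

print(react("ball", lane_is_blocked=False))
print(react("shopping cart", lane_is_blocked=True))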

The first time a human is killed by a self-driving car is going to be a wake-up call for us to start addressing these problems. This is a technological leap forward that will pervade our lives in the future. We are going to need to face some surreal ethical questions very soon, and it’s important we’re all on the same page.

What is your point?

Technology has given us this dilemma – it may also be the safest way out of it. If autonomous, or at the very least ‘smart’, modules are implemented in all the cars on the road, vehicles can communicate to avoid dangerous scenarios. Swerving to avoid a child that’s run onto the road only to crash head-on into a van is not a scenario that can happen if your car can ‘tell’ the van to brake as soon as it sees the child. Cars in traffic could coordinate with each other from 250 m away. Yes, this has terrifying implications regarding the security of these modules and what a malicious person with access to them could do. But it solves one problem, and with cars communicating and operating in tandem we’d reduce both accidents and gridlock.
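
A minimal sketch of that idea, assuming a made-up message format and a 250 m broadcast range rather than any real vehicle-to-vehicle standard: the car that sees the child broadcasts a hazard, and every smart car in range brakes before its own sensors could have seen anything.

import math
from dataclasses import dataclass

BROADCAST_RANGE_M = 250

@dataclass
class Hazard:
    x: float
    y: float
    description: str

@dataclass
class Car:
    name: str
    x: float
    y: float

    def receive(self, hazard):
        print(f"{self.name}: braking for '{hazard.description}'")

def broadcast(hazard, cars):
    # Only cars within range hear the alert; every car that hears it reacts.
    for car in cars:
        if math.hypot(car.x - hazard.x, car.y - hazard.y) <= BROADCAST_RANGE_M:
            car.receive(hazard)

broadcast(Hazard(0, 0, "child on the road"),
          [Car("oncoming van", 80, 3), Car("truck two blocks over", 600, 0)])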

Then there are mistakes. Bugs. Glitches. Even in the perfect world of sensor-laden, radio-communicating vehicles, there are going to be errors. We know that the programmers of these vehicles are going to take their job seriously. There will be failsafes and redundancies; they’ll put in high-end chips and code to deal with rare events. Most problems with your car are going to default to it pulling over and turning off, not veering wildly around the interstate. If it does have to turn off, it can alert all smart cars within a certain area before it shuts down so they can avoid it.
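
Sketched out, with fault names and the recoverable/unrecoverable split invented purely for the example, that failsafe default might look something like this:

RECOVERABLE_FAULTS = {"sensor glitch", "lost gps fix"}

def handle_fault(fault):
    if fault in RECOVERABLE_FAULTS:
        return ["fall back to redundant sensors", "continue at reduced speed"]
    # Anything unrecoverable: warn the cars around you, then get off the road.
    return [
        "alert all smart cars within broadcast range",
        "signal, slow down, and pull over",
        "shut down and call for assistance",
    ]

for step in handle_fault("steering actuator failure"):
    print(step)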

However, even with perfect programmers in a perfect world, we’re still dealing with millions of tons of steel flying around the country in all sorts of weather, on all sorts of roads, in all sorts of scenarios. Safer than human-driven vehicles, sure, but autonomous vehicles are going to lead to people dying or being seriously injured. That is guaranteed. That is statistics. How we react to this is the real question. In the end, this is fundamentally an ethical and philosophical problem. The technology will advance in whatever direction we pull it, but our values and morals are not as flexible and must be hashed out.

If we as a society decide that we can’t stand behind cars killing their occupants for the greater good – there will be no self-driving cars with ethical algorithms. There will be selfish self-driving cars. It’s a hurdle we need to jump, a barrier we need to break through to introduce this technology. The benefits are mind-boggling and stand to change the way we live our lives. Imagine sleeping in your car while it drives you to your next destination. Hostels and hotels are suddenly irrelevant to the frugal traveler. A single family car spends the day driving around, dropping family members off at work and school and collecting them at the end of their day. Why fly when your destination is an 8-hour night drive away, in a car that looks more like a mini hotel room than your current car? Instead of truck drivers working ridiculous shifts, we have automated convoys rolling down highways, slashing the price of distribution. Drunk driving is virtually eliminated: just take a nap in the back of your car as it drives you home from the bar. Tell your car to bring you a pizza before picking you up. Why not? It’s the future, and pizza tastes damn good in the future.

There is a lot to take in – and it’s coming soon. Sooner than you think, and we shouldn’t leave these decisions up to the manufacturers or the government. We need to discuss and decide what we want out of this technology; we will be the end users of these cars. This technology will cater to us – if we can make up our minds.