
Stop bikeshedding AI ethics

2019-04-06 · discourse · ai

A self-driving car is going down a narrow path when suddenly a baby and an elderly person appear on the road. There is no time to stop and not enough room to turn, so the car must choose which of them to run over.

You’ve probably heard a variation of this dilemma many times in many places. But you know where you didn’t hear it? At driving school. That’s right. The institution solely dedicated to training people to drive safely on the roads didn’t think it was important to teach you how to choose between running over a baby and running over a grandmum. Do they think that humans will instinctively make the right choice? Let’s be honest: this item was never on the agenda of any meeting at any driving school.

If we are put in a life-or-death situation where we have to make a split-second decision, we will likely prioritize our own wellbeing and that of our next of kin over the wellbeing of strangers. Random factors will likely affect our decision a lot as well. We’re not going to produce a balanced, nuanced ethical analysis of the situation in the blink of an eye. And that’s fine. There are more important things to teach at driving school, like how to recognize those pesky symbols they put up on the side of the road, when it’s your turn to drive through an intersection, or why you should slow down in conditions of reduced visibility. Likewise, there are more important things to get right in a self-driving system. Like, I don’t know, basic object recognition would be a good start.

So if this is not the most important ethics question of the decade, why do people keep bringing it up? Self-driving cars are scary, they will bring major change to society, and some serious ethical dilemmas along with that. We want to discuss those ethical questions, maybe even prevent some of the worst catastrophes that may happen, but that’s too difficult. So we talk about something easy instead: baby or grandmum?

This is called bikeshedding. The origin story for this term is quite funny:

Parkinson noticed that a committee whose job is to approve plans for a nuclear power plant may spend the majority of its time on relatively unimportant but easy-to-grasp issues, such as what materials to use for the staff bikeshed, while neglecting the design of the power plant itself, which is far more important but also far more difficult to criticize constructively.

Self-driving cars do come with seriously impactful ethical dilemmas, but nobody talks about those, because they are really, really hard and you can’t become an “AI ethics expert” by waving your hands around them. I’ll name one: when is it ethical to bring the product to market? At any point in time we can say that this self-driving system would be safer after one more year of development, but we can’t just wait forever - we have to draw a line in the sand somewhere and say “this expected number of deaths and injuries is acceptable, we can bring the product to market now”.

The problem is that there is a mismatch of incentives when the company decides where to draw that line. Companies have an incentive to bring their products to market too early, because much of the cost resulting from those decisions is externalized (you know, to the people who die), whereas much of the gain is internalized (the 💰 the company gains dwarfs any societal value from having self-driving cars on the roads at this level of technology). Note that this is not a theoretical problem. We have self-driving cars on the roads right now without basic object recognition capability. This is an actual problem causing actual deaths.[1][2]

How do you solve this AI ethics problem? I have no idea. It’s a hard problem. But what if I were to tell you that the employees who code these AI systems often cycle to work and don’t have any kind of… structure that would protect their bikes from the rain? Won’t anybody think of the children’s bicycles! Now that’s something we can talk about.