THE BIG IDEA: MIT’s Moral Machine



I know, I know…you may think I’m really harping on the technology behind the autonomous car. Just last month, I was talking about it, and Curious ran other stories about self-driving cars here, here, and here. You could be led to believe I’m obsessed with this innovation, but just this week, the headlines carried a follow-up to one of my own Tech Tuesday columns.

So why do I keep talking about self-driving cars? Because this technology is no longer the stuff of science fiction. Autonomous automobiles are becoming less and less a Google experiment and finding their way into our lives.

With this sort of innovation come problems. Well, okay, maybe not problems. Dilemmas. That would be a more accurate description. You see, every problem resolved by technology creates new problems and issues; and with the self-driving car, the issue is one of morality. If you write software for a self-driving car, you have to program it to make moral decisions.

How so? Suppose you are driving your car down a hill and see in the distance five people in the crosswalk ahead of you. Suddenly, your brakes fail. You could hit these pedestrians, killing them on impact, or you could drive headfirst into a freight train or into a wall. As the driver, the decision to sacrifice your car, and possibly yourself, is yours to make; but with the autonomous car, the software has to decide: “Am I going to save the one person in the car and sacrifice five pedestrians, or do I save the pedestrians and sacrifice the one person in the car?”

No pressure.

There are all kinds of moral dilemmas facing autonomous cars; and to make it more difficult for programmers, there are no “right” answers. It’s a moral choice. It’s all in how you look at it.
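To make that concrete, here is a toy sketch, in Python, of the kind of rule a programmer would have to commit to in code. The function name and the “minimize the number of lives lost” rule are purely illustrative assumptions on my part; they are one possible moral stance, not how any actual car is programmed.

    # A toy illustration (not from any real vehicle's software) of encoding a
    # moral choice as code. This rule simply minimizes expected lives lost,
    # which is one possible stance, not "the" right answer.

    def choose_action(occupants_at_risk, pedestrians_at_risk):
        """Return 'swerve' to sacrifice the car, or 'stay' to continue ahead."""
        if occupants_at_risk < pedestrians_at_risk:
            return "swerve"   # sacrifice the car and its occupants
        return "stay"         # continue ahead, sparing the occupants

    # Brakes fail: one person in the car, five in the crosswalk.
    print(choose_action(occupants_at_risk=1, pedestrians_at_risk=5))  # prints "swerve"

Notice that the hard part is not the code; it is deciding which rule to write down in the first place.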

This is what inspired MIT to design the “Moral Machine” for these kinds of software applications. The Moral Machine presents test audiences with a variety of scenarios (whether to spare dogs, or people, all kinds of variations) and records not reaction time or mental process but simply the final choice each tester makes. The Moral Machine then shows how your decisions compare with those of other participants, and from these decisions a pattern will hopefully emerge for programmers to emulate in future software. This MIT initiative wants to provide a platform for building a crowd-sourced opinion on how autonomous devices should deal with moral dilemmas, as well as to encourage discussion of the moral consequences of those dilemmas. The website will also serve as a long-running survey of how we collectively think these moral decisions should be made.

I’ve got a feeling questions like this are going to come to the forefront of the autonomous car discussion; and yes, you will see plenty more about it here. But if you have a thought on it, drop us a comment here. We at Curious would love to hear what you think.


A research physicist who has become an entrepreneur and educational leader, and an expert on competency-based education, critical thinking in the classroom, curriculum development, and education management, Dr. Richard Shurtz is the president and chief executive officer of Stratford University. He has published over 30 technical publications, holds 15 patents, and is host of the weekly radio show Tech Talk. A noted expert on competency-based education, Dr. Shurtz has conducted numerous workshops and seminars for educators in Jamaica, Egypt, India, and China, and has established academic partnerships in China, India, Sri Lanka, Kurdistan, Malaysia, and Canada.

