Project Proposal by Martin Stacey


Computing answers to the trolley problem

Software

None

Covers

AI decision making, autonomous vehicles, ethics and regulation of technology

Skills Required

Interest in artificial intelligence, interest in how to make the future work, interest in moral philosophy

Challenge

Conceptual ★★★★   Technical ★★★   Programming

Brief Description

In moral philosophy, trolley problems are a class of thought experiments designed to challenge your thinking about how to resolve moral dilemmas, and to expose moral intuitions. Variations in the form of the problem and in how it is presented can influence how people react, which moral principles they think are relevant, and what they decide, but the general form is this: a train (or a tram) is speeding towards five people who are stuck on the track; no one can stop them from being killed but you, and the only action you can take is to pull a lever that will send the tram down a different track, where it will kill one person who would have been unharmed if you had done nothing. So what do you do, and why?
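To make the structure of the dilemma concrete for computational purposes, here is a minimal sketch in Python of one possible representation; the class and field names are illustrative assumptions, not part of the proposal.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    people_harmed: int      # how many people this action leaves dead or injured
    is_intervention: bool   # True if the agent must actively do something

# The canonical dilemma: doing nothing harms five, intervening harms one.
trolley_problem = [
    Action("do nothing", people_harmed=5, is_intervention=False),
    Action("pull the lever", people_harmed=1, is_intervention=True),
]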

What should robots do when confronted by moral dilemmas having the general form of trolley problems, where nothing they can do can avert all harm, but they can make choices about which harms are caused to different people or things? This is a question that has interested science fiction writers at least since Isaac Asimov started writing stories about the moral choices of robots in the 1940s and the editor of Astounding Science-Fiction, John W. Campbell, formulated Asimov's Three Laws of Robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
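Read computationally, the Three Laws are a strict priority ordering over constraints: a higher law rules an action out before any lower law is consulted. A possible sketch of that reading in Python (all names are hypothetical):

from dataclasses import dataclass

@dataclass
class PredictedOutcome:
    action: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    destroys_robot: bool   # would violate the Third Law

def three_laws_key(outcome):
    # Tuples compare element by element, so a First Law violation outweighs
    # everything below it, a Second Law violation outweighs a Third, and so on.
    return (outcome.harms_human, outcome.disobeys_order, outcome.destroys_robot)

def choose(outcomes):
    return min(outcomes, key=three_laws_key)

Notice that in a trolley-problem situation every available option, including inaction, violates the First Law, which is exactly why some further decision principle is needed.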

But now, with autonomous vehicles about to become a widespread commercial technology, moving around on roads with traffic in densely populated areas, we have reached the point where we genuinely need to care about what robots should do when they face moral dilemmas, and how they can compute the morally least-bad course of action fast enough to execute it, in situations where some harm is unavoidable but the robot can influence what harm actually happens. Of course, moral dilemmas will be complicated for robots, as they are for humans, by uncertainty about how accurately the situation has been understood and about what the outcomes of the robot's possible actions will be.
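One simple way to fold that uncertainty into a fast decision is to compare actions by expected harm. The sketch below assumes a hypothetical set of actions and made-up probabilities and harm estimates, purely for illustration:

def expected_harm(outcomes):
    """outcomes: list of (probability, estimated_harm) pairs for one action."""
    return sum(p * harm for p, harm in outcomes)

# Invented numbers: each action has several possible outcomes.
actions = {
    "brake hard":  [(0.7, 0.0), (0.3, 2.0)],   # probably safe, small chance two people are hurt
    "swerve left": [(0.9, 1.0), (0.1, 0.0)],   # almost certainly hurts one person
}

least_bad = min(actions, key=lambda name: expected_harm(actions[name]))
print(least_bad)  # -> "brake hard" (expected harm 0.6 versus 0.9)

Whether minimising expected harm is the right principle at all is, of course, part of what the project has to examine.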

The challenge of this project is to consider what classes of trolley-problem-like situations an autonomous vehicle, or some other kind of autonomous system, might find itself in; what kinds of reasoning principles are appropriate in these different situations; and how these reasoning principles can be mapped to computationally feasible decision procedures, as well as the pitfalls and difficulties involved in getting robots to make the choices of actions that we would want them to make. This is going to involve considering what philosophers call metaethics: the question of what morally right behaviour consists in, and the question of what principles ought to govern the making of moral choices.
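As a rough illustration of how different ethical principles become different decision procedures over the same representation of a dilemma, here is a sketch (assumed structure, hypothetical names): a consequentialist procedure minimises total harm, while a simple deontological side-constraint forbids actively redirecting harm onto a bystander, whatever the totals.

dilemma = [
    {"action": "do nothing", "harm": 5, "redirects_harm": False},
    {"action": "divert the tram", "harm": 1, "redirects_harm": True},
]

def consequentialist(options):
    return min(options, key=lambda o: o["harm"])

def deontological(options):
    permitted = [o for o in options if not o["redirects_harm"]]
    # If every option violates the constraint, fall back to least harm.
    return consequentialist(permitted or options)

print(consequentialist(dilemma)["action"])  # -> "divert the tram"
print(deontological(dilemma)["action"])     # -> "do nothing"

The fact that two defensible principles give opposite answers to the same situation is precisely the kind of difficulty the project needs to confront.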

Variant

A different angle on this general theme is the question of how to manage and assign moral and/or legal responsibility for the moral actions of autonomous systems that are making explicit or implicit judgements of benefit or harm in selecting actions. This should avoid blindly assuming either that the actions of autonomous systems are their own, with no human authors who are responsible and culpable, or that responsibility always rests with their human authors. How is this question affected by whether the machine's moral behaviour involves explicit moral reasoning or (as is more likely in practice, in the near future) implicit moral behaviour that it has been taught for particular situations?

For a challenging development project, you could build a prototype AI decision-making system for reasoning about what to do in the kinds of ethical dilemma situation that autonomous vehicles or some other kind of robot might actually meet. This will involve thinking about what ethical principles you want to, or can, build in; or, conversely, what ethical principles are implicit in a decision-making procedure that might be computationally feasible. Depending on which approach you take, you might need to think about how to represent moral dilemma situations, or how you can teach implicitly moral action.
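A skeletal sketch of what such a prototype might look like is given below; all names, numbers, and interfaces are hypothetical assumptions, not a specification. The point is that the ethical principle can be a pluggable scoring function, so the same pipeline can be run with different principles built in and their decisions compared, while noisy perception stands in for uncertainty about the situation.

import random

def perceive(scenario):
    # Stand-in for perception: add noise to the true harm estimates to
    # reflect uncertainty about what the situation actually is.
    return [
        {**option, "estimated_harm": max(0.0, option["harm"] + random.gauss(0, 0.5))}
        for option in scenario
    ]

def minimise_harm(option):
    return option["estimated_harm"]

def avoid_intervention(option):
    # Prefer not to intervene at all, then break ties on estimated harm.
    return (option["requires_action"], option["estimated_harm"])

def decide(scenario, principle):
    return min(perceive(scenario), key=principle)

scenario = [
    {"name": "do nothing", "harm": 5.0, "requires_action": False},
    {"name": "swerve", "harm": 1.0, "requires_action": True},
]
print(decide(scenario, minimise_harm)["name"])       # usually "swerve"
print(decide(scenario, avoid_intervention)["name"])  # always "do nothing"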

