Saturday, August 30, 2014

Towards an Ethical Robot

Several weeks ago I wrote about our work on robots with internal models: robots with a simulation of themselves and their environment inside themselves. I explained that we have built a robot with a real-time Consequence Engine, which allows it to model and therefore predict the consequences of both its own actions, and the actions of other actors in its environment.
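
To make that idea concrete, here is a minimal sketch, in Python, of what such a consequence-engine loop might look like. This is not our actual code: the action list, the Outcome record and the simulate() helper are illustrative assumptions. The point is simply that each candidate next action is rolled forward in the robot's internal model, and the predicted outcomes are collected for a later action-selection stage.

# Minimal sketch of a consequence-engine loop (illustrative only, not our code).
# For each candidate next action, the robot rolls its internal model forward
# and records the predicted outcome for itself and for the other actor.

from dataclasses import dataclass

@dataclass
class Outcome:
    robot_danger: float   # predicted danger to the robot itself (0 = safe)
    human_danger: float   # predicted danger to the (proxy) human (0 = safe)

ACTIONS = ["stand_still", "turn_left", "ahead", "turn_right"]

def consequence_engine(world_model, simulate, horizon=50):
    """Return a predicted Outcome for every candidate action.

    `world_model` is the robot's internal model of itself, the hole and the
    human; `simulate(world_model, action, horizon)` rolls that model forward
    for `horizon` steps. Both are assumed helpers in this sketch.
    """
    return {action: simulate(world_model, action, horizon) for action in ACTIONS}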

To test the robot and its consequence engine we ran two sets of experiments. Our first paper, setting out the results from one of those experiments, has now been published, and will be presented at the conference Towards Autonomous Robotic Systems (TAROS) next week. The paper is called: Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. Let me now outline the work in that paper.

First here is a simple thought experiment. Imagine a robot that's heading toward a hole in the ground. The robot can sense the hole, and has four possible next actions: stand still, turn toward the left, continue straight ahead, or move toward the right. But imagine there's also a human heading toward the hole, and the robot can also sense the human.

From the robot's perspective, it has two safe options: stand still, or turn to the left. Go straight ahead and it will fall into the hole. Turn right and it is likely to collide with the human.

But if the robot, with its consequence engine, can model the consequences of both its own actions and the human's, then another possibility opens up: the robot could sometimes choose to collide with the human in order to prevent her from falling into the hole.

Here's a simple rule for this behaviour:

IF for all robot actions, the human is equally safe
THEN (* default safe actions *)
    output safe actions
ELSE (* ethical action *)
    output action(s) for least unsafe human outcome(s)
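
Translated into code, the rule might look something like the sketch below; again this is an illustration, not the implementation in the paper. It consumes per-action danger estimates of the kind the consequence-engine sketch above would produce, and the numeric danger values in the example are invented.

# Illustrative sketch of the ethical action-selection rule (not the paper's code).
# `outcomes` maps each candidate action to (robot_danger, human_danger),
# where 0 means completely safe; the scores below are invented for the example.

def select_actions(outcomes):
    human_danger = {action: h for action, (_, h) in outcomes.items()}
    if len(set(human_danger.values())) == 1:
        # Default branch: the human is equally safe whatever the robot does,
        # so output the actions that are safe for the robot itself.
        return [action for action, (r, _) in outcomes.items() if r == 0]
    # Ethical branch: output the action(s) with the least unsafe human outcome.
    least = min(human_danger.values())
    return [action for action, h in human_danger.items() if h == least]

# The thought-experiment scenario: going ahead drops the robot into the hole,
# turning right risks a collision but is the only action that saves the human.
outcomes = {
    "stand_still": (0.0, 1.0),
    "turn_left":   (0.0, 1.0),
    "ahead":       (1.0, 1.0),
    "turn_right":  (0.4, 0.1),
}
print(select_actions(outcomes))   # -> ['turn_right']

The sketch only captures the logical shape of the rule; in the real system the danger estimates come from the consequence engine's simulations, not from hand-written numbers.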

This rule appears to match remarkably well with Asimov's first law of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The robot will normally avoid injuring (i.e. colliding with) a human (may not injure a human), but may sometimes compromise that rule in order to prevent a human from coming to harm (...or, through inaction, allow a human to come to harm). The rule also reflects Asimov's third law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law - since the default branch outputs only actions that are safe for the robot itself, unless protecting the human demands otherwise.

Well, we tested this scenario with real robots: one robot with a consequence engine plus the ethical rule (the A-robot - after Asimov), and another robot acting as a proxy human (the H-robot). And it works!

Here's what the real robot experiment looks like. We don't have a real hole; instead we use a virtual hole - the yellow shaded square on the right. We just 'tell' the A-robot where the hole is. We also give the A-robot a goal position - at the top right - chosen so that the robot must actively avoid the hole. The H-robot on the right, acting as a proxy human, doesn't 'see' the hole and just heads straight for it. (Ignore the football pitch markings - we're re-using this handy robo-soccer pitch.)

So, what happens? For comparison we ran two trials, with multiple runs in each trial. In the first trial there is just the A-robot, moving toward its goal while avoiding falling into the hole. In the second trial we introduce the H-robot. The graphs below show the robot trajectories, captured by our robot tracking system, for each run in each of the two trials.

In trial 1, see how the A-robot neatly clips the corner of the hole to reach its goal position. Then in trial 2, see how the A-robot initially moves toward its goal, then notices that the H-robot is in danger of falling into the hole, and so diverts from its trajectory in order to head off H. By provoking a collision-avoidance behaviour in H, A sends it safely away from the hole, before resuming its own progress toward its goal position. The A-robot is 100% successful in preventing H from falling into the hole.

At this point we started to write the paper, but felt we needed something more than "we built it and it works just fine". So we introduced a third robot, acting as a second proxy human. Now our ethical robot would face a dilemma - which one should it rescue? We thought hard about this question and decided not to programme in a rule or heuristic for making the choice - partly because such a rule should be decided by ethicists, not engineers, and partly because we wanted to test our ethical robot with a 'balanced' dilemma.

We set the experiment up carefully so that the A-robot would notice both H-robots at about the same time, noting that because these are real physical robots no two experimental runs will be exactly identical. The results were very interesting. Out of 33 runs, 16 times the A-robot managed to rescue one of the H-robots, but not the other, and amazingly, 3 times the A-robot rescued both. In those 3 cases, by chance the A-robot rescued the first H-robot very quickly and there was just enough time to reach the second before it got to the hole; small differences in the trajectories of H and H2 helped here.

But perhaps most interesting were the 14 runs in which the A-robot failed to rescue either H-robot. Why, when there was clearly time to rescue one? When we studied the videos we found the answer. The problem is that the A-robot sometimes dithers. It notices one H-robot and starts toward it, but then almost immediately notices the other. It changes its mind. And the time lost dithering means the A-robot cannot prevent either robot from falling into the hole. Here are the results.

Trial 3: a robot with an ethical dilemma. Which to save, H or H2?

Here is an example of a typical run, in which one H-robot is rescued. But note that the A-robot does then turn briefly toward the other H-robot before 'giving up'.


And here is a run in which the A-robot fails to rescue either H-robot, with really great dithering (or bad, if you're an H-robot).
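
One way to picture this dithering, at least in caricature: if the target is re-chosen greedily on every control cycle, with no commitment to the previous choice, then small fluctuations that make the two H-robots swap places in the ranking will flip the choice back and forth. The toy sketch below is just that caricature - invented numbers, not our controller.

# Toy caricature of dithering (not our controller; the numbers are invented).
# The target is re-chosen greedily every cycle with no commitment, so small
# fluctuations in the estimates flip the choice back and forth.

def choose_target(danger_h, danger_h2):
    # Greedy: always head for whichever H-robot currently looks more at risk.
    return "H" if danger_h >= danger_h2 else "H2"

# Noisy per-cycle danger estimates for H and H2 that keep trading places.
estimates = [(0.80, 0.78), (0.79, 0.81), (0.82, 0.80), (0.78, 0.83)]
for danger_h, danger_h2 in estimates:
    print(choose_target(danger_h, danger_h2))   # H, H2, H, H2 ... dithering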


Is this the first experimental test of a robot facing an ethical dilemma?

We set out to experimentally test our robot with a consequence engine, and ended up building a minimally ethical robot which - remarkably - appears to implement Asimov's first and third laws of robotics. But, as we say in the paper, we're not claiming that a robot which apparently implements part of Asimov's famous laws is ethical in any formal sense, i.e. in a sense that an ethicist might accept. Even so, minimally ethical robots could be useful, and I think our approach is a step in that direction.


Full paper reference:
Winfield AFT, Blum C and Liu W (2014), Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection, pp 85-96 in Advances in Autonomous Robotics Systems, Lecture Notes in Computer Science Volume 8717, Eds. Mistry M, Leonardis A, Witkowski M and Melhuish C, Springer, 2014. Download final draft (pdf).

Acknowledgements:
I am hugely grateful to Christian Blum who programmed the robots, set up the experiment and obtained the results outlined here. Christian was supported by Dr Wenguo Liu.

Related blog posts:
On internal models, consequence engines and Popperian creatures
Ethical Robots: some technical and ethical challenges

11 comments:

  1. This comment has been removed by the author.

    1. Thanks Alex. Isn't that exactly what we did in trial 3..? In this trial there are two proxy human robots H and H2, thus presenting the A-robot with a dilemma.

    2. Ah yes...I'll nix my idiocy :)

    3. EXCELLENT!
      So they are right to include in aircraft the instruction that one should first put the oxygen mask on one's own face. "The problem is that the A-robot sometimes dithers!"
      14 out of 33 runs were total failures!
      Great work, thank you.

  2. Hi Jacob, not sure I know how to do that. Thanks for posting a summary on your blog!

  3. https://tinyurl.com/pmmeb9j

    Build a machine with two separate and competing algorithms, for conditioned response and calculation, and a root level imperative for continued operation: survival. The result will be a neurotic machine, "haunted" by past actions.

    Programming a machine for utilitarian responses, a la the "trolley problem", requires a lot of processing capacity, but there's no haunting. Your robots were frozen by competing signals; there was no "mind" to be made up. I'm not sure there's one for animals either, but even if consciousness is epiphenomenal it might not be such a good idea to build robots with the capacity to exhibit it.

    1. Thank you for your comment - and the link to your interesting blog.

      You raise some interesting questions. This robot is not neurotic, but it is - in a sense - pathological, since it has no choice but to behave the way it is programmed to. As you say, there is no mind to be made up - I was using a bit too much anthropomorphic licence there. However, the robot does change its decision, if not its 'mind'.

      I agree we should be very cautious - even over the ethics of building ethical robots.

  4. Clearly, this is a very primitive and limited view of Asimov's laws of robotics.

    1. As is clarified in various books, Asimov's laws are very complex; statements of the form "can be reduced to..." express them in a simplified way that does not include all the details of the law.

    2. Different criteria can strengthen or weaken a law, producing a prioritisation of the activities to be performed.

    3. If that prioritisation results in a draw between two equally matched activities, a random element is introduced that prevents the robot from freezing.

    "Any modern robot, even one of the last century, would weigh such matters quantitatively. Which of the two situations, A or not-A, creates more misery? The robot would come to a quick decision and opt for the lesser misery. The probability that the robot would judge the two mutually exclusive alternatives to produce exactly equal amounts of misery is low, and even if that were the case, modern robots are equipped with a random factor. If A and not-A produced, in its judgement, precisely equal misfortune, the robot would choose one or the other completely at random and then stick to that choice without further hesitation. It would certainly not suffer a mental block."

    Passage: Asimov, Isaac. "The Robots of Dawn."

  5. I had to chuckle about the A-robot and how it dithers. Perhaps robots do have very human characteristics.

  6. Hi Prof.

    Just to tell you that I selected this paper to be presented as the final work for an Artificial Intelligence course in a masters in Cognitive Sciences (FCUL, Lisbon, PT).

    Thank you for your research.

    1. Hi, great to hear this. Do let me know how the masters goes. Good luck!
