Sunday, November 30, 2014

Robot simulators and why I will probably reject your paper

Dear robotics and AI researcher,

Do you use simulation as a research tool? If you write papers with results based on simulation and submit them for peer-review, then be warned: if I should review your paper then I will probably recommend it is rejected. Why? Because all of the many simulation-based papers I've reviewed in the last couple of years have been flawed. These papers invariably fall into the pattern: propose new/improved/extended algorithm X; test X in simulation S and provide test results T; on the basis of T declare X to work; the end.

So, what exactly is wrong with these papers? Here are my most common review questions and criticisms.
  1. Which simulation tool did you use? Was it a well-known robot simulator, like Webots or Player-Stage-Gazebo, or a custom-written simulation? It's amazing how many papers describe X, then simply write "We have tested X in simulation, and the results are..."

  2. If your simulation was custom built, how did you validate the correctness of your simulator? (A minimal example of this kind of check is sketched after this list.) Without such validation, how can you have any confidence in the results you describe in your paper? Even if you didn't carry out any validation, please give us a clue about your simulator: is it, for instance, sensor-based (i.e. does it model specific robot sensors, like infra-red collision sensors, or cameras)? Does it model physics in 3D (i.e. dynamics), or 2D kinematics?

  3. You must specify the robots that you are using to test your algorithm X. Are they particular real-world robots, like e-pucks or the NAO, or are they an abstraction of a robot, i.e. an idealised robot? If the latter, describe that idealised robot: does it have a body with sensors and actuators, or is your idealised robot just a point moving in space? How does it interact with other robots and its environment?

  4. How is your robot modelled in the simulator? If you're using a well-known simulator and one of its pre-defined library robots then this is an easy question to answer. But for a custom-designed simulator or an idealised robot it is very important to explain how your robot is modelled. Equally important is how your robot model is controlled, since the algorithm X you are testing is - presumably - instantiated or coded within the controller. It's surprising how many papers leave this to the reader's imagination.

  5. In your results section you must provide some analysis of how the limitations of the simulator, the simulated environment and the modelled robot, are likely to have affected your results. It is very important that your interpretation of your results, and any conclusions you draw about algorithm X, explicitly take account of these limitations. All robot simulators, no matter how well proven and well debugged, are simplified models of real robots and real environments. The so-called reality gap is especially problematical if you are evolving robots in simulation, but even if you are not, you cannot confidently interpret your results without understanding the reality gap.

  6. If you are using an existing simulator then specify exactly which version of the simulator you used, and provide somewhere - a link perhaps to a GitHub project - your robot model and controller code. If your simulator is custom built then you need to provide access to all of your code. Without this your work is unrepeatable and therefore of very limited value.
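
As promised in point 2, here is a minimal sketch of the kind of validation I mean, under some illustrative assumptions: a custom 2D kinematic simulator of a differential-drive robot is checked against the closed-form circular-arc trajectory that constant wheel speeds must produce. The wheel-base value, speeds and tolerance below are made-up placeholders, and a real validation would of course test much more than one special case.

```python
import math

# Illustrative robot parameters (assumptions, not taken from any real platform)
WHEEL_BASE = 0.053   # metres, e-puck-like wheel separation
DT = 0.01            # simulation time step in seconds

def simulate(v_left, v_right, duration):
    """Custom 2D kinematic simulator: Euler integration of a differential drive."""
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(int(duration / DT)):
        v = (v_left + v_right) / 2.0
        omega = (v_right - v_left) / WHEEL_BASE
        x += v * math.cos(theta) * DT
        y += v * math.sin(theta) * DT
        theta += omega * DT
    return x, y, theta

def closed_form(v_left, v_right, duration):
    """Exact pose after 'duration' seconds of constant wheel speeds (circular arc)."""
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / WHEEL_BASE
    if abs(omega) < 1e-12:               # straight-line special case
        return v * duration, 0.0, 0.0
    theta = omega * duration
    radius = v / omega
    return radius * math.sin(theta), radius * (1.0 - math.cos(theta)), theta

# Validation check: the simulated pose should agree with the analytic pose
# to within a tolerance that shrinks as DT shrinks.
sim = simulate(0.05, 0.08, 10.0)
ref = closed_form(0.05, 0.08, 10.0)
error = math.hypot(sim[0] - ref[0], sim[1] - ref[1])
print(f"position error after 10 s: {error:.4f} m")
assert error < 0.01, "kinematic model disagrees with closed-form solution"
```

Even a handful of checks like this, reported in the paper, gives a reviewer some confidence that the simulator does what its author believes it does.
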
Ok. At this point I should confess that I've made most of these mistakes in my own papers. In fact, one of my most cited papers was based on a simple custom-built simulation model with little or no explanation of how I validated the simulation. But that was 15 years ago, and what was acceptable then is not ok now.

Modern simulation tools are powerful but also dangerous. Dangerous because it is too easy to assume that they are telling us the truth. Especially beguiling is the renderer, which provides an animated visualisation of the simulated world and the robots in it. Often the renderer provides all kinds of fancy effects borrowed from video games, like shadows, lighting and reflections, which all serve to strengthen the illusion that what we are seeing is real. I puzzle and disappoint my students because, when they proudly show me their work, I insist that they turn off the renderer. I don't want to see a (cool) animation of simulated robots; instead, I want to see (dull) graphs or other numerical data showing how the improved algorithm is being tested and validated, in simulation.
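
To make the "graphs, not animations" point concrete, here is a hypothetical sketch of the kind of headless test harness I would much rather be shown: the renderer stays off, algorithm X runs over many independent trials with different random seeds, and a scalar performance metric per trial is written to a CSV file for later plotting and statistical analysis. The run_trial function and its metric are placeholders for whatever X actually does.

```python
import csv
import random
import statistics

def run_trial(seed, duration_steps=1000):
    """Placeholder for one headless simulation run of algorithm X.
    Returns a scalar performance metric (here, a dummy value)."""
    rng = random.Random(seed)
    # ... set up the simulated world and robots, run the controller ...
    return rng.gauss(0.75, 0.05)   # stand-in for e.g. coverage, time-to-goal, etc.

NUM_TRIALS = 30   # enough independent repeats for meaningful statistics
results = [run_trial(seed) for seed in range(NUM_TRIALS)]

with open("trials.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["seed", "metric"])
    for seed, metric in enumerate(results):
        writer.writerow([seed, metric])

print(f"mean = {statistics.mean(results):.3f}, "
      f"std dev = {statistics.stdev(results):.3f}, n = {NUM_TRIALS}")
```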

An engineering simulation is a scientific instrument* and, like any scientific instrument, it must be (i) fit for purpose, (ii) set up and calibrated for the task in hand, and (iii) understood - especially its limitations - so that any results obtained using it are carefully interpreted and qualified in the light of those limitations.
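
By way of a small, hypothetical illustration of point (ii), here is one way a single parameter of a simulated sensor model might be calibrated against real measurements. The readings and the Gaussian noise model below are placeholders for the sake of the sketch; a real calibration would cover whichever aspects of the model actually matter for the task.

```python
import random
import statistics

# Hypothetical calibration step: take repeated readings from a real sensor at a
# fixed distance, then make the simulated sensor's noise match what was measured.
# The values below are placeholders - substitute your own logged measurements.
real_readings = [3421, 3398, 3410, 3435, 3402, 3419, 3408, 3427]

calibrated_mean = statistics.mean(real_readings)
calibrated_std = statistics.stdev(real_readings)

def simulated_ir_reading(true_value, rng):
    """Return a simulated raw sensor value: the 'true' value plus Gaussian noise
    whose spread is taken from the real measurements above."""
    return true_value + rng.gauss(0.0, calibrated_std)

rng = random.Random(1)
sample = [simulated_ir_reading(calibrated_mean, rng) for _ in range(8)]
print(f"calibrated noise std dev: {calibrated_std:.1f}")
print(f"example simulated readings: {[round(s) for s in sample]}")
```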

Good luck with your research paper!


*Engineering Simulations as Scientific Instruments is the working title of a book, edited by Susan Stepney, which will be a major output of the project Complex Systems Modelling and Simulation (CoSMoS).

5 comments:

  1. Really like this article!

    I read it as saying, "Don't overlook any of the practicalities that might undermine your confident projection", in which form it would apply to people trying to mess about with complex systems (life!) in general.

    Top of your Popular Posts list is the lovely "extreme debugging" one:
    http://alanwinfield.blogspot.co.uk/2013/03/extreme-debugging-tale-of-microcode-and.html

    Isn't that about the same thing?

    The details don't always make a difference, but it very much helps to know what details (or what conditions) might make a difference.

    Replies
    1. Thanks Paul for your, as ever, insightful comments. While I see the connections you have drawn, in this post I really do want to focus on the specific dangers of robot simulation, and why you should never believe your simulation results.

  2. How do you tackle the "great idea but bad experimental protocol" vs. "mediocre idea with flawless experimental protocol" problem when reviewing a manuscript? What should be published: ideas or methods? This question is of special importance in robotics, because our robots today will be very different from the robots we will have in 20 years. The ideas, however, will probably be more important to the researcher of 2034 than whether the 2014 researcher used Webots or not.

    Replies
    1. Thanks Marco - good question. For the first, I would probably strongly encourage the authors to tighten up their experimental method and re-submit the paper. For the second, I'm afraid a mediocre idea will only ever make a mediocre paper - no matter how stellar the experimental method - but I might recommend they resubmit it as a 'methods' paper rather than a new-science paper.

  3. Alan, you might need to reject a number of your papers that have already been published :).

    I agree with the main idea, but I also think we still need some imagination in robotics. Robotics is, whatever else, an interdisciplinary subject, so the reality gap between theory and practice can be big or small. We all know that a simulation tool is a bridge between theory and practice, and it can sit closer to either side depending on the current state of the sub-field. For example, research on trajectory optimisation of industrial manipulators should be application oriented, as that field is well developed and widely applied, but optimal decentralised control of large-scale multi-robot systems can be done as a proof of concept, because we don't know when we will be able to send a massive group of mobile robots to explore the moon when we don't even know how to achieve precise relative localisation with current sensor technology.

    I simply think we should let robotics grow naturally, as in Darwinian natural selection. If a finding fits the current state of the art (both the knowledge pool and the technology), it is adopted; otherwise it automatically disappears.
