This challenge has ended!

This documentation is only for the Real Robot Challenge 2020, which has ended. Subsequent challenges have their own documentation; see the challenge website for more information.

Simulation Phase

Before you dive into the code, please check the rules of the simulation phase on the challenge website.

Here we explain the quantitative evaluation procedure of the simulation phase. We provide instructions on how to install the simulation package, how to integrate your own code, and how to submit your code and results. We also explain how we will execute your code to verify the reported scores.


Download and install the rrc_simulation repository, as described here: Install Software for Simulation Phase.

Note that the simulation phase will be done completely in Python. In later phases, using the real robot, you will also have the option to use C++.

The task is to manipulate a cube in a given way using the TriFinger robot (see Details of the Tasks). You may use any approach to create a control policy that solves this task. To avoid confusion, no changes are allowed to existing files; please create new files for your code. The only existing files you may, and must, change are:

  1. rrc_simulation/scripts/: This script will be executed during evaluation, so you must replace the placeholder policy with your own. You may also replace our gym environment with your own if you wish to use, e.g., a different action space, observation space, or reward function. Most importantly, the script must store the action log at the end; this log will be used for evaluation according to our gym environment.

  2. rrc_simulation/environment.yml: You will replace this file with your own conda environment specification, which is necessary to execute your code. You can generate it, e.g., with

    conda env export > environment.yml

    or by creating the file manually.
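To give an idea of what replacing the placeholder policy involves, here is a minimal sketch. Everything in it is hypothetical: the class name `RandomPolicy`, the `predict` interface, the 9-dimensional action, and the torque limit are our own illustrative assumptions, not the actual interfaces defined by the rrc_simulation gym environment.

```python
import random

# The TriFinger robot has three fingers with three joints each, so we
# assume a 9-dimensional torque action here purely for illustration.
NUM_JOINTS = 9


class RandomPolicy:
    """Illustrative placeholder: returns a random torque per joint."""

    def __init__(self, max_torque=0.36):
        # 0.36 is an assumed torque limit for this sketch, not an
        # official value from the challenge.
        self.max_torque = max_torque

    def predict(self, observation):
        # A real policy would compute the action from the observation;
        # this sketch ignores it and samples uniformly at random.
        return [
            random.uniform(-self.max_torque, self.max_torque)
            for _ in range(NUM_JOINTS)
        ]


policy = RandomPolicy()
action = policy.predict(observation=None)
```

A real submission would compute actions from the observations provided by the gym environment instead of sampling them at random.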

Once you are done developing, please run the evaluation command:

rrc_evaluate path/to/output_dir

in the rrc_simulation/scripts directory. This will execute the script discussed above for multiple goals. It will then simulate the stored action sequence and print the resulting score in the terminal.


Note that rrc_evaluate executes in the folder from which it is called. You may wish to develop your file in a different folder, but in the end you must place the file to be used for evaluation at rrc_simulation/scripts/, since this is the path we will assume.
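The key requirement of the evaluation script is that it stores an action log that can be replayed. As a sketch only — the real log format and filename are dictated by the rrc_simulation package, and the names below are our own invention — the idea is to record every applied action together with the step at which it was applied, and write the log at the end of the run:

```python
import json

# Hypothetical action log; the real format is defined by the
# rrc_simulation gym environment.
action_log = []


def apply_action(step, action):
    # In the real script the action would be sent to the simulation;
    # here we only record it for later replay.
    action_log.append({"t": step, "action": action})


# Illustrative run: apply a zero-torque action for three steps.
for t in range(3):
    apply_action(t, [0.0] * 9)

# Store the log at the end of the run so the evaluation can replay it.
with open("action_log.json", "w") as f:
    json.dump(action_log, f)
```

During evaluation, the stored log is replayed in our own copy of the simulation, which is how modified simulation parameters would be detected.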

Finally, you will create a submission containing

  1. your score, copied from the terminal output, which you will enter into a form

  2. your rrc_simulation folder (as a .zip) containing

    • the unchanged original files

    • the modified environment.yml

    • the modified rrc_simulation/scripts/

    • your code

  3. and the output_dir (as a .zip) generated by rrc_evaluate.
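The two archives can be created with any standard tool; as one possible sketch, using Python's shutil (the archive names are our own choice, and the makedirs call exists only so the sketch runs standalone — in practice both folders already exist):

```python
import os
import shutil

# Assume the current directory contains the rrc_simulation folder and
# the output_dir produced by rrc_evaluate.
for folder in ("rrc_simulation", "output_dir"):
    os.makedirs(folder, exist_ok=True)  # only so this sketch runs standalone
    # Creates e.g. rrc_simulation.zip containing the folder's contents.
    shutil.make_archive(folder, "zip", root_dir=".", base_dir=folder)
```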

Below, we describe how we will evaluate the submission. We highly recommend that participants go through exactly these steps with their tentative submission to ensure that everything will run smoothly on our side.

On our side, we will then

  1. download your folder to a computer with the specs described below,

  2. install the conda environment following the same instructions you used (see Install Software for Simulation Phase),

  3. execute rrc_evaluate in the rrc_simulation/scripts folder; if it does not terminate within 1 hour, we will abort it and assign the minimum score,

  4. compare the obtained score to the reported score to ensure that they are identical (up to variations due to the random seed),

  5. verify all action logs to ensure that simulation parameters have not been modified by the user code.

Evaluation Computer Specs

  • CPU: 8 cores, 3GHz

  • RAM: 64GB

  • GPU: will not be available during the evaluation (you may of course use the GPU for training on your side)

  • OS: Ubuntu 18.04 or macOS (you will be able to choose)