Pre-Stage in Simulation

Before you dive into the code, please check the rules of the simulation phase here: https://real-robot-challenge.com/simulation_phase.

Here we explain the quantitative evaluation procedure of the pre-stage. We provide instructions on how to install the simulation package, how to integrate your own code, and how to submit your code and results. We also explain how we will execute your code to verify the reported scores.

Install the Software

We are using Singularity to provide a portable, reproducible environment for running code on the robots and in simulation. All you need is a Linux computer with Singularity installed and our Singularity image (for both see About Singularity).

You can also install the simulation locally (without Singularity) for development; see the documentation of the trifinger_simulation package (make sure to use the “real_robot_challenge_2021” branch of the repository in this case!). Note, however, that we will use only Singularity for evaluation, so before submitting, please make sure that your code works with Singularity.
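
As a quick sanity check of a local installation, you can run a small script that creates the simulated platform, sends a few actions and reads back observations. The sketch below uses the TriFingerPlatform class and its Action/append_desired_action/get_robot_observation methods as described in the trifinger_simulation documentation; treat it as illustrative and consult that documentation if the API has changed.

    #!/usr/bin/env python3
    """Minimal smoke test for a local trifinger_simulation installation.

    Illustrative sketch only -- see the trifinger_simulation documentation
    for the authoritative API.
    """
    import numpy as np
    from trifinger_simulation import TriFingerPlatform


    def main():
        # Create the simulated platform (no GUI, so it also runs headless).
        platform = TriFingerPlatform(visualization=False)

        # Send a few zero-torque actions and read back the observations.
        for _ in range(100):
            action = platform.Action(torque=np.zeros(9))
            t = platform.append_desired_action(action)
            observation = platform.get_robot_observation(t)

        print("Final joint positions:", observation.position)


    if __name__ == "__main__":
        main()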

Procedure

The easiest way to get started is to fork our example package and use it as a base for your own package. Note that the pre-stage is done entirely in Python. In the later stages, which use the real robot, you will also have the option to use C++.

The task is to manipulate a cube in a given way using the TriFinger robot (see Task 1: Move Cube on Trajectory). You may use any approach to create a control policy which solves this task.
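
Structurally, a policy for this task is just a mapping from the current observation (robot and cube state plus the currently active goal) to a robot action. The class below is a deliberately trivial, hypothetical example of that interface; it is not part of the challenge API and only illustrates the general shape of a policy.

    class RandomPolicy:
        """Hypothetical placeholder policy for the cube-on-trajectory task.

        It only illustrates the observation-in / action-out structure;
        any approach that produces valid actions can be used instead.
        """

        def __init__(self, action_space):
            self.action_space = action_space

        def predict(self, observation):
            # A real policy would use the robot/cube state and the active
            # goal contained in the observation; here we just sample a
            # random action as a placeholder.
            return self.action_space.sample()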

Development

We are using colcon as the build tool, so ideally your package should follow the structure of a ROS 2 Python package (if you start from the rrc_example_package, this is already the case).
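
If you do not start from the example package, your package at least needs the usual setup files so that colcon can build and install it. The setup.py below is a rough sketch of such a package; the name rrc_my_submission is a placeholder, and the rrc_example_package already contains complete, working versions of these files.

    # setup.py -- rough sketch of a colcon-buildable Python package
    # (placeholder names; see rrc_example_package for a working setup)
    from setuptools import find_packages, setup

    package_name = "rrc_my_submission"

    setup(
        name=package_name,
        version="1.0.0",
        packages=find_packages(),
        data_files=[
            ("share/" + package_name, ["package.xml"]),
            (
                "share/ament_index/resource_index/packages",
                ["resource/" + package_name],
            ),
        ],
        install_requires=["setuptools"],
        zip_safe=True,
    )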

For instructions on how to set up the workspace, build within Singularity and run one of the example scripts, see Build and Run Code in Singularity.

Evaluation

Once you are done developing, you have to update the file evaluate_policy.py in the root of your package and run the evaluation script (scripts/rrc_evaluate_prestage.py in the rrc_example_package), which executes your policy on multiple goal trajectories and computes the corresponding reward.
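
Very roughly, evaluate_policy.py is called once per goal trajectory and has to run your policy for that goal in simulation. The sketch below only illustrates this structure and assumes the goal is passed as a JSON string on the command line; the DummyPolicy and run_episode placeholders are hypothetical, and the version shipped with rrc_example_package defines the actual interface.

    #!/usr/bin/env python3
    """Structural sketch of an evaluate_policy.py (illustration only).

    Assumption: the goal trajectory is passed as a JSON string argument.
    The evaluate_policy.py in rrc_example_package defines the actual
    interface and takes precedence over this sketch.
    """
    import argparse
    import json


    class DummyPolicy:
        """Hypothetical placeholder; replace with your own policy."""

        def predict(self, observation):
            return None  # a real policy returns a robot action here


    def run_episode(goal_trajectory, policy):
        """Hypothetical placeholder for one evaluation episode.

        In a real script this is where the "Move Cube on Trajectory"
        environment is created with the given goal and the policy is
        stepped until the episode ends.
        """
        print("Would run one episode for goal:", goal_trajectory)


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "goal",
            type=json.loads,
            help="Goal trajectory for this episode as a JSON string.",
        )
        args = parser.parse_args()

        run_episode(args.goal, DummyPolicy())


    if __name__ == "__main__":
        main()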

Running the evaluation script generates a directory with result files, including logs from the evaluation and the resulting reward. These files have to be submitted together with your code.
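
The reported score can then be read from reward.json, for example with the few lines below (the path is a placeholder; use the directory you passed to --output-dir):

    import json
    import pathlib

    # Placeholder path -- adjust to the --output-dir you used.
    output_dir = pathlib.Path("path/to/output-dir/output")

    with open(output_dir / "reward.json") as f:
        result = json.load(f)

    # This is the score that has to be reported in the submission.
    print(json.dumps(result, indent=2))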

For more details see Evaluation Procedure.

Submitting Your Results

Finally, you will create a submission containing:

  1. Your score, as it is reported in the generated file “reward.json” (found in the “output” directory).

  2. Your code (as a .zip) containing everything that is needed to run the evaluation (you don’t need to submit code that is only needed for training). This should be exactly the directory that you passed with --package to the evaluation script; one way to create the two .zip archives is sketched after this list.

  3. The “output” directory (as a .zip) that was generated inside the directory you passed to --output-dir (the one including the reward.json). Please do not include the other generated directories (“build”, “install”, …) here.

  4. The Singularity image that you used for evaluation. If you don’t submit an image, we assume you used the standard challenge image without any customisation.
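
One way to create the two .zip archives mentioned above is with Python’s shutil, as sketched below; the paths are placeholders and have to be adjusted to your package and output directories.

    import shutil

    # Archive the code package (the directory passed via --package).
    # root_dir/base_dir are chosen so the archive contains the directory
    # itself, not just its contents; paths are placeholders.
    shutil.make_archive(
        "my_code", "zip",
        root_dir="path/to", base_dir="your_package",
    )

    # Archive the generated "output" directory (including reward.json),
    # without other generated directories such as "build" or "install".
    shutil.make_archive(
        "output", "zip",
        root_dir="path/to/output-dir", base_dir="output",
    )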

Below, we describe how we will evaluate the submission. We highly recommend that participants go through exactly these steps with their tentative submission to ensure that everything will run smoothly on our side.

On our side, we will then

  1. download your files to a computer with the specs described below,

  2. execute rrc_evaluate_prestage.py in the same way as described above (if it does not terminate within 1 hour, we abort it and assign the minimum score; a way to reproduce this timed run locally is sketched after this list),

  3. compare the obtained score to the reported score to ensure that they are of the same order of magnitude (up to variations due to the random seed), and

  4. verify the action logs in the original Singularity image (no custom dependencies are required for this step) to ensure that the simulation parameters have not been modified in any way.
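
To dry-run these steps on your side, the timed execution in step 2 can for instance be reproduced with a small wrapper like the one below. It assumes the evaluation script is invoked with --package and --output-dir as in the Evaluation section above; if you run it inside the Singularity image, wrap the command accordingly (see Build and Run Code in Singularity). Paths are placeholders.

    import subprocess

    # Placeholder invocation of the evaluation script; adjust the paths
    # (and wrap the command for Singularity) to match your setup.
    cmd = [
        "python3", "scripts/rrc_evaluate_prestage.py",
        "--package", "path/to/your/package",
        "--output-dir", "path/to/output-dir",
    ]

    try:
        # The official evaluation aborts runs that exceed one hour.
        subprocess.run(cmd, check=True, timeout=3600)
    except subprocess.TimeoutExpired:
        print("Evaluation did not finish within 1 hour and would be aborted.")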

Evaluation Computer Specs

The machine on which submissions will be evaluated has the following specs:

  • CPU: 16 cores, 3 GHz

  • RAM: 32 GB

  • GPU: will not be available during the evaluation (you may, of course, use a GPU for training on your side)