******************************************
How to Locally Run Your Code in Simulation
******************************************

You can run your code locally in simulation using the same environment as on
the real robot.  This way you can verify that everything is generally working
before making an actual submission to the robot.


Requirements
============

- A computer running Linux with Python 3, :doc:`Singularity <../singularity>`
  and `ROS 2`_ (we tested on Ubuntu 18.04 with ROS Dashing; more recent
  versions may work as well).
- Your code needs to be provided in a git repository following the structure
  described in :doc:`user_code_structure`.
- The Singularity image used by the submission system.  See
  :ref:`singularity_download_image`.
- The code for executing the job: trifingerpro_runner_


Execute Code
============

To execute your code, use the script ``run_simulation.py`` from the
trifingerpro_runner package.  You need to pass as arguments the path to the
output directory where the results will be stored, the git repository, the
Singularity image that is used for execution and the name of the task.
Example:

.. code-block:: bash

   cd trifingerpro_runner
   ./run_simulation.py --output-dir ~/output \
       --repository git@github.com:myuser/myrepo.git \
       --backend-image path/to/rrc2021.sif \
       --task MOVE_CUBE_ON_TRAJECTORY

For a list of all options use ``--help``.  See :ref:`list_of_generated_files`
for a description of the files that are written to the specified
``--output-dir``.

For the repository, you can also specify the absolute path to a local
repository.  Then you don't need to push every change to the server before
testing (you still need to commit, though!).

You may specify a git branch using ``--branch``.  If not set, the default
branch of the repository is used.

If you are using a modified Singularity image for your code, you need to
specify this with ``--user-image``.  Note that for ``--backend-image`` you
should always use the unmodified standard image that is provided by us, to
ensure that you have the same conditions as on our side.  Sketches combining
these options are shown in the examples at the end of this page.


Visualization
-------------

You can enable visualization using the ``--sim-visualize`` flag (a full
invocation is shown in the examples at the end of this page).  There are a
few things to consider, though:

- You will need to export the ``DISPLAY`` environment variable into the
  Singularity container.  To do this, execute the following *before* running
  ``run_simulation.py`` (you can put it in your ``.bashrc``, then you don't
  need to remember it every time)::

     export SINGULARITYENV_DISPLAY=$DISPLAY

- If running on a machine which uses Nvidia drivers, it may be necessary to
  also pass the ``--singularity-nv`` flag.  See :ref:`singularity_nv`.


Limitations
===========

There are some limitations of the simulation which you need to keep in mind
when using it:

- In this setup the simulation unfortunately runs rather slowly, so depending
  on your hardware, the simulated robot may not run at 1 kHz but a bit
  slower.  The camera/object observations are synchronised accordingly.
- **No camera images are rendered!**  Rendering of the camera images is very
  slow, so it would mess up the timing of the whole setup.  Therefore the
  cameras are disabled by default in this simulation.  The camera
  observations are still provided, as they also contain the object position
  for the cube task, but the images inside the observations are not set.  If
  really needed, you can enable rendering by adding the
  ``--sim-render-images`` flag, but as mentioned above, this will slow down
  the simulation significantly.
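
Examples
========

The invocations below sketch how the options described above can be combined.
They assume that the flags shown can be passed together in a single call;
check ``--help`` for the options available in your version.  The repository
path, branch name and image paths are placeholders which you need to adapt.

Testing a local repository on a specific branch, with a modified user image:

.. code-block:: bash

   cd trifingerpro_runner
   ./run_simulation.py --output-dir ~/output \
       --repository ~/path/to/local/repo \
       --branch my-feature-branch \
       --backend-image path/to/rrc2021.sif \
       --user-image path/to/my_user_image.sif \
       --task MOVE_CUBE_ON_TRAJECTORY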
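Running with visualization enabled on a machine with Nvidia drivers:

.. code-block:: bash

   # Forward the display into the Singularity container (see Visualization).
   export SINGULARITYENV_DISPLAY=$DISPLAY

   cd trifingerpro_runner
   ./run_simulation.py --output-dir ~/output \
       --repository git@github.com:myuser/myrepo.git \
       --backend-image path/to/rrc2021.sif \
       --task MOVE_CUBE_ON_TRAJECTORY \
       --sim-visualize \
       --singularity-nv  # only needed with Nvidia drivers
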
.. _trifingerpro_runner: https://github.com/open-dynamic-robot-initiative/trifingerpro_runner
.. _ROS 2: http://ros.org