***********************
Stage 2: Rearrange Dice
***********************

.. image:: images/trifingerpro_with_dice.jpg
   :alt: TriFinger robot with dice forming the letters RRC.

In this stage, the task is to arrange a number of dice in a given, randomly
generated pattern.  See :doc:`task_dice` for the general task description.


About the Dice
==============

.. image:: images/dice_closeup.jpg
   :alt: Close-up photo of the dice.
   :width: 50%
   :align: center

The dice are regular D6.  They all look the same, so no distinction is made
between the individual dice.

- Width: 22 mm
- Weight: ~12 g

.. the other dice: 25mm/~15.5g


Detecting the Dice in the Camera Images
=======================================

In this stage, we no longer provide ready-to-use object tracking, so you will
have to locate the dice yourself, using the camera images.  However, we do
provide a function that segments pixels belonging to dice from the background
(the same function is also used for the evaluation):

.. image:: images/dice_segmentation_example.jpg
   :alt: Example of the colour segmentation of the dice.
   :align: center

.. autofunction:: trifinger_object_tracking.py_lightblue_segmenter.segment_image

Usage example:

.. literalinclude:: examples/dice_segmentation.py


Reward Computation
==================

The reward is computed using
:func:`trifinger_simulation.tasks.rearrange_dice.evaluate_state`.  It expects
as input a list of "goal masks" as well as a list of segmentation masks of the
actual scene.  Below is an example, including the necessary initialisation of
the goal masks.  Alternatively, you may use the `RealRobotRearrangeDiceEnv
class of the example package `_, which implements this and provides a
``compute_reward`` method.

.. literalinclude:: examples/rearrange_dice_compute_reward.py


Format of goal.json
===================

By default, a random pattern is sampled when you execute a job on the robot.
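For local testing you can also sample such a pattern yourself.  Below is a
minimal sketch using only the standard library; the arena radius and the
number of dice are assumptions for illustration (the actual values are
defined by the task, see
:attr:`~trifinger_simulation.tasks.rearrange_dice.NUM_DICE`), while the die
width and the resulting z-coordinate of 0.011 m follow from the 22 mm die
size given above:

.. code-block:: python

   import json
   import math
   import random

   DIE_WIDTH = 0.022    # die width in metres (22 mm, see "About the Dice")
   N_DICE = 25          # assumption: use NUM_DICE from trifinger_simulation instead
   ARENA_RADIUS = 0.19  # assumption: radius of the usable arena in metres

   def sample_goal(n_dice=N_DICE, rng=random):
       """Sample n_dice non-overlapping (x, y, z) die positions in the arena."""
       positions = []
       while len(positions) < n_dice:
           # Uniform sample in a disc (sqrt gives uniform area density).
           r = ARENA_RADIUS * math.sqrt(rng.random())
           phi = rng.random() * 2 * math.pi
           x = round(r * math.cos(phi), 3)
           y = round(r * math.sin(phi), 3)
           # Reject candidates that would overlap an already placed die.
           if all(math.hypot(x - px, y - py) >= DIE_WIDTH
                  for px, py, _ in positions):
               # z is half the die width, so the die rests on the table.
               positions.append((x, y, DIE_WIDTH / 2))
       return {"goal": [list(p) for p in positions]}

   print(json.dumps(sample_goal(3), indent=2))

The sampled dictionary can be written to a file with ``json.dump`` if you
want to use it as a fixed goal.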
If you want to test specific cases, you can specify a fixed pattern by adding
a file ``goal.json`` to your repository (see :ref:`user_code_goal_json`).  For
the task of this stage, the file has to have the following format:

.. code-block:: json

   {
     "goal": [
       [-0.066, 0.154, 0.011],
       [-0.131, -0.11, 0.011],
       [0.022, 0.176, 0.011],
       ...
     ]
   }

Each element of the list corresponds to the (x, y, z)-position of one die.
The number of goal positions needs to match the number of dice (see
:attr:`~trifinger_simulation.tasks.rearrange_dice.NUM_DICE`).


Using the Real Robot
====================

See :doc:`submission_system/index`.


Changes Compared to Stage 1
===========================

Note that there are a few important differences compared to stage 1:

- We do not provide integrated object tracking for the dice.  Due to this,
  the camera observations no longer contain an ``object_pose``.
  Unfortunately, this means that different classes are needed for the robot
  frontend and the log readers; see :doc:`submission_system/robot_interface`
  and :doc:`submission_system/log_files` for more information.

- If you are using :class:`trifinger_simulation.TriFingerPlatform` for local
  training/testing, you need to set the ``object_type`` argument to
  ``DICE``:

  .. code-block:: python

     import trifinger_simulation

     platform = trifinger_simulation.TriFingerPlatform(
         visualization=True,
         object_type=trifinger_simulation.trifinger_platform.ObjectType.DICE,
     )
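To build intuition for the mask-based reward described in the Reward
Computation section above, here is a purely conceptual toy sketch in plain
Python.  It is *not* the actual implementation of
:func:`~trifinger_simulation.tasks.rearrange_dice.evaluate_state` (which
operates on the real goal and segmentation masks); the tiny masks and the
exact scoring below are made up for illustration:

.. code-block:: python

   def overlap_reward(goal_mask, seg_mask):
       """Return the fraction of segmented pixels covered by the goal mask.

       Both masks are 2D nested lists of 0/1 values of the same shape.
       """
       seg_pixels = sum(v for row in seg_mask for v in row)
       if seg_pixels == 0:
           return 0.0
       # Count pixels that are set in both the goal mask and the segmentation.
       hits = sum(
           g and s
           for g_row, s_row in zip(goal_mask, seg_mask)
           for g, s in zip(g_row, s_row)
       )
       return hits / seg_pixels

   goal = [[1, 1, 0],
           [1, 1, 0],
           [0, 0, 0]]
   seg  = [[1, 0, 0],
           [1, 1, 0],
           [0, 0, 1]]
   # 3 of the 4 segmented pixels fall inside the goal region.
   print(overlap_reward(goal, seg))  # 0.75

The real evaluation aggregates this kind of overlap over the masks of all
cameras, so always use ``evaluate_state`` (or the ``compute_reward`` method
of the example environment) for anything that should match the official
scoring.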