***************
Robot Interface
***************

We provide an easy-to-use interface for safely controlling the robot with
either C++ or Python 3.

.. note::

   There is a small API change compared to the Simulation Phase:  The object
   tracking has been combined with the camera observations.  There is no
   ``get_object_pose`` anymore; instead, the camera observation contains a
   field ``object_pose`` (see `Observations`_).


Quick Start Example
===================

Sending actions to and getting observations from the robot is very easy.  See
the following example that simply sends a constant position command:

Python:

.. literalinclude:: ../examples/trifinger_platform_frontend.py

C++:

.. literalinclude:: ../examples/trifinger_platform_frontend.cpp

When using C++, you need to add the package ``robot_fingers`` as a build
dependency of your package.


On Time Series and Time Relation of Actions and Observations
============================================================

All data transfer between the front end (= user code) and the back end
(= robot hardware) goes through so-called *time series*.  When calling
:meth:`~robot_fingers.TriFingerPlatformFrontend.append_desired_action`, the
action is not applied immediately but is *appended* to the time series of
desired actions.  The back end runs in a loop at around 1 kHz and in each
iteration takes the next action from that time series and applies it to the
robot.  The time index ``t`` returned by the function indicates the time step
at which the given action will be applied.

All the get methods (e.g.
:meth:`~robot_fingers.TriFingerPlatformFrontend.get_robot_observation`)
require a time index as argument.  If the specified time step has already
passed, they immediately return the value from the corresponding step.  If it
lies in the future, the method blocks and waits until the specified time step
is reached, then returns.

Note that only the last 1000 elements are kept in the buffer.
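The time-series behavior described above can be illustrated with a minimal,
single-process Python sketch.  This is a conceptual model for illustration
only, not the actual ``robot_interfaces`` implementation (which shares the
time series between the front-end and back-end processes):

```python
import collections
import threading


class TimeSeriesSketch:
    """Minimal conceptual model of a time series (illustration only)."""

    BUFFER_SIZE = 1000  # only the last 1000 elements are kept

    def __init__(self):
        self._buffer = collections.OrderedDict()
        self._next_index = 0
        self._cond = threading.Condition()

    def append(self, element):
        """Append an element, returning the time index at which it is stored."""
        with self._cond:
            t = self._next_index
            self._buffer[t] = element
            # drop the oldest elements once the buffer window is exceeded
            while len(self._buffer) > self.BUFFER_SIZE:
                self._buffer.popitem(last=False)
            self._next_index += 1
            self._cond.notify_all()
            return t

    def get(self, t):
        """Return the element at time index t, blocking while t is in the future."""
        with self._cond:
            while t >= self._next_index:
                self._cond.wait()  # time step not yet reached: block
            if t not in self._buffer:
                raise IndexError(
                    "time index %d is no longer in the buffer" % t
                )
            return self._buffer[t]
```

In this sketch, accessing an index that has already dropped out of the
1000-element window raises an error, mirroring the exception described in the
text.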
Trying to access an older time index results in an exception.


Time Relation of Actions and Observations
-----------------------------------------

The time index ``t`` returned by
:meth:`~robot_fingers.TriFingerPlatformFrontend.append_desired_action`
identifies the time step at which the given action is sent to the robot, i.e.
the moment at which the action *starts* to be applied.  The observation
returned by
:meth:`~robot_fingers.TriFingerPlatformFrontend.get_robot_observation` also
belongs to this very moment.  This means the observation of step ``t``
corresponds to the moment when the action of step ``t`` just starts to be
applied, so this observation is not yet affected by that action!


Send Action to Start Backend
----------------------------

At the beginning of the program execution, the back end is idle, waiting for
the first action.  Only after the first action is received does it start the
loop that applies actions and writes observations to the time series.

.. important::

   This means **you first have to send an action before you can read the
   first observation!**

There are applications where an observation is needed before sending the
first real action (e.g. when the action depends on the current position).  A
safe solution in this case is to start with a zero-torque action:

Python:

.. code-block:: Python

    # an action without arguments defaults to zero torque
    zero_torque_action = robot_interfaces.trifinger.Action()
    t = frontend.append_desired_action(zero_torque_action)
    first_observation = frontend.get_robot_observation(t)

C++:

.. code-block:: C++

    Action zero_torque_action = Action::Zero();
    auto t = frontend.append_desired_action(zero_torque_action);
    auto first_observation = frontend.get_robot_observation(t);


When Next Action Is Not Provided In Time
----------------------------------------

If the back end reaches a time step ``t`` for which the user has not yet
provided an action (e.g.
because the user code is running slower than 1 kHz), the back end
automatically sets the desired action for step ``t`` to the same as the one
of ``t - 1``.  This is indicated to the user through the
:attr:`~robot_interfaces.Status.action_repetitions` field in the status
message, which contains the number of times the current action has been
repeated.


Safety Checks: Difference between Desired and Applied Action
============================================================

The *desired* action that the user sends through the front end is not
necessarily the same as the one that is actually *applied* to the robot.  The
reason is that before applying the action, some safety checks are performed
that may modify it.  The purpose of these checks is to make sure that the
robot does not break, no matter what actions are sent to it.

The actually applied action for a given time index ``t`` is returned by the
method :meth:`~robot_fingers.TriFingerPlatformFrontend.get_applied_action`.
Furthermore, there is an equivalent method
:meth:`~robot_fingers.TriFingerPlatformFrontend.get_desired_action` which
returns the desired action (i.e. the one sent by the user) of that time step.

The following safety checks are performed:

- Position Limits:  Prevent the joints from moving out of their allowed
  range.
- Velocity Damping:  Prevents too fast movements.
- Torque Limits:  Limit the torque to avoid damage on collisions and
  overheating.

For more details on the safety checks see :doc:`about_platform`.


Robot Frontend API
==================

Below is the documentation of the Python API.  For C++, see the C++
documentation of the packages robot_interfaces_ and robot_fingers_ (the names
of classes and methods are the same).

.. autoclass:: robot_fingers.TriFingerPlatformFrontend
   :members:


Order of Joints
---------------

Values like angular position, velocity or torque of the joints are
represented as a 1-dimensional vector with one element per joint.
The order of the joints in these vectors is as follows (see
:doc:`about_platform` for the position of the fingers in the platform):

0. Finger 0, upper joint
1. Finger 0, middle joint
2. Finger 0, lower joint
3. Finger 120, upper joint
4. Finger 120, middle joint
5. Finger 120, lower joint
6. Finger 240, upper joint
7. Finger 240, middle joint
8. Finger 240, lower joint


Actions
-------

.. autoclass:: robot_interfaces.trifinger.Action
   :members:


Observations
------------

.. autoclass:: robot_interfaces.trifinger.Observation
   :members:

.. autoclass:: trifinger_object_tracking.py_tricamera_types.TriCameraObjectObservation
   :members:

.. autoclass:: trifinger_object_tracking.py_object_tracker.ObjectPose
   :members:

.. autoclass:: trifinger_cameras.camera.CameraObservation
   :members:


Status
------

Each status message contains the fields listed below.

If there is an error reported in the status, the robot is no longer in an
operational state.  Trying to append another action will then result in an
exception.

.. autoclass:: robot_interfaces.Status

   .. attribute:: action_repetitions
      :type: int

      Number of times the current action has been repeated.  See `When Next
      Action Is Not Provided In Time`_.

   .. autoattribute:: error_status

   .. autoattribute:: error_message

.. autoclass:: robot_interfaces.Status.ErrorStatus


Direct Simulation API
=====================

You can use the API described above in simulation to test your code before
submitting it to the actual robots, see :doc:`run_in_simulation_locally`.
However, in case you want to pre-train in simulation, this might be a bit
cumbersome.

The ``trifinger_simulation`` package contains a class ``TriFingerPlatform``.
This is an updated version of :class:`rrc_simulation.TriFingerPlatform`,
which was used in the simulation phase.  It can be used as a replacement for
:class:`robot_fingers.TriFingerPlatformFrontend`.
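As a side note to the `Order of Joints`_ section above: the index of a joint
in the 9-element joint vectors follows directly from the finger number and
the joint position.  The following helper is hypothetical (not part of the
API) and only illustrates that ordering:

```python
# Hypothetical helper illustrating the joint ordering; not part of the API.
FINGERS = ("0", "120", "240")          # fingers, named by their mounting angle
JOINTS = ("upper", "middle", "lower")  # joints of each finger, top to bottom


def joint_index(finger: str, joint: str) -> int:
    """Return the index of the given joint in the 9-element joint vectors."""
    # each finger occupies a block of three consecutive entries
    return FINGERS.index(finger) * 3 + JOINTS.index(joint)
```

For example, ``joint_index("120", "lower")`` gives index 5, matching the list
above.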
Gym Environment
===============

In the example package rrc_example_package_, we provide an updated version of
the ``CubeEnv`` which was used for the simulation phase.  The relevant
differences are:

- It is called RealRobotCubeEnv_ to indicate the difference, and the
  arguments are a bit different.
- By default it uses :class:`robot_fingers.TriFingerPlatformFrontend`, so it
  can be used for the real robot.
- In the observation of step ``t`` it provides the robot observation of this
  step, not of step ``t + 1`` (because the latter is not possible on the real
  system).  Instead, we add the current action to the observation as
  described in `this paper `_.  See the `usage example `_.

The ``RealRobotCubeEnv`` serves as an example.  You may use it as is, but you
can also modify it according to your needs.


Use Gym Environment with direct simulation
------------------------------------------

To use the `Direct Simulation API`_ in the Gym environment, you only need to
replace ``self._reset_platform_frontend()`` with
``self._reset_direct_simulation()`` in the ``reset()`` method (see `here `_).


.. _robot_interfaces: https://open-dynamic-robot-initiative.github.io/code_documentation/robot_interfaces/docs/doxygen/html/index.html
.. _robot_fingers: https://open-dynamic-robot-initiative.github.io/code_documentation/robot_fingers/docs/doxygen/html/index.html
.. _rrc_example_package: https://github.com/rr-learning/rrc_example_package
.. _RealRobotCubeEnv: https://github.com/rr-learning/rrc_example_package/blob/master/python/rrc_example_package/cube_env.py
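The modification described in `Use Gym Environment with direct simulation`_
can be sketched schematically.  The class below is a placeholder, not the
real environment, and the ``use_simulation`` flag is our own hypothetical
addition to keep both back ends selectable; only the structure of ``reset()``
matters:

```python
# Schematic sketch only: placeholder bodies, hypothetical "use_simulation" flag.
class CubeEnvSketch:
    def __init__(self, use_simulation: bool = False):
        self.use_simulation = use_simulation
        self.platform = None

    def _reset_platform_frontend(self):
        # real environment: connect to the robot front end
        self.platform = "robot_frontend"

    def _reset_direct_simulation(self):
        # real environment: create a trifinger_simulation platform instance
        self.platform = "direct_simulation"

    def reset(self):
        # the swap described above: call the direct-simulation variant
        # instead of the front-end variant
        if self.use_simulation:
            self._reset_direct_simulation()
        else:
            self._reset_platform_frontend()
        return self.platform
```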