Robot Interface

We provide an easy-to-use interface for safely controlling the robot from either C++ or Python 3.

Quick Start Example

Sending actions to and getting observations from the robot is straightforward. The following example simply sends a constant position command:

Python:

import robot_interfaces
import robot_fingers


if __name__ == "__main__":
    # for stage 1
    robot = robot_fingers.TriFingerPlatformWithObjectFrontend()
    # for stage 2
    # robot = robot_fingers.TriFingerPlatformFrontend()

    position = [
         0.0,  # Finger 0, Upper Joint
         0.9,  # Finger 0, Middle Joint
        -1.7,  # Finger 0, Lower Joint
         0.0,  # Finger 120, Upper Joint
         0.9,  # Finger 120, Middle Joint
        -1.7,  # Finger 120, Lower Joint
         0.0,  # Finger 240, Upper Joint
         0.9,  # Finger 240, Middle Joint
        -1.7,  # Finger 240, Lower Joint
    ]

    action = robot_interfaces.trifinger.Action(position=position)

    while True:
        t = robot.append_desired_action(action)
        robot_observation = robot.get_robot_observation(t)
        print(robot_observation.position)

C++:

#include <iostream>

#include <robot_fingers/trifinger_platform_frontend.hpp>

using namespace robot_fingers;

int main()
{
    // for stage 1
    auto frontend = TriFingerPlatformWithObjectFrontend();
    // for stage 2
    // auto frontend = TriFingerPlatformFrontend();

    TriFingerPlatformFrontend::Action::Vector position;
    position << 0.0, 0.9, -1.7, 0.0, 0.9, -1.7, 0.0, 0.9, -1.7;
    auto action = TriFingerPlatformFrontend::Action::Position(position);

    while (true)
    {
        auto t = frontend.append_desired_action(action);
        auto robot_observation = frontend.get_robot_observation(t);

        std::cout << robot_observation.position << std::endl;
    }

    return 0;
}

When using C++ you need to add the package robot_fingers as a build dependency of your package.

TriFingerPlatformFrontend vs TriFingerPlatformWithObjectFrontend

The examples above use different frontend types for stages 1 and 2. In stage 1, object tracking of the cube is provided and integrated into the camera observation, while in stage 2 no object tracking is provided, which results in a different observation type. The API of both classes is identical; the only difference is that in the WithObject version the camera observations are of type TriCameraObjectObservation, while for the other they are of type TriCameraObservation.
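
The difference only becomes visible when accessing camera observations. A minimal sketch, assuming a frontend instance robot and an action as in the quick start example above (the properties used here are documented further below):

t = robot.append_desired_action(action)
camera_observation = robot.get_camera_observation(t)

# available with both frontend types
image = camera_observation.cameras[0].image

# only available with TriFingerPlatformWithObjectFrontend
print(camera_observation.object_pose.position)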

On Time Series and Time Relation of Actions and Observations

All data transfer between the front end (= user code) and the back end (= robot hardware) goes through so-called time series. When calling append_desired_action(), the action is not applied immediately but is appended to the time series of desired actions. The back end runs in a loop at around 1 kHz and in each iteration takes the next action from that time series and applies it to the robot. The time index t returned by the function indicates the time step at which the given action will be applied.

All the get methods (e.g. get_robot_observation()) require a time index as an argument. If the specified time step has already passed, they immediately return the value from the corresponding step. If it lies in the future, the method blocks and waits until the specified time step is reached and then returns. Note that only the last 1000 elements are kept in the buffer; trying to access an older time index results in an exception.
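
A minimal sketch of this behaviour, assuming a frontend instance frontend, an action as in the quick start example and a run that is already longer than 2000 steps:

t = frontend.append_desired_action(action)

# blocks until time step t is reached, then returns the observation
observation = frontend.get_robot_observation(t)

# accessing a step that already dropped out of the 1000-element buffer
# raises an exception
try:
    frontend.get_robot_observation(t - 2000)
except Exception as e:
    print("time index too old:", e)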

Time Relation of Actions and Observations

The time index t returned by append_desired_action() identifies the time step at which the given action is sent to the robot, i.e. the moment at which the action starts being applied. The observation returned by get_robot_observation() also belongs to this very moment. This means the observation of step t is taken when the action of step t just starts being applied, so this observation is not yet affected by that action!
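
So to see the effect of an action in the observations, look at least one step further. A minimal sketch:

t = frontend.append_desired_action(action)

# not yet affected by `action` (it only starts being applied now)
observation_before = frontend.get_robot_observation(t)

# first observation that can be affected by `action`; this blocks until
# step t + 1 is reached (the back end repeats the action if no new one
# is provided in time, see below)
observation_after = frontend.get_robot_observation(t + 1)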

Send Action to Start Backend

At the beginning of the program execution, the back end is idle, waiting for the first action. Only after the first action is received does it start the loop that applies actions and writes observations to the time series.

Important

This means you first have to send an action before you can read the first observation!

There are applications where an observation is needed before sending the first real action (e.g. when the action depends on the current position). A safe solution in this case is to start with a zero-torque action:

Python:

# an action without arguments defaults to zero torque
zero_torque_action = robot_interfaces.trifinger.Action()
t = frontend.append_desired_action(zero_torque_action)
first_observation = frontend.get_robot_observation(t)

C++:

Action zero_torque_action = Action::Zero();
auto t = frontend.append_desired_action(zero_torque_action);
auto first_observation = frontend.get_robot_observation(t);

When Next Action Is Not Provided In Time

If the back end reaches a time step t for which the user has not yet provided an action (e.g. because the user code is running slower than 1 kHz), the back end automatically sets the desired action for step t to the action of step t - 1.

This is indicated to the user through the action_repetitions field in the status message which contains the number of times the current action has been repeated.
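
A minimal sketch of monitoring this, assuming a frontend instance frontend (see the Status section below for the full status API):

t = frontend.append_desired_action(action)
status = frontend.get_robot_status(t)
if status.action_repetitions > 0:
    print("action has been repeated", status.action_repetitions, "times")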

Safety Checks: Difference between Desired and Applied Action

The desired action that the user sends through the front end is not necessarily the same as the one that is actually applied to the robot. The reason is that before applying the action, some safety checks are performed that may modify the action. The purpose of these checks is to make sure that the robot does not break no matter what actions are sent to it.

The actually applied action for a given time index t is returned by the method get_applied_action(). Further, there is an equivalent method get_desired_action() which returns the desired action (i.e. the one sent by the user) of that time step. See the sketch after the list below.

The following safety checks are performed:

  • Position Limits: Prevent the joints from moving out of their allowed range.

  • Velocity Damping: Prevents too fast movements.

  • Torque Limits: Limit the torque to avoid damage from collisions and overheating.

For more details on the safety checks see About the Robot Platform.
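
A minimal sketch comparing the desired with the actually applied action, assuming a frontend instance frontend:

t = frontend.append_desired_action(action)
desired = frontend.get_desired_action(t)
applied = frontend.get_applied_action(t)

# the applied torque may differ, e.g. if the desired torque exceeds the
# allowed limits
print("desired torque:", desired.torque)
print("applied torque:", applied.torque)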

Robot Frontend API

Below is the documentation of the Python API. For C++, see the C++ documentation of the packages robot_interfaces and robot_fingers (names of classes and methods are the same).

class robot_fingers.TriFingerPlatformFrontend
class Action(torque=[0] * n_joints, position=[nan] * n_joints, position_kp=[nan] * n_joints, position_kd=[nan] * n_joints)

Action with desired torque and (optional) position.

The resulting torque command sent to the robot is:

torque_command = torque + PD(position)

To disable the position controller, set the target position to NaN. The controller is executed joint-wise, so it is possible to run it only for some joints by setting a target position for these joints and setting the others to NaN.

The specified torque is always added to the result of the position controller, so if you only want to run the position controller, make sure to set torque to zero for all joints.

Parameters
  • torque – Desired torque.

  • position – Desired position. Set values to NaN to disable position controller for the corresponding joints

  • position_kp – P-gains for the position controller. Set to NaN to use default values.

  • position_kd – D-gains for the position controller. Set to NaN to use default values.

property position

List of desired positions, one per joint. If set, a PD position controller is run and the resulting torque is added to torque. Set to NaN to disable position controller (default).

property position_kd

D-gains for position controller, one per joint. If NaN, default is used.

property position_kp

P-gains for position controller, one per joint. If NaN, default is used.

property torque

List of desired torques, one per joint.

append_desired_action(action: robot_interfaces.trifinger.Action) → int

Append a desired action to the action time series.

Append an action to the “desired actions” time series. Note that this does not block until the action is actually executed. The time series acts like a queue from which the actions are taken one by one and sent to the actual robot. It is possible to call this method multiple times in a row to provide actions for upcoming time steps in advance.

The time step at which the given action will be applied is returned by this method.

Parameters

desired_action – The action that shall be applied on the robot. Note that the actually applied action might be different (see get_applied_action()).

Returns

Time step at which the action will be applied.
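
A minimal sketch of providing several actions in advance, assuming a frontend instance frontend and an action as above:

# queue the same action for the next three time steps; each call
# returns the step at which the corresponding action will be applied
for _ in range(3):
    t = frontend.append_desired_action(action)

# wait until the last queued action has been applied
frontend.wait_until_timeindex(t)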

get_applied_action(t: int) → robot_interfaces.trifinger.Action

Get actually applied action of time step t.

The applied action is the one that was actually applied to the robot based on the desired action of that time step. It may differ from the desired one e.g. due to safety checks which limit the maximum torque and enforce the position limits.

If t is in the future, this method will block and wait.

get_camera_observation(t: int)

Get camera images of time step t.

If t is in the future, this method will block and wait.

Parameters

t – Time index of the robot time series. This is internally mapped to the corresponding time index of the camera time series.

Returns

Camera images of time step t.

Raises

Exception – if t is too old and not in the time series buffer anymore.

get_current_timeindex() → int

Get the current time index.

get_desired_action(t: int) → robot_interfaces.trifinger.Action

Get desired action of time step t.

This corresponds to the action that was appended using append_desired_action(). Note that the actually applied action may differ, see get_applied_action().

If t is in the future, this method will block and wait.

get_robot_observation(t: int) → robot_interfaces.trifinger.Observation

Get robot observation of time step t.

If t is in the future, this method will block and wait.

Parameters

t – Index of the time step.

Returns

Robot observation of time step t.

Raises

Exception – if t is too old and not in the time series buffer anymore.

get_robot_status(t: int) → robot_interfaces.Status

Get robot status of time step t.

If t is in the future, this method will block and wait.

get_timestamp_ms(t: int) → float

Get timestamp in milliseconds of time step t.

If t is in the future, this method will block and wait.

wait_until_timeindex(t: int)

Wait until time step t is reached.

class robot_fingers.TriFingerPlatformWithObjectFrontend

The API of this class is the same as that of robot_fingers.TriFingerPlatformFrontend, the only difference being that the camera observations are of type TriCameraObjectObservation (including the object pose). See also TriFingerPlatformFrontend vs TriFingerPlatformWithObjectFrontend.

Order of Joints

Values like angular position, velocity or torque of the joints are represented as a 1-dimensional vector with one element per joint. The order of the joints in these vectors is as follows (see About the Robot Platform for the positions of the fingers on the platform); a small indexing sketch follows the list:

  1. Finger 0, upper joint

  2. Finger 0, middle joint

  3. Finger 0, lower joint

  4. Finger 120, upper joint

  5. Finger 120, middle joint

  6. Finger 120, lower joint

  7. Finger 240, upper joint

  8. Finger 240, middle joint

  9. Finger 240, lower joint
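
As an illustration of this ordering, here is a small sketch (the helper function joint_index is hypothetical, not part of the API):

# finger: 0, 1, 2 for fingers 0, 120, 240
# joint: 0 = upper, 1 = middle, 2 = lower
def joint_index(finger: int, joint: int) -> int:
    return finger * 3 + joint

observation = frontend.get_robot_observation(t)
# angular position of the lower joint of finger 240 (the last element)
lower_240 = observation.position[joint_index(2, 2)]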

Actions

class robot_interfaces.trifinger.Action(torque=[0] * n_joints, position=[nan] * n_joints, position_kp=[nan] * n_joints, position_kd=[nan] * n_joints)

Action with desired torque and (optional) position.

The resulting torque command sent to the robot is:

torque_command = torque + PD(position)

To disable the position controller, set the target position to NaN. The controller is executed joint-wise, so it is possible to run it only for some joints by setting a target position for these joints and setting the others to NaN.

The specified torque is always added to the result of the position controller, so if you only want to run the position controller, make sure to set torque to zero for all joints.

Parameters
  • torque – Desired torque.

  • position – Desired position. Set values to NaN to disable position controller for the corresponding joints

  • position_kp – P-gains for the position controller. Set to NaN to use default values.

  • position_kd – D-gains for the position controller. Set to NaN to use default values.

property position

List of desired positions, one per joint. If set, a PD position controller is run and the resulting torque is added to torque. Set to NaN to disable position controller (default).

property position_kd

D-gains for position controller, one per joint. If NaN, default is used.

property position_kp

P-gains for position controller, one per joint. If NaN, default is used.

property torque

List of desired torques, one per joint.
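
As a sketch of the joint-wise mixing described above (the concrete values are made up for illustration): run the position controller only on finger 0 and additionally apply a small constant torque on the upper joint of finger 120.

import math

import robot_interfaces

nan = math.nan

# position targets only for finger 0; NaN disables the position
# controller for all other joints
position = [0.0, 0.9, -1.7] + [nan] * 6

# small constant torque on the upper joint of finger 120
torque = [0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0]

action = robot_interfaces.trifinger.Action(torque=torque, position=position)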

Observations

class robot_interfaces.trifinger.Observation
property position

List of angular joint positions [rad], one per joint.

property tip_force

Measurements of the push sensors at the finger tips, one per finger. Values range between 0 and 1.

property torque

List of torques [Nm], one per joint.

property velocity

List of angular joint velocities [rad/s], one per joint.

class trifinger_cameras.py_tricamera_types.TriCameraObservation

Observation from the three cameras.

property cameras

List of observations from cameras ‘camera60’, ‘camera180’ and ‘camera300’ (in this order)

Type

List[CameraObservation]

class trifinger_object_tracking.py_tricamera_types.TriCameraObjectObservation

Observation from the three cameras, including the estimated object pose provided by the integrated object tracker.

property cameras

List of observations from cameras ‘camera60’, ‘camera180’ and ‘camera300’ (in this order)

Type

List[CameraObservation]

property filtered_object_pose

Filtered estimated object pose.

Type

ObjectPose

property object_pose

Estimated object pose.

Type

ObjectPose

class trifinger_object_tracking.py_object_tracker.ObjectPose
property confidence

Confidence measure of the object tracker. Between 0 and 1.

property orientation

Orientation quaternion (x, y, z, w) in the world frame.

property position

Position (x, y, z) in the world frame.

class trifinger_cameras.camera.CameraObservation
property image

The image.

property timestamp

Timestamp when the image was acquired.

Status

Each status message contains the fields listed below.

If an error is reported in the status, the robot is no longer in an operational state. Trying to append another action will then result in an exception.

class robot_interfaces.Status
action_repetitions: int

Number of times the current action has been repeated. See When Next Action Is Not Provided In Time.

error_status

Current error status.

Type

ErrorStatus

get_error_message()

Get the error message. Only meaningful if error_status != NO_ERROR.

class robot_interfaces.Status.ErrorStatus

Members:

NO_ERROR : Indicates that there is no error.

DRIVER_ERROR : Error from the low level robot driver (e.g. some hardware failure).

BACKEND_ERROR : Error from the robot back end (e.g. some communication issue).
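
A minimal sketch of checking the error status in a control loop before appending further actions (assuming robot_interfaces is imported and frontend, t as above):

status = frontend.get_robot_status(t)
if status.error_status != robot_interfaces.Status.ErrorStatus.NO_ERROR:
    # the robot is no longer operational; appending another action
    # would raise an exception
    print("robot error:", status.get_error_message())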

Direct Simulation API

You can use the API described above in simulation to test your code before submitting it to the actual robots; see How to Locally Run Your Code in Simulation. However, if you want to pre-train in simulation, this may be a bit cumbersome. In that case it may be better to use trifinger_simulation.TriFingerPlatform from the pre-stage, which provides mostly the same API as robot_fingers.TriFingerPlatformFrontend.
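
A minimal sketch, assuming trifinger_simulation.TriFingerPlatform mirrors the frontend methods shown above (the constructor arguments and the Action factory should be checked against the pre-stage documentation):

import trifinger_simulation

platform = trifinger_simulation.TriFingerPlatform()
action = platform.Action(position=[0.0, 0.9, -1.7] * 3)

for _ in range(1000):
    t = platform.append_desired_action(action)
    observation = platform.get_robot_observation(t)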

Gym Environment

In the example package rrc_example_package, we provide a Gym environment RealRobotCubeTrajectoryEnv which works mostly the same as the SimCubeTrajectoryEnv with some modifications for the real robot. The relevant differences are:

  • Instead of the simulation, it uses robot_fingers.TriFingerPlatformFrontend, so it can be used for the real robot.

  • In the observation of step t it provides the robot observation of this step, not of step t + 1 (the latter is not possible on the real system). Instead, the current action is added to the observation, as described in this paper.

See usage example.

For Stage 2, there is another environment RealRobotRearrangeDiceEnv which is mostly the same but with some modifications for the “rearrange dice” task.

These environments serve as examples. You may use them as they are, but you can also modify them according to your needs.
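
A hedged sketch of a standard Gym loop with such an environment (the module path and constructor arguments are assumptions; check the example package for the actual usage):

from rrc_example_package import cube_trajectory_env

env = cube_trajectory_env.RealRobotCubeTrajectoryEnv(goal_trajectory=None)

observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with your policy
    observation, reward, done, info = env.step(action)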