Control API functions description¶
Using your own laptop¶
You shouldn't use your own laptop to run CM2C: it requires a real-time kernel (which the computers kukarm and truyere already have), and you would only be able to operate one arm.
This documentation is nevertheless kept here for reference.
First you will need a real-time kernel (PREEMPT_RT) to be able to communicate with the Pandas.
The procedure is described here: https://frankaemika.github.io/docs/installation_linux.html
More information in French can be found here: https://bidouilledebian.wordpress.com/2018/05/31/compiler-son-noyau/
The second link might actually be better.
Note: avoid using the fakeroot command if you run into errors.
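Once you have rebooted into the new kernel, you can check that the real-time patch is active; the kernel version string should mention PREEMPT_RT:
uname -a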
You will also need to configure the network (Ethernet):
- In the ipv4 tab, choose manual configuration.
- Address = 172.17.1.1
- Mask = 255.255.255.0
That's all.
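If you prefer the command line over the GUI network settings, here is a minimal sketch assuming NetworkManager manages the wired interface (the connection name "Wired connection 1" is an assumption, adapt it to yours):
nmcli connection modify "Wired connection 1" ipv4.method manual ipv4.addresses 172.17.1.1/24
nmcli connection up "Wired connection 1"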
You can ping the robot's IP to see if you are able to communicate with it.
- Panda1 = 172.17.1.2
- Panda2 = 172.17.1.3
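For example:
ping 172.17.1.2
ping 172.17.1.3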
Launching the controllers¶
This is done by simply using a launch file once the ROS environment is set.
These launch files are located in the launch directory of the CM2C package.
They will run an admittance_joint_trajectory_controller node.
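For reference, here is a minimal sketch of what setting the ROS environment can look like for this two-machine setup (the workspace path and the choice of kukarm as ROS master are assumptions, adapt them to the actual installation):
# On both machines, source the workspace containing the CM2C package
source ~/catkin_ws/devel/setup.bash
# On truyere, point ROS to the master running on kukarm
export ROS_MASTER_URI=http://kukarm:11311
export ROS_HOSTNAME=truyere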
So if kukarm is operating panda_1, run:
roslaunch admittance_joint_trajectory_controller panda_1.launch
and this on truyere:
roslaunch admittance_joint_trajectory_controller panda_2.launch
Important: These launch files are not identical, and panda_1.launch must be launched before panda_2.launch.
Once the controllers are running, there is nothing else to do on the control side: you can launch the planning stack and refer to the planning API.
Utilities and control API outside of the Cooperative Manipulation scope¶
Note: The functionalities presented below were mainly developed during the pro-act project, mostly for debugging purposes.
At the moment I cannot guarantee that this code still works.
If it is not functional, it shouldn't require much work to fix: probably some naming issues. You can also go back to older commits.
CM2C_API¶
Besides the admittance_joint_trajectory_controller, which is the main node of the package and contains the controller, some utilities were developed for debugging and similar purposes.
One is the cm2c_api node.
Once the admittance_joint_trajectory_controller node is running, you can launch the cm2c_api node in another terminal:
rosrun admittance_joint_trajectory_controller cm2c_api
This node simply provides some basic functionalities to move the arm, such as the MoveJoints action. With this action you can move a list of joints with either absolute or relative values:
rosaction call /admittance_joint_trajectory_controller/MoveJoints "{joint_names: [panda_joint6], joint_values: [0.2], duration: 5.0, abs: false}"
rosaction call /admittance_joint_trajectory_controller/MoveJoints "{joint_names: [panda_joint4, panda_joint5, panda_joint6], joint_values: [0.5,-0.6, 0.8], duration: 5.0, abs: false}"
rosaction call /admittance_joint_trajectory_controller/MoveJoints "{joint_names: [panda_1_joint4, panda_1_joint5, panda_1_joint6], joint_values: [-0.5,0.5, -0.5], duration: 5.0, abs: false}"
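The abs flag switches between relative offsets and absolute target values; for example, the same kind of call with absolute joint values (in radians) would look like this:
rosaction call /admittance_joint_trajectory_controller/MoveJoints "{joint_names: [panda_joint6], joint_values: [0.2], duration: 5.0, abs: true}"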
Replaying a trajectory learnt by hand¶
It is possible to record way-points by moving the arm manually: by pressing the buttons located at the end effector you can record way-points and replay the trajectory via the Franka web interface. You can also save a file containing those way-points, but it cannot be reused as it is. So I developed some functionalities to replay, with ROS, a trajectory learnt by hand. Note: This code is old and might not be working anymore.
Quickly learn a task by hand¶
- Access the robot server interface: https://172.17.1.2/desk/ or https://172.17.1.3/desk/
- Create a new task.
- Add an app to the task (joint or cartesian motion).
- Open the app (click on it).
- Be sure that the LED indicator of the arm is white, indicating that the robot can be moved by hand.
- To record way-points, move the arm into the desired configuration, press the enter button (the one marked with an "o") twice, then press the save button (the one with the checkmark) once. Repeat as many times as needed. You can get more information on the purpose of each button on page 37 of this manual: https://redmine.laas.fr/attachments/download/2984/pandaUserHandbook.pdf
Replay a trajectory learnt by hand with ROS¶
The recorded way-points are saved in a .task file. We want to parse this file and replay the trajectory using ROS.
To interpolate the way-points I used softMotion, a trajectory generation library developed at LAAS.
We can use again the cm2c_api node to achieve that. Here are some examples of actions and services:
rosrun admittance_joint_trajectory_controller cm2c_api
rosservice call /admittance_joint_trajectory_controller/SetSpeedLimits "speed: 0.15"
rosservice call /admittance_joint_trajectory_controller/SetAccLimits "speed: 0.35"
rosservice call /admittance_joint_trajectory_controller/SetJerkLimits "speed: 0.25"
rosaction call /admittance_joint_trajectory_controller/SmTrajGen "/home/kdesorme/panda_tasks/slidingDemo.task"
Note: You might find a trajectoryParser.py file in the scripts folder. It launches a ROS Python node that basically does the same thing, but it is not used anymore since cm2c_api provides more functionalities. It is kept here as the Python code can be interesting.