Programming a Service client for ROBOTIS Open Manipulator

Hello ROS Developers!

In this post, we are going to show a way of creating a custom program to send desired task-space positions for the Open Manipulator end effector.

There are many possibilities to program this robot. For the sake of simplicity, we will use a single service available to move the arm to a specific position.

That said, the step-by-step explanation will also show you how to use everything this simulation provides.


 

Configuring the Environment

If you haven’t configured the environment from the previous ROSJect, please open it using this link.

Though if you prefer to learn how to configure the environment and the simulation from scratch, start from this post.


 

How to interact with the robot – Choosing a specific service

After having the simulation running, you need to launch the controller node. This node will provide the services that send references to the real robot or the simulation (in this case, it is the simulation).

user:~$ roslaunch open_manipulator_controller open_manipulator_controller.launch use_platform:=false

Then, in another web shell, we are able to check the services available:

user:~$ rosservice list

We have many services available; let’s check the one we will use: /open_manipulator/goal_task_space_path_position_only.

In order to call the service, we need to know its structure:

user:~$ rosservice type /open_manipulator/goal_task_space_path_position_only

open_manipulator_msgs/SetKinematicsPose

We have to deal with this kind of message: open_manipulator_msgs/SetKinematicsPose

Its structure is:

user:~$ rossrv show open_manipulator_msgs/SetKinematicsPose

string planning_group
string end_effector_name
open_manipulator_msgs/KinematicsPose kinematics_pose
  geometry_msgs/Pose pose
    geometry_msgs/Point position
      float64 x
      float64 y
      float64 z
    geometry_msgs/Quaternion orientation
      float64 x
      float64 y
      float64 z
      float64 w
  float64 max_accelerations_scaling_factor
  float64 max_velocity_scaling_factor
  float64 tolerance
float64 path_time
---
bool is_planned

 

Let’s code!

We need a simple ROS node that will act as a service client. The basic structure is shown below:

#!/usr/bin/env python

import sys
import rospy
from open_manipulator_msgs.srv import *

def set_position_client(x, y, z, time):
    ...

def usage():
    return "%s [x y z time]" % sys.argv[0]

if __name__ == "__main__":

    if len(sys.argv) == 5:
        x = float(sys.argv[1])
        y = float(sys.argv[2])
        z = float(sys.argv[3])
        time = float(sys.argv[4])
    else:
        print(usage())
        sys.exit(1)
    print("Requesting [%s, %s, %s]" % (x, y, z))
    response = set_position_client(x, y, z, time)
    print("[%s %s %s] returns [%s]" % (x, y, z, response))

This is a template taken from the ROS wiki service client tutorial, modified so that we only need to implement the service client function.

Let’s check it line-by-line:

def set_position_client(x, y, z, time):
    service_name = '/open_manipulator/goal_task_space_path_position_only'

    rospy.wait_for_service(service_name)

    try:
        set_position = rospy.ServiceProxy(service_name, SetKinematicsPose)

        arg = SetKinematicsPoseRequest()
        arg.end_effector_name = 'gripper'
        arg.kinematics_pose.pose.position.x = x
        arg.kinematics_pose.pose.position.y = y
        arg.kinematics_pose.pose.position.z = z
        arg.path_time = time
        resp1 = set_position(arg)
        print('Service done!')
        return resp1
    except rospy.ServiceException as e:
        print("Service call failed: %s" % e)
        return False

First, we define the name of the service we want to call. Then, we wait for the service to become available.

Inside the try/except block, we create a proxy object that represents the service server.

A new variable will contain the required parameters for the service. We need to fill it with the arguments of the function, except for the end_effector_name, which will always be the same for this robot: gripper.

Finally, we call the server (set_position) and wait for its response!
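Once the client works, it can be reused to script more elaborate motions. As an illustration (pure Python, no ROS required), the sketch below generates a sequence of task-space waypoints on a small circle that could be fed to set_position_client one at a time; the center, radius, and timing values are made up for the example:

```python
import math

def circle_waypoints(cx, cy, z, radius, steps, step_time):
    """Return a list of (x, y, z, time) tuples on a circle in the XY plane."""
    waypoints = []
    for i in range(steps):
        angle = 2.0 * math.pi * i / steps
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        waypoints.append((x, y, z, step_time))
    return waypoints

# Hypothetical values: a 4 cm circle around x=0.15, y=0.0 at z=0.16
points = circle_waypoints(0.15, 0.0, 0.16, 0.04, 8, 1.0)
# Each tuple could then be passed to set_position_client(x, y, z, time)
```

Each waypoint keeps the same z, so the end effector would trace the circle in a horizontal plane, one service call per point.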


 

Demonstration

Before pressing the play button of the simulation, run the controllers in the simulation mode:

user:~$ roslaunch open_manipulator_controller open_manipulator_controller.launch use_platform:=false

Now, start the simulation with the play button.

Let’s try sending some commands to the robot, one just after another:

user:~$ rosrun programming_manipulator test.py -0.1 0.1 0.16 1

Then:

user:~$ rosrun programming_manipulator test.py 0.1 0.3 0.18 1

The result should look like the one below:


 

Conclusion

This is how you can start programming for the Open Manipulator robot.

Don’t forget: if you missed any of the steps above, you can get a copy of the ROSJect generated along with this post: http://www.rosject.io/l/11185626/

If you like this kind of post, share it and leave a comment!

See you in the next one!

Cheers

ROBOTIS Open Manipulator on Ubuntu18.04 – ROS Melodic

Hello ROS Developers!

In this post, we are going to set up the simulation of the OpenMANIPULATOR-X robot from ROBOTIS. All the steps are based on their official documentation.

Let’s start by creating a new ROSJect. We are going to use Ubuntu 18.04 + ROS 2 Eloquent, which also provides a ROS 1 Melodic environment.


 

Configuring the environment

First, we are going to configure our environment for the simulation. Let’s change $ROS_PACKAGE_PATH so that it leaves out the public simulations available in ROSDS.

Let’s recompile the simulation_ws workspace in the folder /home/user/simulation_ws.

user:~$ cd ~/simulation_ws
user:~/simulation_ws$ rm -rf build devel
user:~/simulation_ws$ source /opt/ros/melodic/setup.bash
user:~/simulation_ws$ catkin_make
user:~/simulation_ws$ source devel/setup.bash

And then recompile catkin_ws as well.

user:~$ cd ~/catkin_ws
user:~/catkin_ws$ rm -rf build devel
user:~/catkin_ws$ source ~/simulation_ws/devel/setup.bash
user:~/catkin_ws$ catkin_make

We are ready to download the simulation.


 

Downloading and compiling Open Manipulator X

Let’s clone the necessary repositories into our workspace, inside ~/simulation_ws/src:

user:~$ cd ~/simulation_ws/src
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/DynamixelSDK.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/dynamixel-workbench.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/dynamixel-workbench-msgs.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/open_manipulator.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/open_manipulator_msgs.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/open_manipulator_simulations.git
user:~/simulation_ws/src$ git clone https://github.com/ROBOTIS-GIT/robotis_manipulator.git

After that, go one folder up and recompile the simulation_ws workspace.

user:~/simulation_ws/src$ cd ..
user:~/simulation_ws$ catkin_make

You should be ready to launch the simulation at this point!


 

Launching the simulation

Let’s use the ROSDS way of launching simulations. Open the Simulations menu and press Choose launch file.

Choose the file open_manipulator_gazebo.launch from the package open_manipulator_gazebo.

 

 

Press Start simulation and the web Gazebo client should open the simulation:

Great! Let’s play with it a little bit!


 

Controllers and GUI Operator

With the simulation ready, let’s start two other programs using the shell.

In a first web shell (shell 1), launch the controllers for the Gazebo simulator. With use_platform:=false, we are telling ROS it is not a real robot, but a simulated one:

user:~$ roslaunch open_manipulator_controller open_manipulator_controller.launch use_platform:=false

In another shell, shell 2, open the graphical interface for controlling the robot:

user:~$ roslaunch open_manipulator_control_gui open_manipulator_control_gui.launch

Your environment should be more or less like below:


 

Testing it!

Now, let’s send some commands to the robot. Check the GIF below. The actions are:

  • Play the simulation
  • Start the timer inside the GUI
  • Enable actuator inside the GUI
  • Send the commands: open and close the gripper
  • Send Home pose
  • Send Init pose
  • Send Custom pose


 

Conclusion

In a few steps, we managed to get the simulation and a demo working!

Don’t forget to check the official documentation of the robot: http://emanual.robotis.com/docs/en/platform/openmanipulator_x/overview/#overview

If you missed any of the steps, you can get a copy of the ROSject generated for this post!

Please, leave your comments and share it if you like this kind of post!

See you in the next one!

Cheers!

Creating a Restaurant Robot Simulation, Part 1- #ROS Projects


What we are going to learn

  • How to add modifications to the 3D model.
  • How to create a load sensor plugin for gazebo.
  • How to set up navigation, create a map and save table waypoints.

In summary, we will learn how to create a simulation that allows us to simulate a robot that serves in a restaurant.

List of resources used in this post

 

Open the ROSject

In order to see the simulation, you have to click on the ROSject link (http://www.rosject.io/l/ded2912/) to get a copy of the ROSject. You can now open the ROSject by clicking on the Open ROSject button.

Open ROSject - Barista Robot

 

Launching the simulation

With the ROSject open, you can launch the simulation by clicking Simulation, then Choose Launch File. Then you select the main_simple.launch file from the barista_gazebo package.

If everything went ok, you should now have the simulation like in the image below:

Barista Robot in ROSDS

 

The launch file we just selected can be found at ~/simulation_ws/src/barista/barista_gazebo/launch/main_simple.launch and has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<launch>
  <arg name="pause" default="false"/>
  <arg name="x" default="0"/>
  <arg name="y" default="0"/>
  <arg name="z" default="0.0"/>
  <arg name="roll" default="0"/>
  <arg name="pitch" default="0"/>
  <arg name="yaw" default="0" />

  <include file="$(find barista_worlds)/launch/start_world_simple10x10.launch">
    <arg name="pause" value="$(arg pause)" />

    <arg name="put_robot_in_world" value="true" />
    <arg name="put_robot_in_world_package" value="barista_gazebo" />
    <arg name="put_robot_in_world_launch" value="put_robot_in_world.launch" />

    <arg name="x" value="$(arg x)" />
    <arg name="y" value="$(arg y)" />
    <arg name="z" value="$(arg z)" />
    <arg name="roll" value="$(arg roll)"/>
    <arg name="pitch" value="$(arg pitch)"/>
    <arg name="yaw" value="$(arg yaw)"/>
  </include>
  
  <include file="$(find barista_description)/launch/rviz_localization.launch"/>


</launch>

As we can see, the launch file spawns the robot and opens RViz (the ROS visualization tool). You can see RViz by opening the Graphical Tools (Tools -> Graphical Tools).

Barista robot in RViz, in ROSDS

Where is the Barista robot defined

If you look at the launch file mentioned earlier, you can see that we use the put_robot_in_world.launch file to spawn the robot. The content of the file is:

<launch>
  <arg name="base" default="barista"/>
  <arg name="stacks" default="hexagons"/>
  <arg name="3d_sensor" default="asus_xtion_pro"/>

  <arg name="x" default="0"/>
  <arg name="y" default="0"/>
  <arg name="z" default="0.5"/>
  <arg name="roll" default="0"/>
  <arg name="pitch" default="0"/>
  <arg name="yaw" default="0" />
  
  <arg name="urdf_file" default="$(find xacro)/xacro.py '$(find barista_description)/urdf/barista/$(arg base)_$(arg stacks)_$(arg 3d_sensor).urdf.xacro'" />
  <param name="robot_description" command="$(arg urdf_file)" />

  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
    <param name="publish_frequency" type="double" value="30.0" />
  </node>

  <!-- Gazebo model spawner -->
  <node name="spawn_$(arg base)_model" pkg="gazebo_ros" type="spawn_model"
        args="-urdf -param robot_description -model $(arg base) -x $(arg x) -y $(arg y) -z $(arg z) -R $(arg roll) -P $(arg pitch) -Y $(arg yaw)"/>

  <!-- Diagnostics publication to simulate the kobuki mobile base -->
  <node name="sim_diagnostics_pub_node" pkg="barista_gazebo" type="sim_diagnostics_pub.py"
        respawn="false" output="screen">
  </node>
  <!-- Bumper/cliff to pointcloud (not working, as it needs sensors/core messages) -->
  <!-- include file="$(find turtlebot_bringup)/launch/includes/kobuki/bumper2pc.launch.xml"/-->
</launch>

We are not going to explain this file in detail because it will take us a long time. If you need further understanding of URDF files, we highly recommend the URDF for Robot Modeling course on Robot Ignite Academy.

In this file, we essentially load the barista_hexagons_asus_xtion_pro.urdf.xacro file located in the ~/simulation_ws/src/barista/barista_description/urdf/barista/ folder.

At the end of that file we can see that we basically spawn the robot and load the sensors:

<?xml version="1.0"?>
<!--
    - Base      : barista
    - Stacks    : hexagons
    - 3d Sensor : laser-hokuyo
-->    
<robot name="barista" xmlns:xacro="http://ros.org/wiki/xacro">

  <xacro:include filename="$(find turtlebot_description)/urdf/common_properties.urdf.xacro"/>
  <xacro:include filename="$(find turtlebot_description)/urdf/turtlebot_properties.urdf.xacro"/>
  <!-- Bases -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_kobuki.urdf.xacro" />
  <!-- Stacks -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_hexagons.urdf.xacro"/>
  <!-- 3D Sensors -->
  <!-- Barista Mods -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_mod.urdf.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_hokuyo.urdf.xacro" />

  <!-- Load Sensor and Common Macros and properties -->
  <xacro:include filename="$(find barista_description)/urdf/barista/macros.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/properties.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/materials.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_loadsensor.xacro" />
  
  <barista_kobuki/>
  <stack_hexagons parent="base_link"/>
  <barista_mod  bottom_parent="plate_middle_link" top_parent="plate_top_link"/>
  <barista_hokuyo  parent="plate_middle_link" x_hok="0.116647" y_hok="0.0" z_hok="0.045"/>

  <barista_loadsensor  parent="barista_top_link"
                       x_loadsensor="0.014395"
                       y_loadsensor="0.0"
                       z_loadsensor="${0.082804+(loadsensor_height/2.0)}"
                       r="${loadsensor_radius}"
                       l="${loadsensor_height}"
                       mass="${loadsensor_mass}"/>
  
</robot>
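Note the z offset passed to the load-sensor macro: it is the plate offset plus half the cylinder height, exactly as written in the `${0.082804+(loadsensor_height/2.0)}` expression. Evaluated in plain Python (the real loadsensor_height lives in properties.xacro; the value below is assumed for illustration):

```python
# Hypothetical property value; the real one lives in properties.xacro
loadsensor_height = 0.01   # meters (assumed for illustration)
plate_offset = 0.082804    # constant from the xacro expression above

# Mirrors ${0.082804 + (loadsensor_height / 2.0)}:
# the cylinder origin sits half its height above the mounting plate.
z_loadsensor = plate_offset + loadsensor_height / 2.0
print(round(z_loadsensor, 6))  # -> 0.087804
```

This is why the sensor cylinder rests exactly on top of the plate instead of intersecting it.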

The barista_loadsensor macro is defined in the barista_loadsensor.xacro file of the barista_description package, and has the following content:

<?xml version="1.0" ?>
<!-- 
  This is the load sensor used for detecting elements positioned ontop
 -->
<robot name="loadsensor" xmlns:xacro="http://ros.org/wiki/xacro">


  <xacro:macro name="barista_loadsensor" params="parent x_loadsensor y_loadsensor z_loadsensor r l mass">
    
    <link name="loadsensor_link">
	    <inertial>
		    <mass value="${mass}"/>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <cylinder_inertia mass="${mass}" r="${r}" l="${l}" />
	    </inertial>
	    <collision>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <geometry>
			    <cylinder length="${l}" radius="${r}"/>
		    </geometry>
	    </collision>
	    <visual>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <geometry>
			    <cylinder length="${l}" radius="${r}"/>
		    </geometry>
			<material name="red"/>
	    </visual>
    </link>

    <joint name="loadsensor_joint" type="fixed">
    	<origin xyz="${x_loadsensor} ${y_loadsensor} ${z_loadsensor}" rpy="0 0 0"/>
    	<parent link="${parent}"/>
    	<child link="loadsensor_link"/>
    </joint>
    

    <gazebo reference="loadsensor_link">
      <mu1 value="2000.0"/>
      <mu2 value="1000.0"/>
      <kp value="${kp}" />
      <kd value="${kd}" />
		<material>Gazebo/Red</material>
      <sensor name="loadsensor_link_contactsensor_sensor" type="contact">
        <always_on>true</always_on>
        <contact>
          <collision>base_footprint_fixed_joint_lump__loadsensor_link_collision_10</collision>
        </contact>
        <plugin name="loadsensor_link_plugin" filename="libgazebo_ros_bumper.so">
          <bumperTopicName>loadsensor_link_contactsensor_state</bumperTopicName>
          <frameName>loadsensor_link</frameName>
        </plugin>
      </sensor>
  	</gazebo>
  </xacro:macro>
  
  
  
</robot>

An important part of this file is the section that contains:

<plugin name="loadsensor_link_plugin" filename="libgazebo_ros_bumper.so">
          <bumperTopicName>loadsensor_link_contactsensor_state</bumperTopicName>
          <frameName>loadsensor_link</frameName>
        </plugin>

The plugin libgazebo_ros_bumper.so configures everything that has to do with the collisions of the load sensor.

You can see that it sets the topic /loadsensor_link_contactsensor_state to publish the data related to the load sensor.

You can see the data being published with:

rostopic echo /loadsensor_link_contactsensor_state

You can see that this topic alone publishes too much data.
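If all you need is whether something is sitting on the tray, that stream can be reduced to a simple flag. A pure-Python illustration (the `states` field loosely mirrors gazebo_msgs/ContactsState, which the bumper plugin publishes; treat the rest as a sketch):

```python
def load_present(contact_msg):
    """Return True if the bumper message reports at least one contact.

    contact_msg is expected to look like gazebo_msgs/ContactsState:
    it carries a 'states' list with one entry per active collision.
    """
    return len(contact_msg["states"]) > 0

# Simulated messages: an empty tray, then a cup placed on the sensor
assert load_present({"states": []}) is False
assert load_present({"states": [{"collision1_name": "cup"}]}) is True
```

A real node would subscribe to /loadsensor_link_contactsensor_state, apply this check to each message, and publish only the boolean.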

Since we will have to show data in the web app we are going to create, we use the /diagnostics topic to simulate some elements, such as the battery, in the format the real robot uses.

Showing the Battery status

In the barista_gazebo package, we created the sim_pick_go_return_demo.launch file. There you can see that we start the battery_charge_pub.py node, which basically subscribes to the /diagnostics topic and publishes just basic battery status information into the /battery_charge topic.
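The idea can be sketched without ROS: pick the battery entry out of a diagnostics-style list of statuses and extract a charge value. The names below (the "Battery" status name, the "Percent" key) are illustrative guesses, not the exact fields the real node uses:

```python
def battery_percent(statuses):
    """Extract a battery percentage from a list of diagnostic statuses.

    Each status is a dict with a 'name' and a 'values' mapping, loosely
    mirroring diagnostic_msgs/DiagnosticStatus. Returns None if absent.
    """
    for status in statuses:
        if "battery" in status["name"].lower():
            return float(status["values"].get("Percent", "nan"))
    return None

# Simulated /diagnostics content (field names are hypothetical)
diag = [
    {"name": "mobile_base/Battery", "values": {"Percent": "87.5"}},
    {"name": "mobile_base/Motors", "values": {"State": "ok"}},
]
print(battery_percent(diag))  # -> 87.5
```

The real node then republishes this single number on /battery_charge instead of the full diagnostics array.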

Let’s launch that file to start the main systems:

roslaunch barista_gazebo sim_pick_go_return_demo.launch

If you open a new shell, and type the command below, you should get the battery load:

rostopic echo /battery_charge

Bear in mind that in this case, the value is simulated, given that we are still not using a real robot.

In order to test that the load sensor works, the easiest way is to check the video we created: https://youtu.be/Uv9yF5zifz4?t=1348

ROS Navigation

If you look at the sim_pick_go_return_demo.launch file we mentioned earlier, you will see that it launches the navigation system, more specifically in the sections shown below:

<!-- Start the navigation systems -->
  <include file="$(find costa_robot)/launch/localization_demo.launch">
    <arg name="map_name" value="simple10x10"/>
    <arg name="real_barista" value="false"/>
  </include>

and

<!-- We start the waypoint generator to be able to reset tables on the fly -->
  <node pkg="ros_waypoint_generator"
        type="ros_waypoint_generator"
        name="ros_waypoint_generator_node">
  </node>

To start creating the map, please consider reloading the simulation (Simulations -> Change Simulation -> Choose Launch file -> main_simple.launch).

The files related to mapping can be found in the costa_robot package. There you can find the following files:

  • gmapping_demo.launch
  • localization_demo.launch
  • save_map.launch
  • start_mapserver.launch

To create a map, after making sure the simulation is already running, we run the following:

roslaunch costa_robot gmapping_demo.launch

Now we have the laser readings we need to create the map. To see the lasers, please check RViz using Tools -> Graphical Tools.

You can now move the robot around by running the command below in a new shell (Tools -> Shell), to generate the map:

roslaunch turtlebot_teleop sim_keyboard_teleop.launch

While we are moving around, the map is being generated.

Then, to save the generated map, we run:

roslaunch costa_robot save_map.launch map_name:="new_map"

The new map should have been generated at ~/simulation_ws/src/barista/costa_robot/map/generated/new_map.pgm.

Now that we have the map, let’s generate the Waypoints. For that, we first launch the localization using the generated map.

roslaunch costa_robot localization_demo.launch map_name:="new_map" real_barista:="false"

Now we generate the waypoints:

roslaunch barista_gazebo start_save_waypoints_system.launch map_name:="new_map" real_barista:="false"

In order to save the waypoints, in RViz we click Publish Point and click where the robot is, then we run:

rosservice call /waypoints_save_server "name: 'HOME'"

This saves the current location of the robot as HOME. We can now move the robot around and save the new location. To move the robot around, remember that we use:

roslaunch turtlebot_teleop sim_keyboard_teleop.launch

Now we click Publish Point in RViz again, click where the robot is located, and save the waypoint as TABLE1, for example:

rosservice call /waypoints_save_server "name: 'TABLE1'"

Great, so far we know how to start the map, start the localization and save the waypoints.
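Conceptually, the waypoint server just maps names to poses. A minimal stand-alone sketch of that bookkeeping (the real ros_waypoint_generator stores full poses and persists them to disk; this only shows the idea):

```python
class WaypointStore:
    """Minimal name -> pose bookkeeping, mimicking the waypoint-save idea."""

    def __init__(self):
        self._waypoints = {}

    def save(self, name, x, y, yaw):
        # Saving under an existing name overwrites the old waypoint,
        # which is what lets us "reset tables on the fly".
        self._waypoints[name] = (x, y, yaw)

    def get(self, name):
        return self._waypoints.get(name)

store = WaypointStore()
store.save("HOME", 0.0, 0.0, 0.0)
store.save("TABLE1", 2.5, 1.2, 1.57)  # hypothetical table pose
print(store.get("TABLE1"))  # -> (2.5, 1.2, 1.57)
```

Each /waypoints_save_server call corresponds to one save() with the robot's current localized pose.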

Youtube video

It may happen that you could not reproduce some of the steps mentioned here. If that is the case, remember that we have the live version of this post on YouTube. Also, if you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

How to Install ROS 2 Crystal over Ubuntu 18.04 and ROS Melodic

Hey ROS Developers!

In this post, we are going to install ROS 2 Crystal, step by step, on a fresh Ubuntu 18.04.

I’m using a regular desktop computer, the one I use for ROS development.

This tutorial is a report of my personal experience as a ROS developer.

All the steps were taken from the official tutorial: https://index.ros.org/doc/ros2/Installation/Crystal/Linux-Install-Binary

Let’s do it!

Step 1 – Configuration

The first step is adding the ROS repository to your Ubuntu sources.list.

user:~$ sudo apt update && sudo apt install curl gnupg2 lsb-release
(multiple lines from apt update)...
user:~$ curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
OK
user:~$ sudo sh -c 'echo "deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list'

Great!

Step 2 – Download

A bit different from ROS 1 (Melodic): we don’t install using sudo apt; instead, we download a release archive.

It was taken from: https://github.com/ros2/ros2/releases/tag/release-crystal-20191212

In my case, for an Ubuntu 18.04 desktop, I have chosen ros2-crystal…amd64.tar.bz2

Save it into your ~/Downloads folder.

Now, let’s create a new folder to keep the files:

user:~$ mkdir ~/ros2_crystal
user:~$ cd ~/ros2_crystal

Extract the files there:

user:~/ros2_crystal$ tar xf ros2-crystal-20191212-linux-xenial-amd64.tar.bz2

 


 

Installing and initializing rosdep

Let’s update Ubuntu’s package index once more:

user:~$ sudo apt update

We need to install the python-rosdep package, which is in charge of managing ROS dependencies:

user:~$ sudo apt install -y python-rosdep

Then we initialize rosdep and update it:

user:~$ sudo rosdep init
user:~$ rosdep update

Finally, we install some dependencies for this specific version:

user:~$ cd ~/ros2_crystal
user:~$ source /opt/ros/melodic/setup.bash
user:~/ros2_crystal$ CHOOSE_ROS_DISTRO=crystal
user:~/ros2_crystal$ ROS_PYTHON_VERSION=3
user:~/ros2_crystal$ rosdep install --from-paths ros2-linux/share --ignore-src --rosdistro $CHOOSE_ROS_DISTRO -y --skip-keys "console_bridge fastcdr fastrtps libopensplice67 libopensplice69 osrf_testing_tools_cpp poco_vendor rmw_connext_cpp rosidl_typesupport_connext_c rosidl_typesupport_connext_cpp rti-connext-dds-5.3.1 tinyxml_vendor tinyxml2_vendor urdfdom urdfdom_headers"

 

Installing the Autocomplete

In the case of this tutorial, we are working with Ubuntu 18.04 and ROS 1 Melodic, so the autocomplete installation goes like this:

user:~$ sudo apt install python3-pip
user:~$ sudo pip3 install argcomplete

 

Trying it!

Finally, let’s try some examples!

We have some nodes ready to be used from the installation process.

Open a shell, let’s call it Shell 1:

user:~$ . ~/ros2_crystal/ros2-linux/setup.bash
user:~$ ros2 run demo_nodes_cpp talker

You should see output like the following:

[INFO] [talker]: Publishing: 'Hello World: 1'
[INFO] [talker]: Publishing: 'Hello World: 2'
[INFO] [talker]: Publishing: 'Hello World: 3'
[INFO] [talker]: Publishing: 'Hello World: 4'
[INFO] [talker]: Publishing: 'Hello World: 5'
[INFO] [talker]: Publishing: 'Hello World: 6'
[INFO] [talker]: Publishing: 'Hello World: 7'
[INFO] [talker]: Publishing: 'Hello World: 8'
[INFO] [talker]: Publishing: 'Hello World: 9'

And in a second shell, Shell 2:

user:~$ . ~/ros2_crystal/ros2-linux/setup.bash
user:~$ ros2 run demo_nodes_py listener

The output should be something like:

[INFO] [listener]: I heard: [Hello World: 8]
[INFO] [listener]: I heard: [Hello World: 9]
[INFO] [listener]: I heard: [Hello World: 10]
[INFO] [listener]: I heard: [Hello World: 11]
[INFO] [listener]: I heard: [Hello World: 12]
[INFO] [listener]: I heard: [Hello World: 13]
[INFO] [listener]: I heard: [Hello World: 14]
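The talker/listener pair is the classic publish/subscribe demo: the two nodes never know about each other and only share a topic name. Stripped of ROS, the pattern looks like this (a toy in-process sketch, not rclpy code):

```python
class Topic:
    """A toy topic: callbacks subscribe, publishers fan messages out."""

    def __init__(self):
        self._callbacks = []

    def subscribe(self, callback):
        self._callbacks.append(callback)

    def publish(self, msg):
        # Deliver the message to every subscriber, like a ROS topic does.
        for callback in self._callbacks:
            callback(msg)

chatter = Topic()
heard = []
chatter.subscribe(lambda msg: heard.append(msg))  # the "listener"

for i in range(1, 4):                              # the "talker"
    chatter.publish("Hello World: %d" % i)

print(heard)  # -> ['Hello World: 1', 'Hello World: 2', 'Hello World: 3']
```

In real ROS 2 the fan-out happens over DDS between processes, but the decoupling is the same: you could start the listener before or after the talker.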

Conclusion

Finally, this is the basic installation process, completed with the following environment:

  • Ubuntu 18.04
  • ROS 1 Melodic
  • ROS 2 Crystal

If you like this kind of post, don’t forget to share it.

Leave a comment so we take your feedback to improve our blog posts.

Cheers!

 

Jupyter ROS: embedding ROS Data Visualization on Jupyter notebooks


What we are going to learn

Learn how to embed ROS data visualization inside a Jupyter notebook.

List of resources used in this post

Opening the ROSject

In today’s post, we are going to learn how to see ROS Data inside a Jupyter Notebook using the Jupyter ROS package. Let’s start by clicking on the ROSject link (http://www.rosject.io/l/d4b0981/). You should now have a copy of the ROSject.

Now, let’s open the notebook by clicking the Open ROSject button.

Open ROSject – Jupyter ROS Demo on ROSDS

When you open the ROSject, a default notebook will be opened automatically, with instructions on how to show ROS data in Jupyter.

The demos are possible thanks to the Jupyter-ROS project developed by Wolf Vollprecht from Quantstack. All credit goes to him and his team.

What follows is the list of ROS-enabled notebook demos that work off-the-shelf using the ROSDS integrated simulations. The original notebook demos were created by Wolf. At The Construct, we have only added the explanations required to understand the code and launch the simulations, so you can have a live demo without having to install and configure anything on your computer.

The ROSject provides everything already installed and configured. Hence, you don’t need to install Jupyter ROS, you only need to learn how to use it… and then use it! (for instructions about how to install on your own computer, please check the author’s repo)

You can freely use all the material from the ROSject to create your own notebooks with ROS.

List of demos

When you open the notebook, the list of demos is:

  • ROS 3D Grid: how to include an interactive 3D grid inside a notebook.
  • ROS Robot: how to add the robot model to the 3D grid.
  • ROS Laser Scan: how to show the robot laser on the 3D grid.
  • ROS TEB Demo: how to add interactive markers to the 3D viewer.

Where to find the notebooks inside the ROSject

All the notebook files are contained in the ROSject.

In order to see the actual files, use the IDE (top menu, Tools->IDE). Then navigate through the folders up to notebook_ws->notebooks.

Jupyter ROS - List of files in ROSDS

 

How to modify the provided notebooks directly in ROSDS

Use the Jupyter notebook tool to edit the notebooks included in this ROSject (or to create new ones). In case you close the notebook, you can re-open it by going to the top menu and selecting Tools->Jupyter Notebook.
Once you have the notebook open, edit it at your own will. Check the Jupyter documentation to understand all the possible options.

ROS Laser Scan demo

The notebooks of the ROSject contain four demos, but in this post, to keep things simple, we are only going to show the Laser Scan demo.

In order to get this working, we are going to use the simulation of a Turtlebot.

Remember that you have to have the ROS Bridge running in order for Jupyter ROS to work. If you haven’t started it yet, launch it now.

  1. Go to the ROSDS TAB in your browser
  2. On the top menu, select Tools->Shell
  3. On the opened shell type the following command:
roslaunch ~/notebook_ws/notebooks/launch/bridge.launch --screen

Launching the simulation

We’ll need something to publish the URDF and the TF. For that, we are going to use a Gazebo simulation. Let’s start a Turtlebot2 simulation.
  1. Go to the ROSDS TAB and go to its top menu.
  2. Select Simulations.
  3. On the panel that appears, click on the label that says Select world.
  4. Then on the drop-down menu that appears, move down until you see the AR Drone world.
  5. On the panel that appears, click on the label that says Select robot.
  6. Then on the drop-down menu that appears, move down until you see the Turtlebot 2.
  7. Click on it and then press Start simulation
Turtlebot in a green area in ROSDS

Turtlebot in a green area in ROSDS

 

A new window should appear and load the simulation. Once loaded you should see something similar to this:

Turtlebot opened in a green area in ROSDS

Turtlebot opened in a green area in ROSDS

First, start the demo

First, import the required classes from Jupyter ROS.

Click on the next cell and then press Shift+Enter to activate the Python code. IMPORTANT: the import of this class can take some time! You will know that the code is still running because the number on the left of the cell has changed to a * character. Do not move to the next step until the * is gone.

try:
    from sidecar import Sidecar
except:
    pass
    
from jupyros import ros3d

Second, create an instance of the viewer, connect to ROS and get the TF

Click on the next cell and then press Shift+Enter to activate the Python code.

v = ros3d.Viewer()
rc = ros3d.ROSConnection()
tf_client = ros3d.TFClient(ros=rc, fixed_frame='/base_footprint')

Third, connect to the topic of the laser

Click on the next cell and then press Shift+Enter to activate the Python code.

laser_view = ros3d.LaserScan(topic="/kobuki/laser/scan", ros=rc, tf_client=tf_client)

Fourth, get the 3D grid and the robot model

Click on the next cell and then press Shift+Enter to activate the Python code.

g = ros3d.GridModel()
import os
urdf = ros3d.URDFModel(ros=rc, tf_client=tf_client, path=os.environ.get('JUPYROS_ASSETS_URL', 'http://localhost:3000'))
v.objects = [g, laser_view, urdf]

Fifth, visualize everything

Click on the next cell and then press Shift+Enter to activate the Python code.

try:
    sc = Sidecar(title='Sidecar Output')
    with sc:
        display(v)
except:
    display(v)

Sixth, change the visualization style (if required)

Click on the next cell and then press Shift+Enter to activate the Python code.

v.layout.height= '1000px'
g.color = '#CCC'
laser_view.point_size = 0.1
laser_view.static_color = "green"

Move the robot around and watch the laser change!

Open a new shell (Tools -> Shell) and type the following command to make the robot move around:

rostopic pub /cmd_vel geometry_msgs/Twist "linear:
  x: 0.2
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.3"

That is it! If you did everything correctly, you should see the robot moving around in the Jupyter notebook.

Learn more about Jupyter ROS

If you want to learn more about Jupyter ROS, here is the interview we did with Wolf for the ROS Developers Podcast, where he explains the ins and outs of Jupyter ROS.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

Overlaying ROS Workspaces


Overlaying workspaces is one of the critical topics of ROS, and it can be confusing even for those who have been working with ROS for some time. In this post, we are going to clarify how it works and how to manage it.


 

ROSDS Initial environment

As usual, we are going to work with ROSDS, the ROS Development Studio. When you create a new ROSJect, you start with 2 workspaces in your environment, plus one more designed to store the ROSDS public simulations, and the pre-installed ROS packages. Let's check this out!

Open a web shell and type the following command:

user:~$ echo $ROS_PACKAGE_PATH

You must see something like:

/home/user/catkin_ws/src:/home/user/simulation_ws/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

The workspaces are separated by “:”. Let’s check them one by one, following the order (which is very important!):

  1. /home/user/catkin_ws/src
  2. /home/user/simulation_ws/src
  3. /home/simulations/public_sim_ws/src
  4. /opt/ros/kinetic/share

This order was pre-defined by The Construct engineering team to make it suitable for working with both public simulations and the user’s custom simulations. The order means:

ROS commands look for packages starting from the workspace /home/user/catkin_ws/src, then /home/user/simulation_ws/src, and so on. Remember, you can NOT have two packages with the same name in the same workspace, but you CAN have packages with the same name in different workspaces! If the same package exists in two or more workspaces, the first one found will be used.

So, if you want to overwrite a simulation package from /home/simulations/public_sim_ws/src, you can do so just by creating or cloning a package with the same name in /home/user/simulation_ws/src.
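As a sketch of that shadowing behavior (using a hypothetical package name, my_simulation, purely for illustration):

```shell
# Copy (or git clone) a package with the SAME name into a workspace that
# appears earlier in $ROS_PACKAGE_PATH. "my_simulation" is a hypothetical
# package name used only for this example.
cp -r /home/simulations/public_sim_ws/src/my_simulation ~/simulation_ws/src/

# Rebuild and re-source so the overlay takes effect
cd ~/simulation_ws
catkin_make
source devel/setup.bash

# rospack resolves the FIRST match along $ROS_PACKAGE_PATH, so it should
# now point to the copy in ~/simulation_ws/src, not the public one
rospack find my_simulation
```

From then on, roslaunch and rosrun will pick up your copy instead of the public one, because it sits earlier in the search order.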


 

Re-defining the $ROS_PACKAGE_PATH

“What if I want to add a new workspace to $ROS_PACKAGE_PATH? This is an environment variable, so is it just a matter of exporting it the way I want?” WRONG!

This environment variable is generated by the devel/setup.bash file of each workspace; defining it is just one of the effects of sourcing a workspace!

If you need to re-define your $ROS_PACKAGE_PATH, you need to do it in a safe and correct way, as described below:

  1. Source the installation path of ROS.
  2. Recompile the workspace you want, right after sourcing the installation folder.
  3. Source its devel/setup.bash.
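Put together, the three steps look like this. This is a sketch that assumes ROS Kinetic and a workspace named my_ros_workspace, the same one created in the practical example later in this post:

```shell
# 1. Source the installation path of ROS (Kinetic in this ROSJect)
source /opt/ros/kinetic/setup.bash

# 2. Recompile the workspace right after sourcing the installation folder.
#    Removing build/ and devel/ forces a clean build, so the workspace
#    records only the environment sourced above.
cd ~/my_ros_workspace
rm -rf build devel
catkin_make

# 3. Source its devel/setup.bash
source devel/setup.bash

# The new $ROS_PACKAGE_PATH starts with this workspace, then the ROS install
echo $ROS_PACKAGE_PATH
```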

At this moment, your $ROS_PACKAGE_PATH must be:

/home/user/my_ros_workspace/src:/opt/ros/kinetic/share

You only have 1 workspace defined.


 

Re-defining the $ROS_PACKAGE_PATH – Practical example

Let’s try it in ROSDS. First, I will create a new workspace:

user:~$ mkdir -p my_ros_workspace/src
user:~$ cd my_ros_workspace/src
user:~/my_ros_workspace/src$ catkin_init_workspace
user:~/my_ros_workspace/src$ cd ..
user:~/my_ros_workspace$ source /opt/ros/kinetic/setup.bash
user:~/my_ros_workspace$ catkin_make

Let’s source the devel/setup.bash of the new workspace and check our $ROS_PACKAGE_PATH:

user:~/my_ros_workspace$ source devel/setup.bash
user:~/my_ros_workspace$ echo $ROS_PACKAGE_PATH

And the result must be:

/home/user/my_ros_workspace/src:/opt/ros/kinetic/share

Even though we have many other workspaces defined, our $ROS_PACKAGE_PATH considers only one workspace! That’s because we have sourced just this workspace’s devel/setup.bash.


 

Overlaying workspaces

Now, let’s do something more advanced. We want to have more workspaces in our $ROS_PACKAGE_PATH. But let’s check something first:

Execute the following:

user:~$ source catkin_ws/devel/setup.bash
user:~$ echo $ROS_PACKAGE_PATH

You must see:

/home/user/catkin_ws/src:/home/user/simulation_ws/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

You still have the previous $ROS_PACKAGE_PATH, because it was recorded when catkin_ws was built. The conclusion is that each workspace has its own $ROS_PACKAGE_PATH defined.

Now, we are going to add public_sim_ws to our new workspace. This is the way to make the public simulations provided by The Construct available to your new workspace.

user:~$ source /home/simulations/public_sim_ws/devel/setup.bash
user:~$ echo $ROS_PACKAGE_PATH

You must have:

/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

It means ROS is only considering the public_sim_ws. Let’s put our new workspace on top of it.

user:~$ cd my_ros_workspace/
user:~/my_ros_workspace$ rm -rf build devel
user:~/my_ros_workspace$ catkin_make
user:~/my_ros_workspace$ source devel/setup.bash

And you must have a completely new $ROS_PACKAGE_PATH:

/home/user/my_ros_workspace/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

Conclusion

This is how you can work with multiple workspaces. Bear in mind that ROS defines rules to give priority, but you are the one in charge of configuring the order of your workspaces!

Don’t forget to check the official reference: http://wiki.ros.org/catkin/Tutorials/workspace_overlaying

And if you like this kind of post, don’t forget to share it.

Leave a comment for us to know your personal feedback!

Cheers!
