ROS Mini Challenge #9 – Fusing data to improve robot localization with ROS


What we are going to learn

Learn how to improve robot localization with data from different sensors

List of resources used in this post

Where to find the code

Once you open the ROSject (buff.ly/2RifSjn), you will get a copy of it. You just have to click Open. Once open, inside the catkin_ws folder, you will find all the ROS packages associated with this challenge. You can have a look at it using the IDE, for instance.

To see the full code, open the IDE by going to the top menu and select Tools->IDE.

Code Editor (IDE) - ROSDS


Launching the simulation

Go to the top menu and select Simulations. On the menu that appears, click on the Choose launch file… button.
Choose launch file to open simulation in ROSDS



Now, from the launch files list, select the launch file named rotw9.launch from the sumit_xl_course_basics package.

ROS Mini Challenge #9 - Select launch file


Finally, click on the Launch button to start the simulation. In the end, you should get something like this:
Summit XL gazebo simulation in ROSDS, ROSject of the week



The problem to be solved

As you can see, this ROSject contains 1 package inside its catkin_ws workspace: rotw9_pkg. This package contains a launch file and a configuration file, which are used to start an ekf localization node. This node is provided by the robot_localization package, and its main purpose is to fuse different sensor data inputs in order to improve the localization of a robot.

In our case, we are going to fuse Odometry data (which has been tweaked) with Imu data (which is correct). So, the main purpose of this challenge is to improve the Summit XL odometry data, since the initial one is a little bit distorted.
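The idea behind this kind of fusion can be illustrated with a toy example: two noisy estimates of the same quantity are combined by weighting each with the inverse of its variance. This is just the intuition, not the actual robot_localization EKF:

```python
# Toy illustration of the idea behind sensor fusion: combine two noisy
# estimates of the same quantity, weighting each by the inverse of its
# variance. This is NOT the robot_localization EKF, only the intuition.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two scalar estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused_est, fused_var

# Noisy odometry says yaw = 0.30 rad (variance 0.04); the IMU, which we
# trust more, says yaw = 0.10 rad (variance 0.01).
yaw_est, yaw_var = fuse(0.30, 0.04, 0.10, 0.01)
# The fused estimate lands much closer to the trusted IMU value.
```

Note how the fused variance is smaller than either input: combining the two sources leaves you more certain than either sensor alone, which is exactly why fusing the (correct) IMU data improves the noisy odometry.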

In order to test that it works as expected, first of all, we need to know the odometry topics. For this, we can use the following command:

rostopic list | grep odom
This command will give us the following topics:

/noisy_odom
/robotnik_base_control/odom

For this challenge, we are going to use the /noisy_odom topic, which basically is the original odometry data with some extra noise added to it.
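As an aside, a noisy odometry topic like this one is typically produced by adding zero-mean Gaussian noise to the original readings. A minimal sketch of the idea (hypothetical; this is not the ROSject's actual noise node):

```python
import random

# Hypothetical sketch of how a topic like /noisy_odom could be produced:
# take the true odometry reading and add zero-mean Gaussian noise.
# (Illustrative only; this is not the ROSject's actual noise node.)

random.seed(42)  # fixed seed so the example is deterministic

def add_noise(true_value, stddev=0.05):
    """Return the reading corrupted with zero-mean Gaussian noise."""
    return true_value + random.gauss(0.0, stddev)

true_x = 1.25
noisy_x = add_noise(true_x)  # close to 1.25, but not equal to it
```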

You can have a look at the current odometry using RViz (rosrun rviz rviz) and adding an Odometry display. You should get something similar to this:
Noisy Odometry example in ROSDS


As you can see, the Odometry readings are not very stable.
So now let’s start our node in order to correct the odometry readings with the following command:

roslaunch rotw9_pkg start_ekf_localization.launch
This command will generate a new Odometry topic, named /odometry/filtered, which will contain the resulting Odometry data (fusing the /noisy_odom data with the Imu data).
If you visualize this new Odometry, you will get something like this. As you can see, the new Odometry is much more stable than the original one. Great!
Correct (expected) Odometry data in ROSDS


NOTE: In order to properly visualize the differences between the 2 Odometry readings, you should modify the arrow size and color on the different displays.

Solving the ROS Mini Challenge

Ok, so… where’s the problem? If you have tried to reproduce the steps described above, you have already seen that it DOES NOT WORK: the /odometry/filtered topic (the red arrow that appears in RViz) is not shown as expected.
Let’s start by having a look at the files inside the rotw9_pkg package, to figure out where the error (or errors) are.
If we look at the start_ekf_localization.launch file, everything seems fine:
<launch>

<!-- Run the EKF Localization node -->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
<rosparam command="load" file="$(find rotw9_pkg)/config/ekf_localization.yaml"/>
</node>

</launch>

Since the launch file loads the rotw9_pkg/config/ekf_localization.yaml file, let’s have a look at it:

# Configuration for robot odometry EKF
#
frequency: 50

two_d_mode: true

publish_tf: false

odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /noisy_odom
odom0_config: [true, true, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false

imu0: /imu/data
imu0_config: [false, false, false,
              false, false, true,
              false, false, false,
              false, false, true,
              true, false, false]
imu0_differential: false

process_noise_covariance: [0.05, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0.05, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0.06, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0.03, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0.03, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0.06, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0.025, 0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0.025, 0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0.04, 0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0.01, 0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.01, 0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.02, 0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.01, 0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.01, 0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.015]


initial_estimate_covariance: [1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9,  0,     0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     1e-9,  0,     0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     1e-9,  0,    0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     1e-9, 0,    0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    1e-9, 0,
                                                      0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    1e-9]
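Both matrices above are 15x15 and diagonal, written as flattened row-major lists of 225 numbers. If you ever need to generate one yourself, it can be built from just the 15 diagonal entries (the helper below is ours, for illustration):

```python
# Build a flattened row-major square matrix from its diagonal entries,
# the way process_noise_covariance / initial_estimate_covariance are laid out.

def diagonal_covariance(diag):
    """Flattened row-major square matrix with `diag` on the diagonal."""
    n = len(diag)
    return [diag[i] if i == j else 0.0 for i in range(n) for j in range(n)]

process_diag = [0.05, 0.05, 0.06, 0.03, 0.03, 0.06,
                0.025, 0.025, 0.04, 0.01, 0.01, 0.02,
                0.01, 0.01, 0.015]
flat = diagonal_covariance(process_diag)  # 225 values, 15 of them non-zero
```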

We are not going to explain the configurations in the YAML file in depth here; otherwise, the post would be really long. In case you want to understand them better, we highly recommend the Fuse Sensor Data to Improve Localization course in Robot Ignite Academy (www.robotigniteacademy.com/en/).

Ok, let’s go straight to the point.

In the YAML file above, the parameters we are going to focus on are:

odom_frame: odom
base_link_frame: base_link
world_frame: odom

We see that there are frames named odom and base_link. We have to make sure these frames exist in the simulation. Let’s do that by generating a TF Tree with the commands below:

cd ~/catkin_ws/src
rosrun tf view_frames

The commands above generate a file named frames.pdf that contains the full TF Tree of the simulation, with all the connections, the frame names, etc.

You can easily download the frames.pdf using the IDE (Code Editor) by selecting it and clicking Download.

After opening that file, we can see the names of the links:

TF Tree for Summit XL Simulation


As we can see, the odom frame is actually named summit_xl_a_odom, and the base_link frame is named summit_xl_a_base_link. Let’s then fix the YAML file with the correct values and save the file:

odom_frame: summit_xl_a_odom
base_link_frame: summit_xl_a_base_link
world_frame: summit_xl_a_odom

Now we can try to launch our localization node again:

roslaunch rotw9_pkg start_ekf_localization.launch

Unfortunately, the red arrow in RViz (topic /odometry/filtered) is still not as we expect.

Let’s now check the odom0_config settings in the YAML file:

odom0: /noisy_odom
odom0_config: [true, true, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false

The error is actually in that matrix.

The first row is for the X, Y and Z positions; the second for roll, pitch and yaw; the third for the linear velocities; the fourth for the angular velocities; and the final row for the linear accelerations.

The problem is that we are fusing the X and Y positions, which are precisely the values corrupted by the added noise. More details on this can be found in the documentation of the robot_localization package. If we just set those values to false, we have the following:

odom0: /noisy_odom
odom0_config: [false, false, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false
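The 15 booleans in odom0_config / imu0_config refer, in this fixed order, to [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]. A small helper (ours, for illustration) makes it easy to see which variables a given config actually fuses:

```python
# Map the 15-element boolean config vector used by robot_localization
# to the names of the state variables it enables.

STATE_VARS = ["x", "y", "z", "roll", "pitch", "yaw",
              "vx", "vy", "vz", "vroll", "vpitch", "vyaw",
              "ax", "ay", "az"]

def fused_variables(config):
    """Return the names of the state variables enabled in a sensor config."""
    return [name for name, used in zip(STATE_VARS, config) if used]

# The corrected odom0_config from above:
odom0_config = [False, False, False,
                False, False, True,
                True, True, False,
                False, False, True,
                False, False, False]
print(fused_variables(odom0_config))  # -> ['yaw', 'vx', 'vy', 'vyaw']
```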

If we now try to launch our localization node again, we should see the correct values in RViz:

roslaunch rotw9_pkg start_ekf_localization.launch

Remember that if you need a deeper understanding on fusing sensor data, the Fuse Sensor Data to Improve Localization course in Robot Ignite Academy (www.robotigniteacademy.com/en/) can help you.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

 

ROS Mini Challenge #8 – Extracting the highest value from laser readings



What we are going to learn

Learn how to read data from a laser topic and extract the highest value.

List of resources used in this post


Where to find the code

Once you open the ROSject (http://buff.ly/2PU4ucB), you will get a copy of it. You just have to click Open. Once open, inside the catkin_ws folder, you will find all the ROS packages associated with this challenge. You can have a look at it using the IDE, for instance.

To see the full code, open the IDE by going to the top menu and select Tools->IDE

Code Editor (IDE) - ROSDS


Launching the simulation

Go to the top menu and select Simulations. On the menu that appears, click on the Choose launch file… button.

Choose launch file to open simulation in ROSDS


Now, from the launch files list, select the launch file named rotw5.launch from the rosbot_gazebo package.


Finally, click on the Launch button to start the simulation. In the end, you should get something like this:

Turtlebot robot inside a wall in ROSDS



The problem to be solved

As you can see, this ROSject contains 1 package inside its catkin_ws workspace: rotw8_pkg. This package contains a very simple Python script, which defines a Python class named Challenge with some functions in it.

The main purpose of this script is to get the highest value from all the laser readings, and also the position in the array of this particular reading.

In order to test that it works as expected, all you have to do is the following:

First, make sure you source your workspace so that ROS can find your package:
source ~/catkin_ws/devel/setup.bash


Now, start the program with the following command:
rosrun rotw8_pkg rotw8_code.py

If everything is correct, you will get a message like this in the Shell:


ROS Mini Challenge #8 - Expected output


NOTE: The maximum value and its position in the array may vary a little bit from the ones in the image above (but not much).

Solving the ROS Mini Challenge

Ok, so… where’s the problem? If you have tried to reproduce the steps described above you have already seen that it DOES NOT WORK. When you run the programs introduced above, the values you get are not the correct ones at all. So… what’s going on?

When you launch rosrun rotw8_pkg rotw8_code.py, the output is not the expected one:

$ rosrun rotw8_pkg rotw8_code.py
Highest value is 4.50035715103 and it is in the position 0 of the array.

The difference is almost 2 meters, and position 0 (zero) is very different from 202, which is more or less the expected one. We expect something around position 202 because position 0 of the scan is at the robot's right, position 719 at its left, and position 360 right in front, so the highest distance should be close to the exit, which is around position 202. You may understand this better by looking at the image below:

Laser range positions in the turtlebot simulation in ROSDS

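Under this geometry (720 beams spread over 180 degrees, index 0 at the robot's right), the bearing of any array index can be computed directly. A quick sanity check in plain Python; the endpoint angles are our assumption from the figure:

```python
import math

# Assumed scan geometry from the figure: 720 beams spanning 180 degrees,
# index 0 at the robot's right (-90 deg), index 360 straight ahead,
# index 719 at its left (+90 deg).

ANGLE_MIN = -math.pi / 2     # bearing of beam 0 (robot's right)
ANGLE_INC = math.pi / 719    # 180 degrees spread over 720 beams

def beam_angle_deg(index):
    """Bearing of a given index of the ranges array, in degrees."""
    return math.degrees(ANGLE_MIN + index * ANGLE_INC)

exit_bearing = beam_angle_deg(202)  # roughly -39 deg, ahead and to the right
```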


If we check the ~/catkin_ws/src/rotw8_pkg/src/rotw8_code.py file, its original code is:

#! /usr/bin/env python

import rospy
import time
from sensor_msgs.msg import LaserScan

class Challenge:
    def __init__(self):
        self.sub = rospy.Subscriber("/kobuki/laser/scan", LaserScan, self.laser_callback)
        self.laser_msg = LaserScan()
    def laser_callback(self, msg):
        self.laser_msg = msg

    def get_laser_full(self):
        time.sleep(1)
        return [self.laser_msg.ranges[0], self.laser_msg.ranges[719]]

    def get_highest_lowest(self):
        l = self.get_laser_full()
        i = 0
        maxim = -1
        for value in l:
            if value >= maxim and str(value) != "inf":
                maxim = value
                max_pos = i
            i = i + 1

        print "Highest value is " + str(maxim) + " and it is in the position " + str(max_pos) + " of the array."

if __name__ == '__main__':
    rospy.init_node('rotw8_node', anonymous=True)
    challenge_object = Challenge()
    try:
        challenge_object.get_highest_lowest()

    except rospy.ROSInterruptException:
        pass

If we look carefully, we can notice that the problem is in the get_laser_full function:

 def get_laser_full(self):
        time.sleep(1)
        return [self.laser_msg.ranges[0], self.laser_msg.ranges[719]]

As we can see, it only returns two values of the laser.

To solve the problem, we just replace this function with:

 def get_laser_full(self):
        time.sleep(1)
        return self.laser_msg.ranges

If you now save the file and run the rosrun rotw8_pkg rotw8_code.py command again, you should get the desired output:

$ rosrun rotw8_pkg rotw8_code.py
Highest value is 7.23973226547 and it is in the position 203 of the array.

Hey, now the value is the correct one. The problem was really easy, wasn’t it?
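As a side note, once get_laser_full returns the whole ranges sequence, the scanning loop in get_highest_lowest could also be written with Python built-ins. A sketch under the same rule as the original (inf readings are skipped):

```python
# Idiomatic alternative to the manual scanning loop: pair each finite
# reading with its index, then let max() pick the largest value.

def highest_reading(ranges):
    """Return (value, position) of the largest finite reading."""
    finite = [(v, i) for i, v in enumerate(ranges) if v != float("inf")]
    return max(finite)  # compares by value first, like the original loop

value, position = highest_reading([1.0, float("inf"), 7.2, 3.5])
# value is 7.2, found at position 2 of the array
```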

If you are still struggling to understand the code, or you want to master your ROS Skills, I highly recommend you take some courses in Robot Ignite Academy: http://www.robotigniteacademy.com/en/

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

Developing Web Interfaces For ROS Robots #4 – Streaming robot’s camera on the web page


Hello ROS Developers,

This is the 4th of 4 posts of the series Developing web interfaces for ROS Robots.

In this post, we are going to stream the images of the robot’s camera on the webpage.



1 – Loading a new JavaScript library

Before we go to the camera streaming server or the code that opens its connection, we need to import the library that makes everything possible: the MJPEG canvas library.

Let’s import it in our <head> section:

    ...

    <script src="https://cdn.jsdelivr.net/npm/eventemitter2@5.0.1/lib/eventemitter2.min.js">
    </script>
    <script type="text/javascript" src="https://static.robotwebtools.org/mjpegcanvasjs/current/mjpegcanvas.min.js">
    </script>

</head>



2 – Adding the camera viewer element to the page

Now for the webpage, we need a new element there which will define where the camera images are going to be shown:

Let’s use the div row we have created for the buttons (in the previous post) to embed the camera. It must be replaced by the following code:

        <div class="row">
            <div class="col-md-6 row">
                <div class="col-md-12 text-center">
                    <h5>Commands</h5>
                </div>

                <!-- 1st row -->
                <div class="col-md-12 text-center">
                    <button @click="forward" :disabled="loading || !connected" class="btn btn-primary">Go straight</button>
                    <br><br>
                </div>

                <!-- 2nd row -->
                <div class="col-md-4 text-center">
                    <button @click="turnLeft" :disabled="loading || !connected" class="btn btn-primary">Turn left</button>
                </div>
                <div class="col-md-4 text-center">
                    <br>
                    <button @click="stop" :disabled="loading || !connected" class="btn btn-danger">Stop</button>
                    <br>
                    <br>
                </div>
                <div class="col-md-4 text-center">
                    <button @click="turnRight" :disabled="loading || !connected" class="btn btn-primary">Turn right</button>
                </div>

                <!-- 3rd row -->
                <div class="col-md-12 text-center">
                    <button @click="backward" :disabled="loading || !connected" class="btn btn-primary">Go straight</button>
                </div>
            </div>

            <div class="col-md-6">
                <div id="mjpeg"></div>
            </div>
        </div>

Yes, the camera code is only the final part:

    ...
    <div class="col-md-6">
        <div id="mjpeg"></div>
    </div>
</div>

3 – Webserver for image streaming

The same way we needed to start the rosbridge server, we need to start a specific ROS process to stream the images:

rosrun web_video_server web_video_server _port:=11315

We are specifying the port at the end because of ROSDS, which provides us a specific public address for this port. If you are working locally, you can access it at localhost:&lt;port&gt;. The port number is shown in the terminal where you run the command.



4 – Adjusting JavaScript code

In our main.js file, let’s add the camera connector. It’s basically a way to make the pre-defined element (in our HTML) receive the images.

  • Create a new method:
        setCamera: function() {
            console.log('set camera method')
            this.cameraViewer = new MJPEGCANVAS.Viewer({
                divID: 'mjpeg',
                host: '54.167.21.209',
                width: 640,
                height: 480,
                topic: '/camera/rgb/image_raw',
                port: 11315,
            })
        },

Notice that you have to change the attributes host, topic and port according to your configuration.

The attributes width and height are going to be applied to the size of the element.

  • Call the method whenever the rosbridge server is connected (inside the callback)
            this.ros.on('connection', () => {
                this.logs.unshift((new Date()).toTimeString() + ' - Connected!')
                this.connected = true
                this.loading = false
                this.setCamera()
            })

Pay attention to the IP we are using in the code! You will have a different IP than the one used in this post. If you are using ROSDS, get your public IP by executing the following command:

public_ip

If you are on your local computer, it must be:

localhost

5 – Testing the webpage

At this point you must have something similar to the image below:

We have used a different world this time to have some obstacles that we can identify in the camera. You can switch between many other simulations to see different results on the camera!


ROSject created along with the post:

http://www.rosject.io/l/ddd21cc/

[ROS Mini Challenge] #7 – make a robot follow another robot


In this post, we will see how to make a robot follow another robot. We’ll make the iRobot follow the big turtle all around the world when it moves, using ROS TF broadcaster and listener nodes.

PS: This ROS project is part of our ROS Mini Challenge series, which gives you an opportunity to win an amazing ROS Developers T-shirt! This challenge is already solved. For updates on future challenges, please stay tuned to our Twitter channel.

Step 1: Grab a copy of the ROS Project containing the code for the challenge

Click here to get your own copy of the project. If you don’t have an account on the ROS Development Studio, you will need to create one. Once you create an account or log in, we will copy the project to your workspace. That done, open your ROSject using the Open button. This might take a few moments, please be patient.

You should now see a notebook with detailed instructions about the challenge. This post includes a summary of these instructions as well as the solution to the challenge.

PS: Please ignore the Claim your Prize! section because…well…you are late to the party 🙂

Step 2: Start the Simulation and get the robots moving

  1. Click on the Simulations menu and then Choose launch file . In the dialog that appears, select rotw7.launch under turtle_tf_3d package. Then click the Launch button. You should see a Gazebo window popup showing the simulation.
  2. Get the robots moving. Pick a Shell from the Tools menu and run the following commands:
user:~$ source ~/catkin_ws/devel/setup.bash
user:~$ roslaunch rotw7_pkg irobot_follow_turtle.launch

At this point, you should already see the iRobot moving towards the big turtle.

Nothing happened? Heck, we gotta fix this! Let’s do that in the next section.

Step 3: Let’s find the problem

So the robots didn’t move as we expected. And we had this error message:

[INFO] [1580892397.791963, 77.216000]: Retrieveing Model indexes
[INFO] [1580892397.860043, 77.241000]: Robot Name=irobot, is NOT in model_state, trying again

The error message above says it cannot find the model name specified in the code, so let’s check that up. Fire up the IDE from the Tools menu and browse to the directory catkin_ws/src/rotw7_pkg/scripts. We have two Python scripts in there:

  • turtle_tf_broadcaster.py
  • turtle_tf_listener.py

The robot model names are specified on line 19 of turtle_tf_broadcaster.py, in the publisher_of_tf function:

robot_name_list = ["irobot","turtle"]

Let’s check if we can find these robots in the simulation, using a Gazebo service:

user:~$ rosservice call /gazebo/get_world_properties "{}"
sim_time: 424.862
model_names: [ground_plane, coke_can, turtle1, turtle2]
rendering_enabled: True
success: True
status_message: "GetWorldProperties: got properties"

So we see that the names we specified are not in the simulation! The robots we need are turtle2 and turtle1.

Also, on line 19 of turtle_tf_listener.py, the code is publishing to “cmd_vel” (the topic that moves the robot) of the follower robot:

turtle_vel = rospy.Publisher('/cmd_vel', geometry_msgs.msg.Twist,queue_size=1)

But, which of the turtles is the follower, and what is the correct topic for its “cmd_vel”? We have a hint from the launch file irobot_follow_turtle.launch:

<?xml version="1.0" encoding="UTF-8"?>
<launch>
    <include file="$(find rotw7_pkg)/launch/run_turtle_tf_broadcaster.launch"/>
    <include file="$(find rotw7_pkg)/launch/run_turtle_tf_listener.launch">
        <arg name="model_to_be_followed_name" value="turtle1" />
        <arg name="follower_model_name" value="turtle2" />
    </include>
</launch>

So the follower is turtle2. Now, let’s check what its “cmd_vel” topic is. It’s specified as /cmd_vel in the code, but is this true? Let’s check the list of topics:

user:~$ rostopic list
#...
/turtle1/cmd_vel
/turtle2/cmd_vel

Probably, it’s /turtle2/cmd_vel. How do we know? Let’s publish to both /cmd_vel and /turtle2/cmd_vel and see which works.

user:~$ rostopic pub /cmd_vel
Display all 152 possibilities? (y or n)
user:~$ rostopic pub /turtle2/cmd_vel geometry_msgs/Twist "linear:
  x: 0.2
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.0"
publishing and latching message. Press ctrl-C to terminate
^Cuser:~$ rostopic pub /turtle2/cmd_vel geometry_msgs/Twist "linear:
  x: 0.0
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.0"
publishing and latching message. Press ctrl-C to terminate

Publishing to /turtle2/cmd_vel works. /cmd_vel didn’t work.

Step 4: Let’s fix the problem

We saw the problems in Step 3, now let’s implement the fix and test again.

On line 19 of turtle_tf_broadcaster.py, change the list to reflect the real turtle names:

robot_name_list = ["turtle1","turtle2"]

Also, on line 19 of turtle_tf_listener.py, change /cmd_vel to /turtle2/cmd_vel:

turtle_vel = rospy.Publisher('/turtle2/cmd_vel', geometry_msgs.msg.Twist,queue_size=1)

Now rerun the commands to move the robots:

user:~$ source ~/catkin_ws/devel/setup.bash
user:~$ roslaunch rotw7_pkg irobot_follow_turtle.launch

You should now see the iRobot moving towards the big turtle. Now you can start moving the Turtle using the keyboard. Pick another Shell from the Tools menu and run the following command:

user:~$ roslaunch turtle_tf_3d turtle_keyboard_move.launch

Move the big turtle around with the keyboard, and you should see that the iRobot follows it. Done, that’s an example of how to make a robot follow another robot.

Extra: Video of this post

We made a video showing how we solved this challenge and made the iRobot follow another robot. If you prefer “sights and sounds” to “black and white”, here you go:

Related Resources

Feedback

Did you like this post? Do you have any questions about the explanations? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.

If you want to learn about other ROS or ROS2 topics, please let us know in the comments area and we will do a video or post about it.

ROS Mini Challenge #6 – detect the position of a robot using ROS Service



What we are going to learn

Learn how to get the position and orientation of a robot, when someone asks through ROS Services.

List of resources used in this post


Opening the ROSject

Once you click the ROSject link provided (https://buff.ly/2Pd5msM), you will get a copy of the ROSject. You can then click Open to open it.

Where to find the code

Once you open the ROSject, in the ~/catkin_ws folder, you will find all the ROS packages associated with this challenge. You can have a look at them using the IDE, for instance.

To see the full code, open the IDE by going to the top menu and select Tools->IDE

Code Editor (IDE) - ROSDS


Launching the Simulation

Go to the top menu and select Simulations. On the menu that appears, click on the Choose launch file… button.

Choose launch file to open simulation in ROSDS



Now, from the launch files list, select the launch file named rotw5.launch from the rosbot_gazebo package.

ROS Mini Challenge #6 – launch file


Finally, click on the Launch button to start the simulation. In the end, you should get something like this:

Husky Simulation in ROSDS


The problem to solve

As you can see, this ROSject contains 1 package inside its catkin_ws workspace: rotw6_pkg. This package contains a couple of Python scripts (get_pose_service.py and get_pose_client.py), which contain a Service Server and a Service Client, respectively.

So, the main purpose of this Service is to be able to get the Pose (position and orientation) of the Husky robot when called. In order to get the Pose of the robot, all you have to do is the following:

First, make sure you source your workspace so that ROS can find your package:
source ~/catkin_ws/devel/setup.bash


Now, start your Service Server with the following command:
rosrun rotw6_pkg get_pose_service.py


Finally, just launch your Service Client with the following command:

rosrun rotw6_pkg get_pose_client.py


Now, in the Shell where you started the Service Server, you should see the following:

Get robot pose service in ROSDS

NOTE: Here, it’s VERY IMPORTANT to note that it’s only retrieving the position and orientation of the robot, without any additional data.

Solving the ROS Mini Challenge

Ok, so… where’s the problem? If you have tried to reproduce the steps described above you have already seen that it DOES NOT WORK. When you run the Service Client, you are getting some errors that aren’t supposed to be there. So… what’s going on?

When we launch the service client, we get something like the error below:

user:~$ rosrun rotw6_pkg get_pose_client.py
Traceback (most recent call last):
  File "/home/user/catkin_ws/src/rotw6_pkg/src/get_pose_client.py", line 11, in <module>
    result = get_pose_client(get_pose_request_object)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in __call__
    return self.call(*args, **kwds)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 495, in call
    service_uri = self._get_service_uri(request)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 444, in _get_service_uri
    raise TypeError("request object is not a valid request message instance")
TypeError: request object is not a valid request message instance

The original content of the get_pose_client.py in the rotw6_pkg package is:

#! /usr/bin/env python

import rospy
from std_srvs.srv import Empty

rospy.init_node('rotw6_client')
rospy.wait_for_service('/get_pose_service')
get_pose_client = rospy.ServiceProxy('/get_pose_service', Empty)
get_pose_request_object = Empty()

result = get_pose_client(get_pose_request_object)

The problem is in the line get_pose_request_object = Empty(). Here, Empty is the service definition class, not a request message, so we need to use EmptyRequest instead. The correct client, then, would be:

#! /usr/bin/env python

import rospy
from std_srvs.srv import Empty, EmptyRequest


rospy.init_node('rotw6_client')
rospy.wait_for_service('/get_pose_service')
get_pose_client = rospy.ServiceProxy('/get_pose_service', Empty)
get_pose_request_object = EmptyRequest()

result = get_pose_client(get_pose_request_object)
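To see why Empty() fails the proxy’s check, here is a minimal plain-Python sketch (no ROS required) of the test rospy performs internally: the service definition class carries a _request_class attribute pointing at the request type, and only instances of that class are accepted. The classes below are stand-ins for the real std_srvs types, for illustration only:

```python
# Stand-in for std_srvs.srv.EmptyRequest: the actual request message class.
class EmptyRequest(object):
    pass

# Stand-in for std_srvs.srv.Empty: the service *definition* class.
# Like the real one, it only points at its request class.
class Empty(object):
    _request_class = EmptyRequest

def call_service(request):
    # rospy performs an isinstance check much like this one before
    # sending the request over the wire.
    if not isinstance(request, Empty._request_class):
        raise TypeError("request object is not a valid request message instance")
    return "ok"

# Passing an instance of the service definition class fails the check:
try:
    call_service(Empty())
except TypeError as e:
    print(e)

# Passing an instance of the request class succeeds:
print(call_service(EmptyRequest()))
```

As a side note, for services like this one, calling the proxy with no arguments at all (result = get_pose_client()) also works, since rospy then builds the request object for you.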

If we now run the client again, we should see no errors:

rosrun rotw6_pkg get_pose_client.py

and in the output of the server, we have:

$ rosrun rotw6_pkg get_pose_service.py
Robot Pose:
header:
  seq: 86596
  stamp:
    secs: 3032
    nsecs: 830000000
  frame_id: "odom"
child_frame_id: "base_link"
pose:
  pose:
    position:
      x: -0.0712665171853
      y: -0.00030466544589
      z: 0.0
    orientation:
      x: 0.0
      y: 0.0
      z: -0.493612905474
      w: 0.869681723132
  covariance: [86.67158253691281, -6.76509699953108e-05, 0.0, 0.0, 0.0, -0.010918187001924307, -6.765096999530603e-05, 86.68358224257844, 0.0, 0.0, 0.0, -0.18378969144765284, 0.0, 0.0, 4.997917968019755e-07, -4.557105661285207e-23, 3.4006027312434455e-20, 0.0, 0.0, 0.0, -4.5571056612852066e-23, 4.995838535137798e-07, 2.1597386386459286e-32, 0.0, 0.0, 0.0, 3.4006027312434443e-20, -6.711057929105504e-33, 4.995838535137798e-07, 0.0, -0.010918187001923645, -0.18378969144767304, 0.0, 0.0, 0.0, 103.95483332789887]
twist:
  twist:
    linear:
      x: -1.972482338e-05
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: 0.0292241612329
  covariance: [0.0008855670954413368, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008855670954413368, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.996877927028334e-07, -7.07772753671664e-33, 5.268413186594352e-30, 0.0, 0.0, 0.0, -7.077727536716642e-33, 4.987546680559569e-07, 1.3419297548549267e-41, 0.0, 0.0, 0.0, 5.268413186594355e-30, -4.178170793934322e-42, 4.987546680559569e-07, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 9.592381167731944e-07]

As you can see, there is too much data in the output: we only need the position and orientation. The reason we get so much data is that in the server code (get_pose_service.py) we print the whole message directly. The original server code is:

#! /usr/bin/env python

import rospy
from std_srvs.srv import Empty, EmptyResponse
from geometry_msgs.msg import PoseWithCovarianceStamped, Pose
from nav_msgs.msg import Odometry

robot_pose = Pose()

def service_callback(request):
    print "Robot Pose:"
    print robot_pose
    return EmptyResponse()

def sub_callback(msg):
    global robot_pose
    robot_pose = msg


rospy.init_node('rotw6_service')
my_service = rospy.Service('/get_pose_service', Empty , service_callback)
sub_pose = rospy.Subscriber('/odometry/filtered', Odometry, sub_callback)
rospy.spin()

The message type is Odometry. With rosmsg show Odometry we can see that this message type has a lot of fields:

$ rosmsg show Odometry
[nav_msgs/Odometry]:
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
string child_frame_id
geometry_msgs/PoseWithCovariance pose
  geometry_msgs/Pose pose
    geometry_msgs/Point position
      float64 x
      float64 y
      float64 z
    geometry_msgs/Quaternion orientation
      float64 x
      float64 y
      float64 z
      float64 w
  float64[36] covariance
geometry_msgs/TwistWithCovariance twist
  geometry_msgs/Twist twist
    geometry_msgs/Vector3 linear
      float64 x
      float64 y
      float64 z
    geometry_msgs/Vector3 angular
      float64 x
      float64 y
      float64 z
  float64[36] covariance

In the message structure above, we are only interested in the position and orientation. To achieve that, in the server we can just change the sub_callback function from:

def sub_callback(msg):
    global robot_pose
    robot_pose = msg

to:

def sub_callback(msg):
    global robot_pose
    robot_pose = msg.pose.pose

As easy as that: the server now stores (and prints) only the position and orientation of the robot.
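The fix works because of how nav_msgs/Odometry is nested: the pose lives two levels deep, at msg.pose.pose. Here is a plain-Python sketch of that nesting (stand-in classes mirroring the message definition, no ROS needed), showing that msg.pose.pose carries only the position and orientation:

```python
# Stand-ins mirroring the nav_msgs/Odometry nesting shown by rosmsg above.

class Point(object):
    def __init__(self):
        self.x, self.y, self.z = 0.0, 0.0, 0.0

class Quaternion(object):
    def __init__(self):
        self.x, self.y, self.z, self.w = 0.0, 0.0, 0.0, 1.0

class Pose(object):
    def __init__(self):
        self.position = Point()
        self.orientation = Quaternion()

class PoseWithCovariance(object):
    def __init__(self):
        self.pose = Pose()
        self.covariance = [0.0] * 36

class Odometry(object):
    def __init__(self):
        # header, child_frame_id and twist omitted for brevity
        self.pose = PoseWithCovariance()

msg = Odometry()
msg.pose.pose.position.x = -0.0712665171853

# What the corrected sub_callback stores: just position + orientation,
# without the covariance array or the twist.
robot_pose = msg.pose.pose
print(robot_pose.position.x)
```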

If for any reason you are still struggling to understand the steps reproduced here, I highly recommend taking our ROS courses at www.robotigniteacademy.com/en/.

Youtube video

So this is the post for today. Remember that we have a live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

 

Developing Web Interfaces For ROS Robots #3 – Building a web joystick to control the robot


Hello ROS Developers,

This is the 3rd of 4 posts in the series Developing web interfaces for ROS Robots.

In this post, we are going to create a “joystick” to send velocity commands to a robot from the webpage.


1 – Creating the HTML

First, we are going to create the buttons the user will click to move the robot.

Just after the logs section, let’s add the following:

                    ...
                    ...
	            <h3>Log messages</h3>
                    <div>
                        <p v-for="log of logs">{{ log }}</p>
                    </div>
		</div>
	</div>

        <hr>

        <div class="row">
            <div class="col-md-12 text-center">
                <h5>Commands</h5>
            </div>

            <!-- 1st row -->
            <div class="col-md-12 text-center">
                <button class="btn btn-primary">Go forward</button>
                <br><br>
            </div>

            <!-- 2nd row -->
            <div class="col-md-4 text-center">
                <button class="btn btn-primary">Turn left</button>
            </div>
            <div class="col-md-4 text-center">
                <button class="btn btn-danger">Stop</button>
                <br><br>
            </div>
            <div class="col-md-4 text-center">
                <button class="btn btn-primary">Turn right</button>
            </div>

            <!-- 3rd row -->
            <div class="col-md-12 text-center">
                <button class="btn btn-primary">Go backward</button>
            </div>
        </div>

This should give you an interface like the one below:


2 – Code to send the commands

Now, in our main.js file, let’s create some new methods to perform the actions we have just described. Inside the methods attribute, we already have 2 functions. Let’s add the ones below:

        setTopic: function() {
            this.topic = new ROSLIB.Topic({
                ros: this.ros,
                name: '/cmd_vel',
                messageType: 'geometry_msgs/Twist'
            })
        },
        forward: function() {
            this.message = new ROSLIB.Message({
                linear: { x: 1, y: 0, z: 0, },
                angular: { x: 0, y: 0, z: 0, },
            })
            this.setTopic()
            this.topic.publish(this.message)
        },
        stop: function() {
            this.message = new ROSLIB.Message({
                linear: { x: 0, y: 0, z: 0, },
                angular: { x: 0, y: 0, z: 0, },
            })
            this.setTopic()
            this.topic.publish(this.message)
        },
        backward: function() {
            this.message = new ROSLIB.Message({
                linear: { x: -1, y: 0, z: 0, },
                angular: { x: 0, y: 0, z: 0, },
            })
            this.setTopic()
            this.topic.publish(this.message)
        },
        turnLeft: function() {
            this.message = new ROSLIB.Message({
                linear: { x: 0.5, y: 0, z: 0, },
                angular: { x: 0, y: 0, z: 0.5, },
            })
            this.setTopic()
            this.topic.publish(this.message)
        },
        turnRight: function() {
            this.message = new ROSLIB.Message({
                linear: { x: 0.5, y: 0, z: 0, },
                angular: { x: 0, y: 0, z: -0.5, },
            })
            this.setTopic()
            this.topic.publish(this.message)
        },

We have defined a common method, setTopic, that is used by the others: forward, backward, stop, turnLeft and turnRight.

The task inside each one is always the same: define a message to send, set the topic, and publish the message.

The only thing that changes from one to another are the values of the Twist message.
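Since only the Twist values differ between handlers, they can be summarized in a small table. Below is a plain-Python sketch of that mapping, for illustration only (the actual handlers are the JavaScript methods above; twist_for is a hypothetical helper, not part of the project):

```python
# Velocity values published by each button handler.
# Keys are the handler names; values are (linear.x, angular.z),
# taken from the JavaScript methods above. All other fields are 0.
COMMANDS = {
    "forward":   ( 1.0,  0.0),
    "backward":  (-1.0,  0.0),
    "stop":      ( 0.0,  0.0),
    "turnLeft":  ( 0.5,  0.5),
    "turnRight": ( 0.5, -0.5),
}

def twist_for(command):
    """Build a geometry_msgs/Twist-like dict for the given command."""
    lx, az = COMMANDS[command]
    return {
        "linear":  {"x": lx,  "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": az},
    }

print(twist_for("turnLeft"))
```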


3 – Integrating HTML and JavaScript

Now that we have the elements in our page and the methods that perform the actions, let’s make them work together.

In order to do so, we need to assign the methods to the “click” events of each button. It goes like this:

            <!-- 1st row -->
            <div class="col-md-12 text-center">
                <button @click="forward" :disabled="loading || !connected" class="btn btn-primary">Go forward</button>
                <br><br>
            </div>

            <!-- 2nd row -->
            <div class="col-md-4 text-center">
                <button @click="turnLeft" :disabled="loading || !connected" class="btn btn-primary">Turn left</button>
            </div>
            <div class="col-md-4 text-center">
                <button @click="stop" :disabled="loading || !connected" class="btn btn-danger">Stop</button>
                <br><br>
            </div>
            <div class="col-md-4 text-center">
                <button @click="turnRight" :disabled="loading || !connected" class="btn btn-primary">Turn right</button>
            </div>

            <!-- 3rd row -->
            <div class="col-md-12 text-center">
                <button @click="backward" :disabled="loading || !connected" class="btn btn-primary">Go backward</button>
            </div>

Notice that we are not only defining the @click attribute, but also :disabled. This means the button will be disabled if either condition is true: the page is still loading, or we are not connected to the robot.


4 – Final result

At this point, you should have the following result on your webpage:


ROSJect created along with the post:

http://www.rosject.io/l/ddbab4e/
