Step 1. Create a project in ROS Development Studio (ROSDS)
ROSDS helps you follow our tutorial at a fast pace without having to set up an environment locally. If you don't have an account yet, you can create a free account here.
Step 2. Create package and file
Let's create a package for testing in ROSDS by typing the following commands.
cd ~/catkin_ws/src
catkin_create_pkg logs_test roscpp
Then we create a source file under the logs_test/src directory called logs.cpp with the following content.
#include <ros/ros.h>
#include <ros/console.h>
#include <stdlib.h>

int main(int argc, char** argv) {

    if (ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Debug))
    {
        ros::console::notifyLoggerLevelsChanged();
    }

    ros::init(argc, argv, "log_demo");
    ros::NodeHandle nh;

    ros::Rate loop_rate(0.5); // We create a Rate object of 0.5 Hz (one iteration every 2 seconds)

    while (ros::ok()) // Endless loop until Ctrl+C
    {
        ROS_DEBUG("This is a DEBUG message");
        ROS_INFO("This is an INFO message");
        ROS_WARN("This is a WARN message");
        ROS_ERROR("This is an ERROR message");
        ROS_FATAL("This is a FATAL message");

        loop_rate.sleep();
        ros::spinOnce();
    }
    return 0;
}
In CMakeLists.txt, please change the following content in the build section.
...
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
add_executable(logs_test src/logs.cpp)
...
## Add cmake target dependencies of the executable
## same as for the library above
add_dependencies(logs_test ${logs_test_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
target_link_libraries(logs_test
${catkin_LIBRARIES}
)
...
Then we can compile the code with the following commands
cd ~/catkin_ws
catkin_make
While waiting for compilation, let's create a launch file so we can launch the code more easily. Please create a launch folder under the logs_test directory and create a file called logs_test.launch under it with the following code.
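The launch file contents are not shown above; a minimal sketch that starts our node (using the logs_test executable name defined in CMakeLists.txt) could look like this:

<launch>
    <!-- Start the logging demo node and show its output in the shell -->
    <node pkg="logs_test" type="logs_test" name="log_demo" output="screen" />
</launch>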
Then we can run the following command in a shell to launch the node.
roslaunch logs_test logs_test.launch
Since we set the logger to the lowest level (DEBUG) in our code, we'll see messages from all 5 levels being printed. If you change it (e.g. to ERROR) and compile again, you should only see the ERROR and FATAL messages.
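For example, to show only ERROR and FATAL messages, you could change the logger-level block at the top of main() to:

if (ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Error))
{
    ros::console::notifyLoggerLevelsChanged();
}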
Edit by: Tony Huang
Feedback
Did you like this video? Do you have questions about what is explained? Whatever the case, please leave a comment in the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS topics, please let us know in the comments area and we will do a video about it.
In this new video series, we are going to learn how to use the openai_ros package created by The Construct. You will learn to use it with all the robots that we give support to. In this video, divided into two parts, you will learn how to use openai_ros with the TurtleBot2: how to set up everything to use our package and make the robot learn to move around a maze with very little coding.
Step 1. Create a project in ROS Development Studio (ROSDS)
ROSDS helps you follow our tutorial at a fast pace without having to set up an environment locally. If you don't have an account yet, you can create a free account here.
Step 2. Clone the repository and create a package for the project
First, we have to clone the repositories that we need for this project
cd ~/simulation_ws/src
git clone https://bitbucket.org/theconstructcore/open_ai_gym_construct.git
cd ~/catkin_ws/src
git clone https://bitbucket.org/theconstructcore/openai_ros.git
You can launch the simulation environment from Simulations->Select launch file->gym_construct-main.launch.
Inside the openai_ros package, we need to specify turtlebot2_maze.py as our task.
We'll start by creating the folders for the project
cd ~/catkin_ws/src
mkdir -p openai_examples_projects/turtle2_openai_ros_example
Then create three folders, config, launch, and scripts, under that directory. (For roslaunch to locate the files with $(find turtle2_openai_ros_example), the folder also needs to be a catkin package, i.e. contain a package.xml.) Inside the launch folder, let's create a launch file called start_training.launch with the following content
<launch>
    <!-- This version uses the openai_ros environments -->
    <rosparam command="load" file="$(find turtle2_openai_ros_example)/config/turtlebot2_openai_qlearn_params.yaml" />
    <!-- Launch the training system -->
    <node pkg="turtle2_openai_ros_example" name="turtlebot2_maze" type="start_qlearning.py" output="screen"/>
</launch>
This launch file first loads the parameters for the algorithm. You can define them in a file called turtlebot2_openai_qlearn_params.yaml in the config folder with the following content
turtlebot2: #namespace

    running_step: 0.04 # amount of time the control will be executed
    pos_step: 0.016 # increment in position for each command

    #qlearn parameters
    alpha: 0.1
    gamma: 0.7
    epsilon: 0.9
    epsilon_discount: 0.999
    nepisodes: 500
    nsteps: 10000
    number_splits: 10 #set to change the number of state splits for the continuous problem and also the number of env_variable splits

    running_step: 0.06 # Time for each step
    wait_time: 0.1 # Time to wait in the reset phases

    n_actions: 4 # We have 4 actions: Forwards, TurnLeft, TurnRight, Backwards
    n_observations: 6 # We have 6 different observations

    speed_step: 1.0 # Time to wait in the reset phases

    linear_forward_speed: 0.5 # Speed for going forwards
    linear_turn_speed: 0.05 # Linear speed when turning
    angular_speed: 0.3 # Angular speed when turning Left or Right
    init_linear_forward_speed: 0.0 # Initial linear speed in which we start each episode
    init_linear_turn_speed: 0.0 # Initial angular speed in which we start each episode

    new_ranges: 5 # How many laser readings we jump in each observation reading, the bigger the less laser resolution
    min_range: 0.5 # Minimum meters below which we consider we have crashed
    max_laser_value: 6 # Value considered Ok, no wall
    min_laser_value: 0 # Value considered there is an obstacle or crashed

    desired_pose:
      x: 5.0
      y: 0.0
      z: 0.0

    forwards_reward: 5 # Points Given to go forwards
    turn_reward: 1 # Points Given to turn as action
    end_episode_points: 200 # Points given when ending an episode
The launch file then starts the node for Q-learning, so let's create two files under the scripts folder. The first one, called qlearn.py, has the following content, implemented by Victor Mayoral Vilches. We won't go into detail here.
'''
Q-learning approach for different RL problems
as part of the basic series on reinforcement learning @
Inspired by https://gym.openai.com/evaluations/eval_kWknKOkPQ7izrixdhriurA
@author: Victor Mayoral Vilches <victor@erlerobotics.com>
'''
import random


class QLearn:
    def __init__(self, actions, epsilon, alpha, gamma):
        self.q = {}
        self.epsilon = epsilon  # exploration constant
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.actions = actions

    def getQ(self, state, action):
        return self.q.get((state, action), 0.0)

    def learnQ(self, state, action, reward, value):
        '''
        Q-learning:
            Q(s, a) += alpha * (reward(s,a) + gamma * max(Q(s', a')) - Q(s, a))
        '''
        oldv = self.q.get((state, action), None)
        if oldv is None:
            self.q[(state, action)] = reward
        else:
            self.q[(state, action)] = oldv + self.alpha * (value - oldv)

    def chooseAction(self, state, return_q=False):
        q = [self.getQ(state, a) for a in self.actions]
        maxQ = max(q)

        if random.random() < self.epsilon:
            minQ = min(q); mag = max(abs(minQ), abs(maxQ))
            # add random values to all the actions, recalculate maxQ
            q = [q[i] + random.random() * mag - .5 * mag for i in range(len(self.actions))]
            maxQ = max(q)

        count = q.count(maxQ)
        # In case there're several state-action max values
        # we select a random one among them
        if count > 1:
            best = [i for i in range(len(self.actions)) if q[i] == maxQ]
            i = random.choice(best)
        else:
            i = q.index(maxQ)

        action = self.actions[i]
        if return_q:  # if they want it, give it!
            return action, q
        return action

    def learn(self, state1, action1, reward, state2):
        maxqnew = max([self.getQ(state2, a) for a in self.actions])
        self.learnQ(state1, action1, reward, reward + self.gamma * maxqnew)
The next one is the training script called start_qlearning.py with the following content
#!/usr/bin/env python

import gym
import numpy
import time
import qlearn
from gym import wrappers
# ROS packages required
import rospy
import rospkg
# import our training environment
from openai_ros.turtlebot2 import turtlebot2_maze


if __name__ == '__main__':

    rospy.init_node('turtlebot2_maze_qlearn', anonymous=True, log_level=rospy.WARN)

    # Create the Gym environment
    env = gym.make('TurtleBot2Maze-v0')
    rospy.loginfo("Gym environment done")

    # Set the logging system
    rospack = rospkg.RosPack()
    pkg_path = rospack.get_path('turtle2_openai_ros_example')
    outdir = pkg_path + '/training_results'
    env = wrappers.Monitor(env, outdir, force=True)
    rospy.loginfo("Monitor Wrapper started")

    last_time_steps = numpy.ndarray(0)

    # Loads parameters from the ROS param server
    # Parameters are stored in a yaml file inside the config directory
    # They are loaded at runtime by the launch file
    Alpha = rospy.get_param("/turtlebot2/alpha")
    Epsilon = rospy.get_param("/turtlebot2/epsilon")
    Gamma = rospy.get_param("/turtlebot2/gamma")
    epsilon_discount = rospy.get_param("/turtlebot2/epsilon_discount")
    nepisodes = rospy.get_param("/turtlebot2/nepisodes")
    nsteps = rospy.get_param("/turtlebot2/nsteps")
    running_step = rospy.get_param("/turtlebot2/running_step")

    # Initialises the algorithm that we are going to use for learning
    qlearn = qlearn.QLearn(actions=range(env.action_space.n),
                           alpha=Alpha, gamma=Gamma, epsilon=Epsilon)
    initial_epsilon = qlearn.epsilon

    start_time = time.time()
    highest_reward = 0

    # Starts the main training loop: the one about the episodes to do
    for x in range(nepisodes):
        rospy.logdebug("############### START EPISODE=>" + str(x))

        cumulated_reward = 0
        done = False
        if qlearn.epsilon > 0.05:
            qlearn.epsilon *= epsilon_discount

        # Initialize the environment and get first state of the robot
        observation = env.reset()
        state = ''.join(map(str, observation))

        # Show on screen the actual situation of the robot
        # env.render()
        # for each episode, we test the robot for nsteps
        for i in range(nsteps):
            rospy.logwarn("############### Start Step=>" + str(i))
            # Pick an action based on the current state
            action = qlearn.chooseAction(state)
            rospy.logwarn("Next action is:%d", action)
            # Execute the action in the environment and get feedback
            observation, reward, done, info = env.step(action)

            rospy.logwarn(str(observation) + " " + str(reward))
            cumulated_reward += reward
            if highest_reward < cumulated_reward:
                highest_reward = cumulated_reward

            nextState = ''.join(map(str, observation))

            # Make the algorithm learn based on the results
            rospy.logwarn("# state we were=>" + str(state))
            rospy.logwarn("# action that we took=>" + str(action))
            rospy.logwarn("# reward that action gave=>" + str(reward))
            rospy.logwarn("# episode cumulated_reward=>" + str(cumulated_reward))
            rospy.logwarn("# State in which we will start next step=>" + str(nextState))
            qlearn.learn(state, action, reward, nextState)

            if not (done):
                rospy.logwarn("NOT DONE")
                state = nextState
            else:
                rospy.logwarn("DONE")
                last_time_steps = numpy.append(last_time_steps, [int(i + 1)])
                break
            rospy.logwarn("############### END Step=>" + str(i))
            #raw_input("Next Step...PRESS KEY")
            # rospy.sleep(2.0)

        m, s = divmod(int(time.time() - start_time), 60)
        h, m = divmod(m, 60)
        rospy.logerr(("EP: " + str(x + 1) + " - [alpha: " + str(round(qlearn.alpha, 2)) + " - gamma: " + str(
            round(qlearn.gamma, 2)) + " - epsilon: " + str(round(qlearn.epsilon, 2)) + "] - Reward: " + str(
            cumulated_reward) + " Time: %d:%02d:%02d" % (h, m, s)))

    rospy.loginfo(("\n|" + str(nepisodes) + "|" + str(qlearn.alpha) + "|" + str(qlearn.gamma) + "|" + str(
        initial_epsilon) + "*" + str(epsilon_discount) + "|" + str(highest_reward) + "| PICTURE |"))

    l = last_time_steps.tolist()
    l.sort()

    # print("Parameters: a="+str)
    rospy.loginfo("Overall score: {:0.2f}".format(last_time_steps.mean()))
    rospy.loginfo("Best 100 score: {:0.2f}".format(reduce(lambda x, y: x + y, l[-100:]) / len(l[-100:])))

    env.close()
The import of the task environment (from openai_ros.turtlebot2 import turtlebot2_maze) is the most important part of this script: it registers the training environment so that gym.make('TurtleBot2Maze-v0') can create it.
The main training loop (the for x in range(nepisodes) block) is the training process of the algorithm. For each timestep, the algorithm decides the action to take based on the current state. After an action is taken, the observation is measured and the reward is calculated. The algorithm keeps learning to get the highest possible reward.
After that, you can compile the package and launch the training with the following commands.
cd ~/catkin_ws
catkin_make
source devel/setup.bash
roslaunch turtle2_openai_ros_example start_training.launch
You should see the TurtleBot start moving and reset whenever it touches the wall.
In this example, we use the environment defined by the openai_ros package, but what if you want to do a different task, e.g. change the actions the robot can take?
Let's create our own environment called my_turtlebot2_maze.py under the scripts folder with the following content
import rospy
import numpy
from gym import spaces
from openai_ros import turtlebot2_env
from gym.envs.registration import register

timestep_limit_per_episode = 10000 # Can be any Value

register(
    id='MyTurtleBot2Maze-v0',
    entry_point='my_turtlebot2_maze:MyTurtleBot2MazeEnv',
    timestep_limit=timestep_limit_per_episode,
)

class MyTurtleBot2MazeEnv(turtlebot2_env.TurtleBot2Env):
    def __init__(self):
        """
        This Task Env is designed for having the TurtleBot2 in some kind of maze.
        It will learn how to move around the maze without crashing.
        """

        # Only variable needed to be set here
        number_actions = rospy.get_param('/turtlebot2/n_actions')
        self.action_space = spaces.Discrete(number_actions)

        # We set the reward range, which is not compulsory but here we do it.
        self.reward_range = (-numpy.inf, numpy.inf)

        #number_observations = rospy.get_param('/turtlebot2/n_observations')
        """
        We set the Observation space for the 6 observations
        cube_observations = [
            round(current_disk_roll_vel, 0),
            round(y_distance, 1),
            round(roll, 1),
            round(pitch, 1),
            round(y_linear_speed,1),
            round(yaw, 1),
        ]
        """

        # Actions and Observations
        self.linear_forward_speed = rospy.get_param('/turtlebot2/linear_forward_speed')
        self.linear_turn_speed = rospy.get_param('/turtlebot2/linear_turn_speed')
        self.angular_speed = rospy.get_param('/turtlebot2/angular_speed')
        self.init_linear_forward_speed = rospy.get_param('/turtlebot2/init_linear_forward_speed')
        self.init_linear_turn_speed = rospy.get_param('/turtlebot2/init_linear_turn_speed')

        self.new_ranges = rospy.get_param('/turtlebot2/new_ranges')
        self.min_range = rospy.get_param('/turtlebot2/min_range')
        self.max_laser_value = rospy.get_param('/turtlebot2/max_laser_value')
        self.min_laser_value = rospy.get_param('/turtlebot2/min_laser_value')

        # We create two arrays based on the binary values that will be assigned
        # In the discretization method.
        laser_scan = self._check_laser_scan_ready()
        num_laser_readings = len(laser_scan.ranges)/self.new_ranges
        high = numpy.full((num_laser_readings), self.max_laser_value)
        low = numpy.full((num_laser_readings), self.min_laser_value)

        # We only use two integers
        self.observation_space = spaces.Box(low, high)

        rospy.logdebug("ACTION SPACES TYPE===>"+str(self.action_space))
        rospy.logdebug("OBSERVATION SPACES TYPE===>"+str(self.observation_space))

        # Rewards
        self.forwards_reward = rospy.get_param("/turtlebot2/forwards_reward")
        self.turn_reward = rospy.get_param("/turtlebot2/turn_reward")
        self.end_episode_points = rospy.get_param("/turtlebot2/end_episode_points")

        self.cumulated_steps = 0.0

        # Here we will add any init functions prior to starting the MyRobotEnv
        super(MyTurtleBot2MazeEnv, self).__init__()

    def _set_init_pose(self):
        """Sets the Robot in its init pose
        """
        self.move_base( self.init_linear_forward_speed,
                        self.init_linear_turn_speed,
                        epsilon=0.05,
                        update_rate=10)

        return True

    def _init_env_variables(self):
        """
        Inits variables needed to be initialised each time we reset at the start
        of an episode.
        :return:
        """
        # For Info Purposes
        self.cumulated_reward = 0.0
        # Set Done to false, because it's calculated asynchronously
        self._episode_done = False

    def _set_action(self, action):
        """
        This set action will Set the linear and angular speed of the turtlebot2
        based on the action number given.
        :param action: The action integer that sets what movement to do next.
        """

        rospy.logdebug("Start Set Action ==>"+str(action))
        # We convert the actions to speed movements to send to the parent class CubeSingleDiskEnv
        if action == 0: #FORWARD
            linear_speed = self.linear_forward_speed
            angular_speed = 0.0
            self.last_action = "FORWARDS"
        elif action == 1: #LEFT
            linear_speed = self.linear_turn_speed
            angular_speed = self.angular_speed
            self.last_action = "TURN_LEFT"
        elif action == 2: #RIGHT
            linear_speed = self.linear_turn_speed
            angular_speed = -1*self.angular_speed
            self.last_action = "TURN_RIGHT"
        elif action == 3: #BACKWARDS
            linear_speed = -self.linear_turn_speed
            angular_speed = 0.0
            self.last_action = "BACKWARDS"

        # We tell TurtleBot2 the linear and angular speed to set to execute
        self.move_base(linear_speed, angular_speed, epsilon=0.05, update_rate=10)

        rospy.logdebug("END Set Action ==>"+str(action))

    def _get_obs(self):
        """
        Here we define what sensor data defines our robots observations
        To know which Variables we have access to, we need to read the
        TurtleBot2Env API DOCS
        :return:
        """
        rospy.logdebug("Start Get Observation ==>")
        # We get the laser scan data
        laser_scan = self.get_laser_scan()

        discretized_observations = self.discretize_observation( laser_scan,
                                                                self.new_ranges
                                                                )

        rospy.logdebug("Observations==>"+str(discretized_observations))
        rospy.logdebug("END Get Observation ==>")
        return discretized_observations

    def _is_done(self, observations):

        if self._episode_done:
            rospy.logerr("TurtleBot2 is Too Close to wall==>")
        else:
            rospy.logerr("TurtleBot2 is Ok ==>")

        return self._episode_done

    def _compute_reward(self, observations, done):

        if not done:
            if self.last_action == "FORWARDS":
                reward = self.forwards_reward
            elif self.last_action == "BACKWARDS":
                reward = -1*self.forwards_reward
            else:
                reward = self.turn_reward
        else:
            reward = -1*self.end_episode_points

        rospy.logdebug("reward=" + str(reward))
        self.cumulated_reward += reward
        rospy.logdebug("Cumulated_reward=" + str(self.cumulated_reward))
        self.cumulated_steps += 1
        rospy.logdebug("Cumulated_steps=" + str(self.cumulated_steps))

        return reward


    # Internal TaskEnv Methods
    def discretize_observation(self, data, new_ranges):
        """
        Discards all the laser readings that are not multiple in index of new_ranges
        value.
        """
        self._episode_done = False

        discretized_ranges = []
        mod = len(data.ranges)/new_ranges

        rospy.logdebug("data=" + str(data))
        rospy.logwarn("new_ranges=" + str(new_ranges))
        rospy.logwarn("mod=" + str(mod))

        for i, item in enumerate(data.ranges):
            if (i%mod==0):
                if item == float ('Inf') or numpy.isinf(item):
                    discretized_ranges.append(self.max_laser_value)
                elif numpy.isnan(item):
                    discretized_ranges.append(self.min_laser_value)
                else:
                    discretized_ranges.append(int(item))

                if (self.min_range > item > 0):
                    rospy.logerr("done Validation >>> item=" + str(item)+"< "+str(self.min_range))
                    self._episode_done = True
                else:
                    rospy.logwarn("NOT done Validation >>> item=" + str(item)+"< "+str(self.min_range))

        return discretized_ranges
Please notice that the ID defined in the register() call (MyTurtleBot2Maze-v0) is the one we are going to use.
You can also see that the environment class inherits from turtlebot2_env.TurtleBot2Env, which comes from the openai_ros package. By doing that, you don't need to define how to move the robot or how to get the sensor data, since the openai_ros package has already defined it for you.
Then let's set up a new training script and launch file. We'll call the new training script my_start_qlearning_maze.py, with the same content as start_qlearning.py but with the following changes
...
from gym import wrappers
# ROS packages required
import rospy
import rospkg
# import our training environment
import my_turtlebot2_maze

if __name__ == '__main__':

    rospy.init_node('example_turtlebot2_maze_qlearn', anonymous=True, log_level=rospy.WARN)

    # Create our own Gym environment
    env = gym.make('MyTurtleBot2Maze-v0')
    rospy.loginfo("Gym environment done")
...
Here we directly import the environment from the previous script and use the ID it defined. The last step is to create the start_training_maze.launch file under the launch folder with the following content.
<launch>
    <!-- This version uses the openai_ros environments -->
    <rosparam command="load" file="$(find turtle2_openai_ros_example)/config/turtlebot2_openai_qlearn_params.yaml" />
    <!-- Launch the training system -->
    <node pkg="turtle2_openai_ros_example" name="example_turtlebot2_maze_qlearn" type="my_start_qlearning_maze.py" output="screen"/>
</launch>
This launch file will launch the node which uses the last script for training.
You can define different actions or rewards as you want, but remember to also change the n_actions parameter in turtlebot2_openai_qlearn_params.yaml, because it is used to build the Gym action space.
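For example (a hypothetical change): if you removed the BACKWARDS branch from _set_action(), the parameter file would need to match the smaller action space:

n_actions: 3 # Forwards, TurnLeft, TurnRight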
Have fun!
If you want to learn more about OpenAI Gym, please check our OpenAI Gym for Robotics 101 course. You'll learn how to create environments and train robots for different tasks in detail.
In this video we are going to see how to programmatically modify a robot’s coordinates when it arrives at a known checkpoint. We will use the same technique as clicking on the 2D Pose Estimate in RViz, but with a Python script. This is useful for the robot to know its exact location whenever it arrives at a checkpoint.
This is a video trying to answer the following question posted at the ROS answers forum.
Step 1. Create a project in ROS Development Studio (ROSDS)
ROSDS helps you follow our tutorial at a fast pace without having to set up an environment locally. If you don't have an account yet, you can create a free account here.
Step 2. Start simulation
In this post, we're going to start by running the TurtleBot navigation simulation with the following commands. Please open an empty Gazebo simulation environment from Tools->Gazebo and then execute the commands in 3 different shells (you can find them in Tools->Shell).
roslaunch turtlebot_navigation_gazebo main.launch
This first command spawns the TurtleBot and the world in Gazebo. The second command brings up the navigation stack, and the third one launches the RViz tool for visualization; you can open the graphical window from Tools->Graphical Tools.
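The second and third commands are not shown in the original post; assuming the standard TurtleBot navigation packages, they would look something like this (package and launch file names are assumptions):

roslaunch turtlebot_navigation amcl_demo.launch
roslaunch turtlebot_rviz_launchers view_navigation.launch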
Step 3. Create a new package for the task
In RViz, you can click the 2D Pose Estimate button to estimate the current location of the robot. Our idea is that whenever the robot detects an RFID, it will update its current location. When you click the button, RViz publishes a message to the topic /initialpose. To check the type of the message, type rostopic info /initialpose; you'll see that the message type is geometry_msgs/PoseWithCovarianceStamped. If we want to update the pose, we have to be able to send this message.
Let’s create a package to do this task.
cd ~/catkin_ws/src
catkin_create_pkg set_point rospy
Under the package directory, create a scripts folder and put our script, called main.py, inside it with the following content
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import quaternion_from_euler
rospy.init_node('init_pos')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped, queue_size = 10)
rospy.sleep(3)
checkpoint = PoseWithCovarianceStamped()
checkpoint.pose.pose.position.x = 0.0
checkpoint.pose.pose.position.y = 0.0
checkpoint.pose.pose.position.z = 0.0
[x,y,z,w]=quaternion_from_euler(0.0,0.0,0.0)
checkpoint.pose.pose.orientation.x = x
checkpoint.pose.pose.orientation.y = y
checkpoint.pose.pose.orientation.z = z
checkpoint.pose.pose.orientation.w = w
print checkpoint
pub.publish(checkpoint)
In this script, we set the initial pose to x, y, z all equal to zero. If you run the script, you'll see that the robot model moves to the origin of the map!
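Assuming the script was saved as scripts/main.py inside the set_point package, you can make it executable and run it like this:

chmod +x ~/catkin_ws/src/set_point/scripts/main.py
rosrun set_point main.py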
Edit by: Tony Huang
// RELATED LINKS
▸ Original question: https://answers.ros.org/question/278551/how-do-i-modifychange-base_footprint-by-program/
▸ ROS Development Studio (RDS): https://goo.gl/bUFuAq
▸ Robot Ignite Academy: https://goo.gl/6nNwhs
—
Feedback
—
Did you like this video? Do you have questions about what is explained? Whatever the case, please leave a comment in the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS topics, please let us know in the comments area and we will do a video about it.
Python vs C++ – see the major difference between running code in the two major languages used by ROS. Let’s go!
Step 1: Create an account and Login to Robot Ignite Academy
On Robot Ignite Academy, you get access to the best online ROS courses and environment. No need to install and set up ROS locally – the only thing you need is a browser!
You’ll now have access to the simulation screen with a Notebook, an Editor, four Shells, and a Simulation Window. We are only using the Editor and Shells for this demo.
Step 2: Create a ROS package and some Python and C++ code
2.1 Create a package by typing the following commands in the shell
cd ~/catkin_ws/src
catkin_create_pkg obiwan rospy roscpp
2.2 Then, in the src folder under the obiwan package, let's put the two source files. First, obiwanpy.py:
#! /usr/bin/env python
import rospy

rospy.init_node("Obiwan")
rate = rospy.Rate(2)
while not rospy.is_shutdown():
    print "Help me Obi-Wan Kenobi, you're my only hope"
    rate.sleep()
Then, obiwancpp.cpp:
#include <ros/ros.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "ObiWan");
    ros::NodeHandle nh;
    ros::Rate loop_rate(2);
    while (ros::ok())
    {
        ROS_INFO("Help me Obi-Wan Kenobi, you're my only hope");
        loop_rate.sleep(); // without this, the loop would print as fast as possible
    }
    return 0;
}
2.3 Type the following commands to compile the package
cd ~/catkin_ws
catkin_make
source devel/setup.bash
2.4 After compiling, let's try to run the Python script first
rosrun obiwan obiwanpy.py
You should see the output. The script runs without any problem.
2.5 Now try to run the C++ file in the same way. I bet it didn't run! It turns out that, to use the C++ file, you have to uncomment and adapt the following part in CMakeLists.txt.
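The original post doesn't include the snippet; after editing, the relevant part of CMakeLists.txt should look roughly like this (the target name obiwancpp is our choice for this example):

add_executable(obiwancpp src/obiwancpp.cpp)
add_dependencies(obiwancpp ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(obiwancpp
  ${catkin_LIBRARIES}
)

The resulting executable can then be run with rosrun obiwan obiwancpp.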
Then compile and run again. You’ll see the output now.
Step 3: Master the Concept – Python vs C++ in ROS
The main difference between using C++ and Python in ROS is that, for C++, you have to edit CMakeLists.txt every time, declaring your source files so that catkin can compile them and generate the executable, while Python scripts can be run directly.
QED 🙂
Extras: Video
Here is a sights and sounds version of the post, in case you prefer to see the action that way!
Further learning
If you are a ROS beginner and would like to learn more about programming ROS, please check one of the courses below, depending on which programming language you’re more familiar with:
In this video we are going to show how to solve the problem of a Pioneer 3DX Gazebo robot that keeps moving even after the keyboard is released when using the keyboard twist teleop.
Step 1. Create a project in ROS Development Studio (ROSDS)
ROSDS helps you follow our tutorial at a fast pace without having to set up an environment locally. If you don't have an account yet, you can create a free account here. Let's create a new project and call it pioneer3dx_do_not_stop.
Step 2. Clone the repository
To reproduce the problem, we have to clone the original repository. Create a folder called p3dx under simulation_ws/src and run the following commands to clone the repository.
cd simulation_ws/src/
mkdir p3dx && cd p3dx
git clone https://github.com/RafBerkvens/ua_ros_p3dx.git
Then you can run the simulation from Simulations->select launch file->gazebo.launch
You can try to control the robot by sending commands to the /p3dx/cmd_vel topic. Then you'll see the problem: the robot won't stop even when we stop sending commands.
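For example, you can publish a single velocity command from a shell like this (the values are just an example):

rostopic pub -1 /p3dx/cmd_vel geometry_msgs/Twist '{linear: {x: 0.3, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'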
To fix this, we comment out the lines in the /p3dx/p3dx_description/urdf/pioneer3dx.gazebo file that load the Gazebo drive plugin (sketched below).
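The exact lines are not reproduced in the original post. The block to disable is the Gazebo differential-drive plugin, which drives the wheels directly and has no command timeout; it looks roughly like this (tag and joint names below are illustrative, check the actual file):

<!--
<gazebo>
  <plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
    <leftJoint>base_left_wheel_joint</leftJoint>
    <rightJoint>base_right_wheel_joint</rightJoint>
    <commandTopic>cmd_vel</commandTopic>
    ...
  </plugin>
</gazebo>
-->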
We want to change to a controller that supports a command timeout, and we also have to specify the transmission properties. Several parts have been changed in the repository; you can find the changes in detail here. We also need a parameter file for the controller. Create a file called control.yaml under the p3dx/p3dx_control/config directory (the launch file below loads it from there) with the following content.
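The control.yaml contents are not shown in the original post; a sketch using ros_control's diff_drive_controller (which honors cmd_vel_timeout) could look like this — the joint names, wheel dimensions and covariance values are assumptions and must match the URDF:

p3dx_joint_publisher:
  type: joint_state_controller/JointStateController
  publish_rate: 50

p3dx_velocity_controller:
  type: diff_drive_controller/DiffDriveController
  left_wheel: 'base_left_wheel_joint'    # assumption: must match the wheel joint names in the URDF
  right_wheel: 'base_right_wheel_joint'  # assumption
  publish_rate: 50
  pose_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
  twist_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
  cmd_vel_timeout: 0.25                  # seconds without a command before the robot stops
  wheel_separation: 0.39                 # assumption: approximate Pioneer 3DX value
  wheel_radius: 0.09                     # assumption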
We also have to modify the gazebo.launch file like this to launch the controller.
<launch>
  <!-- these are the arguments you can pass this launch file, for example paused:=true -->
  <arg name="paused" default="false" />
  <arg name="use_sim_time" default="true" />
  <arg name="gui" default="true" />
  <arg name="headless" default="false" />
  <arg name="debug" default="false" />

  <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <!--<arg name="world_name" value="$(find p3dx_gazebo)/worlds/p3dx.world" />-->
    <arg name="debug" value="$(arg debug)" />
    <arg name="gui" value="$(arg gui)" />
    <arg name="paused" value="$(arg paused)" />
    <arg name="use_sim_time" value="$(arg use_sim_time)" />
    <arg name="headless" value="$(arg headless)" />
  </include>

  <group ns="/p3dx">

    <!-- Load the URDF into the ROS Parameter Server -->
    <param name="robot_description"
           command="$(find xacro)/xacro.py --inorder '$(find p3dx_description)/urdf/pioneer3dx.xacro'" />

    <!-- Run a python script to send a service call to gazebo_ros to spawn a URDF robot -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model"
          respawn="false" output="screen" args="-urdf -param robot_description -model p3dx" />

    <rosparam command="load" file="$(find p3dx_control)/config/control.yaml" />

    <node name="base_controller_spawner" pkg="controller_manager" type="spawner"
          args="--namespace=/p3dx
                p3dx_joint_publisher
                p3dx_velocity_controller
                --shutdown-timeout 3"
          output="screen"/>

    <!-- ros_control p3dx launch file -->
    <!-- <include file="$(find p3dx_control)/launch/control.launch" /> -->

  </group>
</launch>
Now launch the simulation again and execute the following command: rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/p3dx/p3dx_velocity_controller/cmd_vel. You'll see that the robot stops when you release the key. You can also change the timeout in the control.yaml file.
If you are interested in this topic, please check our ROS Control 101 course, which explains ROS controllers in much more detail.
Edit by: Tony Huang
—–
Did you like the video? If you did, please give us a thumbs up and remember to subscribe to our channel and press the bell for a new video every day. Whether you liked it or not, please share your thoughts and questions in the comments area.
Step 1. Create a project in ROS Development Studio (ROSDS)
ROSDS helps you follow our tutorial at a fast pace without having to set up an environment locally. If you don't have an account yet, you can create a free account here. Let's create a new project and call it moveit_qa.
Step 2. Use the moveit package
In this tutorial, we have already preconfigured the Fetch robot and the MoveIt package for it. If you want to know how to configure the MoveIt package, please check our ROS Manipulation in 5 Days course. Then we create a file called execute_trajectories.py inside the fetch_moveit_config folder with the following content.
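The contents of execute_trajectories.py are not shown in the original post; a minimal sketch using moveit_commander could look like this (the planning group name "arm" and the joint values are assumptions for the Fetch robot):

#!/usr/bin/env python
import sys
import rospy
import moveit_commander

# Initialise moveit_commander and the ROS node
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('execute_trajectories_node', anonymous=True)

robot = moveit_commander.RobotCommander()
group = moveit_commander.MoveGroupCommander("arm")  # assumption: planning group name

# Move the arm to a couple of joint configurations (values are just examples)
joint_goal = group.get_current_joint_values()
joint_goal[0] = 0.5
group.go(joint_goal, wait=True)
group.stop()

joint_goal[0] = -0.5
group.go(joint_goal, wait=True)
group.stop()

moveit_commander.roscpp_shutdown()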
Then we run the script. The robot should do the several movements we planned.
Want to learn more?
If you want to learn more about motion planning with the MoveIt package, please check our ROS Manipulation in 5 Days course.
Edit by: Tony Huang
Feedback
Did you like this video? Do you have questions about what is explained? Whatever the case, please leave a comment in the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS topics, please let us know in the comments area and we will do a video about it.