AMR software architecture: ROS drivers, the robot model… and a bonus

When programming a mobile robot, the system complexity can be somewhat overwhelming. Are you running into this? You’re not the first, and you won’t be the last. Others have learned their lessons the hard way, so you don’t have to. There’s more to learn than can be taught in one simple blog post, so we’ll just start with the basics. If you’ve assembled or bought a robot, the first step is to create a foundation. On top of this, you can build autonomous functionality. In this post, we’ll go through what that means, and how it helps you to build great robots.

Drivers

Before you can start working on the autonomy of an autonomous mobile robot, you need to be able to read data from sensors and send control setpoints to motors and other actuators. This is the foundation on top of which you can build autonomous functionality, which makes hardware drivers arguably the most important components of the software system. They make sure that the bits and bytes sent out by your sensors are converted into something the rest of your software can handle.

The output of such a driver is preferably something generic. That means it is not specific to the sensor, but rather to the type of sensor. For example, laser scanners come in many different shapes, sizes and brands, yet they all output an array of distance measurements spread evenly over a field of view. So although every manufacturer has its own communication protocol, the sensor data can be represented in the same form:

scan:
  ranges: [...]
  intensities: [...]
  angle_min: -x
  angle_max: y
  angle_increment: z

This is useful, because other subsystems such as obstacle avoidance can now rely on this data structure. It enables you to switch sensors when necessary, without rewriting your obstacle avoidance. Such an abstraction can be made for every sensor and actuator, leaving you with a generic interface to the robot. ROS provides a plethora of message definitions for commonly used sensors in the sensor_msgs package. You might also want to take a look at geometry_msgs or nav_msgs (especially Odometry).
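
To make this concrete, here is a minimal sketch (ROS 1, rospy) of a node that depends only on the generic sensor_msgs/LaserScan message, not on any particular scanner driver. The node name, topic name and the 0.5 m safety distance are assumptions made up for this example.

# Minimal sketch: an obstacle check built on the generic
# sensor_msgs/LaserScan message. It works with any driver that publishes
# LaserScan, regardless of the scanner brand. Topic name, node name and
# the 0.5 m threshold are illustrative assumptions.
import rospy
from sensor_msgs.msg import LaserScan


def scan_callback(scan):
    # Keep only measurements within the sensor's valid range
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid and min(valid) < 0.5:
        rospy.logwarn("Obstacle at %.2f m", min(valid))


if __name__ == "__main__":
    rospy.init_node("obstacle_check")
    rospy.Subscriber("scan", LaserScan, scan_callback)
    rospy.spin()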

Robot model

Even with generalized interfaces to the sensors and actuators, we are not yet ready to control the vehicle. To interpret the data correctly and send the right commands to the motors, we need to know where the sensors and actuators are and how they are mounted. We capture this in a model of the robot. The model is best defined such that the other subsystems can use it directly for their calculations. For example: localization needs to know where the lidar is, and navigation where the wheels are mounted.

ROS standardizes on URDF, the Unified Robot Description Format, for representing robot models. It allows you to describe sensors mounted on links, connected through joints, in any conceivable way, so you can build arbitrary robot models with it.

At first it might seem cumbersome to build a model of the whole robot just to configure some distances and angles. The good thing is that a design change from the mechanical department only results in a change to the robot model; the rest of the software configuration is updated automatically.
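
As a small, made-up illustration, this is roughly what a lidar mounted on the base of a robot looks like in URDF. The link and joint names and the mounting offsets are arbitrary:

<robot name="example_amr">
  <link name="base_link"/>
  <link name="laser_link"/>

  <!-- The fixed joint encodes where the lidar is mounted on the base.
       If the mechanical design changes, only this origin needs updating. -->
  <joint name="laser_joint" type="fixed">
    <parent link="base_link"/>
    <child link="laser_link"/>
    <origin xyz="0.2 0.0 0.3" rpy="0 0 0"/>
  </joint>
</robot>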

Bonus: Simulation

Great! We have now generalized the entire robot. We can use the model and the generalized sensor and actuator interfaces for a nice bonus feature: a simulator.

Gazebo is an open source simulator developed by the same community as ROS, designed specifically for simulating robots. Plugins are available for simulating sensors such as cameras, lidars and GPS receivers, as well as actuators. These plugins use the same data structures as our drivers, which allows you to build your autonomy software on top of the simulator as if it were the real robot. Once you are convinced that it works well in simulation, you can run the same control software on the physical robot.
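
For example, a simulated laser scanner can be attached to the lidar link from the URDF above with the gazebo_ros laser plugin. This is a trimmed-down sketch; the exact parameters depend on your sensor and your Gazebo version:

<gazebo reference="laser_link">
  <sensor type="ray" name="laser">
    <update_rate>10</update_rate>
    <ray>
      <scan>
        <horizontal>
          <samples>360</samples>
          <min_angle>-3.14</min_angle>
          <max_angle>3.14</max_angle>
        </horizontal>
      </scan>
      <range>
        <min>0.1</min>
        <max>10.0</max>
      </range>
    </ray>
    <!-- Publishes sensor_msgs/LaserScan on /scan, just like a real driver -->
    <plugin name="laser_plugin" filename="libgazebo_ros_laser.so">
      <topicName>scan</topicName>
      <frameName>laser_link</frameName>
    </plugin>
  </sensor>
</gazebo>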

Of course, you may want to limit the level of detail of the simulation. You could model the physics all the way down to friction forces and the rotating wheels of a mobile base, but this is very hard to do accurately. At RUVU, we therefore usually don’t bother with simulating physics at all: it is never close enough to real life to use for tuning the navigation controllers, and for high-level logic you don’t need that level of detail. Instead, we apply the navigation control signal to the simulated vehicle directly. This results in “perfect” tracking behavior, but it still allows for testing all kinds of high-level algorithms and logic, which saves valuable test time on the real robot.
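
To give an idea of what this looks like, here is a heavily simplified sketch (not our actual simulator) of integrating the commanded velocity directly into the simulated pose. The 2D differential-drive assumption and all names and values are made up for this example:

# Heavily simplified sketch of “perfect tracking”: the navigation control
# signal (v, omega) is integrated directly into the simulated 2D pose,
# without any physics. All names and values are illustrative assumptions.
import math


class KinematicSim(object):
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def step(self, v, omega, dt):
        # Apply the commanded velocities directly to the pose
        self.theta += omega * dt
        self.x += v * math.cos(self.theta) * dt
        self.y += v * math.sin(self.theta) * dt


sim = KinematicSim()
sim.step(v=0.5, omega=0.1, dt=0.05)  # one 20 Hz control cycle
print(sim.x, sim.y, sim.theta)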

Your turn!

We’ve seen that a set of good drivers allows us to swap out sensors from different manufacturers without changing the actual application software. A robot model allows us to test different designs without reconfiguring the entire application. As a bonus, we get the ability to simulate the robot as if it were the real thing. Once you have this set up, you can start building the autonomous application on top: first in simulation, and afterwards on the real robot, without changing the software.

As you can see, these are all measures to limit the complexity of the overall system. Adopting these architecture choices, whether you use ROS or not, might cost a little extra time when starting up. However, it saves a great deal more when you are deep into the project, or when developing your second autonomous mobile robot. If you have any questions about drivers, how to model a robot, or what this simulator thing is all about, don’t hesitate to contact us. RUVU is here to help you out.