How to get your robot from A to B

In e-commerce, warehousing, and logistics, margins are shrinking. Affordable personnel for harvesting and processing greenhouse produce is harder and harder to come by. The same holds for nurses in hospitals and personnel in the manufacturing industry. These developments push the demand for automation, including Autonomous Mobile Robots (AMRs). Many leading automation companies seem to be investing in the development of such solutions. But what should you think about when controlling such a system? How do you get it from A to B?

Divide and conquer

First of all, let’s subdivide the problem into smaller bits that are easier to tackle individually. To get a robot from one place to the other autonomously, it first needs to know where it is at the moment. In academia, this is called the localization problem: finding out the position and orientation of a robot with respect to a certain frame of reference. 

The next step is to find a path from the current position (A) to the goal position (B). This is called navigation. Just like the navigation system in your car, it gives you a set of instructions to get from where you are to where you want to go. And just like in your car, it does not tell you how to avoid local static or moving obstacles such as pedestrians, and it does not take into account the size or shape of your vehicle. That is up to the driver to figure out based on their most recent observations.

For this, we add another layer: the local path controller. It optimizes between keeping track of the path from the navigation planner, avoiding obstacles (typically at all cost), getting closer to the goal faster, and any number of other factors you may wish to take into account.
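The three layers above can be sketched as a single control loop. The following is a minimal, purely illustrative Python sketch; every function body is a stand-in for a real subsystem, and all names are hypothetical, not an actual AMR API.

```python
import math

def localize(odometry, scan):
    """Localization layer: estimate the robot's pose (stub: simply trust odometry)."""
    return odometry

def plan_path(pose, goal, n=5):
    """Navigation layer: a straight-line path as a list of waypoints (stub)."""
    return [(pose[0] + (goal[0] - pose[0]) * i / n,
             pose[1] + (goal[1] - pose[1]) * i / n) for i in range(n + 1)]

def follow_path(pose, path):
    """Local control layer: head towards the first waypoint far enough ahead."""
    for wp in path:
        if math.dist(wp, pose) > 0.5:
            return math.atan2(wp[1] - pose[1], wp[0] - pose[0])  # desired heading
    return 0.0

# One iteration of the loop: localize -> plan -> control.
pose = localize(odometry=(0.0, 0.0), scan=None)
heading = follow_path(pose, plan_path(pose, goal=(4.0, 3.0)))
```

In a real robot each layer runs at its own rate: localization and local control at high frequency, the navigation planner only when a new goal arrives or the current path becomes blocked.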


Localization

In traditional AGVs, where a magnetic tape or an induction wire provides guidance, localization is not necessary. The vehicle simply follows the guide and, when triggered by discrete signals such as RFID tags, performs some form of action. In AMRs, this is no longer the case. As there is no physical guidance, the AMR needs to know where it is and how it's oriented to know where to go.

The most commonly used technique for indoor robot localization uses LiDAR: a laser scanner that provides distance measurements to any object within range. These measurements form an image of the vehicle's direct surroundings, which can be compared to a predefined map to find its position. By cleverly using the known initial pose and the motion recorded by wheel encoders, we can also distinguish between places that look alike from the sensor's point of view. This technique is often confused with SLAM. Although the two are related, SLAM (Simultaneous Localization and Mapping) is the creation of the localization map while simultaneously localizing in it. The resulting map can then be used for pure localization.
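To make the idea concrete, here is a deliberately simplified, one-dimensional sketch: the robot measures the distance to a wall ahead and scores candidate positions by how well they explain that measurement, with a small penalty for disagreeing with odometry. The map, numbers, and cost weights are all made up for illustration.

```python
WALL_AT = 10.0                      # the "map": a single wall at x = 10 m

def predicted_range(x):
    """What the range sensor should read from position x, according to the map."""
    return WALL_AT - x

def localize(measured_range, odometry_x, candidates):
    """Pick the candidate position that best explains the measurement.

    The odometry term is what disambiguates positions that would otherwise
    look identical to the sensor.
    """
    def cost(x):
        return abs(predicted_range(x) - measured_range) + 0.1 * abs(x - odometry_x)
    return min(candidates, key=cost)

candidates = [x / 10 for x in range(0, 100)]   # 0.0 .. 9.9 m in 10 cm steps
estimate = localize(measured_range=7.0, odometry_x=2.8, candidates=candidates)
# the measurement of 7 m to the wall places the robot at x = 3 m
```

Real scan matchers do the same thing in 2D or 3D with hundreds of beams per scan, and probabilistic variants (particle filters, Kalman filters) keep a whole distribution over poses instead of a single best guess.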

An automated vehicle using a laser scanner to determine its position in a map.

Of course, this method only works when the direct surroundings of the vehicle remain similar to the map. In more dynamic environments, it degrades to the point where robots can get lost. In recent years, many alternative technologies have emerged. Ultra-Wideband technology, using fixed beacons and robot-mounted tags, serves as an indoor alternative to GPS. Sensors have also been developed that track and recognize segments of the floor to provide a global position. Outdoors, of course, global satellite systems are an obvious solution, possibly combined with a base station for improved accuracy.

All of these methods have their own pros and cons, and sensor fusion methods can be used to combine the good bits of multiple sensors. In the end, the application determines the performance requirements. Based on these requirements, an informed decision can be made on the choice of sensors and the need for sensor fusion software.


Navigation

The next step is navigation: how do we get from where we are to where we want to go? For this, an AMR uses a map, which can take many forms. The most common form is similar, or even identical, to the map used for LiDAR localization, and it can be created using the same SLAM technique. This does, however, often lead to the misconception that the localization map and the navigation map are the same thing.

When we use a reference system such as Ultra-Wideband or GPS, our localization map does not need to contain anything more than the coordinates of the beacons or satellites. This map simply captures the positions of landmarks in the coordinate system we’re using for the robot’s position. 
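In the beacon case, the position itself follows from the measured ranges by trilateration. With three beacons at known coordinates, subtracting the circle equations from one another yields a small linear system for (x, y), as in this sketch; the beacon layout and ranges are made up for illustration.

```python
def trilaterate(b1, d1, b2, d2, b3, d3):
    """Position from three beacon positions and measured distances (2D)."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Subtracting the circle equations removes the quadratic terms,
    # leaving a linear 2x2 system A @ [x, y] = c.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# A robot at (3, 4) with beacons at three corners of a 10 m square:
x, y = trilaterate((0, 0), 5.0,
                   (10, 0), (49 + 16) ** 0.5,
                   (0, 10), (9 + 36) ** 0.5)
```

With more than three beacons, the same idea becomes an overdetermined least-squares problem, which also averages out range noise.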

A demonstration RUVU did for a client building a large scale multi-robot application. The navigation planner uses a graph using both one-way, and two-way edges. It also uses directed nodes, making sure the robot ends up in the correct orientation.

The navigation map captures all the space the robot can move in. Currently, most AMR manufacturers equip their robots with a fine-grained map that allows them to find their own way through all unoccupied space. Sometimes the user can prescribe no-go areas or preferred directions. This allows the user to give the robot suggestions on how to navigate the total space in a desirable way. In many applications, however, the user does not want the robot to move from its designated path. The navigation map can then be a simple graph: points connected by line segments, only allowing the robot to plan paths along these lines. In the general case, the navigation planner is a graph search algorithm, finding the optimal path through the navigation map.
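A graph-based planner of this kind boils down to a shortest-path search over a directed graph, where a one-way edge is simply an edge without a reverse entry. Below is a minimal Dijkstra search over a hypothetical four-node map; node names and costs are illustrative.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a directed graph: {node: [(neighbour, cost), ...]}."""
    queue = [(0.0, start, [start])]          # priority queue ordered by path cost
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return None                              # goal unreachable

graph = {
    "A": [("B", 2.0)],
    "B": [("A", 2.0), ("C", 3.0)],           # A <-> B is two-way
    "C": [("D", 1.0)],                       # C -> D is one-way: no edge back
    "D": [],
}
cost, path = shortest_path(graph, "A", "D")
# path == ["A", "B", "C", "D"], cost == 6.0
```

Directed nodes, as in the demonstration above, can be handled in the same framework by making each (node, orientation) pair its own graph vertex.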

Local control

Finally, the local control of the AMR makes sure that the robot follows the path that the navigation system calculated. Even this system can be more advanced than a similar system in a traditional AGV. Whereas the traditional AGV can only look down and find its lateral offset from the line at its current position, the AMR can look ahead. This allows for much better tracking behavior. 

Modern localization systems also allow more flexibility in the AMR design, because the position sensor does not need to be at a specific position on the AMR, whereas on an AGV, it always needs to be in front of the (rear) axle. The AMR can, in fact, be omnidirectional: it's capable of moving in any direction without turning.

This simulated robot should follow the green line. If it just drives straight, it will approach its reference path slowly. If it turns towards the line, it approaches more quickly, but its orientation will be further off. The application determines what is desirable behavior.

This freedom does mean that the local control software is more complex. It needs to optimize between aligning the vehicle to the path and steering towards it. In the case of omnidirectionality, it may also switch states between moving in the longitudinal, lateral, or rotational direction. The local controller typically also handles avoiding small obstacles on the navigation path. This feature increases the complexity of the optimization problem even more. If the local controller cannot handle the encountered obstacles, it may have to return to the navigation system to come up with an alternative route.
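The lookahead idea behind such a controller can be shown in a few lines. This sketch, in the style of a pure pursuit controller, steers towards a point a fixed distance ahead on the path rather than reacting only to the lateral error at the current position; the lookahead distance and scenario are illustrative.

```python
import math

LOOKAHEAD = 1.0  # metres; larger values give smoother but less tight tracking

def steering_heading(pose, path):
    """Heading towards the first path point at least LOOKAHEAD away from the robot."""
    x, y, _theta = pose
    for px, py in path:
        if math.hypot(px - x, py - y) >= LOOKAHEAD:
            return math.atan2(py - y, px - x)
    px, py = path[-1]                        # near the end of the path: aim at the goal
    return math.atan2(py - y, px - x)

# A robot 0.5 m to the left of a straight path along the x-axis:
path = [(i * 0.25, 0.0) for i in range(40)]
heading = steering_heading((0.0, 0.5, 0.0), path)
# the heading points forward and towards the path, i.e. slightly negative
```

Tuning the lookahead distance trades exactly the two behaviors in the simulation above: a short lookahead approaches the line quickly but with a large orientation error, a long one converges gently.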


Automated vehicles have developed a lot since the introduction of the AGV. Modern AMRs are a lot more flexible in many ways. Robots can be applied in more dynamic environments, handling disturbances autonomously. They can be used in more applications, such as inspection or order picking, due to their capability of free navigation. And they can be reconfigured much more quickly when the area of operation is rearranged as infrastructure is cheaper to install, or not even required at all. 

This flexibility does mean that the individual vehicle has become a complex machine requiring specific expertise to design and build. The market for Autonomous Mobile Robots is growing, so bringing an AMR to your market segment might be a valuable investment. Building up that expertise from scratch and reinventing the wheel will cost a lot of resources, so collaboration is essential. But with the right partners and suppliers, it's possible to develop a successful product within just a few months!