Mobile Robot Localization

An Interactive Explanation

Autonomous mobile robots need to know where they are in the environment they operate in to perform tasks accurately. The problem of figuring out where the robot is in the environment is called mobile robot localization. At first glance, using a GPS receiver might seem like an easy solution to the localization problem, but 1) GPS is not always available, since the robot could be operating indoors, or occluded by buildings outdoors; and 2) even when available, GPS is not sufficiently accurate for precise navigation planning. Instead, the robot must rely on its on-board sensors, such as LIDAR, vision cameras, and wheel odometers, to solve the localization problem.

If sensing and robotic movement were perfect, localization would be an easy problem. In the real world, however, many small errors in sensing and movement lead to uncertainty about the true position of the robot. To overcome this uncertainty and estimate the true position of the robot, we need to be able to quantify the error and account for it.

On this page, we first discuss the sources of uncertainty in robotic movement and sensing through illustrative examples, and then use these examples to build up to a popular algorithm for mobile robot localization called Monte Carlo Localization, or particle filtering.

Uncertainty Arising From Motion

Uncertainty in a robot’s motion arises from many factors, including differences in wheel diameter, tire inflation, friction with the ground, and uneven terrain. These factors cause errors in the execution of planned motions. This demo simulates the errors in executing motion commands on a real-world robot. The blue line represents the planned motion, while the red line represents the robot's actual motion.


The errors in execution of motion commands can be broken down into four components:
E1

When the robot is commanded to rotate by a certain angle, the true rotation of the robot might differ from what was commanded.

E2

When the robot is commanded to move straight, the robot may not drive straight, and instead turn by some amount while traversing.

E3

When the robot is commanded to move straight by a certain distance, the robot may not move by that same distance.

E4

When the robot is commanded to rotate by a certain angle, the robot might, in addition to turning, skid sideways or forward. This type of error is significant on robots with tank treads and multiple wheels with skid-steering.
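To make these error types concrete, the sketch below simulates a single rotate-then-translate motion command with all four error types added as zero-mean Gaussian noise whose magnitude scales with the size of the commanded motion. This is a minimal illustration, not the exact model used in the demos; the function name move and the parameters e1 through e4 (one per error type) are our own.

    import math
    import random

    def move(x, y, theta, rotation, distance,
             e1=0.05, e2=0.02, e3=0.05, e4=0.01):
        # Simulate one noisy rotate-then-translate command. The noise on
        # each component scales with the size of the commanded motion.
        # E1: the true rotation differs from the commanded rotation.
        theta += rotation + random.gauss(0.0, e1 * abs(rotation))
        # E4: a simple stand-in for skid: slip sideways while rotating.
        skid = random.gauss(0.0, e4 * abs(rotation))
        x += skid * math.cos(theta + math.pi / 2)
        y += skid * math.sin(theta + math.pi / 2)
        # E2: drift: turn by a small random amount while driving straight.
        theta += random.gauss(0.0, e2 * distance)
        # E3: the true distance traveled differs from the commanded distance.
        d = distance + random.gauss(0.0, e3 * distance)
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        return x, y, theta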

In the demos below, you can observe the impact of each type of error on the overall executed path of the robot. Try adjusting the slider to control the magnitude of the noise, and observe its impact on the path followed by the robot.



While the demos above simulated a single outcome at a time, we can also simulate many possible outcomes concurrently. The demos below simulate each of the four types of motion error over many possible outcomes, demonstrating the distribution of the robot’s possible locations over time.

Note how the errors accumulate over time: the possible outcomes of the robot spread apart, but in a different manner for each type of motion error.
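Reproducing this kind of multi-outcome simulation in code is just a matter of running the noisy motion model many times from the same starting pose. A minimal sketch, reusing the hypothetical move() function from the sketch above:

    import math

    # Each command is a (rotation, distance) pair: rotate, then drive straight.
    commands = [(0.0, 1.0), (math.pi / 2, 1.0), (0.0, 1.0)]

    poses = [(0.0, 0.0, 0.0)] * 500  # 500 hypotheses, all starting at the origin
    for rotation, distance in commands:
        poses = [move(x, y, th, rotation, distance) for (x, y, th) in poses]

    # The spread of the final poses shows how the motion error accumulates.
    xs = [p[0] for p in poses]
    ys = [p[1] for p in poses]
    print("mean x: %.2f m, mean y: %.2f m" % (sum(xs) / len(xs), sum(ys) / len(ys)))

Because each hypothesis samples its noise independently, the cloud of final poses spreads apart over time, just as in the demos.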

Simulating A Robot’s Motion On A Map

We can put these motion models together to create a simulation of a robot’s motion. In this demo, a robot is asked to follow a fixed trajectory (shown as the green path) in an indoor office building. The blue outlines are simulated outcomes of the robot’s location over time. Note how the simulated outcomes diverge over time. You may adjust the magnitude of each type of motion error, and see how it affects the outcomes.

In the demos above, we have seen that while the actual motion may not match the planned motion, we can simulate many possible outcomes. We do not know the exact location of the robot, but we can use the distribution of the possible outcomes, along with additional information, to estimate the most likely outcomes. This extra information comes in the form of sensor readings.

Uncertainty in Sensor Readings

Sensing and Laser Range Finders

There is a wide variety of sensors used by mobile robots. Cameras, SONAR, LIDAR, RADAR, and even Wi-Fi signals have been used to aid in mobile robot localization. One common sensor is LIDAR, which uses the round-trip time of laser beams to estimate the distance of obstacles from the sensor. Range finders on robots are usually laser-based, though SONAR is also sometimes used. Both are time-of-flight sensors, which operate by sending out a signal (a laser beam or sonar pulse) and recording how long it takes for the signal to bounce off an obstacle and return. Most of these sensors have a minimum and maximum effective range.

The laser range finders typically used on robots function by sweeping the laser beam in an arc and recording the distance at a fixed set of angles. Each reading provides a measurement of the closest obstacle in that direction. Using a map of the environment and a sweep of the laser range finder, it is possible to infer which locations on the map that reading could have come from. It is not possible to uniquely determine the location using only these sensor readings, as there are usually multiple locations that produce similar readings. For example, when traveling down the middle of a hallway, the readings from areas where there are no doors will be very similar.

In addition, there are errors in the readings given by the sensors. These errors occur for a variety of reasons, including environmental effects, errors in the timing signal, and the optical properties of the obstacle. In order to use the noisy sensor readings to localize the robot, we need to understand the properties of the noise.

The demo below simulates multiple readings for a single beam of a laser range finder. The graph shown is a histogram of the number of times the sensor gives a particular reading. The actual obstacle is 10 meters away. Note that even though the readings are noisy, the average reading tends towards the true distance. Using the GUI below, you can change the magnitude of the sensor noise and view its impact on the readings. You can also modify the number of observations that are shown in the plot.

Despite the errors in individual readings of the sensor, the average error over a large number of readings tends to zero as the number of readings increases, and the distribution of the readings follows a known pattern, modeled by a normal distribution.
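As a rough sketch of this behavior, the snippet below draws simulated readings from a normal distribution centered on the true distance. The noise magnitude is an assumed value, analogous to the slider in the demo.

    import random

    TRUE_DISTANCE = 10.0   # meters; the actual obstacle distance in the demo
    NOISE_STD = 0.2        # meters; assumed sensor noise magnitude

    def read_range():
        # One simulated reading of a single beam: the true distance
        # plus zero-mean Gaussian noise.
        return random.gauss(TRUE_DISTANCE, NOISE_STD)

    for n in (10, 100, 10000):
        readings = [read_range() for _ in range(n)]
        mean = sum(readings) / n
        print("n=%5d  mean reading=%.3f m  (error %+.3f m)"
              % (n, mean, mean - TRUE_DISTANCE))

Running this shows the mean error shrinking toward zero as the number of readings grows.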

Simulating A Laser Rangefinder

The demo below demonstrates readings from a laser range finder on a map. Click and hold on a location on the map to set the location of the robot; the robot will be turned to face the location of your mouse when you release the button. You can click on the mini map to change the location of the window. The red lines are readings that returned a distance less than the maximum range. The gray lines are readings that returned the maximum distance. Note: when there is no obstacle closer to the robot than the maximum distance, the reading will be the maximum distance.

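One simple way to simulate such a sweep against a map in code is ray marching: step along each beam direction through an occupancy grid until an obstacle cell or the maximum range is reached. The sketch below does exactly that; the grid, resolution, step size, and range limit are all assumed values for illustration.

    import math

    MAX_RANGE = 10.0  # meters; readings at this value mean "no obstacle seen"
    STEP = 0.05       # meters; ray-marching step size
    RESOLUTION = 0.1  # meters per grid cell (assumed)

    # 1 = obstacle, 0 = free space (a tiny assumed map for illustration)
    GRID = [
        [1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 1, 1],
    ]

    def cast_ray(x, y, angle):
        # Return the distance to the first obstacle along the beam,
        # clamped to MAX_RANGE when no obstacle is hit.
        d = 0.0
        while d < MAX_RANGE:
            px = x + d * math.cos(angle)
            py = y + d * math.sin(angle)
            col, row = int(px / RESOLUTION), int(py / RESOLUTION)
            if not (0 <= row < len(GRID) and 0 <= col < len(GRID[0])):
                break  # left the map; treat as max range
            if GRID[row][col] == 1:
                return d
            d += STEP
        return MAX_RANGE

    def sweep(x, y, theta, n_beams=11, fov=math.pi):
        # Simulate a sweep of n_beams readings across the field of view.
        angles = [theta - fov / 2 + i * fov / (n_beams - 1)
                  for i in range(n_beams)]
        return [cast_ray(x, y, a) for a in angles]

    print(sweep(0.25, 0.2, 0.0))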

Particle Filter Localization

There are many algorithms designed to solve the problem of mobile robot localization. One such algorithm is known as Particle Filter Localization, or Monte Carlo Localization (MCL). Particle filters are a general class of filters that estimate a probability distribution by maintaining a number of hypotheses about the true state, called particles. For mobile robot localization, the goal is to determine the location of the robot given a map of the environment, the sensor readings, and optionally the commands sent to the robot or its odometry.

A particle filter implementation for mobile robot localization requires the following models:

  • A motion model that can generate a location given the preceding location and optionally a command or odometry reading.
  • A sensor model that calculates the probability of a sensor reading given a location and optionally a map.

For localization, the particles are possible locations of the robot, each paired with a weight that represents the likelihood of that location being the true location of the robot. In the prediction step, the particles are projected forward in time by the motion model. In the update step, the weights are updated using the sensor model.

Overview of particle filter localization at each time step:

  1. Predict each particle's location and direction using the motion model. The motion model simulates the uncertainties described above.
  2. Update each particle's weight using the sensor model. The weights reflect the likelihood of observing the current sensor reading at that particle's location.
  3. Resample the particles based on the updated weights. This prevents the divergence of the particles seen when simulating motion alone, and helps keep the filter from being dominated by particles that are very unlikely to reflect the true state.

The particles can then be used to compute various statistics about the state of the robot. For example, the mean location of all of the particles is often used as the believed location of the robot.
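Putting the pieces together, here is a minimal sketch of one MCL step, reusing the hypothetical move() and cast_ray() functions from the earlier snippets. The Gaussian beam model and weight-proportional resampling below are common choices, not the only ones.

    import math
    import random

    NOISE_STD = 0.2  # meters; assumed beam noise, as in the sensor demo

    def sensor_likelihood(particle, observed_ranges, beam_angles):
        # Likelihood (up to a constant) of the observed sweep at this pose,
        # assuming independent Gaussian noise on each beam.
        x, y, theta = particle
        w = 1.0
        for angle, observed in zip(beam_angles, observed_ranges):
            expected = cast_ray(x, y, theta + angle)  # from the earlier sketch
            w *= math.exp(-0.5 * ((observed - expected) / NOISE_STD) ** 2)
        return w

    def mcl_step(particles, rotation, distance, observed_ranges, beam_angles):
        # 1. Predict: push every particle through the noisy motion model.
        particles = [move(x, y, th, rotation, distance)
                     for (x, y, th) in particles]
        # 2. Update: weight each particle by the sensor model.
        weights = [sensor_likelihood(p, observed_ranges, beam_angles)
                   for p in particles]
        # 3. Resample: draw a new particle set in proportion to the weights.
        if sum(weights) == 0.0:
            weights = [1.0] * len(particles)  # degenerate case: fall back to uniform
        return random.choices(particles, weights=weights, k=len(particles))

    def mean_pose(particles):
        # The believed location of the robot: the mean particle position.
        n = len(particles)
        return (sum(p[0] for p in particles) / n,
                sum(p[1] for p in particles) / n)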

You can use the visualization below to explore particle filter localization. Use the sliders to change the parameters of the filter as well as the sensor readings.