Imperial College, London
Information Systems Engineering Year 2:
Surprise 1997

Mobile Robot Navigation

Final Report
Jonathan Dixon 
Oliver Henlich 
10 June 1997


Navigation of Mobile Robots is a broad topic, covering a large spectrum of different technologies and applications. It draws on some very ancient techniques, as well as some of the most advanced space science and engineering.

While the human-machine interface is not yet at a transparent level (with robots accepting and following spoken instructions; a problem in the realm of Natural Language Processing), the degree of autonomy available after a machine has been programmed is now approaching that once considered purely science fiction.

This document draws together, and builds upon, a lot of what is written in the authors' four preceding articles in Mobile Robot Navigation. For completeness, some sections of those articles have been included here; however, the reader is referred back to them for a more detailed analysis of some of the systems discussed here.

Macro- to Micro- Scale Navigation


"Mobile Robot Navigation" covers a large spectrum of different systems, requirements and solutions. The aim of the first half of this document is to study this range of requirements, identify the major areas within the scale which are most commonly used, and to discuss appropriate systems for meeting these requirements.

Physical Scales

The physical scale of a device's navigation requirements can be measured by the accuracy to which the mobile robot needs to navigate - this is the resolution of navigation. These requirements vary greatly with application; however, a first-order approximation of the accuracy required can be taken from the dimensions of the vehicle itself. Any autonomous device must be able to determine its position to a resolution within at least its own dimensions, in order to be able to navigate and interact with its environment correctly.

At the small end of the scale there are robots just a few centimetres in size, which will require high precision navigation over a small range (due to energy supply constraints), while operating in a relatively tame environment. At the other end of the scale there are Jumbo jet aircraft and ocean going liners, each with some sort of auto-pilot navigation, which requires accuracy to a number of metres (or tens of metres), over a huge (i.e. global) range, in somewhat more rugged conditions.

To help in categorising this scale of requirements, three terms are used:

  1. Global navigation - determining one's position in broad terms, relative to the environment (or the Earth) as a whole;
  2. Local navigation - determining one's position relative to objects (stationary or moving) in the immediate environment;
  3. Personal navigation - being aware of the positioning of the various parts which make up the robot, and of anything in direct contact with it.

With the jet auto-pilot example, Global navigation is the major requirement, for cruising between continents. Local navigation becomes necessary where the aircraft is expected to fly autonomously in crowded airways, or on approach to the runway on landing. Personal navigation is not an issue, as the vehicle is, fundamentally, one object, and should (hopefully) never come into contact with any other significant objects while under autonomous control.

The "micro" robot, on the other hand, is almost exclusively interested in Personal and Local navigation. Such devices are rarely concerned with their position globally, on any traditional geographic scale. Instead their requirements are far more task-based - they are concerned with their immediate environment, in particular relative to any objects relevant to the successful completion of their task. This involves Personal navigation, when the robot is in contact with other objects, and Local navigation for actual movement.

In general, the main foci of the scales of navigation are as follows:

Navigation Reference

Following on from, and going hand in hand with, the scale of navigation requirements is the frame of reference in which position fixing is performed. The two terms used here are, understandably, Absolute and Relative. What isn't so obvious, however, is what Relative is relative to, and where Absolute is absolute from; this is because these terms are somewhat context sensitive.

In terms of position fixing, absolute implies finding one's position relative to an absolute origin; a fixed stationary point common to all position fixes across the range of navigation. Hence in Global navigation, there should be one such point on the planet which all fixes are relative to. In Local navigation the absolute origin is some fixed point in the robot's environment, and in Personal navigation the origin can be viewed as the centre of the robot itself.

A Relative position fix when navigating Globally, taken relative to some other reference point (environment-relative), is analogous to the absolute position fix in Local navigation. Likewise, a position fix taken relative to the same robot's own position at some other point in time (self-relative), is like the personal absolute position fix. Through knowledge of the absolute reference frame (typically using a map), absolute position fixes in one navigation domain can be transformed into position fixes in another. Indeed, almost all global absolute position fixing is carried out by finding either an environment- or a self- relative position fix, and then converting this into a global position (see Beacon Navigation and Dead-Reckoning respectively).
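The conversion between reference frames mentioned above amounts to a coordinate transformation. A minimal two-dimensional sketch (illustrative only; the function name is our own, and the pose of the local frame is assumed to be known, e.g. from a map):

```python
import math

def local_to_global(local_xy, frame_origin, frame_heading):
    # Transform an environment-relative (local) position fix into the global
    # frame, given the global position and heading of the local frame's origin.
    lx, ly = local_xy
    ox, oy = frame_origin
    c, s = math.cos(frame_heading), math.sin(frame_heading)
    # Rotate the local fix by the frame heading, then translate by the origin.
    return (ox + c * lx - s * ly, oy + s * lx + c * ly)
```

The same rotate-and-translate step, applied with the robot's own pose as the frame, converts a self-relative fix into an environment-relative one.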

Two Contemporary Systems

The technology employed in mobile robot navigation is rapidly developing. Here two relatively modern systems are studied, satellite-based Global Positioning Systems and image-based Vision Positioning Systems, which have the common feature of being under continual development. Between the two, a large range of navigational requirements can be met.

Summary of GPS

In 1973 the American Defence Navigation Satellite System was formed, as a joint service between the US Navy and Air Force, along with other departments including the Department of Transport, with the aim of developing a highly precise satellite-based navigation system - the Global Positioning System. In the 24 years since conception, GPS has established itself firmly in many military and civilian uses across the world; here it will be considered in the context of a device for navigating mobile robots.

When GPS was released by the US DoD (Department of Defence), it superseded several other systems, however it was designed to have limited accuracy available to non-military (US) users. Several methods of improving the performance have been developed as a result of this, which greatly increase the usefulness of the system for robots.

Further reading on the technical aspects of GPS is given in the appendix.

GPS Segments

There are three segments which make up the GPS system.

The space segment of GPS is 24 satellites (or Space Vehicles - SVs) in orbit about the planet at a height of approximately 20 200 km, such that generally at least four SVs are viewable from the surface of the Earth at any time. This allows the instantaneous user position to be determined, at any time, by measuring the time delay in a radio signal broadcast from each satellite, and using this and the speed of propagation to calculate the distance to the satellite (the pseudo-range). As a 'rule-of-thumb', one individual satellite needs to be received for each dimension of the user's position that needs to be calculated. This suggests 3 satellites are necessary for a position fix of the general user (for the x, y, and z dimensions of the receiver's position), however, the user rarely knows the exact time which they are receiving at - hence 4 satellite pseudo-ranges are required to calculate these 4 unknowns.
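The solution of the four pseudo-range equations for the four unknowns can be sketched numerically. The following Python fragment is illustrative only (the satellite coordinates, receiver position and clock bias in the comments are made-up example values, and production receivers use considerably refined algorithms); it applies Newton iteration to the equations rho_i = |s_i - p| + b:

```python
import math

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small n x n system.
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudo_ranges, iterations=20):
    # Solve for receiver position (x, y, z) and clock bias b, all in metres
    # (the bias is the receiver clock error multiplied by the speed of
    # light), from exactly four satellite positions and pseudo-ranges.
    x = y = z = b = 0.0
    for _ in range(iterations):
        H, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudo_ranges):
            d = math.sqrt((sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2)
            resid.append(rho - (d + b))
            # Row of partial derivatives of the predicted pseudo-range
            # with respect to (x, y, z, b).
            H.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
        dx = solve_linear(H, resid)  # exactly determined with 4 satellites
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b
```

With more than four satellites in view, the same linearisation is solved in a least-squares sense instead.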

The satellite data is monitored and controlled by the GPS ground segment - stations positioned globally to ensure the correct operation of the system.

The user segment is the mobile user and their GPS reception equipment. These have advanced considerably in recent years, to allow faster and more accurate processing of received data. They typically contain pre-amplification, an analogue-to-digital converter, between 5 and 12 digital signal processor (DSP) channels (each one able to track a separate satellite transmission), and a processor for navigational data. Other elements that might be incorporated are differential GPS receiver/processing capability, received-phase information processing, and reception capability for the second (L2) GPS frequency.

Uses of GPS

GPS provides an accuracy of 100 m (95 % of the time) to Standard Positioning Service (SPS) users, due to the Selective Availability (S/A) errors introduced intentionally by the US military, for defence reasons. This can be improved to about 15 m (95 %) for authorised Precision Positioning Service (PPS) users [Kaplan, 1996]. The SPS accuracy is not good enough to be individually useful for mobile robot navigation. However, when augmented by the benefits of Differential techniques, GPS does become a viable method for global reference navigation.

The DGPS system operates by having reference stations receive the satellite broadcast GPS signal at a known site, and then transmit a correction according to the error in received signal, to mobile GPS users. So long as the mobile user is in the proximity of the stationary site, they will experience similar errors, and hence require similar corrections. Typical DGPS accuracy is around 4 to 6 m, with better performance seen as the distance between user and beacon site decreases.
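The arithmetic of this correction scheme is simple enough to sketch. In the following Python fragment (illustrative only; the error magnitude is a made-up example value, and real DGPS corrections are per-satellite range and range-rate messages), a surveyed reference station computes the difference between the known true range to each satellite and its measured pseudo-range, and a nearby mobile user adds those corrections to its own measurements:

```python
import math

def dgps_corrections(ref_pos, sat_positions, measured_ranges):
    # At a surveyed reference site the true geometric range to each
    # satellite is known, so the error in each measured pseudo-range
    # can be broadcast as a per-satellite correction.
    return [math.dist(ref_pos, sat) - rho
            for sat, rho in zip(sat_positions, measured_ranges)]

def apply_corrections(measured_ranges, corrections):
    # A mobile user near the reference site sees similar errors, so
    # adding the broadcast corrections largely cancels them.
    return [rho + c for rho, c in zip(measured_ranges, corrections)]
```

The cancellation is only as good as the assumption that the two receivers share the same error, which is why accuracy degrades with distance from the reference station.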

DGPS provides the resolution necessary for most Global scale navigation purposes, as well as often being useful at the Local scale. There are a few restrictions on the situations where it can be used, however; the following problems can greatly reduce DGPS (or GPS) usability,

In situations where the above are only an occasional problem (e.g. a robot which operates outdoors as well as indoors), combining DGPS with other navigation technologies can prove very effective.

Another common marriage of technologies uses (D)GPS for the global level navigation, then other systems for precision local navigation. A good example of this is the UK Robotics Road Robot, an autonomous construction device built after the lack of automation in this area was noticed [Dolton, 1997]. This incorporates the Atlas navigation control unit, which initially finds its coarse (global) location using GPS, after which it uses laser trilateration to navigate (locally) while carrying out its task. This was found to produce reliable autonomous operation in testing.

GPS Receivers

When considering positioning systems for mobile robots, there are a large number of variables that must be considered. A great number of modern commercial GPS receivers and OEM development modules come with differential correction capability; almost all follow the RTCM-104 standard for interfacing with the DGPS receiver and network [RTCM, 1994]. This allows a GPS receiver to be chosen according to requirements, and DGPS correction signals from any appropriate source to be used by connecting the relevant DGPS receiver.

An example of a commercial DGPS receiver is the Communication System International CSI SBX-1. This is a so-called OEM module, designed to be integrated into another manufacturer's system: ideal for mobile robot construction. It is rated at less than 1 W at 5 VDC, and has a footprint of 10 cm2. Coupled with a suitable GPS receiver (typically having somewhat higher requirements; e.g. 10 W, 20 cm2 footprint) this would provide a good basis for mobile position fixes.

Summary of Vision-based positioning

Vision based positioning or localisation uses the same basic principles of landmark-based and map-based positioning but relies on optical sensors rather than ultrasound, dead-reckoning and inertial sensors.

The most common optical sensors include laser-based range finders and photometric cameras using CCD arrays. However, due to the volume of information they provide, extraction of visual features for positioning is far from straightforward. Many techniques have been suggested for localisation using vision information, the main components of which are listed below:

The environment is perceived in the form of geometric information such as landmarks, object models and maps in two or three dimensions. Localisation then depends on two inter-related considerations: how the environment is represented, and how sensed visual features are matched against that representation. Although it seems a good idea to combine vision-based techniques with methods using dead-reckoning, inertial sensors, and ultrasonic and laser-based sensors, applications under realistic conditions are still scarce.

Clearly vision-based positioning is directly related to most computer vision methods, especially object recognition. So as research in this area progresses, the results can be applied to vision-based positioning.

Real-world applications envisaged in most current research projects demand very detailed sensor information to provide the robot with good environment-interaction capabilities. Visual sensing can provide the robot with an incredible amount of information about its environment. Visual sensors are potentially the most powerful source of information among all the sensors used on robots to date. Hence, at present, it seems that high resolution optical sensors hold the greatest promise for mobile robot positioning and navigation.

Please refer to Vision-Based Positioning for further information on issues and techniques mentioned above.

Other Technologies

Sensors for Dead-Reckoning

Dead Reckoning (derived from "deduced reckoning" from sailing) is a simple mathematical procedure for determining the present location of a vehicle by advancing some previous position through known course and velocity information over a given length of time.

At present, the vast majority of land-based mobile robots rely on dead reckoning to form the backbone of their navigation strategy. They use other navigation aids to eliminate accumulated errors.
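The basic dead-reckoning update can be sketched in a few lines. This is an illustrative Python fragment only (the function name and the example speed and heading are our own, and real systems integrate at a much higher rate and in three dimensions where necessary):

```python
import math

def dead_reckon(x, y, speed, heading, dt):
    # Advance a previous position (x, y) through known course and
    # velocity information over a time interval dt.
    # heading is in radians, measured anticlockwise from the x-axis.
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt)
```

Because each update builds on the previous estimate, any error in speed or heading is integrated into the position and grows without bound unless corrected by an external fix.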

Odometric Sensors

The simplest form of dead reckoning is often termed odometry. This implies that the vehicle displacement along the path of travel is directly derived from some on-board "odometer." A common means of odometric measurement involves optical encoders directly coupled to wheel axles.

Since a large majority of mobile robots rely on motion by means of wheels or tracks, a basic understanding of sensors that accurately quantify angular position and velocity is an important prerequisite for dead reckoning using odometry.
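For a differential-drive vehicle, the conversion from encoder counts to a pose update can be sketched as follows. This Python fragment is illustrative only: the 512-count encoder, 5 cm wheel radius and 30 cm wheel base are assumed example values, and the simple midpoint approximation below ignores wheel slippage entirely:

```python
import math

def odometry_update(x, y, theta, ticks_left, ticks_right,
                    ticks_per_rev=512, wheel_radius=0.05, wheel_base=0.30):
    # Update a differential-drive pose (x, y, theta) from incremental
    # encoder tick counts since the last update.
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    dl = ticks_left * per_tick      # distance travelled by left wheel
    dr = ticks_right * per_tick     # distance travelled by right wheel
    d = (dl + dr) / 2.0             # travel of the vehicle centre point
    dtheta = (dr - dl) / wheel_base # change in heading
    # Advance along the mean heading over the interval.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Errors in the assumed wheel radius or wheel base act as systematic odometry errors, of the kind discussed later in this report.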

Some of the common rotational displacement and velocity sensors in use today are given below:

The most popular type of rotary encoder for mobile robots is the incremental or absolute optical encoder.

Optical Encoders

At the heart of an optical encoder today lies a miniature version of the break-beam proximity sensor. Here, a focused beam of light aimed at a matched photodetector (e.g. a slotted opto-switch) is periodically interrupted by a coded opaque/transparent pattern on a rotating intermediate disk attached to the shaft of interest. The advantage of this encoding scheme is that the output is inherently digital, which results in a low-cost, reliable package with good noise immunity.

As mentioned above, there are two basic types of optical encoders. The incremental version measures rotational velocity and can infer relative position. The absolute model on the other hand, measures angular position directly and can infer velocity. If non-volatile position information is not a requirement, incremental encoders are usually chosen on grounds of lower cost and simpler interfacing compared to absolute encoders.

Incremental Optical Encoders
The single-channel tachometer encoder is the simplest type of incremental encoder. It is basically a mechanical light chopper that produces a certain number of pulses per shaft revolution. Increasing the number of pulses per revolution increases the resolution (and cost) of the encoder. These devices are especially suited as velocity feedback sensors in medium- to high-speed control systems. However, they run into noise and stability problems at very slow velocities due to quantization errors. In addition to these instabilities, the single-channel tachometer encoder is incapable of detecting the direction of rotation and can thus not be used as a position sensor.

To overcome the problems mentioned above, a slightly improved version of the encoder, called the phase-quadrature incremental encoder, is used. The modification is that a second channel, displaced from the first, is added. This results in a second pulse train which is 90 degrees out of phase with the first pulse train. Now, decoding electronics can determine which channel is leading the other and hence determine the direction of rotation, with the added benefit of increased resolution.

Since the output signal of these encoders is incremental in nature, any resolution of angular position can only be relative to some specific reference, as opposed to absolute. For applications involving continuous 360-degree rotation, such a reference is provided by a third channel as a special index output that goes high once for each revolution of the shaft. Intermediate positions are then specified as a displacement from the index position. For applications with limited rotation, such as back-and-forth motion of a pan axis, electrical limit switches can be used to establish a home reference position. To improve repeatability, this homing action is often broken into two steps. The axis is rotated at reduced speed in the appropriate direction until the stop mechanism is encountered. Rotation is then reversed for a short predefined interval, after which the axis is slowly rotated back to the stop position from this known start point. This usually eliminates inertial loading that could influence the final homing position. (This two-step approach can be observed in the power-on initialisation of stepper-motor positioners in dot-matrix printer heads.)

Interfacing an incremental encoder to a computer is not a trivial task. A simple state-based interface is inaccurate if the encoder changes direction at certain positions, and false pulses can result from the interpretation of the sequence of state changes.

A very popular and versatile encoder interface is the HCTL 1100 motion controller chip made by Hewlett Packard. It performs accurate quadrature decoding of the incremental wheel encoder output and provides important additional functions such as:

At a cost of only $40 it is a good candidate for giving mobile robots navigation and positioning capabilities.
Absolute Optical Encoders
These are typically used for slower rotational applications that do not tolerate loss of positional information when there is a temporary power interruption. They are best suited for slow and/or infrequent rotations such as steering angle encoding, as opposed to measuring high-speed continuous rotation required for calculating displacement along the path of travel.

Discrete elements in a photovoltaic array are individually aligned in break-beam fashion with concentric encoder tracks, creating, in effect, a non-contact implementation of a commutating brush encoder. Having a dedicated track for each bit of resolution results in a larger disk (relative to incremental designs), with a corresponding decrease in shock and vibration tolerance. Very roughly, each additional encoder track doubles the resolution and quadruples the cost.

Instead of the serial bit streams of incremental designs, absolute encoders provide a parallel word output with a unique code pattern for each quantized shaft position. The most common coding scheme is the Gray code. This code is characterised by the fact that only one bit changes at a time, thus eliminating the majority of asynchronous ambiguities caused by electronic and mechanical component tolerances.
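The conversion between Gray code and plain binary is a simple XOR cascade, sketched below in Python (illustrative; in an encoder interface this is typically done in logic rather than software):

```python
def binary_to_gray(b):
    # Adjacent values differ in exactly one bit.
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Undo the XOR cascade from the most significant bit downwards.
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g
```

Because only one bit changes between adjacent shaft positions, a read taken exactly on a code boundary can be wrong by at most one count, rather than by an arbitrary amount as with natural binary.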

A potential disadvantage of absolute encoders is their parallel data output, which requires more complex interface due to the large number of electrical leads.

Doppler Sensors

The rotational displacement sensors discussed above derive navigation data directly from wheel rotation. This means that they are inherently subject to problems arising from wheel slippage, tread wear, and/or improper tire inflation. Doppler and inertial navigation techniques are sometimes employed to reduce the effects of such error sources.

The principle of operation is based on the Doppler shift in frequency observed when radiated energy reflects off a surface that is moving with respect to the emitter.

Most implementations used for robots employ a single forward-looking transducer to measure ground speed in the direction of travel. An example of this is taken from the agricultural industry, where wheel slippage in soft freshly plowed fields can seriously interfere with the need to release seed at a rate proportional to vehicle advance.

A typical implementation uses a microwave radar sensor which is aimed downward (usually 45 degrees) to sense ground movement as shown in the figure below.
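The relation between the measured Doppler shift and the true ground speed for such a tilted sensor is v = c * fD / (2 * f0 * cos(a)), where fD is the observed frequency shift, f0 the transmitted frequency, c the propagation speed and a the depression angle. A minimal Python sketch (illustrative; the function name is our own and the 45-degree default simply matches the geometry described above):

```python
import math

def doppler_ground_speed(f_doppler, f_transmit, angle_deg=45.0,
                         c=299792458.0):
    # Recover true ground speed from the Doppler shift of a microwave
    # sensor aimed downwards at angle_deg to the horizontal.  The factor
    # of 2 arises because both transmitter and receiver move relative
    # to the reflecting ground.
    return c * f_doppler / (2.0 * f_transmit *
                            math.cos(math.radians(angle_deg)))
```

Note that any vertical motion of the vehicle adds an unmodelled radial velocity component, which is exactly the error source described in the next paragraph.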

Errors in detecting true ground speed can arise from vertical velocity components introduced by vehicle reaction to the ground surface and uncertainties in the angle of incidence. An interesting scenario resulting in erroneous operation would involve a stationary vehicle parked over a stream of water.

Accelerometers and Gyroscopes

Fundamentally, gyroscopes provide angular rate and accelerometers provide velocity rate information. Dynamic information is provided through direct measurements.

For accelerometers, there is a very poor signal-to-noise ratio at lower accelerations (i.e. during low-speed turns). They also suffer from extensive drift, and they are sensitive to uneven ground, because any disturbance from a perfectly horizontal position will cause the sensor to detect a component of the gravitational acceleration g. Even tilt-compensated systems indicate a position drift rate of 1 to 8 cm/s, depending on the frequency of acceleration changes. This is an unacceptable error rate for most mobile robot applications.

The main problem with gyroscopes is that they are usually very expensive (if they are to be useful for navigation) and they need to be mounted on a very stable platform.

Active Beacons

Navigation using active beacons has been with us for many centuries. Using the stars for navigation is one of the oldest examples of global referenced navigation; technology has brought forward many other systems, such as lighthouses and, more recently, radio navigation.

Types of Beacons

For mobile robotics, laser, sonar and radio (or microwave) are common media for navigational beacons. Because of this, most methods are "line-of-sight" dependent: there must be no obstructions between the mobile user and the beacons. This means they have a limited range of use for a given number of beacons, and are mostly suited for Local area navigation. However, they might well provide position fixes in a global frame of reference, by transforming the user's position according to the relation between the local and global reference frames. The (longer wavelength) radio beacon systems tend to be more useful for global scale navigation, due to the greater propagation distances available, and less suitable for indoor use, where multipath reflections from walls can introduce large inaccuracies.

There are two principal methods for determining the user's position: triangulation, in which bearings to the beacons are measured, and trilateration, in which ranges (distances) to the beacons are measured.

Almost all electronic navigation beacon systems use trilateration, as it is generally possible to measure time delays (and hence distances) more accurately than incident angles.

Most beacon systems can be sub-categorised into one of the following transmission schemes:

  1. Scanning detectors with fixed active transmitting beacons,
  2. Rotating emitters with fixed receiving beacons,
  3. Scanning emitter/detectors with passive reflective beacons,
  4. Scanning emitter/detectors with active receiver/transmitter beacons.
Radio Beacons
Global Scale Radio beacon systems (e.g. Omega, Loran, as well as GPS [Dixon, 1997]) tend to use the first of these schemes, as it allows an unlimited number of users with a finite number of beacons. All transmitters are synchronised so that they transmit a continuous wave in phase; the receiver, however, is not synchronised to this. This means the user can only measure differences in the time taken for signals to arrive from the various transmitters, not the absolute time. To calculate their position, the user finds the intersection of hyperbolic lines-of-position constructed using the difference in phase of signals received from two pairs of continuously broadcasting transmitters [Tetley, 1991].

Available commercially are more localised beacon systems, which may use either scheme 1, 2, or 4. The first two allow many users in one area; the first being more suitable for autonomous mobile robot control, as the position information is calculated at the mobile end. The second is more suited to tracking applications, such as motor cars around a racetrack. With scheme 4 the round-trip propagation delay from user to beacon and back to user (or vice-versa; generally in position monitoring rather than navigation situations) is measured - analogously to radar operation - to determine range. Using this exact range data it is then simple to calculate position, by the intersection of circles around, ideally, at least three beacons.
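The intersection-of-circles calculation can be sketched directly. Subtracting the circle equation of the first beacon from those of the other two leaves a linear 2x2 system in (x, y); the Python fragment below is an illustrative noise-free sketch (with measurement noise, more beacons and a least-squares solution would be used):

```python
def trilaterate(beacons, ranges):
    # 2-D position from ranges to three fixed beacons at known positions.
    # Each beacon i gives a circle (x - xi)^2 + (y - yi)^2 = ri^2;
    # differencing circles 2 and 3 against circle 1 cancels the
    # quadratic terms, leaving two linear equations in x and y.
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # zero if the beacons are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The determinant check reflects the geometric requirement that the three beacons must not lie on one straight line.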

Example Beacon geometry with (a) two beacons, (b) three beacons.
Possible user positions at the intersect of circles when range to (a) two, and (b) three, transmitters is known.

Ultrasonic Beacons
Ultrasonic systems generally use one of the first two schemes, again, the first allows many more robots to operate concurrently.

Ultrasonics are frequently used in underwater situations, as sound has a much higher velocity there (about 1500 m/s, c.f. 330 m/s in air). Also, it is possible to measure the incident angle of received signals much more accurately, allowing triangulation methods to be employed [Larcombe, 1994].

Ultrasonics are widely used for proximity detection (see the next section). Occasionally it is possible to combine the two, by introducing distinctive passive sonic beacons with unique reflection properties. By using trilateration against these beacons, a mobile robot can perform an absolute position fix, as well as finding its position relative to any non-unique objects in the vicinity.

Optical Beacons
Optical and laser systems often use scheme 3, with passive retroreflective beacons, as these are very cheap. Laser energy is really the only form of transmission that can usefully be reflected without some form of intermediate amplification.

A good number of successful laser navigation systems have been demonstrated; an early example was the mobile Hilare robot, developed at the Laboratoire d'Automatique et d'Analyse des Systemes, France [Borenstein et al., 1996]. This used groups of retroreflective beacons, arranged in recognisable configurations to minimise errors from reflections off other surfaces. Two rotating laser heads then scanned the area to determine the bearing to these beacons.

Other laser-based methods include passive receiver beacons (scheme 2), demonstrated by Premi and Besant at the Imperial College of Science and Technology, London. Here a rotating vehicle-mounted laser beam creates a plane which intersects three fixed-location reference receivers. These then use an FM data link to relay the time of arrival of the laser energy back to the mobile vehicle, so that it can determine the distance to each beacon individually. A similar system is now commercially available from MTI Research, Inc., Chelmsford, MA. This "Computerised Opto-electrical Navigation and Control" (CONAC) system has proven to be a relatively low-cost, high-precision positioning system, working at high speeds (25 Hz refresh) with an accuracy of a few centimetres [MTI].

Environment Ranging Sensors

Most sensors used for the purpose of map building involve some kind of distance measurement. Below are the three distinct approaches to measuring range:

Time of Flight Range Sensors

The measured pulses used in TOF systems typically come from an ultrasonic, RF or optical energy source. The parameters required to calculate range are simply the speed of sound in air or the speed of light. The measured time of flight is representative of travelling twice the separation distance and must therefore be halved in order to give the actual target range.
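The range calculation itself is a one-line halving of the round-trip distance. A Python sketch (illustrative; the default propagation speed is the approximate speed of sound in air used elsewhere in this report, and a real system would compensate it for temperature and humidity):

```python
def tof_range(round_trip_time, speed=330.0):
    # Range from a measured time of flight, in seconds.  The measured
    # time covers the out-and-back path, so it is halved.  Pass
    # speed=299792458.0 for optical/RF systems, or about 1500.0 for
    # underwater acoustic systems.
    return speed * round_trip_time / 2.0
```

Plugging in a 2 ns timing uncertainty at the speed of light reproduces the roughly 30 cm resolution figure quoted below for light-based systems.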

The advantages of TOF systems arise from the direct nature of their straight-line active sensing. The returned signal follows essentially the same path back to a receiver located in close proximity to the transmitter. The absolute range to an observed point is directly available as output with no complicated analysis requirements.

Potential error sources for TOF systems include the following:

Variation in propagation speed

This is particularly applicable to acoustically based systems, where the speed of sound is significantly influenced by temperature and humidity changes.

Detection uncertainties

This involves determining the exact time of arrival of the reflected pulse. Errors are caused by the wide dynamic range in returned signal strength, due to the varying reflectivity of target surfaces. These differences in returned signal intensity influence the rise time of the detected pulse, and in the case of fixed-threshold detection will cause the more reflective targets to appear closer.

Timing considerations

Due to the relatively slow speed of sound in air compared to light, acoustically-based systems make lower timing-precision demands than light-based systems and are less expensive as a result. TOF systems based on the speed of light require sub-nanosecond timing circuitry to measure distances with a resolution of about 30 cm (a resolution of 1 mm requires a timing precision of 3 picoseconds). This capability is very expensive to realise and may not be cost effective for most applications, particularly at close range where high accuracies are required.

Surface interaction

When light, sound or radio waves strike an object, any detected echo represents only a small portion of the original signal. The remaining energy is scattered or absorbed depending on surface characteristics and the angle of incidence of the beam. If the transmission source approach angle exceeds a certain critical value, the reflected energy will be deflected outside the sensing envelope of the receiver. In cluttered environments, sound waves can reflect from (multiple) objects and can then be received by other sensors ("crosstalk").

Ultrasonic TOF Systems
This is the most common technique employed on indoor mobile robots to date, which is primarily due to the ready availability of low cost systems and their ease of interface. Over the past decade, much research has been conducted investigating applicability in areas such as world modelling, collision avoidance, position estimation and motion detection. More recently, their effectiveness in exterior settings has been assessed. For example, BMW now incorporates four piezo-ceramic transducers on both front and rear bumpers in its Park Distance Control system.

Phase-Shift Measurement

Here a beam of amplitude-modulated laser, RF or acoustical energy is directed towards the target. A small portion of the wave (potentially up to six orders of magnitude less in amplitude) is reflected by the target's surface back to the detector along a direct path. The returned energy is compared to a simultaneously generated reference that has been split off from the original signal, and the relative phase shift between the two is measured as illustrated below.

The relative phase-shift expressed as a function of distance to the reflecting target surface is:

    φ = 4πd / λ

where φ is the measured phase shift, d is the distance to the target, and λ is the wavelength of the modulating signal (the factor of two relative to a one-way path arises because the energy travels out and back).

For square-wave modulation at the relatively low frequencies of ultrasonic systems (20 to 200 kHz), the phase difference between incoming and outgoing waveforms can be measured with the simple linear circuit shown below. The output of the exclusive-or gate goes high whenever its inputs are at opposite logic levels, generating a voltage across the capacitor that is proportional to the phase-shift.
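Inverting the round-trip relation φ = 4πd/λ gives the range directly from the measured phase. A Python sketch (illustrative only; the function name is our own, the default speed is the approximate speed of sound in air used elsewhere in this report, and the result is ambiguous modulo half a modulation wavelength):

```python
import math

def range_from_phase(phase, mod_freq, speed=330.0):
    # Distance from the measured phase shift (radians) of an
    # amplitude-modulated signal.  The round trip introduces
    # phi = 4 * pi * d / lambda, so d = phi * lambda / (4 * pi).
    # Unambiguous only while d < lambda / 2 (i.e. phi < 2 * pi).
    wavelength = speed / mod_freq
    return phase * wavelength / (4.0 * math.pi)
```

The half-wavelength ambiguity is why practical continuous-wave rangefinders either restrict their working range or modulate at several frequencies and combine the results.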

Advantages of continuous-wave systems over pulsed time of flight methods include the ability to measure the direction and velocity of a moving target in addition to its range (using the Doppler effect). Range accuracies of laser-based continuous-wave systems approach those of pulsed laser TOF methods. Only a slight advantage is gained over pulsed TOF range finding however, since the time-measurement problem is replaced by the need for sophisticated phase-measurement electronics.

Algorithms and Methods for Navigation


Odometry provides good short-term accuracy, is inexpensive and allows very high sampling rates. However, the fundamental idea of odometry is the integration of incremental motion information over time, which leads inevitably to the accumulation of errors. In particular, the accumulation of orientation errors causes large position errors, which increase in proportion to the distance travelled by the robot. Nevertheless, it is widely accepted that odometry is a very important part of a robot navigation system and that navigation tasks will be simplified if odometric accuracy can be improved.
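The dominance of orientation error is easy to demonstrate numerically. The sketch below (an idealised integration, not any particular robot's kinematics) adds a small uncorrected heading bias at every step:

```python
import math

def dead_reckon(steps, step_len, heading_bias_rad):
    """Integrate incremental motion with a small systematic heading
    error added at every step.  The lateral position error grows
    roughly in proportion to the distance travelled."""
    x = y = theta = 0.0
    for _ in range(steps):
        theta += heading_bias_rad        # uncorrected orientation drift
        x += step_len * math.cos(theta)  # integrate incremental motion
        y += step_len * math.sin(theta)
    return x, y

# Nominally a straight 10 m run along the x axis; a bias of only
# 0.01 degrees per 1 cm step leaves the estimate nearly a metre
# off to the side.
x, y = dead_reckon(1000, 0.01, math.radians(0.01))
```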

Below are some of the reasons why odometry is used for mobile robots:

Systematic and Non-Systematic Odometry Errors

Odometry is based on the assumption that wheel revolutions can be translated into linear displacement relative to the surface. This assumption is only of limited validity since displacement errors can easily be caused by wheel slippage. Below are a number of error sources grouped into two distinct categories:

Systematic Errors

Non-Systematic Errors

Making a clear distinction between these two error categories is of great importance for the effective reduction of odometry errors. On most smooth indoor surfaces systematic errors contribute much more to odometry errors than non-systematic errors. However, on rough surfaces with significant irregularities, non-systematic errors are dominant. The problem with non-systematic errors is that they appear unexpectedly and can hence cause large position errors.

Measurement of Odometry Errors

Measurement of Systematic Errors
One widely used but fundamentally unsuitable method is the Unidirectional Square-Path Test. To conduct this test, the robot must be programmed to traverse the four legs of a square path. The path will return the vehicle to the starting area but, because of odometry and controller errors, not precisely to the starting position. The systematic error is then calculated from the end position error. An improved version of this is the Bi-directional Square-Path Test which involves the robot travelling around a square path in both clockwise and anti-clockwise directions.
Measurement of Non-Systematic Errors
Due to the unpredictable nature of these errors, it is very difficult (perhaps impossible) to design a generally applicable quantitative test procedure for them. One proposed approach is to compare the susceptibility to non-systematic errors of different vehicles. This method uses the bi-directional square-path test but, in addition, introduces artificial bumps. The robot's susceptibility to non-systematic errors is indicated by the return orientation error and not the return position error. This is because the return position of this test is sensitive to the exact location where the bumps are placed, whereas the return orientation is not.

Reduction of Odometry Errors

The accuracy of odometry in mobile robots depends to some degree on their kinematic design and on certain critical dimensions, and there are a number of design-specific considerations that affect dead-reckoning accuracy. A further general observation is that errors associated with wheel slippage can be reduced by limiting the vehicle's speed during turning, and by limiting accelerations.
Reduction of Systematic Odometry Errors
It is generally possible to improve odometric accuracy by adding a pair of "knife-edge", non-load-bearing encoder wheels as shown in the diagram below. Since these wheels are not used for transmitting power, they can be made as recommended above.

An alternative approach is the use of an encoder trailer with two encoder wheels. This is often done with tracked vehicles, for which odometry based on the tracks themselves is virtually useless because of the large amount of slippage between the tracks and the floor during turning.

Another approach to improving odometric accuracy without any additional devices or sensors is based on careful calibration of the mobile robot. Systematic errors are inherent properties of each individual robot, and they change only very slowly as a result of wear or of different load distributions. This technique requires high precision and accuracy in calibration, since minute deviations in the geometry of the vehicle or its parts may cause substantial odometric errors. As a result, it is very time consuming.

Reduction of Non-Systematic Odometry Errors
Mutual Referencing is a method whereby two robots measure their positions mutually. When one of the robots moves to another place, the other remains still, observes the motion, and determines the first robot's new position. In other words, one robot localises itself with reference to a fixed object (the stationary robot). However, this limits the efficiency of the robots.

A unique way for reducing odometry errors even further is Internal Position Error Correction. Here, two mobile robots mutually correct their odometry errors on a continuous basis. To implement this method, it is required that both robots can measure their relative distance and bearing continuously. Conventional dead reckoning is used by each robot to determine its heading and direction information which is then compared to the heading and direction observed by the other robot in order to reduce any errors to a minimum.

Inertial Navigation

This is an alternative method for enhancing dead reckoning. The principle of operation involves continuous sensing of minute accelerations in each of the three directional axes and integrating over time to derive velocity and position. A gyroscopically stabilised sensor platform is used to maintain consistent orientation of the three accelerometers throughout this process.

Although this method is simple in concept, the specifics of implementation are rather demanding. This is mainly caused by error sources that affect the stability of the gyros used to ensure correct attitude. The resulting high manufacturing and maintenance costs of this method have usually made it impractical for mobile robot applications. For example, a high-quality Inertial Navigation System (INS) such as would be found in a commercial airliner will have a typical drift of about 1850m per hour of operation, and cost between $50,000 and $70,000. High-end INS packages used in ground applications have shown performance of better than 0.1 percent of distance travelled, but cost up to $200,000. However, since the development of laser and optical fibre gyroscopes (typically costing $1,000 to $5,000), INS is becoming more suitable for mobile robot applications.

One advantage of inertial navigation is its ability to provide fast, low-latency dynamic measurements. INS sensors are also self-contained, non-radiating and non-jammable. The main disadvantage, however, is that the angular-rate data and the linear acceleration data must be integrated once and twice, respectively, to provide orientation and linear position.
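The consequence of that double integration can be sketched for a single axis (illustrative only; a real strapdown INS must also rotate body accelerations into the navigation frame before integrating):

```python
def integrate_ins(accel_samples, dt):
    """Integrate sensed acceleration once for velocity and twice for
    position (one axis shown).  A constant accelerometer bias turns
    into a position error that grows with the square of time."""
    v = p = 0.0
    for a in accel_samples:
        v += a * dt   # first integration: velocity
        p += v * dt   # second integration: position
    return v, p

# A 0.01 m/s^2 bias, sampled at 100 Hz for one minute, already
# displaces the position estimate by about 18 m.
v, p = integrate_ins([0.01] * 6000, 0.01)
```

This quadratic error growth is why even tiny accelerometer biases make unaided INS drift so quickly.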

Landmark-Based Navigation

Natural Landmarks

The main problem in natural landmark navigation is to detect and match characteristic features from sensory inputs. The sensor of choice for this task is computer vision. Most computer vision-based natural landmarks are long vertical edges, such as doors and wall junctions. For a more detailed description of this please refer to Vision-Based Positioning.

When range sensors are used for natural landmark navigation, distinct signatures, such as those of a corner or an edge, or of long straight walls, are good feature candidates. Proper selection of features will also reduce the chances for ambiguity and increase positioning accuracy. A natural landmark positioning system has the following basic components:

Artificial Landmarks

Detection is much easier with artificial landmarks, which are designed for optimal contrast. In addition, the exact size and shape of artificial landmarks are known in advance. Many artificial landmark positioning systems are based on computer vision; typical landmarks include black rectangles with white dots in the corners, and a sphere with horizontal and vertical calibration circles that allows three-dimensional localisation from a single image.

The accuracy achieved by the above methods depends on the accuracy with which the geometric parameters of the landmark images are extracted from the image plane, which in turn depends on the relative position and angle between the robot and the landmark.

There are also a variety of landmarks used in conjunction with non-vision sensors. Most often used are bar-coded reflectors for laser scanners. For an example of this, please refer to the Mobile Detection Assessment and Response System (MDARS) which uses retroreflectors.

Line Navigation

This is another type of landmark navigation that has been widely used in industry. Line navigation can be thought of as navigation by a continuous landmark, although in most cases the sensor used in this system needs to be very close to the line, so that the range of the vehicle is limited to the immediate vicinity of the line. These techniques have been used for many years in industrial automation tasks, and vehicles using them are generally called Automatic Guided Vehicles (AGVs). However, the techniques are not discussed in detail here since they do not allow the vehicle to move freely - the main feature that sets mobile robots apart from AGVs.

The main implementations for line navigation are given below:


Artificial landmark detection methods are well developed and reliable. By contrast, natural landmark navigation is not yet sufficiently developed for reliable performance under varied and dynamic conditions.

The main characteristics of landmark-based navigation are given below:

Map-Based Navigation

Map-based positioning (also known as "map matching"), is a technique in which the robot uses its sensors to create a map of its local environment. This local map is then compared to the global map previously stored in memory. If a match is found then the robot can compute its actual position and orientation in the environment. The prestored map can be a CAD model of the environment, or it can be constructed from prior sensor data.

The main advantages of map-based positioning are given below:

Disadvantages of map-based positioning arise because it requires that:

Map Building

As the map-building problem is very closely related to the robot's sensing abilities, it can be defined as: "Given the robot's position and a set of measurements, what are the sensors seeing?"

Error and uncertainty analyses play an important role in accurate estimation and map building. It is vital to take explicit account of the uncertainties by, for example, modelling the errors by probability distributions. The representation used for the map should provide a way to incorporate newly sensed information into the map. It should also provide the necessary information for path planning and obstacle avoidance.
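One common way of making such uncertainty explicit is a probabilistic occupancy grid; the sketch below shows a standard log-odds cell update (a textbook technique, offered here as an illustration rather than a method prescribed by the systems discussed):

```python
import math

def update_cell(prior_logodds, p_occupied_given_reading):
    """Bayesian log-odds update of one occupancy-grid cell.

    Each reading contributes log(p/(1-p)); repeated consistent
    readings drive the cell towards certainty, while conflicting
    readings cancel out, representing sensor uncertainty
    explicitly in the map.
    """
    p = p_occupied_given_reading
    return prior_logodds + math.log(p / (1.0 - p))

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# Three "occupied" readings, each only 70% reliable, applied to an
# initially unknown cell (log-odds 0, i.e. probability 0.5):
l = 0.0
for _ in range(3):
    l = update_cell(l, 0.7)
# probability(l) is now about 0.93 - individually weak evidence
# accumulates into a confident map entry.
```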

The three main steps of sensor data processing for map building are:

Map Matching

This is one of the most challenging aspects of map-based navigation. In general, matching is achieved by first extracting features, followed by determination of the correct correspondence between image and model features. Work on map matching in the computer vision arena is often focused on the general problem of matching an image of arbitrary position and orientation relative to a model.

Matching algorithms can be classified as either iconic or feature-based. The iconic algorithm differs from the feature-based one in that it matches every range data point to the map, rather than condensing the range data into a small set of features to be matched to the map. The feature-based estimator is, in general, faster than the iconic estimator and does not require a good initial heading estimate. The iconic estimator, on the other hand, can use fewer points than the feature-based estimator, can handle less-than-ideal environments, and is more accurate.

As with Landmark-Based navigation, it is advantageous to use an approximate position estimation based on odometry to generate an estimated visual scene (from the stored map) that would be "seen" by the robot. This generated scene is then compared to the one actually seen. This procedure dramatically reduces the time taken to find a match.

One problem with feature-based positioning systems is that the uncertainty about the robot's position grows if there are no suitable features that can be used to update the robot's position. The problem is particularly severe if the features are to be detected with ultrasonic sensors which suffer from poor angular resolution.

Geometric and Topological Maps

In map-based positioning there are two common representations, namely geometric and topological maps. A geometric map represents objects according to their absolute geometric relationships. It can be a grid map, or a more abstract map such as a line or polygon map. The topological approach, on the other hand, is based on recording the geometric relationships between the observed features rather than their absolute position with respect to an arbitrary co-ordinate frame of reference. Unlike geometric maps, topological maps can be built and maintained without any estimates for the position of the robot. As a result, this approach can be used to integrate large area maps, since all connections between nodes are relative, rather than absolute.
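As a toy illustration of this relative-only representation (the node names and displacements are entirely invented), a topological map can be held as a graph whose edges store displacements between features, with no absolute co-ordinates anywhere:

```python
# Edges store only the relative displacement between observed
# features; no node carries an absolute co-ordinate.
topo_map = {
    ("door_A", "corner_B"): (2.0, 0.5),   # dx, dy in metres
    ("corner_B", "door_C"): (0.0, 3.0),
}

def relative_offset(path):
    """Accumulate relative displacements along a path of nodes."""
    dx = dy = 0.0
    for a, b in zip(path, path[1:]):
        ex, ey = topo_map[(a, b)]
        dx += ex
        dy += ey
    return dx, dy

# Offset of door_C relative to door_A, via corner_B: (2.0, 3.5).
```

Because every edge is relative, two such maps built independently can be joined simply by adding a connecting edge, which is what makes the approach attractive for large areas.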


Map-based positioning is still in the research stage. Currently, this technique is limited to laboratory settings, and good results have been obtained only in well-structured environments. It is difficult to estimate how the performance of a laboratory robot scales up to a real-world application. The relevant characteristics of map-based navigation systems, and the critical research areas, are given below:

Multiple Robot Communications and Co-ordination

There are a number of issues introduced when more than one robot is present in a given locale concurrently. Without prior knowledge of each other's existence, they could well interfere with one another. However, if correctly programmed, and if the necessary communication network exists between them, they can co-operate in carrying out some task.

Two advantages of co-operation will be considered here: improving speed and improving accuracy.

Improving Speed

When using local area navigation, mobile robots are generally carrying out some other set task (i.e. the navigation is the means, not an end). If there are a number of robots co-operating in carrying out this task, then they will have to communicate positional data to each other to achieve this end.

When communicating positional information, a common reference should be used in order to compare positions. This means that an absolute global or local positioning system should be used.

The communication link used between robots should ideally allow bi-directional transfers, with multiple access - allowing 'A' to talk to 'B', without interference from 'C' talking to 'D'.

Given these conditions, considerable algorithmic advances can be made; these lie mostly in the higher level "guidance" processing of the robot ("what should I do next?"), rather than the navigational side ("where am I?"). This means these improvements are mostly task dependent.

As an example, consider a number of robots searching a given area for some proximity detectable object (e.g. using metal detectors). The main criteria dictating where a robot should look are:

These questions can be answered in a number of ways. Whichever method is chosen, the aim is to minimise both the amount of communication (and hence bandwidth) required and the amount of redundant (potentially outdated) information in the system; the exact choice depends on the dynamics of the search.

Improving Accuracy

While carrying out a navigational task, co-operating robots can augment each other's navigational systems, providing an accuracy better than that achievable by their individual systems.

A good example of this is the Non-line-of-sight Leader/Follower (NLOSLF) DGPS method [Motazed, 1993]. This involves a number of vehicles in a convoy that autonomously follow a lead vehicle driven by a human operator or otherwise. The technique employed is referred to as "intermittent stationary base differential GPS", where the lead and final vehicle in the convoy alternate as fixed-reference DGPS base stations. As the convoy moves out from a known location, the final vehicle remains behind to provide differential corrections to the GPS receivers in the rest of the vehicles, via a radio data link. After travelling a predetermined distance in this fashion, the convoy is halted and the lead vehicle assumes the role of DGPS reference station, providing enhanced accuracy to the trailing vehicle as it catches up. During this stationary period, the lead vehicle can take advantage of the dwell time to further improve the accuracy of its own fix. Once the last vehicle joins up with the rest, the base-station roles are reversed again, and the convoy resumes transit.

This ingenious technique allows DGPS accuracy to be achieved over large ranges, with minimal reliance on outside systems. Drawbacks of this approach include the need for intermittent stops, the reliance on obtaining an initial high-accuracy position fix (for the initial reference station), and accumulating uncertainty in the actual location of the two reference stations.

Summary of the Scales of Navigation

Case Studies

Miniature Maze-Solving Robots

This case study analyses the requirements for navigating miniature mobile robots around a walled maze.

The aim is for the robot to navigate about in order to solve the maze by finding a pre-determined exit.


Vehicle:
  Dimensions            cross-sectional diameter 12 cm; height in multiple 3 cm modules
  Weight                < 2 kg
  Power supply          6 V, 500 mAh
  Intended environment  level, non-rugged terrain

Navigation requirements:
  Navigation space      two dimensions of movement; position fix, heading and velocity
  Accuracy              < 10 cm
  Range                 approx. 10 m
  Refresh rate          > 1 Hz
  Processing            onboard, autonomous control; system dimensions within vehicle specification


Scale of Navigation

The first stage in providing a navigation system for a mobile robot is to identify what scale of navigation is required.

Here there is no requirement for global referenced position fixing, as the robot is only concerned with its position within the maze, and not with the absolute position of the maze on a larger scale.

On a local scale, the robot is concerned with its current position in the maze, and with mapping all places visited, in order to progress in solving the maze. At this level the actual maze-solving processing is not considered; the task is merely to produce the navigational and tracking information from which the next level of processing can derive vehicle guidance.

This local navigation has two distinct parts:

In mapping the maze, either walls or the path taken can be recorded. Depending on which of these is chosen, the requirements of the navigation system vary slightly.

If mapping walls, as well as sensing their position around the robot, the distance travelled by the robot must also be measured. If mapping the path travelled, this must be measured and recorded by some other means; the detection of walls is still necessary, however, in order to take a central path through the corridors.

The detection of walls can occur at either a local or personal level, depending on the technology used (proximity or contact). The path travelled can likewise be measured in a locale- or self-relative reference frame (local area position fix or dead-reckoning method).

Viable Systems

Due to the tight constraints of the robot specification, in terms of physical attributes and power consumption, not all applicable navigation systems can be used.
Wall Detection
For wall detection the following techniques are available: Clearly the last of these methods is not possible, as having a pre-made map would nullify the aim of the robot. Vision is also of no use, as it would currently require too large an amount of processing (both physically and electrically) for this application.

Of the other three, the choice of technology for the application depends largely on the maze construction. The robot is circular: if all corridors are known to be of a width equal to the robot's diameter, then the tactile sensor would be the simplest and most reliable method. If, however, the maze is constructed from larger corridors or rooms, this method might leave areas unexplored, and another method should be used.

Ultrasonic transceivers tend to be more reliable in detecting large surfaces (as light sensors suffer greatly from ambient light interference) and are available in quite small packages. Hence a number of these placed on the robot would provide a fairly reliable wall detection system. A contact sensor at the front of the robot might also be included for detection of collision with objects invisible to the ultrasound.

Position Determination
As discussed, knowing the position of walls about the robot is not sufficient for determining its position in the maze; some other distinguishing information is required. The walls themselves cannot provide this, as no individual wall is necessarily unique from the robot's point of view.

In determining position, the following could be used:

The active beacon system is a viable option, as quite small, low powered devices are available. There are two considerable drawbacks with this, however. Most commercial systems are quite expensive, as they require a fair number (at least three) of beacons to be purchased, along with the processing power for their co-ordination in the environment. They are also quite intrusive, which reduces the autonomy of the robot. While this could be justified, a more self-contained solution would be preferred.

Dead-reckoning, using odometry sensors, can provide good enough accuracy over short distances, and easily meets the physical constraints. Dead-reckoning does, however, suffer from cumulative error, which requires periodic correction. Due to the nature of maze solving, there is a great deal of back-tracking; the corrections could be incorporated into this process by continuously comparing the actual positions of walls sensed to their expected positions according to the map made on the outward journey. In this way, the accumulated error becomes a function of net displacement in the solution of the maze, rather than of the total distance travelled throughout the mapping process.
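A one-dimensional sketch of such a correction step is given below (the function name, parameters and blending gain are illustrative inventions, not a prescribed algorithm): the dead-reckoned estimate is nudged towards the position implied by a wall recorded on the outward journey.

```python
def correct_on_backtrack(est_pos, sensed_wall_dist, mapped_wall_pos, gain=0.5):
    """Blend the dead-reckoned estimate towards the position implied
    by a wall recorded in the maze map on the outward journey.

    est_pos          -- current 1-D dead-reckoned co-ordinate (m)
    sensed_wall_dist -- ultrasonic range to the wall right now (m)
    mapped_wall_pos  -- wall co-ordinate stored in the maze map (m)
    gain             -- how strongly to trust the wall observation
    """
    implied_pos = mapped_wall_pos - sensed_wall_dist
    return est_pos + gain * (implied_pos - est_pos)

# Odometry says 4.30 m, but a wall mapped at 5.00 m is sensed
# 0.80 m away, implying 4.20 m; with gain 0.5 the corrected
# estimate becomes 4.25 m.
```

Repeating this on every back-tracked corridor keeps the accumulated drift bounded by the quality of the outward-journey map rather than by total distance travelled.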


The two main objectives of the case study have been met. The system arrived at meets all of the requirements for navigation of this mobile robot. Only generalised classes of hardware have been identified, namely ultrasonic detectors coupled with odometric sensors. Exact devices, and the necessary software algorithms for processing the raw data from them, would have to be researched further.



