Imperial College, London
Information Systems Engineering, Year 2
While the human-machine interface is not yet at a transparent level (with robots accepting and following spoken instructions; a problem in the realm of Natural Language Processing), the degree of autonomy available once a machine has been programmed is now approaching that once considered purely science fiction.
This document draws together, and builds upon, much of what is written in the authors' four preceding articles on Mobile Robot Navigation. For completeness, some sections of those articles have been included here; however, the reader is referred back to them for a more detailed analysis of some of the systems discussed.
At the small end of the scale there are robots just a few centimetres in size, which require high-precision navigation over a small range (due to energy-supply constraints), while operating in a relatively tame environment. At the other end of the scale there are Jumbo-jet aircraft and ocean-going liners, each with some form of auto-pilot navigation, which requires accuracy to a number of metres (or tens of metres) over a huge (i.e. global) range, in somewhat more rugged conditions.
To help in categorising this scale of requirements, we use three terms:
The "micro" robot, on the other hand, is almost exclusively interested in Personal and Local navigation. Such devices are rarely concerned with their position globally, on any traditional geographic scale. Instead their requirements are far more task-based: they are concerned with their immediate environment, in particular relative to any objects relevant to the successful completion of their task. This involves Personal navigation when in contact with other objects, and Local navigation for actual movement.
In general, the main focus of the scales of navigation are as follows,
In terms of position fixing, absolute implies finding one's position relative to an absolute origin: a fixed stationary point common to all position fixes across the range of navigation. Hence in Global navigation there should be one such point on the planet to which all fixes are relative. In Local navigation the absolute origin is some fixed point in the robot's environment, and in Personal navigation the origin can be viewed as the centre of the robot itself.
A Relative position fix when navigating Globally, taken relative to some other reference point (environment-relative), is analogous to the absolute position fix in Local navigation. Likewise, a position fix taken relative to the same robot's own position at some other point in time (self-relative), is like the personal absolute position fix. Through knowledge of the absolute reference frame (typically using a map), absolute position fixes in one navigation domain can be transformed into position fixes in another. Indeed, almost all global absolute position fixing is carried out by finding either an environment- or a self- relative position fix, and then converting this into a global position (see Beacon Navigation and Dead-Reckoning respectively).
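As a minimal sketch of such a transformation (not from the source; it assumes the robot's pose in the absolute frame is already known), a self-relative fix can be converted into an absolute fix by the usual planar rotation and translation:

```python
import math

def to_absolute(pose_x, pose_y, pose_heading, dx, dy):
    """Transform a self-relative fix (dx, dy), measured in the robot's own
    frame, into the absolute frame in which the robot's pose
    (pose_x, pose_y, pose_heading) is known. All angles in radians."""
    ax = pose_x + dx * math.cos(pose_heading) - dy * math.sin(pose_heading)
    ay = pose_y + dx * math.sin(pose_heading) + dy * math.cos(pose_heading)
    return ax, ay
```

The same rotation-plus-translation, applied in reverse, takes an absolute fix back into the robot's personal frame.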
When GPS was released by the US DoD (Department of Defense), it superseded several other systems; however, it was designed to offer only limited accuracy to non-military users. Several methods of improving the performance have been developed as a result, which greatly increase the usefulness of the system for robots.
Further reading on the technical aspects of GPS is given in the appendix.
The space segment of GPS is 24 satellites (or Space Vehicles - SVs) in orbit about the planet at a height of approximately 20 200 km, such that generally at least four SVs are visible from the surface of the Earth at any time. This allows the instantaneous user position to be determined, at any time, by measuring the time delay in a radio signal broadcast from each satellite, and using this delay and the speed of propagation to calculate the distance to the satellite (the pseudo-range). As a rule of thumb, one satellite must be received for each dimension of the user's position that needs to be calculated. This suggests three satellites are necessary for a position fix of the general user (the x, y, and z dimensions of the receiver's position); however, the user rarely knows the exact time at which they are receiving, hence four satellite pseudo-ranges are required to solve for these four unknowns.
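As an illustrative sketch of this principle (the satellite geometry and figures below are invented, not real ephemeris data), the four unknowns (x, y, z, and the receiver clock bias, expressed here in metres) can be recovered from four pseudo-ranges by an iterative linearised (Newton) solve:

```python
import math

def solve_fix(sats, pranges, iters=20):
    """Newton iteration for (x, y, z, clock bias) from four pseudo-ranges.
    sats: satellite positions (x, y, z) in metres; pranges: measured
    pseudo-ranges in metres; bias is returned in metres (c * clock error)."""
    x = y = z = b = 0.0                      # initial guess: Earth centre, no bias
    for _ in range(iters):
        rows = []
        for (sx, sy, sz), p in zip(sats, pranges):
            d = math.dist((x, y, z), (sx, sy, sz))
            # Jacobian row d(pseudo-range)/d(x, y, z, b), plus the residual
            rows.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0,
                         p - (d + b)])
        dx, dy, dz, db = gauss_solve(rows)
        x, y, z, b = x + dx, y + dy, z + dz, b + db
    return x, y, z, b

def gauss_solve(aug):
    """Solve a small linear system given as an augmented matrix (in place),
    using Gaussian elimination with partial pivoting."""
    n = len(aug)
    for i in range(n):
        piv = max(range(i, n), key=lambda k: abs(aug[k][i]))
        aug[i], aug[piv] = aug[piv], aug[i]
        for k in range(i + 1, n):
            f = aug[k][i] / aug[i][i]
            for j in range(i, n + 1):
                aug[k][j] -= f * aug[i][j]
    xs = [0.0] * n
    for i in range(n - 1, -1, -1):
        xs[i] = (aug[i][n] - sum(aug[i][j] * xs[j]
                                 for j in range(i + 1, n))) / aug[i][i]
    return xs
```

With only four satellites the system is solved exactly; a real receiver with more channels would use the same linearisation in a least-squares form.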
The satellite data is monitored and controlled by the GPS ground segment - stations positioned globally to ensure the correct operation of the system.
The user segment is the mobile user and their GPS reception equipment. These have advanced considerably in recent years, allowing faster and more accurate processing of received data. They typically contain a pre-amplifier, an analogue-to-digital converter, between 5 and 12 digital signal processor (DSP) channels (each able to track a separate satellite transmission), and a processor for navigational data. Other elements that might be incorporated are differential GPS receiver/processing capability, received-phase information processing, and reception capability for the second (L2) GPS frequency.
The DGPS system operates by having reference stations receive the satellite-broadcast GPS signal at a known site, and then transmit to mobile GPS users a correction based on the error in the received signal. So long as the mobile user is in the proximity of the stationary site, they will experience similar errors, and hence require similar corrections. Typical DGPS accuracy is around 4 to 6 m, with performance improving as the distance between user and beacon site decreases.
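The correction step itself is simple; a minimal sketch (satellite IDs and ranges are invented for illustration):

```python
def pseudo_range_corrections(surveyed_ranges, measured_ranges):
    """At the reference site the geometric range to each satellite is known
    (the site is surveyed), so the error in each measured pseudo-range can
    be computed and broadcast as a per-satellite correction (metres)."""
    return {sv: surveyed_ranges[sv] - measured_ranges[sv]
            for sv in surveyed_ranges}

def apply_corrections(mobile_ranges, corrections):
    """A nearby mobile user experiences similar errors, so adding the
    broadcast correction to each of its raw pseudo-ranges largely
    cancels them before the position solve."""
    return {sv: r + corrections[sv]
            for sv, r in mobile_ranges.items() if sv in corrections}
```

The corrected pseudo-ranges are then fed into the ordinary position calculation, which is why the benefit decays as the user moves away from the reference site and the error environments diverge.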
DGPS provides the resolution necessary for most Global-scale navigation purposes, and is often also useful at the Local scale. There are a few restrictions on the situations where it can be used, however; the following problems can greatly reduce DGPS (or GPS) usability:
Another common marriage of technologies uses (D)GPS for the global-level navigation, then other systems for precision local navigation. A good example of this is the UK Robotics Road Robot, an autonomous construction device built after the lack of automation in this area was noticed [Dalton, 1997]. It incorporates the Atlas navigation control unit, which initially finds its coarse (global) location using GPS, after which it uses laser trilateration to navigate (locally) while carrying out its task. This was found to produce reliable autonomous operation in testing.
An example of a commercial DGPS receiver is the Communication Systems International CSI SBX-1. This is a so-called OEM module, designed to be integrated into another manufacturer's system: ideal for mobile robot construction. It is rated at less than 1 W at 5 V DC, and has a footprint of 10 cm². Coupled with a suitable GPS receiver (which typically has somewhat higher requirements; e.g. 10 W, 20 cm² footprint), this would provide a good basis for mobile position fixes.
The most common optical sensors include laser-based range finders and photometric cameras using CCD arrays. However, due to the volume of information they provide, extraction of visual features for positioning is far from straightforward. Many techniques have been suggested for localisation using vision information, the main components of which are listed below:
Clearly vision-based positioning is directly related to most computer vision methods, especially object recognition. So as research in this area progresses, the results can be applied to vision-based positioning.
Real-world applications envisaged in most current research projects demand very detailed sensor information to provide the robot with good environment-interaction capabilities. Visual sensing can provide the robot with an enormous amount of information about its environment, and visual sensors are potentially the most powerful source of information among all the sensors used on robots to date. Hence, at present, it seems that high-resolution optical sensors hold the greatest promise for mobile robot positioning and navigation.
Please refer to Vision-Based Positioning for further information on issues and techniques mentioned above.
At present, the vast majority of land-based mobile robots rely on dead reckoning to form the backbone of their navigation strategy. They use other navigation aids to eliminate accumulated errors.
Since a large majority of mobile robots rely on motion by means of wheels or tracks, a basic understanding of sensors that accurately quantify angular position and velocity is an important prerequisite for dead reckoning using odometry.
Some of the common rotational displacement and velocity sensors in use today are given below:
As mentioned above, there are two basic types of optical encoders. The incremental version measures rotational velocity and can infer relative position. The absolute model on the other hand, measures angular position directly and can infer velocity. If non-volatile position information is not a requirement, incremental encoders are usually chosen on grounds of lower cost and simpler interfacing compared to absolute encoders.
To overcome the problems mentioned above, a slightly improved version of the encoder, called the phase-quadrature incremental encoder, is used. The modification is that a second channel, displaced from the first, is added. This results in a second pulse train which is 90 degrees out of phase with the first. Decoding electronics can then determine which channel leads the other, and hence the direction of rotation, with the added benefit of increased resolution.
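The direction-sensing logic can be sketched as a small transition table over successive (A, B) channel samples (an illustrative sketch of quadrature decoding in general, not of any particular decoder chip):

```python
# Each sample packs the two channels as (A << 1) | B. Legal transitions
# around the quadrature cycle 00 -> 01 -> 11 -> 10 -> 00 count +1 (one
# direction); the reverse cycle counts -1 (the other direction).
QUAD = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate a signed position count from successive (A, B) samples.
    Illegal two-bit jumps (missed samples) are ignored here; a real
    decoder would flag them as errors."""
    count = 0
    prev = samples[0]
    for cur in samples[1:]:
        count += QUAD.get((prev, cur), 0)
        prev = cur
    return count
```

Note the factor-of-four resolution gain: every edge on either channel produces a count, so one full slot on the code disk yields four counts.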
Since the output signal of these encoders is incremental in nature, any resolution of angular position can only be relative to some specific reference, as opposed to absolute. For applications involving continuous 360-degree rotation, such a reference is provided by a third channel: a special index output that goes high once for each revolution of the shaft. Intermediate positions are then specified as a displacement from the index position. For applications with limited rotation, such as back-and-forth motion of a pan axis, electrical limit switches can be used to establish a home reference position. To improve repeatability, this homing action is usually carried out in two steps. The axis is first rotated at reduced speed in the appropriate direction until the stop mechanism is encountered; rotation is then reversed for a short predefined interval, after which the axis is slowly rotated back to the stop position from this known start point. This largely eliminates inertial loading that could otherwise influence the final homing position. (This two-step approach can be observed in the power-on initialisation of stepper-motor positioners in dot-matrix printer heads.)
Interfacing an incremental encoder to a computer is not a trivial task. A simple state-based interface is inaccurate if the encoder changes direction at certain positions, and false pulses can result from the interpretation of the sequence of state changes.
A very popular and versatile encoder interface is the HCTL 1100 motion controller chip made by Hewlett Packard. It performs accurate quadrature decoding of the incremental wheel encoder output and provides important additional functions such as:
Discrete elements in a photovoltaic array are individually aligned in break-beam fashion with concentric encoder tracks, creating, in effect, a non-contact implementation of a commutating brush encoder. Having a dedicated track for each bit of resolution results in a larger disk (relative to incremental designs), with a corresponding decrease in shock and vibration tolerance. Very roughly, each additional encoder track doubles the resolution and quadruples the cost.
Instead of the serial bit streams of incremental designs, absolute encoders provide a parallel word output with a unique code pattern for each quantised shaft position. The most common coding scheme is the Gray code, characterised by the fact that only one bit changes at a time, thus eliminating most of the asynchronous ambiguities caused by electronic and mechanical component tolerances.
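The single-bit-change property is easy to verify in a few lines (this is the standard binary-reflected Gray code, assumed here rather than taken from any specific encoder's datasheet):

```python
def gray_encode(n):
    """Binary shaft position -> Gray code word.
    Adjacent positions differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Gray code word -> binary shaft position (inverse of gray_encode)."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Because only one track changes between adjacent positions, a read taken exactly on a boundary can be wrong by at most one count, never by a large spurious jump.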
A potential disadvantage of absolute encoders is their parallel data output, which requires a more complex interface due to the large number of electrical leads.
The principle of operation is based on the Doppler shift in frequency observed when radiated energy reflects off a surface that is moving with respect to the emitter.
Most implementations used for robots employ a single forward-looking transducer to measure ground speed in the direction of travel. An example of this is taken from the agricultural industry, where wheel slippage in soft freshly plowed fields can seriously interfere with the need to release seed at a rate proportional to vehicle advance.
A typical implementation uses a microwave radar sensor which is aimed downward (usually 45 degrees) to sense ground movement as shown in the figure below.
Errors in detecting true ground speed can arise from vertical velocity components introduced by vehicle reaction to the ground surface and uncertainties in the angle of incidence. An interesting scenario resulting in erroneous operation would involve a stationary vehicle parked over a stream of water.
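Because the sensor measures the velocity component along its beam, the reading must be corrected for the depression angle to recover along-track speed. A sketch of that correction (the 24 GHz carrier frequency below is an assumed, illustrative figure):

```python
import math

C = 299_792_458.0  # propagation speed of the microwave signal (m/s)

def ground_speed(f_doppler, f_carrier, depression_deg=45.0):
    """True horizontal speed from the measured Doppler shift of a sensor
    aimed downward at depression_deg below the horizontal. The shift
    corresponds to the velocity component along the beam, so dividing by
    cos(angle) recovers the along-track component."""
    return f_doppler * C / (2.0 * f_carrier
                            * math.cos(math.radians(depression_deg)))
```

The error sources above map directly onto this formula: any vertical velocity component or uncertainty in the true angle of incidence changes the effective cos term, and a moving surface under a stationary vehicle (the stream of water) produces a Doppler shift with no vehicle motion at all.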
For accelerometers, there is a very poor signal-to-noise ratio at lower accelerations (i.e. during low-speed turns). They also suffer from extensive drift, and they are sensitive to uneven ground, because any disturbance from a perfectly horizontal position will cause the sensor to detect a component of the gravitational acceleration g. Even tilt-compensated systems indicate a position drift rate of 1 to 8 cm/s, depending on the frequency of acceleration changes. This is an unacceptable error rate for most mobile robot applications.
The main problem with gyroscopes is that they are usually very expensive (if they are to be useful for navigation) and they need to be mounted on a very stable platform.
There are two principal methods for determining the user's position:
Most beacon systems can be sub-categorised into one of the following transmission schemes:
Available commercially are more localised beacon systems, which may use scheme 1, 2, or 4. The first two allow many users in one area; the first is more suitable for autonomous mobile robot control, as the position information is calculated at the mobile end. The second is more suited to tracking applications, such as motor cars around a racetrack. With scheme 4 the round-trip propagation delay from user to beacon and back to user (or vice-versa; generally in position-monitoring rather than navigation situations) is measured, analogously to radar operation, to determine range. Using this exact range data it is simple to calculate position from the intersection of circles around, ideally, at least three beacons.
Possible user positions at the intersection of circles when the range to (a) two, and (b) three, transmitters is known.
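The circle-intersection calculation for three beacons can be sketched as follows (the beacon coordinates in the test are illustrative). Subtracting the first circle equation from the other two removes the quadratic terms, leaving a small linear system:

```python
def trilaterate(beacons, ranges):
    """2-D position from exact ranges to three beacons, by linearising the
    circle equations (subtract the first from the other two)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero when the beacons are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With only two beacons the circles generally intersect in two points, which is why the figure shows an ambiguity that a third beacon resolves; collinear beacons leave a mirror ambiguity even with three.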
Ultrasonics are frequently used in underwater situations, as sound has a much higher velocity there (~1500 m/s, c.f. ~330 m/s in air). It is also possible to measure the incident angle of received signals much more accurately, allowing triangulation methods to be employed [Larcombe, 1994].
Ultrasonics are widely used for proximity detection (see the next section). Occasionally it is possible to combine the two, by introducing distinctive passive sonic beacons with unique reflection properties. By using trilateration against these beacons, a mobile robot can perform an absolute position fix, as well as finding its position relative to any non-unique objects in the vicinity.
A number of successful laser navigation systems have been demonstrated; an early example was the mobile Hilare robot, developed at the Laboratoire d'Automatique et d'Analyse des Systemes, France [Borenstein et al., 1996]. This used groups of retroreflective beacons, arranged in recognisable configurations to minimise errors from reflections off other surfaces. Two rotating laser heads then scanned the area to determine the bearing to these beacons.
Other laser-based methods employ passive receiver beacons (scheme 2), as demonstrated by Premi and Besant of Imperial College of Science and Technology, London. Here a rotating vehicle-mounted laser beam creates a plane which intersects three fixed-location reference receivers. These then use an FM data link to relay the time of arrival of laser energy back to the mobile vehicle, so that it can determine the distance to each beacon individually. A similar system is now commercially available from MTI Research, Inc., Chelmsford, MA. This "Computerised Opto-electronic Navigation and Control" (CONAC) system has proven to be a relatively low-cost, high-precision positioning system, working at high speed (25 Hz refresh) with an accuracy of a few centimetres [MTI].
The advantages of TOF systems arise from the direct nature of their straight-line active sensing. The returned signal follows essentially the same path back to a receiver located in close proximity to the transmitter. The absolute range to an observed point is directly available as output with no complicated analysis requirements.
Potential error sources for TOF systems include the following:
Variation in propagation speed
This is particularly applicable to acoustically based systems, where the speed of sound is significantly influenced by temperature and humidity changes.
Uncertainty in determining the exact time of arrival
Errors here are caused by the wide dynamic range in returned signal strength, which results from the varying reflectivity of target surfaces. These differences in returned signal intensity influence the rise time of the detected pulse, and in the case of fixed-threshold detection will cause the more reflective targets to appear closer.
Due to the relatively slow speed of sound in air compared to light, acoustically based systems place less demanding requirements on timing precision than light-based systems, and are less expensive as a result. TOF systems based on the speed of light require sub-nanosecond timing circuitry to measure distances with a resolution of about 30 cm (a resolution of 1 mm requires a timing precision of 3 picoseconds). This capability is very expensive to realise and may not be cost-effective for most applications, particularly at close range where high accuracies are required.
When light, sound or radio waves strike an object, any detected echo represents only a small portion of the original signal. The remaining energy is scattered or absorbed depending on surface characteristics and the angle of incidence of the beam. If the transmission source approach angle exceeds a certain critical value, the reflected energy will be deflected outside the sensing envelope of the receiver. In cluttered environments, sound waves can reflect from (multiple) objects and can then be received by other sensors ("crosstalk").
The relative phase shift, expressed as a function of the distance d to the reflecting target surface, is φ = 4πd/λ, where λ is the wavelength of the modulating signal.
For square-wave modulation at the relatively low frequencies of ultrasonic systems (20 to 200 kHz), the phase difference between incoming and outgoing waveforms can be measured with the simple linear circuit shown below. The output of the exclusive-or gate goes high whenever its inputs are at opposite logic levels, generating a voltage across the capacitor that is proportional to the phase shift.
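Given the measured phase shift, range follows by inverting the relation above; a sketch (the 40 kHz modulation frequency in the test is an assumed ultrasonic figure, and the result is ambiguous once the round-trip path exceeds one modulation wavelength):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def range_from_phase(phase_rad, f_mod, c=SPEED_OF_SOUND):
    """Invert phi = 4*pi*d/lambda to get d = phi * lambda / (4*pi).
    Valid only while phi stays within one cycle (range ambiguity)."""
    wavelength = c / f_mod
    return phase_rad * wavelength / (4.0 * math.pi)
```

The ambiguity is the practical price of continuous-wave operation: a phase of φ and φ + 2π are indistinguishable, so either the modulation frequency must be chosen low enough for the working range, or multiple modulation frequencies must be combined.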
Advantages of continuous-wave systems over pulsed time of flight methods include the ability to measure the direction and velocity of a moving target in addition to its range (using the Doppler effect). Range accuracies of laser-based continuous-wave systems approach those of pulsed laser TOF methods. Only a slight advantage is gained over pulsed TOF range finding however, since the time-measurement problem is replaced by the need for sophisticated phase-measurement electronics.
Below are some of the reasons why odometry is used for mobile robots:
An alternative approach is the use of an encoder trailer with two encoder wheels. This approach is often used for tracked vehicles, since it is virtually impossible to use odometry with tracked vehicles, because of the large amount of slippage between the tracks and the floor during turning.
Another approach to improving odometric accuracy without any additional devices or sensors is based on careful calibration of the mobile robot. Systematic errors are inherent properties of each individual robot and they change very slowly as a result of wear or of different load distributions. This technique of reducing errors requires high precision and accuracy calibration since minute deviations in the geometry of the vehicle or its part may cause substantial odometric errors. As a result, this technique is very time consuming.
A unique way for reducing odometry errors even further is Internal Position Error Correction. Here, two mobile robots mutually correct their odometry errors on a continuous basis. To implement this method, it is required that both robots can measure their relative distance and bearing continuously. Conventional dead reckoning is used by each robot to determine its heading and direction information which is then compared to the heading and direction observed by the other robot in order to reduce any errors to a minimum.
Although this method is simple in concept, the specifics of implementation are rather demanding. This is mainly caused by error sources that affect the stability of the gyros used to ensure correct attitude. The resulting high manufacturing and maintenance costs have usually made inertial navigation impractical for mobile robot applications. For example, a high-quality Inertial Navigation System (INS) such as would be found in a commercial airliner has a typical drift of about 1850 m per hour of operation, and costs between $50,000 and $70,000. High-end INS packages used in ground applications have shown performance of better than 0.1 percent of distance travelled, but cost up to $200,000. However, since the development of laser and optical-fibre gyroscopes (typically costing $1,000 to $5,000), INS is becoming more suitable for mobile robot applications.
One advantage of inertial navigation is its ability to provide fast, low-latency dynamic measurements. INS sensors are also self-contained, non-radiating, and non-jammable. The main disadvantage is that the angular-rate data must be integrated once to provide orientation, and the linear acceleration data twice to provide position, so any small sensor error accumulates without bound over time.
When range sensors are used for natural landmark navigation, distinct signatures, such as those of a corner or an edge, or of long straight walls, are good feature candidates. Proper selection of features will also reduce the chances for ambiguity and increase positioning accuracy. A natural landmark positioning system has the following basic components:
The accuracy achieved by the above methods depends on the accuracy with which the geometric parameters of the landmark images are extracted from the image plane, which in turn depends on the relative position and angle between the robot and the landmark.
There are also a variety of landmarks used in conjunction with non-vision sensors. Most often used are bar-coded reflectors for laser scanners. For an example of this, please refer to the Mobile Detection Assessment and Response System (MDARS) which uses retroreflectors.
The main implementations for line navigation are given below:
The main characteristics of landmark-based navigation are given below:
The main advantages of map-based positioning are given below:
Error and uncertainty analyses play an important role in accurate estimation and map building. It is vital to take explicit account of the uncertainties by for example, modelling the errors by probability distributions. The representation used for the map should provide a way to incorporate newly sensed information into the map. It should also provide the necessary information for path planning and obstacle avoidance.
The three main steps of sensor data processing for map building are:
Matching algorithms can be classified as either icon-based or feature-based. The icon-based algorithm differs from the feature-based one in that it matches every range data point to the map, rather than condensing the range data into a small set of features to be matched to the map. The feature-based estimator is, in general, faster than the icon-based estimator and does not require a good initial heading estimate. The icon-based estimator can use fewer points than the feature-based estimator, can handle less-than-ideal environments, and is more accurate.
As with Landmark-Based navigation, it is advantageous to use an approximate position estimation based on odometry to generate an estimated visual scene (from the stored map) that would be "seen" by the robot. This generated scene is then compared to the one actually seen. This procedure dramatically reduces the time taken to find a match.
One problem with feature-based positioning systems is that the uncertainty about the robot's position grows if there are no suitable features that can be used to update the robot's position. The problem is particularly severe if the features are to be detected with ultrasonic sensors which suffer from poor angular resolution.
Two advantages of co-operation will be considered here; improving speed and improving accuracy.
When communicating positional information, a common reference should be used in order to compare positions. This means that an absolute global or local positioning system should be used.
The communication link used between robots should ideally allow bi-directional transfers, with multiple access - allowing 'A' to talk to 'B', without interference from 'C' talking to 'D'.
Given these conditions, considerable algorithmic advances can be made; these lie mostly in the higher-level "guidance" processing of the robot ("what should I do next?"), rather than the navigational side ("where am I?"). This means these improvements are mostly task dependent.
As an example, consider a number of robots searching a given area for some proximity detectable object (e.g. using metal detectors). The main criteria dictating where a robot should look are:
A good example of this is the Non-line-of-sight Leader/Follower (NLOSLF) DGPS method [Motazed, 1993]. This involves a number of vehicles in a convoy that autonomously follow a lead vehicle driven by a human operator or otherwise. The technique employed is referred to as "intermittent stationary base differential GPS", where the lead and final vehicle in the convoy alternate as fixed-reference DGPS base stations. As the convoy moves out from a known location, the final vehicle remains behind to provide differential corrections to the GPS receivers in the rest of the vehicles, via a radio data link. After travelling a predetermined distance in this fashion, the convoy is halted and the lead vehicle assumes the role of DGPS reference station, providing enhanced accuracy to the trailing vehicle as it catches up. During this stationary time, the lead vehicle can take advantage of on site dwell to further improve the accuracy of its own fix. Once the last vehicle joins up with the rest, the base-station roles are reversed again, and the convoy resumes transit.
This ingenious technique allows DGPS accuracy to be achieved over large ranges with minimal reliance on outside systems. Drawbacks of this approach include the need for intermittent stops, the reliance on obtaining an initial high-accuracy position fix (for the initial reference station), and accumulating ambiguity in the actual location of the two reference stations.
The aim is for the robot to navigate about in order to solve the maze by finding a pre-determined exit.
Vehicle:
- Dimensions: cross-sectional diameter 12 cm; height in multiples of 3 cm modules
- Power supply: 6 V, 500 mAh
- Intended environment: level, non-rugged terrain

Navigation requirements:
- Navigation-space: two dimensions of movement; position fix, heading, and velocity
- Processing: onboard, autonomous control
- System dimensions: within vehicle specification
Here there is no requirement for global referenced position fixing, as the robot is only concerned with its position within the maze, and not with the absolute position of the maze on a larger scale.
On a local scale, the robot is concerned with its current position in the maze, and mapping all places visited in order to progress in solving the maze. At this level actual maze solving processing is not considered, merely producing the navigational and tracking information for the next level of processing to provide vehicle guidance information from.
This local navigation has two distinct parts:
If mapping walls, as well as sensing their position around the robot, the distance travelled by the robot must also be measured. If mapping the path travelled, this must be measured and recorded by some other means; the detection of walls is still necessary, however, in order to take a central path through the corridors.
The detection of walls can occur at either a local or personal level, depending on the technology used (proximity or contact). The path travelled can likewise be measured in a locale- or self-relative reference frame (local-area position fix or dead-reckoning method, respectively).
Of the other three, the choice of technology for the application depends largely on the maze construction. The robot is circular: if all corridors are known to be of a width equal to the robot's diameter, then the tactile sensor would be the simplest and most reliable method. If, however, the maze is constructed from larger corridors or rooms, this method might leave areas unexplored, and another method should be used.
Ultrasonic transceivers tend to be the more reliable in detecting large surfaces (as light sensors suffer greatly from ambient light interference) and are available in quite small packages. Hence a number of these placed on the robot would provide a fairly reliable wall detection system. A contact sensor at the front of the robot might also be included for detection of collision with objects invisible to the ultrasound.
In determining position, the following could be used:
Dead-reckoning, using odometry sensors, can provide good enough accuracy over short distances, and easily meets the physical constraints. Dead-reckoning does, however, suffer from cumulative error, which requires periodic correction. Due to the nature of maze solving, there is a great deal of back-tracking; the corrections could be incorporated into this process by continuously comparing the actual position of walls sensed to their expected position according to the map made on the outward journey. By this means, the accumulated error becomes a function of the net displacement in the solution of the maze, rather than the total distance travelled throughout the mapping process.
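The odometric update itself, for a differential-drive robot of this kind, can be sketched as follows (a minimal midpoint approximation; the wheelbase figure in the test is illustrative, not from the vehicle specification):

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheelbase):
    """Advance a differential-drive pose (x, y, theta) given the
    incremental distances (m) rolled by the left and right wheels,
    as reported by the odometry encoders."""
    d = (d_left + d_right) / 2.0            # distance moved by the centre point
    dtheta = (d_right - d_left) / wheelbase  # change in heading (radians)
    # Midpoint heading approximation for the translation step
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, (theta + dtheta) % (2.0 * math.pi)
```

Each call compounds the errors of the last, which is exactly the cumulative-error behaviour described above; the wall-comparison corrections during back-tracking serve to reset the pose before the error grows too large.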
Proceedings of the 2nd International Conference on Automated Guided Vehicle Systems. Institute for Production & Automation, Stuttgart, 1983.
Bock, Y., Leppard, N. (Eds.), 1990. Global Positioning System: An Overview. International Association of Geodesy Symposia. Springer-Verlag.
Borenstein, J., Everett, H. R., Feng, L., 1996. Navigating Mobile Robots: Systems and Techniques. A K Peters, Wellesley, MA.
Kak, A., Chen, S. (Eds.), 1987. Spatial Reasoning and Multi-sensor Fusion. Proceedings of the 1987 Workshop. Morgan Kaufmann Publishers, Inc.
Kaplan, E. D. (Ed.), 1996. Understanding GPS: Principles and Applications. Artech House Publishers.
Linkwitz, K., Hangleiter, U. (Eds.), 1989. High Precision Navigation: Integration of Navigational and Geodetic Methods. Springer-Verlag.
Coiffet, P. Robot Technology Vol. II: Interaction with the Environment.
Tetley, L., Calcutt, D., 1991. Electronic Aids to Navigation: Position Fixing. Edward Arnold.
Titterton, D. H., Weston, J. L., 1997. Strapdown Inertial Navigation Technology. Peter Peregrinus for the Institution of Electrical Engineers.
Toft, H., 1987. GPS Satellite Navigation. Shipmate Marine Electronics.
Robotica. Cambridge, 1995, Vol. 13.
Robotics Engineering: The Journal of Intelligent Engineering (1983-1986).
The International Journal of Robotics Research. MIT (April 1997, Vol. 16, No. 2).
Cosentino, R. J., Diggle, D. W., 1996. Differential GPS. In: Understanding GPS.
Dalton, G., 1997. Atlas: road robot. Industrial Robot, vol. 24, no. 2, 1997.
DoD/DoT, 1995. 1994 Federal Radionavigation Plan. Springfield, VA: National Technical Information Service.
Institute of Navigation, Summer 1996. ION Newsletter. http://www.ion.org/
Institute of Navigation, Winter 1996. ION Newsletter. http://www.ion.org/
Larcombe, M. H. E., Feb 1994. Mobile Robot Technology and Teleoperation Course Notes 1994. Dept. of Computer Science, University of Warwick.
Linkwitz, K., Wolfgang, M., 1989. Navigational Methods of Measurement in Geodetic Surveying. In: High Precision Navigation: Proceedings of an International Workshop, Stuttgart and Altensteig, May 1988.
Radio Technical Commission for Maritime Services Special Committee No. 104, 1994. RTCM Recommended Standards for Differential NAVSTAR GPS Service, Version 2.1. RTCM, Washington DC.
World Wide Web
Global Multi-Perspective Perception Robots. http://vision.ucsd.edu/papers/iros-wksp-final/
Robotics Web Servers. http://piglet.cs.umass.edu:4321/cgi-bin/robotics/
Mercat GPS Homepage. http://www.mercat.com/
US Coast Guard Navigation Center. http://www.navcen.uscg.mil/
Dana, P. H., 1995. Global Positioning System Overview. http://www.utexas.edu/
Dowling, K., 1995. History of the MRL. The Mobile Robot Laboratory, Carnegie Mellon University.
Gray, T. GPS & DGPS Explained. Communication Systems International Inc.