"Go down this road for about a mile and turn
left on Oxford street...it's the second or the third light, I
think...and then turn right into the alley just past the butchers;
I'll be by the second house on the left, the green one with the
big hedge in front...you can't miss it."
Great advances in technology will be required before it is possible for a robot to follow navigational directions such as these.
The key difference between robotic and human navigation lies in the vast gap in perceptual capabilities. Humans can detect, classify, and identify environmental features under widely varying environmental conditions, independent of relative orientation and distance. Current robots, while able to detect stationary obstacles before running into them, have very limited perceptual and decisional capabilities. Although much research is being done to improve robotic navigational performance through enhanced perception, decisions to utilise these emerging technologies must be based on a critical analysis of the technical risks involved.
In general, robotic navigation involves three distinct aspects: global, local, and personal navigation.
However, though these aspects are distinct, they
are often related. For example, local navigation capabilities
also support global navigation in a mapped environment, since
knowing one's position relative to a known mapped feature determines
one's absolute position.
Again, the main problem is one of perception, having
confidence that the detected and localised feature is in fact
the same feature that appears in one's map. The approaches taken
for the Interior (local/personal) and Exterior (global) cases
are somewhat different.
The material that follows is primarily concerned with the local and personal navigational capabilities of a robot.
Early robot guidance techniques consisted essentially of following buried cables or painted lines. These techniques are very reliable and fairly easy to implement, but they heavily constrain the motion of the robot.
Current robotics research is often concerned with devising techniques and methods which will allow improved implementations of the following two types of advanced robots:
As a result, much time has been invested in trying to improve robots' navigational capabilities. This involves determining positions and paths in real time while the robot is in motion, using on-board and off-board position/velocity sensors.
A fixed industrial robot essentially consists of a mechanical structure. One end is firmly fixed to the floor while the other end (the end-effector) is free to move under programme control. Sensors are attached to the moving parts of the robot so that the position of the end-effector can be calculated mathematically (since the lengths of the links are known), relative to a fixed frame of reference with its origin at the base.
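As an illustration, the end-effector position of a simple two-link planar arm can be computed directly from the joint readings and known link lengths (a minimal sketch; the link lengths and angles below are arbitrary example values):

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position of a two-link planar arm whose base is at the
    origin. theta1 is measured from the x-axis; theta2 is the second
    joint's angle relative to the first link."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero the arm lies along the x-axis,
# so the end-effector sits at a distance l1 + l2 from the base.
print(forward_kinematics(1.0, 0.5, 0.0, 0.0))  # → (1.5, 0.0)
```

The same principle extends to arms with more links and to three dimensions, at the cost of more involved trigonometry.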
A mobile robot, however, is in a moving frame of reference; thus its position must be determined relative to a fixed frame of reference somewhere in the surroundings.
This fundamental difference between a fixed and a mobile robot implies that, in order to control the robot reliably, either of the following conditions must be met:
This is one of the most popular guidance techniques for industrial robots. It uses buried cables arranged in complex closed loops, each carrying an a.c. signal of a different frequency. Small magnetic plates are fixed to the ground at junctions, and before and after sharp bends, to allow detection of these potential danger points and an appropriate reduction in speed. The system also has communication points along the paths where the robot can report its status to the main computer, which co-ordinates all the robots and plans and blocks routes to avoid collisions.
These systems are popular in industries because they are fairly reliable and simple. However, they suffer from the following drawbacks:
These are popular in light-engineering or office environments. The system is very similar to that of wire-guided robots, except for the guidance technique: these robots follow lines on the floor which have been painted using visible or invisible fluorescent dye (the dyes are usually caused to fluoresce by shining UV light on them).
The advantage of this guidance technique over wire guidance is that paths can be laid down quickly and are easy to alter.
The disadvantages are:
This consists of periodically measuring the precise rotation of each robot drive wheel (using, for example, optical shaft encoders). The robot can then calculate its expected position in the environment, provided it knows its starting point.
The main problem with this technique is drive-wheel slippage. If a wheel slips, the encoder on that wheel registers a rotation even though the wheel is not driving the robot relative to the ground. The other problem is that errors accumulate.
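The dead-reckoning update for a differential-drive robot can be sketched as follows (an illustrative sketch with assumed wheel geometry; a real system must also account for slippage and the accumulating error just described):

```python
import math

def update_pose(x, y, heading, d_left, d_right, wheelbase):
    """One odometry step: d_left and d_right are the distances travelled
    by the two drive wheels (from the shaft encoders) since the last
    update. Assumes no slippage; small errors accumulate over steps."""
    d_centre = (d_left + d_right) / 2.0       # distance moved by the centre
    d_theta = (d_right - d_left) / wheelbase  # change in heading
    # use the midpoint heading as a better approximation over the segment
    x += d_centre * math.cos(heading + d_theta / 2.0)
    y += d_centre * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta

# Equal wheel travel: the robot moves straight ahead.
print(update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.5))  # → (1.0, 0.0, 0.0)
```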
Tactile detection is a form of perception through interaction between the robot and the environment such that the geometry of the environment may be recognised. This implies physical contact and requires:
Below is a brief summary of some of the common techniques employed for this type of perception.
This acts as the equivalent of a two-position switch, reporting the present state: contact or no contact. Its positioning is crucial to its usefulness. For example, if placed at strategic points on a moving arm, obstacles can be detected while the arm is in motion and appropriate decisions can follow.
An application of this type of sensor is the "Hill and Sword gripper". The gripper has sensor buttons which, when pressed, move a screen that obscures, as a function of the stress it undergoes, a ray emitted by an LED and picked up by a phototransistor. This information can then be used to give an indication of clamping force and form.
These are usually made up of a matrix of elementary digital and analogue sensors. They are most commonly used to produce shape information. However, the resulting data are not always simple to interpret.
The interaction with the environment can thus control itself if:
Tactile detection systems can create hazards because of their reliance on physical contact. Avoiding this requires prior knowledge of the positions of the objects present in the robot's field, together with appropriate trajectory planning. Proximity detection (or remote sensing) is a method of achieving these requirements, at the expense of the positional precision offered by tactile detection.
Proximity sensors can be used when:
If the first two methods are used, the detector is a passive receiver. This is also true for the detection of a signal of natural origin, as in 3. If the signal is of artificial origin, there is an artificial transmitter as well as a receiver; when these two devices are placed on the same sensor, an active sensor is created.
Some of the common active sensors use:
Ultrasound usually proves to be effective in distance measurement. Radio waves have limited potential on grounds of cost and complexity.
If the sensor is positioned facing the surface, the light received by the detecting photodiode produces a signal that is a function of the distance between the sensor and the surface. The response curve is shown in the diagram below.
Given a smooth reflecting surface, three difficulties would be encountered in making a distance measurement:
This is why these proximity sensors are mostly used for the detection of presence, rather than for measuring distance or for recognition.
So clearly, proximity detectors do not provide a solution to the problem of position awareness and navigation on their own. However, they play a vital part in navigation of "free-ranging" robots as they can be used effectively for obstacle avoidance.
This consists of setting up the axis of a gyroscope parallel to the direction of motion of the robot. When the robot
deviates from the path, an acceleration will occur perpendicular
to the direction of motion and this is detected by the gyroscope.
Integrating this acceleration twice gives the position deviation
from the path, which can then be corrected.
The problem with this system is that a path deviation at constant velocity produces no acceleration and so cannot be detected. The axis of a gyroscope also tends to drift with time, giving rise to errors.
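Both weaknesses show up in a naive double integration of the lateral acceleration samples (an illustrative sketch only):

```python
def double_integrate(accels, dt):
    """Integrate lateral acceleration twice to estimate the deviation from
    the path. A deviation at constant velocity gives zero acceleration and
    so is invisible; a constant sensor bias grows quadratically."""
    v = 0.0  # lateral velocity
    s = 0.0  # lateral displacement
    for a in accels:
        v += a * dt
        s += v * dt
    return s

# A drift at constant velocity produces no acceleration, so no correction:
print(double_integrate([0.0] * 100, 0.01))  # → 0.0
```

A small constant bias, by contrast, produces an apparent displacement that keeps growing with time even when the robot is perfectly on course.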
These beacons are fixed at appropriate locations in the environment. The precise locations of these beacons are known to the robot. As it moves, it uses some on-board device to measure its exact distance and direction from any one beacon. Hence the robot can calculate its own precise position in the environment. Please refer to Case study II & III for an application of this technique.
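With the range and bearing to a single beacon of known position, the robot's own position follows directly. A minimal sketch, assuming the bearing is known in the world frame (e.g. from a compass):

```python
import math

def position_from_beacon(bx, by, r, bearing):
    """Robot position given a beacon at (bx, by), the measured range r,
    and the world-frame bearing from the robot to the beacon."""
    return bx - r * math.cos(bearing), by - r * math.sin(bearing)

# A beacon at (10, 0) seen 5 m away along the positive x-axis
# places the robot at (5, 0).
print(position_from_beacon(10.0, 0.0, 5.0, 0.0))  # → (5.0, 0.0)
```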
This often involves creating an "absolute map"
of the surroundings and storing this on-board the robot. The robot
then periodically generates a "current-map" of the surroundings,
as it moves, using an on-board video camera or ultrasonic transducers.
Various objects in the "absolute map" are then recognised
in the "current map" and by cross-correlation, estimates
for the robot position are obtained. A number of these estimates
are then averaged to give the current position.
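The matching step can be sketched in one dimension: slide the current scan along the stored reference profile and keep the offset with the highest correlation (the profiles below are illustrative; a real system correlates 2D maps):

```python
def best_shift(reference, current):
    """Offset of `current` within `reference` that maximises the raw
    cross-correlation (dot product) between the two profiles."""
    best, best_score = 0, float("-inf")
    for shift in range(len(reference) - len(current) + 1):
        window = reference[shift:shift + len(current)]
        score = sum(r * c for r, c in zip(window, current))
        if score > best_score:
            best, best_score = shift, score
    return best

# The feature [1, 3, 1] matches the reference profile best at offset 2.
print(best_shift([0, 0, 1, 3, 1, 0, 0, 0], [1, 3, 1]))  # → 2
```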
The disadvantages of this system are:
Please refer to Case study IV for an application of Ultrasonic navigation techniques.
This consists of viewing the same point on an object in the surroundings, using two on-board cameras. Precise measurements are then made on the stereoscopic image of the object. The angular disposition of each camera is then measured, and since the inter-camera distance is known to the robot, the distance to the object can be estimated. If the object is recognisable in the "absolute map", the position of the robot can be estimated. Repeating this procedure with several objects allows a better estimation to be made. The main problems with this technique lie in:
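For the simplified case of two parallel cameras, the distance to the viewed point follows from the disparity between the two images (a sketch; the focal length, baseline, and disparity values below are arbitrary example figures):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel cameras separated by
    baseline_m, given the disparity (difference in image position) in
    pixels. Distant points give small disparities, so the estimate is
    very sensitive to measurement error."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(round(stereo_depth(700.0, 0.2, 35.0), 6))  # → 4.0 (metres)
```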
This is currently a large research area as it seems to hold the most promising solutions to the navigational and perceptual problems of robots. Below is one of its more recent developments (Case study I).
(The International Journal of Robotics Research, Vol. 16, No. 2, April 1997, MIT)
Camera-space manipulation is a relatively new alternative approach to perception through vision. "Most of the [present] methods seek the use of cameras to 'measure' the generally 3D position and orientation of the workpiece in an absolute reference frame, relative to which the kinematics of the robot are first calibrated."
It is based on:
The manoeuvre objectives are specified and pursued in the "subjective" reference frame of each sensor (i.e. the 2D plane of each camera).
Determining Camera-Space objectives involves:
In order to acquire an objective, a manipulator (for example) is moved into various positions while samples are taken of the xyz co-ordinates (derived from the joint and length readings of the manipulator) and the corresponding camera-space co-ordinates of cues on the object. An approach trajectory can then be calculated and started, while samples are constantly taken to correct the movement.
To overcome perspective problems, a mathematical technique called "flattening" is employed. This modifies visual samples to become consistent with flat orthographic projections.
Numerical simulation, with other error sources eliminated, was used to test the flattening procedure. The results showed that flattening could convert perspective views into orthographic projections with perfect accuracy. The physical experiment, however, did not achieve perfect results, as other error sources such as sampling were introduced. Nevertheless, a physical precision of better than 1 mm was achieved.
(For a detailed example of 3D positioning and Camera-Space manipulation see http://www.nd.edu/NDInfo/Research/sskaar/Home.html)
(Robotica 1995, Vol. 13 p243 Cambridge)
This is a relatively new method of estimating position
and heading angle of a mobile robot moving on a flat surface.
The passive beacons consist of two cylinders with
different diameters. The rotating sonar "sweeps" the
area in front of it and hence detects the signals from the two
different beacons. This information can then be used to determine
(mathematically) the position of the robot. The advantage of this
arrangement is that the position and heading can be determined
from a single robot position (i.e. it does not rely on robot movement
to calculate heading). Also, this technique does not use the conventional
method of building up a "current map" of its environment
and then comparing it with an "absolute map" given to
it to determine its position.
As knowledge of the current speed of sound is essential, this value has to be continuously updated. Conventional methods use temperature and humidity measurements to calculate the speed; however, this introduces another error source into the system (the error associated with each extra transducer). The system developed here makes use of an on-board reflector at which the sonar is periodically aimed in order to calibrate it (since the distance between the reflector and the sonar is known). This improves the overall accuracy of the measurements made.
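The calibration step can be sketched as follows (the distances are assumed example values; the essential point is that the known reflector distance fixes the speed of sound used for subsequent range measurements):

```python
def calibrate_speed_of_sound(reflector_distance_m, round_trip_s):
    """Speed of sound from an echo off a reflector at a known distance:
    the pulse travels out and back, so the path is twice the distance."""
    return 2.0 * reflector_distance_m / round_trip_s

def range_from_echo(speed_m_s, round_trip_s):
    """Distance to a target from its echo time, using the calibrated speed."""
    return speed_m_s * round_trip_s / 2.0

# A reflector 0.1 m away returning an echo after 2 * 0.1 / 343 s
# implies a speed of sound of 343 m/s.
c = calibrate_speed_of_sound(0.1, 2 * 0.1 / 343.0)
print(round(c, 3))  # → 343.0
```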
(Robotica 1995, Vol. 13 p87 Cambridge)
This study presents two different algorithms for position and orientation determination.
The prototype implemented makes use of four ultrasonic beacons at known positions in water (four are required to cover all six degrees of freedom), for example three beacons floating on the surface and one below.
The beacons fire ultrasonic pulses in sequence with a fixed inter-firing period. The first one in the sequence fires a double pulse for initial identification. Synchronisation of the beacons is vital for this method to work.
This is based on the gradients of three scalar fields. Each scalar field consists of the difference in distance between a reference beacon and one of the other beacons (differences are used since absolute distances are unknown). The aim of the algorithm is to find the 3D position whose differences in distance match the measurements made by the localiser receiver (which receives the pulses from the beacons).
In general the algorithm converges in 3 to 4 iterations (taking a few milliseconds on a 16 MHz 80286). The biggest factor in the overall delay, however, is the inter-firing period.
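A hypothetical sketch of this kind of search, reduced to two dimensions for brevity: the cost is the squared mismatch between the measured distance differences and those predicted at a trial position, and a numerical gradient is descended from an initial guess. The beacon positions, step size, and iteration count below are illustrative assumptions, not the published algorithm's values:

```python
import math

def localise(beacons, diffs, guess, iters=500, step=0.05, eps=1e-6):
    """beacons[0] is the reference beacon; diffs[i] is the measured
    difference |p - beacons[i+1]| - |p - beacons[0]|. Descends a
    numerical gradient of the squared mismatch from the initial guess."""
    def cost(p):
        d0 = math.dist(p, beacons[0])
        return sum((math.dist(p, b) - d0 - m) ** 2
                   for b, m in zip(beacons[1:], diffs))
    p = list(guess)
    for _ in range(iters):
        grad = []
        for axis in range(len(p)):
            q = list(p)
            q[axis] += eps
            grad.append((cost(q) - cost(p)) / eps)
        p = [pi - step * g for pi, g in zip(p, grad)]
    return p

# Distance differences generated for a true position of (1, 1):
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
diffs = [math.dist((1.0, 1.0), b) - math.dist((1.0, 1.0), beacons[0])
         for b in beacons[1:]]
print(localise(beacons, diffs, (0.5, 0.5)))  # converges near [1.0, 1.0]
```

The fixed step size used here is a simplification; the reported 3-4 iteration convergence implies a more refined update rule.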
This technique rejects spurious data and noise more effectively. It updates the position estimate each time a beacon fires, thus increasing the frequency of localisation by a factor of four. It also allows the incorporation of motion data and can process data from more than four beacons (the minimum required for localisation). In essence, it estimates the state of the whole system each time a pulse arrives.
The main drawback of this method is that it is computationally slower than the algorithm outlined above (taking about 230 ms per measurement step on a 16 MHz 80286).
The accuracy of the prototype was about 50 mm depending on its position in relation to the beacons.
The effectiveness and accuracy of this method could be improved by more sophisticated pulse arrival time detection techniques. This would involve using better transducers and more beacons.
(Robotica 1995, Vol. 13 p437 Cambridge)
The aim of this study is to design and implement an autonomous mobile robot; in other words, a system capable of perceiving, interpreting, and carrying out a task without any outside help. This involves three main tasks:
This was achieved through the use of twelve ultrasonic sensors grouped into six groups of two. The sensors were arranged round the front half of the robot. Crosstalk was largely avoided by activating the groups in random sequence.
Distance information is provided by three successive sensors. The distance information obtained can be categorised into three configurations: Edge (E), Vertex (V) and Channel (C).
The tactic here is similar to that of a driver trying to get somewhere in a city of which he has only a vague idea. His actions must be determined at each moment according to the present geometrical constraints. Humans can instantly perceive danger and evaluate it by reasoning. For robot navigation we break the problem down into principal actions:
The above study may seem somewhat non-exhaustive, as it does not mention developments and applications of artificial intelligence in relation to a robot's ability to perceive and move in its environment. This is intentional, however, so as to avoid inconclusive discussion of theoretical developments and to concentrate on the common sense and basic ideas which should be the starting point for research or applications in robot technology.
Developing robot-environment interaction techniques seems to be the key to improving robot technology in the future. This directly implies improving perceptual and navigational capabilities, and perception of the environment represents the greatest challenge in this respect. The often-mentioned desirability (or indeed requirement) of eye-hand co-ordination has evolved into eye-hand-touch co-ordination, to improve the precision with which the robot can position and move itself. As new and faster methods of analysing vision data are developed, the reliability and accuracy of this form of perception will also increase.