Surprise 97

Robot Navigation

"Where am I going and how do I get there"
An Overview of Local/Personal Robot Navigation:

Oliver Henlich

20 May 1997

Contents

  • Introduction
  • Robot guidance
  • Fixed position and mobile robots
  • Robots guided by off-board fixed paths
  • Wire-guided robots
  • Painted-line guided robots
  • Guidance techniques for "free-ranging" robots
  • Dead-reckoning
  • Tactile detection
  • Isolated binary contact sensor
  • Analogue sensors
  • Matrix Sensors
  • Proximity detection
  • Infrared proximity detection
  • Inertial navigation
  • Position determination via fixed beacons in the surroundings
  • Optical or ultrasonic imaging of the surroundings
  • Optical stereoscopic vision
  • CASE STUDY I: Application of a precision enhancing measure in 3D rigid-body positioning using Camera-Space manipulation.
  • Conclusion
  • CASE STUDY II: Mobile robot localisation using a single rotating sonar and two passive cylindrical beacons.
  • CASE STUDY III: A 3D localiser for autonomous robot vehicles
  • Geometric position algorithm
  • Kalman filter for position estimation
  • Conclusion
  • CASE STUDY IV: A mobile robot navigation method using a fuzzy logic approach
  • Measuring and perceiving the environment.
  • Analysing and modelling the measured environment.
  • Path planning
  • Conclusion
  • Bibliography

    Introduction

    "Go down this road for about a mile and turn left on Oxford street...it's the second or the third light, I think...and then turn right into the alley just past the butchers; I'll be by the second house on the left, the green one with the big hedge in front...you can't miss it."

    Great advances in technology will be required before it will be possible for a robot to follow navigational directions as above.

    The key difference between robotic and human navigation lies in perceptual capability. Humans can detect, classify, and identify environmental features under widely varying conditions, independent of relative orientation and distance. Current robots, while able to detect stationary obstacles before running into them, have very limited perceptual and decisional capabilities. Although much research is being done to improve robotic navigation through enhanced perception, any decision to adopt these emerging technologies must rest on a critical analysis of technical risk and cost.

    In general, robotic navigation involves three distinct aspects:

    1. Global navigation: determining one's position in absolute or map-referenced terms, and moving to a desired destination.
    2. Local navigation: determining one's position relative to objects (stationary or moving) in the environment, and interacting with them correctly.
    3. Personal navigation: monitoring the positions of the various parts which make up oneself, in relation to each other and in handling objects.

    However, though these aspects are distinct, they are often related. For example, local navigation capabilities also support global navigation in a mapped environment, since knowing one's position relative to a known mapped feature determines one's absolute position.

    Again, the main problem is one of perception: having confidence that the detected and localised feature is in fact the same feature that appears in one's map. The approaches taken for the Interior (local/personal) and Exterior (global) cases are somewhat different.

    The subject matter to follow is primarily concerned with the local and personal navigational capabilities of a robot.


    Robot guidance

    Robot guidance techniques have traditionally consisted of following buried cables or painted lines. These techniques are very reliable and fairly easy to implement, but they heavily constrain the motion of the robot.

    Current robotics research is often concerned with devising techniques and methods which will allow improved implementations of the following two types of advanced robots:

    1. Those that can understand a problem, resolve it and apply an appropriate solution, starting with incomplete (but adequate) information.
    2. Those that use information derived from the environment to modify their behaviour (dynamically) in order to attain set objectives.

    As a result, much time has been invested in trying to improve robots' navigational capabilities. This involves determining positions and paths in real time while the robot is in motion, using on-board and off-board position/velocity sensors.


    Fixed position and mobile robots

    A fixed industrial robot essentially consists of a mechanical structure. One end is firmly fixed to the floor while the other end (the end-effector) is free to move under programme control. Sensors are attached to the moving parts of the robot so that the position of the end-effector can be calculated mathematically (since the lengths of the links are known) relative to a fixed frame of reference with its origin at the base.

    A mobile robot, however, is in a moving frame of reference, so its position must be determined relative to a fixed frame of reference somewhere in the surroundings.

    This fundamental difference between fixed and mobile robots implies that, in order to control a mobile robot reliably, one of the following conditions must be met:

    1. The motion of the robot must be heavily constrained, with its paths fixed off-board the robot.
    2. A fixed frame of reference must be provided for the robot, so that it can continuously refer to this frame using on-board sensors, and therefore know its precise position in the surroundings. Hence the robot can detect and correct for deviations in its path, in comparison to the commanded path stored on-board.
    3. A "fixed map" of the surroundings must be put on board the robot, in which the robot's position is known. The robot would then periodically produce a "current map" of the surroundings, during motion, using on-board sensors. The "current map" would then be compared to the "fixed map" to determine the current robot position. Using this technique the robot could dynamically recognise and avoid obstacles, hence determining its own path to the commanded destination.

    Robots guided by off-board fixed paths

    Wire-guided robots

    This is one of the most popular guidance techniques for industrial robots. It uses buried cables arranged in complex closed loops, each loop carrying an a.c. signal of a different frequency. Small magnetic plates are fixed to the ground at junctions, and before and after sharp bends, so that these potential danger points can be detected and speed reduced appropriately. The system also has communication points along the paths where a robot can report its status to the main computer, which co-ordinates all the robots and plans and blocks routes to avoid collisions.

    These systems are popular in industry because they are fairly reliable and simple. Their main drawbacks are that installing the buried cables is expensive and disruptive, and that the resulting fixed paths are difficult to alter afterwards.

    Painted-line guided robots

    These are popular in light engineering and office environments. The system is very similar to that of wire-guided robots, except for the guidance medium: the robots follow lines painted on the floor in visible or invisible fluorescent dye (invisible dyes are usually made to fluoresce by shining ultraviolet light on them).

    The advantage of this guidance technique over wire guidance is that paths can be laid quickly and are easy to alter.

    The main disadvantage is that the painted lines become dirty and worn with use, and so must be kept clean and periodically repainted.


    Guidance techniques for "free-ranging" robots

    Dead-reckoning

    This consists of periodically measuring the precise rotation of each robot drive wheel (using, for example, optical shaft encoders). The robot can then calculate its expected position in the environment, provided it knows the starting point of its motion.

    The main problem with this technique is drive-wheel slippage: if a wheel slips, its encoder still registers a rotation even though that wheel is not driving the robot relative to the ground. The other problem is that errors accumulate, since each new position estimate is built on the previous one.
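
    To make the idea concrete, below is a minimal sketch of a differential-drive dead-reckoning update. The wheel radius, wheelbase and encoder resolution are illustrative assumptions, not values from any particular robot.

        import math

        # Illustrative parameters (assumed for this sketch).
        WHEEL_RADIUS = 0.05     # metres
        WHEEL_BASE = 0.40       # distance between the drive wheels, metres
        TICKS_PER_REV = 512     # encoder counts per wheel revolution

        def dead_reckon(x, y, theta, left_ticks, right_ticks):
            """Advance the pose estimate (x, y, theta) by one sampling
            period of encoder counts. Assumes no wheel slippage, which
            is precisely the technique's weak point."""
            d_left = 2.0 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
            d_right = 2.0 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
            d_centre = (d_left + d_right) / 2.0        # distance moved by the centre
            d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
            # Integrate along the arc, using the mid-period heading.
            x += d_centre * math.cos(theta + d_theta / 2.0)
            y += d_centre * math.sin(theta + d_theta / 2.0)
            return x, y, theta + d_theta

    Because each update builds on the last, any slippage or quantisation error is carried forward indefinitely, which is why dead-reckoning is usually combined with one of the absolute position-fixing techniques described below.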

    Tactile detection

    Tactile detection is a form of perception through physical contact between the robot and its environment, such that the geometry of the environment may be recognised.

    Below is a brief summary of some of the common techniques employed for this type of perception.

    Isolated binary contact sensor

    This acts as the equivalent of a two-position switch, reporting its present state: contact or no contact. Its positioning is crucial to its usefulness. For example, with such sensors placed at strategic points on a moving arm, obstacles encountered while the arm is in motion can be detected and appropriate decisions can follow.

    Analogue sensors

    An application of this type of sensor is the Hill and Sword gripper. The gripper carries buttons which, when pressed, move a screen that obscures, in proportion to the stress the button undergoes, a light beam emitted by an LED and picked up by a phototransistor. This information can then be used to give an indication of the clamping force and of the form of the gripped object.

    Matrix Sensors

    These are usually made up of a matrix of elementary digital and analogue sensors. They are most commonly used to produce shape information; however, their outputs are not always simple to interpret.

    Provided such outputs can be interpreted reliably, the robot can use them to control its interaction with the environment.

    Proximity detection

    Tactile detection systems can create hazards because they rely on physical contact. Avoiding contact requires prior knowledge of the positions of the objects present in the robot's field of operation, together with appropriate trajectory planning. Proximity detection (or remote sensing) is a way of meeting these requirements, though at the expense of the positional precision that tactile detection offers.

    Proximity sensors can be used when:

    1. The object naturally transmits a signal (eg. radioactive).
    2. The object is equipped with its own transmitter.
    3. A signal is transmitted to the object and then received after reflection. This signal can be of natural origin (eg. reflection of ambient light) or can be artificial.

    If the first two methods are used, the detector is a passive receiver; this is also true for the detection of a signal of natural origin, as in case 3. If the signal is of artificial origin, there must be an artificial transmitter as well as a receiver. When these two devices are placed on the same sensor, an active sensor is created.

    Some of the most common active sensors use ultrasound, infrared light or radio waves. Ultrasound usually proves effective for distance measurement, while radio waves have limited potential on grounds of cost and complexity.

    Infrared proximity detection

    If the sensor is positioned facing a surface, the light received by the detecting photodiode produces a signal that is a function of the distance between the sensor and the surface. The response curve rises to a single peak and then falls away with increasing distance.

    Given a smooth reflecting surface, three difficulties would be encountered in making a distance measurement:

    1. The reflectance ("whiteness") of the surface has to be known.
    2. Except at the peak (S) of the curve, the same signal level can denote two possible distances.
    3. The axis of the sensor must be normal to the surface.

    This is why these proximity sensors are mostly used for detecting presence, rather than for measuring distance or for recognition.
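
    The ambiguity in point 2 is easy to demonstrate numerically. The sketch below uses an assumed model response curve (a signal rising to a single peak and then decaying, a stand-in for a real sensor's measured curve) and shows that a single signal level corresponds to two distances:

        import math

        def ir_response(d):
            """Assumed model response: zero at contact, one peak (at
            0.5 m here), then decay. Not a real sensor's curve."""
            return d * math.exp(-2.0 * d)

        # Scan distances and collect those giving (nearly) a chosen signal level.
        target = 0.8 * ir_response(0.5)          # 80% of the peak signal
        matches = [d / 1000.0 for d in range(1, 2001)
                   if abs(ir_response(d / 1000.0) - target) < 5e-4]
        print(matches)   # the matches cluster around two distinct distances,
                         # one on each side of the peak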

    So clearly, proximity detectors do not provide a solution to the problem of position awareness and navigation on their own. However, they play a vital part in navigation of "free-ranging" robots as they can be used effectively for obstacle avoidance.

    Inertial navigation

    This consists of setting up the axis of a gyroscope parallel to the direction of motion of the robot. When the robot deviates from its path, an acceleration occurs perpendicular to the direction of motion; this is sensed by an accelerometer held in the gyroscopically stabilised frame. Integrating this acceleration twice gives the positional deviation from the path, which can then be corrected.

    The problem with this system is that a deviation proceeding at constant velocity produces no acceleration, and so cannot be detected or corrected. The axis of the gyroscope also tends to drift with time, giving rise to errors.
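
    A minimal sketch of the double integration involved, assuming a fixed sampling period and idealised, noise-free readings of the lateral acceleration:

        def integrate_deviation(accelerations, dt):
            """Twice-integrate sampled lateral acceleration (m/s^2)
            to obtain the lateral deviation from the path (m)."""
            velocity, deviation = 0.0, 0.0
            for a in accelerations:
                velocity += a * dt           # first integration: velocity
                deviation += velocity * dt   # second integration: position
            return deviation

        # A deviation built up earlier at constant velocity contributes no
        # acceleration, so it is invisible to the sensor:
        print(integrate_deviation([0.0] * 100, 0.01))   # -> 0.0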

    Position determination via fixed beacons in the surroundings

    These beacons are fixed at appropriate locations in the environment, and their precise positions are known to the robot. As it moves, the robot uses an on-board device to measure its exact distance and direction from any one beacon, and from these it can calculate its own precise position in the environment. A minimal sketch of the calculation follows; please refer to Case Studies II and III for applications of this technique.
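
    The sketch assumes the measured bearing has already been converted into the fixed world frame (for example, by using a compass or the robot's heading estimate):

        import math

        def position_from_beacon(beacon_x, beacon_y, distance, bearing):
            """Robot position from one beacon fix. 'bearing' is the
            direction from the robot to the beacon, expressed in the
            fixed world frame (radians)."""
            return (beacon_x - distance * math.cos(bearing),
                    beacon_y - distance * math.sin(bearing))

        # Example: beacon at (10, 5), seen 2 m away due east of the robot.
        print(position_from_beacon(10.0, 5.0, 2.0, 0.0))   # -> (8.0, 5.0)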

    Optical or ultrasonic imaging of the surroundings

    This often involves creating an "absolute map" of the surroundings and storing it on board the robot. As the robot moves, it periodically generates a "current map" of the surroundings using an on-board video camera or ultrasonic transducers. Various objects in the "absolute map" are then recognised in the "current map", and cross-correlation yields estimates of the robot's position; a number of these estimates are averaged to give the current position.
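
    As an illustration of the matching step, the sketch below slides a small "current map" over an "absolute map" (both reduced to occupancy grids, an assumption made for the sketch; rotation is ignored for brevity) and scores each offset by cross-correlation. The best-scoring offset locates the robot's view within the absolute map:

        def best_offset(absolute, current):
            """Exhaustively cross-correlate a small occupancy grid
            ('current') against a larger one ('absolute'); return the
            best-matching (row, column) offset."""
            ah, aw = len(absolute), len(absolute[0])
            ch, cw = len(current), len(current[0])
            best, best_score = (0, 0), -1
            for r in range(ah - ch + 1):
                for c in range(aw - cw + 1):
                    score = sum(absolute[r + i][c + j] * current[i][j]
                                for i in range(ch) for j in range(cw))
                    if score > best_score:
                        best, best_score = (r, c), score
            return best

        absolute = [[0, 0, 0, 0],
                    [0, 1, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 0]]
        current = [[1, 1],
                   [1, 0]]
        print(best_offset(absolute, current))   # -> (1, 1)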

    The main disadvantages of this system are the heavy computational load of building and matching the maps, and its reliance on an accurate, up-to-date "absolute map" of surroundings that may change.

    Please refer to Case study IV for an application of Ultrasonic navigation techniques.

    Optical stereoscopic vision

    This consists of viewing the same point on an object in the surroundings using two on-board cameras. Precise measurements are then made on the stereoscopic image of the object: the angular disposition of each camera is measured, and since the inter-camera distance is known to the robot, the distance to the object can be estimated. If the object is recognisable in the "absolute map", the position of the robot can be estimated, and repeating this procedure with several objects allows a better estimate to be made. The main problems with this technique lie in reliably identifying the same point in both camera images (the correspondence problem) and in the precision of the angular measurements.
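
    A minimal planar sketch of the triangulation itself (real systems work in three dimensions and must first solve the correspondence problem just mentioned):

        import math

        def triangulate(baseline, theta_l, theta_r):
            """Locate an object point seen by two cameras a known
            distance apart. theta_l is the angle at the left camera
            between the baseline and the ray to the point; theta_r is
            the same at the right camera (radians, same side)."""
            apex = math.pi - theta_l - theta_r   # angle at the object point
            if apex <= 0:
                raise ValueError("rays do not converge")
            # Sine rule in the camera-camera-object triangle.
            range_l = baseline * math.sin(theta_r) / math.sin(apex)
            return (range_l * math.cos(theta_l), range_l * math.sin(theta_l))

        # Example: cameras 0.2 m apart, both seeing the point at 45 degrees.
        print(triangulate(0.2, math.radians(45), math.radians(45)))  # ~(0.1, 0.1)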

    This is currently a large research area as it seems to hold the most promising solutions to the navigational and perceptual problems of robots. Below is one of its more recent developments (Case study I).


    CASE STUDY I: Application of a precision enhancing measure in 3D rigid-body positioning using Camera-Space manipulation.

    (The International Journal of Robotics Research, Vol. 16, No. 2, April 1997, MIT Press)

    Camera-Space manipulation is a relatively new alternative to perception through vision. "Most of the [present] methods seek the use of cameras to 'measure' the generally 3D position and orientation of the workpiece in an absolute reference frame, relative to which the kinematics of the robot are first calibrated"

    It is based on specifying and pursuing the manoeuvre objectives in the "subjective" reference frame of each sensor (ie. the 2D image plane of each camera).

    Determining camera-space objectives involves relating the manipulator's pose, as given by its joint and link readings, to the camera-space locations of visual cues on the object, as follows.

    In order to acquire an objective, the manipulator is moved into various positions while samples are taken of the xyz co-ordinates (derived from the joint and link readings of the manipulator) and of the corresponding camera-space co-ordinates of cues on the object. An approach trajectory can then be calculated and begun, with samples being taken throughout so that the motion can be corrected where necessary.

    To overcome perspective problems, a mathematical technique called "flattening" is employed. This modifies visual samples to become consistent with flat orthographic projections.
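
    The estimation step can be pictured, in heavily simplified form, as fitting a camera model to the collected samples. The sketch below fits a linear (orthographic) mapping from manipulator xyz co-ordinates to 2D camera-space co-ordinates by least squares; it is only a schematic stand-in for the paper's actual estimation and flattening procedures:

        import numpy as np

        def fit_orthographic(xyz_samples, uv_samples):
            """Least-squares fit of a linear map [x, y, z, 1] -> (u, v)
            from manipulator co-ordinates to camera-space co-ordinates.
            A schematic stand-in for camera-space manipulation's real,
            more elaborate camera model."""
            X = np.hstack([np.asarray(xyz_samples, dtype=float),
                           np.ones((len(xyz_samples), 1))])
            A, *_ = np.linalg.lstsq(X, np.asarray(uv_samples, dtype=float),
                                    rcond=None)
            return A   # shape (4, 2)

        def predict(A, xyz):
            """Predicted camera-space position of a new xyz point."""
            return np.append(np.asarray(xyz, dtype=float), 1.0) @ A

    Once fitted, such a mapping lets the manipulator be steered toward a target expressed purely in camera-space co-ordinates, with the fit refined as new samples arrive during the approach.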

    Conclusion

    Numerical simulation, in which all other error sources were eliminated, was used to test the flattening procedure. The results showed that flattening could convert perspective projections into orthographic ones with essentially perfect accuracy. The physical experiment did not achieve perfect results, since other error sources (such as sampling) were introduced; nevertheless, a physical positioning error of less than 1 mm was achieved.

    (For a detailed example of 3D positioning and Camera-Space manipulation see http://www.nd.edu/NDInfo/Research/sskaar/Home.html)


    CASE STUDY II: Mobile robot localisation using a single rotating sonar and two passive cylindrical beacons.

    (Robotica 1995, Vol. 13 p243 Cambridge)

    This is a relatively new method of estimating position and heading angle of a mobile robot moving on a flat surface.

    The passive beacons consist of two cylinders with different diameters. The rotating sonar "sweeps" the area in front of it and so detects the echoes from the two different beacons. This information can then be used to determine the robot's position mathematically. The advantage of this arrangement is that position and heading can be determined from a single robot position (ie. it does not rely on robot movement to calculate heading). This technique also avoids the conventional method of building up a "current map" of the environment and comparing it with a given "absolute map" to determine position.

    As knowledge of the current speed of sound is essential, this value has to be continuously updated. Conventional methods use temperature and humidity measurements to calculate the speed, but each extra transducer introduces a further error source into the system. The system developed here instead uses an on-board reflector at a known distance from the sonar, onto which the sonar is periodically directed in order to calibrate itself. This improved the overall accuracy of the measurements made.
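
    A minimal sketch of the two steps involved: calibrating the speed of sound against the on-board reference reflector, and then fixing position by intersecting the two range circles around the beacons. The reflector distance is an illustrative assumption, and the real system also uses the beacons' differing diameters to tell them apart:

        import math

        REFLECTOR_DIST = 0.30   # sonar-to-reflector distance, metres (assumed)

        def speed_of_sound(ref_echo_time):
            """Calibrate against the on-board reflector at a known distance."""
            return 2.0 * REFLECTOR_DIST / ref_echo_time

        def localise(b1, b2, t1, t2, c):
            """Intersect the range circles around beacons b1 and b2 (known
            (x, y) positions), given echo times t1, t2 and sound speed c.
            Both intersection points are returned; the ambiguity is resolved
            by other knowledge, e.g. which side of the beacon line the
            robot is known to be on."""
            r1, r2 = c * t1 / 2.0, c * t2 / 2.0
            dx, dy = b2[0] - b1[0], b2[1] - b1[1]
            d = math.hypot(dx, dy)
            a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
            h = math.sqrt(max(r1 * r1 - a * a, 0.0))
            px, py = b1[0] + a * dx / d, b1[1] + a * dy / d
            return ((px + h * dy / d, py - h * dx / d),
                    (px - h * dy / d, py + h * dx / d))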


    CASE STUDY III: A 3D localiser for autonomous robot vehicles

    (Robotica 1995, Vol. 13 p87 Cambridge)

    This study presents two different algorithms for position and orientation determination.

    The prototype implemented makes use of four ultrasonic beacons at known positions in water (four are required to recover all six degrees of freedom): for example, three beacons floating on the surface and one below.

    The beacons fire ultrasonic pulses in sequence with a fixed inter-firing period. The first one in the sequence fires a double pulse for initial identification. Synchronisation of the beacons is vital for this method to work.

    Geometric position algorithm

    This is based on the gradients of three scalar fields, each field being the difference between the distance to a reference beacon and the distance to one of the other beacons (differences are used since absolute distances are unknown). The aim of the algorithm is to find the 3D position whose differences in distance match the measurements made by the localiser receiver as it picks up pulses from the beacons.

    In general the algorithm converges in 3 to 4 iterations, taking a few milliseconds on a 16 MHz 80286; the biggest factor in the overall delay is therefore the inter-firing period of the beacons rather than the computation.
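
    A sketch of the idea, assuming four beacon positions and measured differences in distance taken relative to the first (reference) beacon; a few Gauss-Newton-style iterations, driven by the gradients of the difference fields, move the estimate toward the point that matches the measurements (the published algorithm differs in detail):

        import numpy as np

        def locate(beacons, diffs, guess, iterations=4):
            """Find the 3D point whose differences in distance to
            beacons[1:], relative to beacons[0], match the measured
            'diffs' (an echo of the 3-4 iterations reported)."""
            p = np.asarray(guess, dtype=float)
            b = np.asarray(beacons, dtype=float)    # shape (4, 3)
            m = np.asarray(diffs, dtype=float)      # shape (3,)
            for _ in range(iterations):
                d = np.linalg.norm(b - p, axis=1)   # distance to each beacon
                residuals = (d[1:] - d[0]) - m
                # Gradient of each difference-of-distance field at p.
                grads = (p - b[1:]) / d[1:, None] - (p - b[0]) / d[0]
                step, *_ = np.linalg.lstsq(grads, -residuals, rcond=None)
                p += step
            return p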

    Kalman filter for position estimation

    This technique rejects spurious data and noise more effectively. It updates the position estimate each time a beacon fires, thus increasing the frequency of localisation by a factor of four. It also allows motion data to be incorporated, and can process data from more than four beacons (the minimum required for localisation). In essence, it re-estimates the state of the whole system each time a pulse arrives.

    The main drawback of this method is that it is computationally slower than the geometric position algorithm outlined above, taking about 230 ms per measurement step on a 16 MHz 80286.
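
    The flavour of the filter can be seen in a deliberately stripped-down, one-dimensional sketch (the real filter estimates the full 3D state and must handle the nonlinear range measurements):

        def kalman_1d(estimate, variance, measurement, meas_variance,
                      motion=0.0, motion_variance=0.0):
            """One predict/update cycle of a scalar Kalman filter."""
            # Predict: incorporate commanded motion and its uncertainty.
            estimate += motion
            variance += motion_variance
            # Update: blend in the new measurement, weighted by confidence.
            gain = variance / (variance + meas_variance)
            estimate += gain * (measurement - estimate)
            variance *= (1.0 - gain)
            return estimate, variance

    Because one such cycle can be run whenever any single beacon fires, the estimate is refreshed four times per firing sequence, as noted above.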

    Conclusion

    The accuracy of the prototype was about 50 mm depending on its position in relation to the beacons.

    The effectiveness and accuracy of this method could be improved by more sophisticated pulse arrival time detection techniques. This would involve using better transducers and more beacons.


    CASE STUDY IV: A mobile robot navigation method using a fuzzy logic approach

    (Robotica 1995, Vol. 13 p437 Cambridge)

    The aim of this study is to design and implement an autonomous mobile robot; in other words, a system capable of perceiving and interpreting its environment and of extracting and realising a task without any outside help. This involves three main tasks, described in turn below.

    Measuring and perceiving the environment.

    This was achieved through the use of twelve ultrasonic sensors arranged in six pairs around the front half of the robot. Crosstalk between sensors was largely avoided by activating the pairs in a random sequence.

    Analysing and modelling the measured environment.

    Distance information from three successive sensor groups is considered together, and the resulting profile can be categorised into one of three configurations: Edge (E), Vertex (V) or Channel (C).
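
    One plausible rule of thumb for such a classification, comparing the middle reading with its two neighbours, is sketched below; the thresholds and rules are illustrative guesses, not those of the cited paper:

        def classify(left, middle, right, tolerance=0.1):
            """Classify three successive distance readings (metres) as
            Edge, Vertex or Channel. Illustrative rules only."""
            if middle < min(left, right) - tolerance:
                return "V"   # middle closest: a corner jutting towards the robot
            if middle > max(left, right) + tolerance:
                return "C"   # middle farthest: an opening between surfaces
            return "E"       # roughly collinear readings: a flat edge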

    Path planning

    The tactic here is similar to that of a driver trying to get somewhere in a city of which he has only a vague idea: his actions must be determined at each moment, according to the present geometrical constraints. A human can instantly perceive danger and evaluate it by reasoning. For robot navigation, the problem is broken down into a set of principal actions, each weighted at every moment according to the measured constraints, as the fuzzy-logic sketch below illustrates.
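
    A minimal illustration of the fuzzy reasoning involved; the membership functions, rules and defuzzification scheme are all assumed for the sketch, not taken from the paper:

        def near(d):
            """Membership of "obstacle is near": 1 at 0 m, 0 beyond 1 m."""
            return max(0.0, min(1.0, 1.0 - d))

        def steering(left_dist, right_dist):
            """Two fuzzy rules: IF near on the left THEN steer right;
            IF near on the right THEN steer left. Defuzzified as the
            weighted average of the rules' turn rates (rad/s)."""
            rules = [(near(left_dist), -0.5),   # steer right
                     (near(right_dist), +0.5)]  # steer left
            total = sum(weight for weight, _ in rules)
            if total == 0.0:
                return 0.0   # nothing near: go straight
            return sum(weight * turn for weight, turn in rules) / total

        print(steering(0.3, 2.0))   # obstacle near on the left -> steer right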


    Conclusion

    The above study may seem somewhat non-exhaustive, as it does not mention developments and applications of artificial intelligence in relation to a robot's ability to perceive and move in its environment. This is intentional, however, so as to avoid inconclusive discussion of theoretical developments and to concentrate on the common sense and basic ideas which should be the starting point for research or applications in robot technology.

    Developing robot-environment interaction techniques seems to be the key to improving robot technology in the future, and this directly implies improving perceptual and navigational capabilities. Perception of the environment represents the greatest challenge in this respect. The oft-mentioned desirability (or indeed requirement) of eye-hand co-ordination has evolved into eye-hand-touch co-ordination, improving the precision with which a robot can position and move itself. As new and faster methods of analysing vision data are developed, the reliability and accuracy of this form of perception will also increase.


    Bibliography

    1. Proceedings of the 2nd International Conference on Automated Guided Vehicle Systems (Institute for Production & Automation, Stuttgart, 1983)
    2. Robot Technology, Vol. II: Interaction with the Environment (Philippe Coiffet)
    3. Robotica (Cambridge, Vol. 13, 1995)
    4. Robotics Engineering: The Journal of Intelligent Engineering (1983-1986)
    5. The International Journal of Robotics Research (MIT Press, Vol. 16, No. 2, April 1997)