Exploring Mars Using Intelligent Robots

by Paris Andreou and Adonis Charalambides


1. Introduction

The exploration of Mars has long been considered a major goal in the exploration of the Solar System. At the beginning of the 20th century it was believed that intelligent life existed on Mars, while until 1960 the possibility of plant life was still considered [1]. However, detailed information from the Mariner 9 orbiter and the two Viking landers [2] that landed on Mars in 1976 ruled out the possibility of any life on Mars, but created the prospect of future human settlement of the Red Planet, as it came to be known.

Mars is the most Earth-like planet and the best candidate for the first human settlement off Earth. Information from the Vikings [3] reveals that Mars is a cold, dry planet with extreme temperatures and a thin atmosphere. The terrain is rough and often untraversable. However, certain features are most encouraging. From the thin atmosphere, nitrogen, argon and water vapour can be extracted, which are enough to prepare breathable air and water. Water can also be found in the soil and at the polar caps in the form of ice [4]. Wind and the sun are plausible sources of power. Furthermore, extracts from the atmosphere and the soil can be used to produce rocket propellants, fertilizer and other useful compounds and feedstocks [3].

Further exploration needs to be done in order to obtain a better insight into the Martian environment. Unfortunately, a manned mission is out of the question for the time being, for several reasons. A trip to Mars would require almost 2 years away from Earth, which creates a problem of supplying consumables. In addition, the physical factors still need to be thoroughly investigated: the extreme temperatures (reaching -100 degrees centigrade) and the need to produce breathable air still present problems. An unmanned mission is therefore a necessary precursor to a piloted flight to Mars.

The use of robotic rovers is an attractive and necessary option if exploration of Mars is to go forward. Having decided on this route, further problems come to the surface. The delay for radio signals between Mars and Earth varies between 6 and 41 minutes, while the long distance imposes a low communication bandwidth. This precludes the use of teleoperation for controlling the vehicle (a teleoperated vehicle is one in which every individual movement is controlled by a human being). Therefore, some autonomy of the vehicle is needed. However, a totally autonomous vehicle that could travel for extended periods carrying out its assigned tasks is simply beyond the present state of the art of artificial intelligence. This report considers the technical issues involved in the operation of a Mars exploration rover. In particular, the various navigation techniques and related technologies are discussed, while current robots and their performance are used as examples.

About to land on Mars, source: http://nssdc.gsfc.nasa.gov/planetary/mesur.html

2. Navigation

As mentioned earlier, the Mars rover must possess a certain degree of autonomy. There are various degrees of autonomy and various approaches to the way this autonomy is granted to the rover. These factors determine the navigation technique used, but all fall under two broad categories:

  1. Path-planning navigation and
  2. Reactive navigation (or real-time obstacle-avoidance navigation)
In path planning, some form of terrain analysis is performed and a safe route is decided before the vehicle is commanded to start moving. In reactive navigation, the rover moves towards a goal location and avoids obstacles or untraversable territory as it encounters them, without previous knowledge of their existence.

Two important path-planning techniques developed at the Jet Propulsion Laboratory (JPL) are Computer-Aided Remote Driving (CARD) and Semi-Autonomous Navigation (SAN).

2.1 Computer-Aided Remote Driving (CARD)

CARD [5,6] relies for its operation on stereo images sent from the rover to the ground station. Stereo imaging produces 3D pictures with enhanced depth perception, which enables the operator to estimate a safe path for the vehicle and designate this path using a 3D cursor. The ground-station computer calculates the control sequences for the vehicle according to the designated path and then sends these to the vehicle. The vehicle moves to the new location, sends a new picture and the process is repeated. Depending on the terrain and visibility conditions, the rover can move about 20 metres on each of these iterations. Taking into consideration the delays involved (round-trip signal delay, operator delay and computation delay), this approach results in an average speed of somewhat less than a centimetre per second.

The major advantage of CARD compared to teleoperation is the relatively reduced information transmission. In addition, since the major computation is done on Earth, the computers used can be as powerful as they come, saving the rover from carrying and powering any significant computers onboard. However, no matter how fast the path planning and computation are performed, the round-trip signal delay cannot be reduced, and the average speed is unlikely to improve dramatically. Therefore, CARD will be better suited to short-distance traverses, in cases such as crossing a difficult area or performing a number of experiments in one location. These may involve a manipulator arm and include rather complex operations such as detailed sampling, coring or other manipulation operations, perhaps even rover maintenance.

CARD was proposed at JPL in 1982 as a low-computation technique and developed on the Surveyor Lunar Rover Testbed (SLRV, shown in figure 1, originally designed for lunar operations back in the 1960s).

2.2 Semi-Autonomous Navigation (SAN)

In Semi-Autonomous Navigation [5,6,7], the rover is given approximate routes from Earth, but plans its local routes autonomously. Thus, some operations are performed on Earth while others are performed onboard the vehicle.

In this scenario, a satellite orbiting Mars sends stereo images of the areas of interest to the ground station on Earth. These images may have a resolution of about a metre and enable operators to plan a safe route for the vehicle, possibly a few kilometres in length. In addition to path planning, an elevation map is produced by computers. Both the elevation map and the planned path are sent to the rover.

Onboard the rover, laser rangefinders and stereo cameras are used to obtain images of the rover's immediate environment. These images are used to compute a local topographic map. This map is matched to the local portion of the global map sent from Earth, so that the rover can position itself on the global map and follow the designated route. By comparison of the global map sent from Earth and the local map obtained from the rover's sensors, a new, detailed, high-resolution map is produced by computers onboard the rover. This map is then analysed onboard to determine the safe areas over which to drive, while at the same time adhering to the route sent from Earth. An overview of SAN is shown in figure 2.

A rover collects samples on the surface of Mars, in this depiction by artist Ken Hodges. ("A Mars Rover for the 1990's" [6])

2.2.1 Path Planning and execution monitoring for SAN

The primary task of path planning is to find an appropriate path for the rover to follow in order to reach a goal location designated from Earth, avoiding any obstacles and untraversable terrain (e.g. elevation changes greater than the rover can negotiate). The path planning proposed [8] by JPL for SAN takes place in two phases. In phase I, the rover is given the global terrain map (section 3.2) from the orbiter, together with the goal site. The path planner generates a global path gradient [9] using a spreading activation algorithm [10]; this algorithm takes into account all the kinematic constraints of the rover and the traversability of the terrain to produce a global gradient. Phase II is a refined version of phase I: using the global gradient previously computed, the planner searches through the local terrain maps to find a safe route that will bring the rover closer to the goal location. It is assumed that the planner has three local terrain maps for the same region at different resolutions (low, medium and high). At first the planner tries to find a set of locations (exit zones) on the low-resolution map that will bring the rover closer to the goal location, using the computed local map gradient (as done in phase I). Then possible paths to the exit zones are computed using the local gradient map. The paths are passed to a simulator that determines which paths can be executed safely, and the path to the top-rated exit zone is preferably chosen. If the simulator cannot find an acceptable path from the low-resolution map, then the higher-resolution maps are used in turn until one is found. In the higher-resolution maps, exit zones are found by using as goal sites the exit zones of the immediately lower-resolution maps. If no path is accepted by the simulator (quite unlikely), the rover must back up and try to reach the goal site by a different route.
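The gradient-based planning described above can be illustrated with a minimal sketch. This is not JPL's actual spreading activation algorithm [10], which also encodes the rover's kinematic constraints; here, an "activation" simply spreads outward from the goal cell over a traversability grid by breadth-first search, and the rover then descends the resulting gradient.

```python
from collections import deque

def spread_gradient(grid, goal):
    """Breadth-first 'spreading activation' from the goal cell.

    grid: 2D list, 0 = traversable, 1 = untraversable.
    Returns a map of cost-to-goal for every traversable cell
    (None where unreachable).
    """
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    cost[gr][gc] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                if grid[nr][nc] == 0 and cost[nr][nc] is None:
                    cost[nr][nc] = cost[r][c] + 1
                    queue.append((nr, nc))
    return cost

def follow_gradient(cost, start):
    """Descend the gradient from start to the goal (cost 0).
    Assumes the start cell is reachable from the goal."""
    path = [start]
    r, c = start
    while cost[r][c] != 0:
        neighbours = [(r + dr, c + dc) for dr, dc in
                      ((-1, 0), (1, 0), (0, -1), (0, 1))]
        r, c = min((n for n in neighbours
                    if 0 <= n[0] < len(cost) and 0 <= n[1] < len(cost[0])
                    and cost[n[0]][n[1]] is not None),
                   key=lambda n: cost[n[0]][n[1]])
        path.append((r, c))
    return path
```

Because the gradient is computed once from the goal, any cell can recover a path to it by pure local descent, which is what makes the phase II refinement over local maps cheap.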

Once a local route is found, the rover starts to track that route. An execution monitoring system continuously monitors the values generated by the rover's sensors (e.g. inclinometers, wheel encoders, laser rangefinders) and compares their values against defined limits for each sensor. If at any time a sensor produces a value outside the acceptable range, it means that an obstacle has been encountered that was not found by the perception system when producing the terrain maps; for example, the current inclination of the rover is outside safety limits. In such a case a reflex action is performed. Usually the invocation of a reflex action causes the rover to stop, back up far enough to use its perception system to see where the violation occurred, and mark the spot in the local terrain map as a non-geometric hazard. A new path is planned using the revised local terrain map and executed. If the violation occurs again, the rover backs up and the region is marked as untraversable in the global terrain map.
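A sketch of such an execution monitor follows; the sensor names, limit values and action labels are illustrative assumptions, not the actual JPL parameters.

```python
def check_sensors(readings, limits):
    """Return the names of sensors whose readings violate their limits.

    readings: e.g. {"roll": 45.0, "pitch": 5.0} (degrees)
    limits:   e.g. {"roll": (-30.0, 30.0), ...} as (low, high) pairs
    """
    violations = []
    for name, value in readings.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            violations.append(name)
    return violations

def reflex_action(violations, local_map, position):
    """Minimal reflex: on any violation, stop and mark the current
    cell in the local terrain map as a non-geometric hazard, so the
    replanned path will avoid it."""
    if violations:
        local_map[position] = "non-geometric hazard"
        return "stop-and-back-up"
    return "continue"
```

After the reflex, the planner would be re-run on the revised local map, as the text describes.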

2.2.2 Performance

This technique has major advantages over CARD. Firstly, there are fewer transmissions to the rover (maybe only once a day), as longer routes (up to 10 kilometres) can be planned and sent to the rover, thanks to the satellite pictures. This leads to a much faster speed, averaging about 10 cm/s. Furthermore, this technique may prove to be more reliable than a totally autonomous rover, as its progress is reviewed regularly and its actions can be changed on the fly, according to changing demands.

However, it has its drawbacks too. Significant computational power is needed onboard the vehicle, which increases both the vehicle's weight and power requirements.

The first successful field demonstration of SAN occurred on May 7, 1990, with JPL's Planetary Rover Navigation Testbed, also known as "Robby", shown in figure 3. Robby is approximately 4 metres long and 2 metres wide, with six 1-metre wheels and a mass of 1200 kg. A distance of about 8 metres was travelled in 2 hours. This indicated the system's heavy dependence on the computational power onboard the vehicle. For example, the stereo image processing required 27 minutes per frame pair and the path planner took 38 minutes. There was an urgent need to increase the system's speed by reducing the amount of computation. Improvements in the processing of stereo images (by using commercial pipelined image processing boards) and modifications to the route planner led Robby to successfully navigate a 100-metre course in 4.3 hours on 13 September 1990, a 4 to 5 times performance increase compared to its maiden trip only 4 months earlier.

Lander and rover on the Martian surface, JPL's microrover program, source: http://nssdc.gsfc.nasa.gov/planetary/mesur.html

2.3 Reactive Navigation

Reactive navigation differs from path planning in that, while a goal location is known, the rover does not plan its path but rather moves towards the location by reacting to its immediate environment in real time.

There are various approaches to reactive navigation, but all stem from their designers' belief that robust autonomous performance can be achieved using minimal computational capabilities, as opposed to the enormous computational requirements of path-planning techniques.

Designers of reactive navigation systems oppose the traditional robotics and Artificial Intelligence (AI) philosophy: that a robot must have a "brain", where it retains a representation of the world. Furthermore, they discard the idea that there are three basic steps for the robot to achieve its "intelligent" task: perception, world modelling and action. Robots based on this paradigm spend an excessive time creating a world model before acting on it. Reactive methods seek to eliminate the intermediate step of world modelling.

Based on the above thinking, reactive methods share a number of important features. First, sensors are tightly coupled to actuators through fairly simple computational mechanisms. Second, complexity is managed by decomposing the problem according to tasks (e.g. collecting a soil sample) rather than functions (e.g. building a world model) [11]. Third, reactive systems tend to evolve as layered systems. This is where most disagreement occurs between the different researchers.

Three different approaches are described below.

2.3.1 The Subsumption architecture - a biological approach

The Subsumption architecture [12,13] was conceived by Rodney A. Brooks of the MIT Artificial Intelligence Lab. Brooks set out to develop an architecture where no central brain or representation would be used and traditional notions of planning would be totally discarded. Simple behaviours were built first, connecting sensing to actuation. Then higher-level behaviours were added, without modifying older behaviours. The new higher-level behaviours suppress the original layers whenever the higher levels get triggered. Depending on what its sensors tell it at any given moment, the robot chooses the appropriate behaviour. Essentially, it acts as a giant finite state machine.
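The layering idea can be sketched in a few lines: each behaviour couples a sensor predicate to an action, and the highest triggered layer subsumes everything beneath it. This is a simplified illustration, not Brooks's actual augmented finite state machines; the behaviour names and commands are assumptions.

```python
class Behaviour:
    """A behaviour couples a sensor predicate (trigger) directly
    to an actuator command (action) -- no world model in between."""
    def __init__(self, name, trigger, action):
        self.name = name
        self.trigger = trigger   # sensors -> bool
        self.action = action     # sensors -> actuator command

def subsumption_step(layers, sensors):
    """One control cycle: scan layers from highest to lowest and let
    the first triggered one subsume control of the actuators."""
    for layer in reversed(layers):
        if layer.trigger(sensors):
            return layer.action(sensors)
    return "idle"

# Layer 0: wander forward; layer 1: avoid obstacles (subsumes wander
# whenever the bump sensor fires).
layers = [
    Behaviour("wander", lambda s: True, lambda s: "forward"),
    Behaviour("avoid", lambda s: s["bump"], lambda s: "back-and-turn"),
]
```

Note that adding a new layer never requires modifying the old ones, which is the property the text emphasises.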

Brooks achieved his goal of connecting sensors to actuators directly in a highly parallel and distributed architecture. He named his method the Subsumption architecture, as it allows one behaviour to subsume control from another, lower-level behaviour in the system.

It is interesting to note how the Subsumption architecture resembles the behaviour of some insects and snails, which operate on a hierarchy of behaviours. While their "control structure" consists only of a few simple rules, they successfully navigate in search of food and manage to survive.

More than half a dozen robots have been built based on the Subsumption architecture. Two in particular, Genghis and Herbert, are briefly described below.

Genghis is a six-legged robot, about a foot long, weighing slightly more than a kilogram. It is capable of walking over rough terrain, avoiding obstacles in its way or climbing over them. Genghis was designed and built in under three months by a single person, which is remarkable for an autonomous robot and indicates the simplicity and potential of the subsumption theory.

Herbert is a more capable and advanced robot. It is equipped with 24 8-bit processors, 30 infrared proximity sensors, a manipulator arm and a laser rangefinder. It was designed to wander around the rooms of the MIT AI laboratory and collect empty soda cans from tables. Herbert successfully demonstrated real-time obstacle avoidance and recognition of can-like and table-like objects. Again, its behaviours emerged from a moment-to-moment interaction with its environment, coordinated by the 24 parallel processors rather than a central control unit.

With the Subsumption architecture, Brooks demonstrated intelligent behaviour with minimal amounts of computation. He claims that robots not very different from the ones already developed, like Genghis, can be improved and used for planetary exploration, posing a serious challenge to the path planners.

2.3.2 The ALFA programming paradigm

ALFA is a behaviour language for designing reactive control mechanisms for autonomous mobile robots [11]. ALFA was designed to support a bottom-up, hierarchical, layered design methodology, like subsumption; in contrast to subsumption, however, layers do not interact by suppressing behaviours in lower layers, but by providing information to lower layers through interfaces.

ALFA was developed at JPL and tested on the Rocky III robot, under a contract with NASA [11]. A similar robot (Rocky IV) is shown in figure 4. The requirements that led to its design called for an autonomous vehicle to carry out a planetary mission. The precise requirements were: the ability to navigate to a designated area, acquire a suitable sample and bring it to the lander; the ability to negotiate obstacles; and the ability to operate with no real-time communication while carrying its own power and computation.

Despite its stringent requirements, Rocky was made relatively small and simple. It has six wheels, only thirteen centimetres in diameter, and a mass of about fifteen kilograms. What is more interesting is the computational system, which consists of a humble eight-bit Motorola 6811 processor with 32 Kbytes of memory (even though only 10 Kbytes were used for the control software).

The sensors used on the robot are also very simple. For navigation the robot uses a compass and an infrared beacon detector to sense signals from the lander. For the same purpose, the two middle wheels are instrumented with one-count-per-revolution encoders. The other sensors are simple mechanical contact sensors underneath and at the front of the robot.

The structure of the control software for Rocky III is shown in figure 5. It consists of three layers, each layer receiving information from the layer above and feeding the one below. The lowest layer performs the low-level motor control, by computing settings for the vehicle speed and steering direction. The second layer performs two functions: moving to the commanded heading and avoiding obstacles. The third layer is the master sequencer, which performs overall control of the mission: it drives the robot to the sample site, collects a soil sample and returns to the lander.

Rocky is autonomous. An operator downloads the sample site and way-points (if any); the positions are given in X-Y coordinates with respect to the lander. The robot is given its starting location and the compass orientation of the lander. The operator then tells the robot to start; once the start signal is received, the robot requires no further communications.

The simplicity with which Rocky navigates is truly remarkable. Moving to the commanded heading simply involves computing the difference between the desired heading and the current heading, as reported by the compass, and generating an appropriate steering command. Moving around obstacles is accomplished by backing away from the obstacle and turning to one side. Rocks greater than a wheel diameter are detected by the front contact switches, while severe slopes are detected by the roll and pitch clinometers.
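Rocky's heading behaviour can be sketched as follows; the ±10 degree dead band and the command names are assumptions for illustration, since [11] does not give the actual values.

```python
def steering_command(desired_heading, compass_heading):
    """Generate a steering command from the signed heading error,
    wrapped into [-180, 180) degrees so the rover always turns
    the short way round."""
    error = (desired_heading - compass_heading + 180) % 360 - 180
    if error > 10:
        return "turn-right"
    if error < -10:
        return "turn-left"
    return "straight"
```

The wrap-around matters near north: commanded 350 degrees with the compass reading 10 degrees is a 20-degree left turn, not a 340-degree right one.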

According to the researchers at JPL [11], the robot has been tested on rough outdoor terrain and has not failed yet! In all cases the avoidance software succeeded in getting the robot through the obstacles and to the destination.

D.P. Miller points out some important and interesting features about the success of Rocky and behaviour control [11]. All information about the terrain comes from a total of eight single-bit sensors. This is in marked contrast to Herbert [12], which uses 24 processors and 30 infrared sensors. "Rocky cannot sense the environment until it literally runs into it!" This, nevertheless, does not diminish Rocky's success. "Natural terrain is seldom a maze. Terrain is rich with paths, and it is not necessary for the robot to select the optimal path, only a path that works." This idea seems to encapsulate the success of reactive control. Consider the complex elevation maps processed by pipelined image processing boards and the sophisticated correlation algorithms for terrain matching used on Robby [5], under SAN. The most complex "map" used by Rocky is a list of X,Y points giving the position of the lander, the goal point and the waypoints, handled by an 8-bit 6811 processor and 10 Kbytes of code! Now, this is elegance!

The above description leads to a very interesting question: is Rocky "intelligent", and if so, where is the intelligence? We quote from [11]:

"The capabilities exhibited by this robot are a result of the entire robot system interacting with its environment. The sensors are simple, but they are the right sensors for this robot and this class of activities. By mixing the sensing and reactive capabilities appropriately with the mobility hardware's capabilities, and the class of tasks assigned to the robot, we have a robot that operates intelligently over its domain. The intelligence is just as much hardwired into the selection and placement of the sensors and the actuators as it is in the executed code, but it works just as well." And we cannot but quote Rodney Brooks's words from Steven Levy [13] on the same issue: "I want to have stuff that speaks for itself, stuff deployed out there in the world, and surrounding you know. If you want to argue if it is intelligent or not, or if it's living or not, fine. But if it's sitting there existing 24 hours a day, 365 days of the year, doing stuff which is tricky to do and doing it well, then I'm going to be happy. And who cares what you call it, right?"

It appears to us that the results obtained so far with reactive robots should provide good food for thought for the path planners. They have managed to show that robust autonomous navigation can be achieved using a system where the intelligence is in large part encoded in the device structure, rather than entirely in the control/planning system.

2.3.3 Neural system approach

Another modern approach to reactive navigation is based upon the adaptive capabilities of artificial neural networks, employing learning to tune reactive controllers.

A limitation of existing applications of neural techniques to mobile robot control is that training is time-consuming and performed off-line. Recent research [14] has attempted to develop a reactive navigation system capable of learning reflexive locomotion behaviours on-line, using a trial-and-error training method. The aim is explicitly stated in [14]: "The particular task, that the robot is expected to accomplish, is to follow a goal while avoiding obstacles, so the system is able to model simultaneously those two basic locomotion reflexes." While no mention is made of planetary rovers, the italicised phrases indicate this technique's potential application to planetary vehicles.

The main process of this approach is to map perceptual situations into locomotion commands. This can be further decomposed into more distinct stages: input data preprocessing, perceptual situation classification, action association and action validation. This leads to a hierarchical, layered control architecture, as shown in figure 6.

The inputs to the system are a goal location set by an operator and the readings of 24 ultrasonic range sensors, providing information about the spatial arrangement of the immediate environment. The output is a steering angle, chosen from a discrete set of angles.

The input preprocessor groups the inputs of the 24 ultrasonic sensors into seven groups (to reduce the dimensionality of the sensed environment), covering three sides (front, left and right) and the four corners of the vehicle.
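A sketch of such a preprocessor follows. The exact assignment of the 24 sensors to the seven zones is an assumed layout for illustration, as is the choice of the minimum range as the zone value.

```python
def group_sonar(readings):
    """Collapse 24 ultrasonic range readings into 7 zone values
    (three sides and four corners), taking the minimum range in each
    zone as the most conservative estimate of obstacle distance.
    The 24-to-7 assignment below is an assumed layout."""
    assert len(readings) == 24
    zones = {
        "front":       readings[0:4],
        "front-right": readings[4:7],
        "right":       readings[7:11],
        "back-right":  readings[11:14],
        "back-left":   readings[14:17],
        "left":        readings[17:21],
        "front-left":  readings[21:24],
    }
    return {name: min(vals) for name, vals in zones.items()}
```

Reducing 24 raw ranges to 7 zone values shrinks the space of perceptual situations the classifier downstream must learn to partition.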

An interesting aspect of the preprocessor is that it can create a kinematic description of the scene to handle dynamic environments, even though this is not an important feature for a Mars rover. (On the contrary, it would constitute a great achievement to collide with any moving object on Mars!)

The important part of the system is the partitioner, which classifies the perceptual situation. This is implemented as a self-organizing adaptive resonance Fuzzy-ART neural network module.

Neural networks belonging to the ART family have several important features for the task at hand. In particular, ART neural nets are capable of incremental learning, as opposed to the averaging behaviour of other nets. Furthermore, the learning process is performed on-line, that is, the net can learn at the same time as it responds to an input stimulus. Finally, in ART-like systems the number of classes is not fixed: the network grows while learning, keeping a minimal configuration, just large enough to represent the scene configuration categories. The number of these categories is prevented from growing excessively by learning only perceptual situations which are important. (A situation is considered important if an action just chosen by the system is rejected by the Action Feasibility Verifier; see figure 6.)
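A minimal Fuzzy-ART sketch illustrating these properties: complement coding of the input, a choice function over committed categories, a vigilance test that decides whether to resonate or grow a new category, and fast learning. The parameter values are illustrative, and this is a bare-bones version of the algorithm, not the module used in [14].

```python
def fuzzy_art_classify(patterns, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal Fuzzy-ART sketch: incremental, on-line clustering.
    A new category is committed only when no existing one passes the
    vigilance test (rho), so the network grows as it learns."""
    def comp(p):                 # complement coding: [x, 1-x]
        return p + [1.0 - x for x in p]
    weights, labels = [], []
    for p in patterns:
        i = comp(list(p))
        # rank committed categories by the choice function
        # T_j = |i AND w_j| / (alpha + |w_j|)
        scored = sorted(
            range(len(weights)),
            key=lambda j: -sum(min(a, b) for a, b in zip(i, weights[j]))
                          / (alpha + sum(weights[j])))
        chosen = None
        for j in scored:
            match = sum(min(a, b) for a, b in zip(i, weights[j])) / sum(i)
            if match >= rho:     # vigilance test: resonate
                chosen = j
                break
        if chosen is None:       # grow: commit a new category
            weights.append(i[:])
            chosen = len(weights) - 1
        else:                    # learn: w = beta*(i AND w) + (1-beta)*w
            weights[chosen] = [beta * min(a, b) + (1 - beta) * b
                               for a, b in zip(i, weights[chosen])]
        labels.append(chosen)
    return labels, len(weights)
```

Each pattern is classified and learned in a single pass, which is exactly the on-line, incremental property the text highlights.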

The purpose of the action associator is to learn to choose the most appropriate action according to the sensed environment, while maintaining the goal-following and obstacle-avoidance behaviours.

The Action Feasibility Verifier ensures that a selected action will not drive the vehicle onto an obstacle or untraversable terrain (detected through suitable sensors). If such a hazard exists, the Verifier will cancel the action (while invoking obstacle-avoidance learning) and select another appropriate action.

The results of the experiments using this approach indicated that the system does possess the suggested learning capabilities. The learning ability was tested by carrying out a number of training sessions on several paths. On each subsequent session the number of locomotion commands decreased, falling to between 50 and 60% of the initial session's commands after 6 or more sessions. The important result, however, was that a totally untrained system could learn by trial and error, in real time, and self-supervise the learning. This can have serious implications for an exploration rover on the surface of Mars: the rover can be literally released on Mars, exploring its way and building its knowledge while doing so, and at the same time maintaining the goal-following and obstacle-avoidance behaviours.

The system does, however, have its limitations. It fails in two particular cases. In the first case, the system can become unstable: it will reject a potentially hazardous action, but on subsequent attempts to choose a more appropriate action it may return to the already rejected one. In the other case, the system may oscillate indefinitely. An example is the situation where the goal location is behind a wall (figure 7). The system is unable to choose which way to go, ending up turning left and right indefinitely about its current position. (Interlude: it is interesting to compare this with another neural system, which in a similar situation does not fail. Randall D. Beer of Case Western Reserve University built a mechanical insect modelled on the American cockroach [13]. The insect was based on a neural network suggested by biological examples. Several levels of behaviour were built into the system that could suppress each other, in a similar fashion to Brooks's subsumption architecture. To test the system's capabilities, the insect was placed in a container with a food source behind a curved barrier, as shown in figure 8. The insect soon picked up the scent of the food and moved towards it. When it reached the barrier, it switched to edge-following behaviour until it came to the end of the barrier, then switched back to walking behaviour. Reaching the top-right corner, the insect operated under yet another behaviour, avoiding getting trapped in the corner, and followed the edge of the wall. In doing so it came closer to the food and was able to sense it again, which caused the food-seeking behaviour to become active, and the insect eventually got to the food. The biological approach won again.)

The Fuzzy-ART neural network approach described above, despite its current limitations, has managed to show self-supervised learning by trial and error. If the current problems can be overcome, a very powerful technique may emerge.

3. Perception mechanisms for a Mars Rover

For the rover to function properly it is essential that it can model or sense its environment. For example, the rover must know if it is going uphill or downhill, if there is an obstacle in front of it, and so on. This is achieved by the perception system of the rover. The perception system senses the environment using physical and virtual sensors. Physical sensors such as wheel encoders, inclinometers, cameras and laser rangefinders are used to detect the immediate terrain environment of the robot. Virtual sensors are mathematical functions defined over the values of some physical sensors. For example, virtual sensors can give the absolute spatial location of the rover in Cartesian coordinates.

3.1 Stereovision

For navigating a semi-autonomous rover over a planned path (section 2.2.1), modelling the environment is very important. The rover must be able to 'see' what is ahead to avoid obstacles and untraversable terrain, and must know where its goal location is. For this application the perception mechanism can be achieved by stereo vision [15] and/or a scanning laser rangefinder. JPL adopts stereo vision techniques at the present time.

Stereo vision is achieved by using two or more cameras. Usually two cameras are used, about 0.5 m apart, to give a left and a right image of the same view. If the vehicle is driven by CARD from Earth, then pairs of images are transmitted to Earth, and a special program alternately displays the right and left images to give the human operators the sense of viewing in stereo. Stereo vision gives the sense of depth to the user, who can identify and plan safe paths for the rover to follow in order to reach a goal position. On the other hand, if the rover uses SAN, it must use the stereo images from the cameras to plan a safe path for itself. The raw stereo images cannot be used by the rover directly and must be converted to some symbolic form that the rover can understand. The first step is the computation of a range map [7], done using an algorithm for stereo vision by correlation [7]. This algorithm involves correlating corresponding points on both images to give a single point on a range map, together with a covariance matrix representing the uncertainty of the position of that point, and the difference in range between the two points as a disparity value [15]. This can be represented as in figure 9 [16], where the lower-right corner shows the left image from a stereo pair. The upper right shows range data as distance from the cameras. The upper left contains the subpixel disparity image, produced by the stereo system, from which the range data was computed. The lower left shows confidence data. In all cases the colours span the rainbow, with red being low values and violet being high values.
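The core of such a correlation stereo scheme can be sketched as follows: a sum-of-absolute-differences search along a scanline yields a disparity for each pixel, and a disparity is triangulated into range as Z = fB/d. The camera parameters in the usage below are assumptions; the real JPL algorithm additionally estimates subpixel disparity and the covariance of each point.

```python
def disparity_to_range(disparity_px, focal_px, baseline_m):
    """Triangulate range from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        return None  # no match, or effectively at infinity
    return focal_px * baseline_m / disparity_px

def correlate_row(left_row, right_row, window, max_disp):
    """Sum-of-absolute-differences correlation along one rectified
    scanline: for each left-image pixel, return the disparity whose
    right-image patch matches best."""
    half = window // 2
    disparities = []
    for x in range(half, len(left_row) - half):
        patch = left_row[x - half:x + half + 1]
        best_d, best_score = 0, float("inf")
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            score = sum(abs(a - b) for a, b in zip(patch, cand))
            if score < best_score:
                best_d, best_score = d, score
        disparities.append(best_d)
    return disparities
```

With the cameras 0.5 m apart, as the text suggests, a large disparity means a nearby point and a small disparity a distant one, which is why disparity images and range images in figure 9 are complementary views of the same data.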

3.2 Terrain Matching

The process of path planning in SAN involves the superposition of the global low-resolution height map from the orbiter on the high-resolution map from the rover. This process is called terrain matching [7] and is achieved by an algorithm [7] in which the translation that best matches a local and a global height map is sought. The height map corresponds to unequally spaced points representing heights above a reference surface. Figure 10 shows a matched local and global height map. A hierarchy of resolutions of the same local height map is kept to assist the path-planning techniques (section 2.2.1). Three different resolutions are usually computed: low, medium and high.
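The matching step can be illustrated with a brute-force sketch: slide the local map over the global map and keep the translation with the smallest sum of squared height differences. The published algorithm [7] handles unequally spaced points and position uncertainty; here regularly gridded maps are assumed for simplicity.

```python
def match_terrain(global_map, local_map):
    """Exhaustively search for the (row, col) translation of the
    local height map inside the global map that minimises the sum of
    squared height differences. Both maps are 2D lists of heights on
    a regular grid (an assumed simplification)."""
    gr, gc = len(global_map), len(global_map[0])
    lr, lc = len(local_map), len(local_map[0])
    best, best_err = None, float("inf")
    for r in range(gr - lr + 1):
        for c in range(gc - lc + 1):
            err = sum((global_map[r + i][c + j] - local_map[i][j]) ** 2
                      for i in range(lr) for j in range(lc))
            if err < best_err:
                best_err, best = err, (r, c)
    return best, best_err
```

The resolution hierarchy mentioned above serves exactly this search: a coarse match on the low-resolution maps narrows the range of translations that must be tried at full resolution.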

3.3 Artificial potential field for obstacle avoidance

A method for real-time obstacle avoidance is based on a perception mechanism for identifying obstacles via the application of an Artificial Potential Field [17]. Once the rover's rangefinder (acoustic or laser) encounters an obstacle near or on the rover's trajectory, the rover must be able to act so as to avoid it in real time.

Real-time obstacle avoidance can be performed by applying an Artificial Potential Field to any obstacle encountered: every obstacle is assumed to be surrounded by a repulsive force field, whose boundary is set at a fixed minimum distance before the rover can collide with it. The sum of these fields can be used to quantify the need for action to avoid collision.

The rover has a number of Points Subject to Potential (PSP), which are used to calculate a proximity signal with reference to one of them (the one closest to the obstacle at any time). This signal is treated as an error in avoiding the collision: it increases as the rover approaches the potential field boundary of the obstacle, and decreases as the bearing of the obstacle moves away from the front, to the side and past the rover.
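The proximity signal can be sketched as follows. The 1/d repulsive-potential form used here is a common choice assumed for illustration; the exact field shape and PSP geometry of [17] may differ.

```python
import math

def repulsive_signal(psp_points, obstacle, influence_radius):
    """Proximity signal from an artificial repulsive field.

    psp_points      : (x, y) positions of the rover's Points Subject
                      to Potential (PSP)
    obstacle        : (x, y) position of the detected obstacle
    influence_radius: boundary of the obstacle's repulsive field

    The closest PSP sets the signal, which is zero outside the field
    boundary and rises sharply as the rover nears the obstacle."""
    d = min(math.dist(p, obstacle) for p in psp_points)
    if d >= influence_radius:
        return 0.0  # outside the field: no corrective action needed
    # classic (1/d - 1/d0)^2 repulsive form, assumed for illustration
    return 0.5 * (1.0 / d - 1.0 / influence_radius) ** 2
```

Summing this signal over all obstacles currently in range gives the overall "error" the rover must act to reduce, for example by steering away from the obstacle with the largest contribution.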

4. Scaling Considerations for Rovers, Reactive and Guided Navigation

Realisable missions for planetary exploration have to date been biased towards large (in the range of 1000 to 2500 kg), complex and expensive rovers. This has led to missions being delayed for long periods before launch because of their complexity and cost. The possibility of failure of such a mission also plays an important role in delays, since all the complex systems must be thoroughly checked for functionality. Thus the idea put forward [12] of launching small rovers, and subsequently small spacecraft, has gained considerable favour in the last few years, and advances in microelectronic technology have now made such missions inexpensive and realisable. JPL's Microrover Flight Experiment [18] has produced a series (Rocky) of microrovers under this concept. The main argument [12] in favour of such small missions is that instead of launching single, large and expensive missions every few years, we can launch multiple, inexpensive (mass produced) micromissions. It is further argued that failure of a conventional planetary spacecraft would be catastrophic, whereas failure of a micro spacecraft or rover would not be so critical to the whole mission, because of the redundancy provided by multiple rovers.

The scheme for multiple-microrover Mars exploration [5] suggested by JPL involves many small rovers landed at several locations on Mars. The microrovers would depend on their lander for computation and navigation, so they can only function in the vicinity of the lander; in effect the rover would be CARDed by the lander. Traditional concepts for Mars exploration involve landing a long-range rover near a junction of scientific sites of interest and exploring several of them. In contrast, the microrover concept involves landing several rovers near sites of scientific interest and well away from rough and ambiguous terrain, so that only very local navigation is required. JPL is working on such a microrover project called Go-For (figure 11). The primary mission [5] of the vehicle is to Go For samples, images, spectra and so on.

The ability of a small vehicle to move through the rough terrain of Mars could be questioned. A large rover can drive over certain obstacles that a small rover must go around; at the same time, a small rover might be able to drive between obstacles, or even over them (just like an ant), where a large rover would have to detour. Overall it can be argued that a microrover's ability to navigate is as good as a large rover's. Moreover, a lighter rover requires less power to drive itself and can therefore respond better to the power constraints faced by rovers. Furthermore, if the smaller rover is autonomous (which is the case), it can move much faster, as it does not require the time-consuming computations for world modelling and path planning needed by large semi-autonomous robots.

A new technique known as micromachining [19] is under development, in which motors and actuators can be built on silicon wafers. An entire robot could be etched on silicon wafers, so that robots could be printed by the thousand, just as integrated circuits are. Since the real need of planetary exploration is only to collect information, such a silicon robot could meet the mission's expectations without the need to carry bulky and heavy equipment.

In comparing the two approaches we must emphasise the difference in control and behaviour between them. Large rovers are semi-autonomous, requiring world modelling and path planning (section 2.2.1), which involve computationally intensive processing. Small rovers, on the other hand, are autonomous, guiding themselves by reacting to the environment as it is encountered (section 2.3), and thus require no world modelling and carry minimal computational overhead.

Once on the surface of Mars, a reactive rover will require little or no communication with Earth, so it largely avoids the communication delays (6 to 41 minutes). It is thus already in a more favourable position than a semi-autonomous rover, which requires communication with Earth at least once a day. The computer processing requirements of an autonomous robot can be met by small computers with very little memory; for example, Rocky 3 (section 2.3.2) runs on an 8-bit Motorola 6811 processor with 32 KB of memory. These requirements are fairly simple and can be executed at high speed. In contrast, semi-autonomous rovers require complicated on-board computations for stereo vision, path planning and so on, which can only be met by more advanced on-board computers. The JPL rover 'Robby', for example, uses complicated hardware such as DataCube pipelined image processing boards to implement its stereo vision algorithms. The computational requirements before a local path of several metres can be executed add up to a total delay of tens of minutes.

Summarising, we quote from [12]: "..the time between mission conception and implementation can be radically reduced, that launch mass can be slashed, that totally autonomous robots can be more reliable than ground controlled robots, and that large number of robots can change the trade off between reliability of individual components and overall mission success".

5. Conclusion

The field of planetary exploration is vast, incorporating many other fields: perception (sensors and transducers), cognition (computation and planning), and manipulation (locomotion and kinematics). The diversity and complexity of the subject have resulted in numerous approaches to solving the existing problems, each with its own merits and weaknesses. The present state of technology allows models and ideas to be tested thoroughly and developed quickly. Which approach will make it first to the surface of the Red Planet is hard to predict, but solid foundations have been laid for the ultimate exploration of the Solar System.

6. An Appendix of the Figures in the document

Figure 1 : The Surveyor Lunar Rover Testbed (SRLV) ( Back )

Ref : http://robotics.jpl.nasa.gov/groups/rv/homepage.html

Figure 2 : Semiautonomous navigation ( Back )

Ref : "A Mars Rover for the 1990's",p.484,Fig.2 (see ref. 6)

Figure 3 : JPL's Planetary Rover Navigation Testbed ("Robby") ( Back )

Ref : http://robotics.jpl.nasa.gov/groups/rv/homepage.html

Figure 4 : JPL's Rocky IV ( Back )

Ref : http://robotics.jpl.nasa.gov/groups/rv/homepage.html

Figure 5 : The Structure of the control software of Rocky III ( Back )

Ref : "Reactive Navigation through Rough Terrain:Experimental Results", p.826,Fig.6 (see ref.11)

Figure 6 : The System Architecture ( Back )

Ref : "Self-Supervised Neural System for Reactive Navigation",p.2079,Fig.1 (see ref.14)

Figure 7 : A situation leading to infinite oscillations ( Back )

Ref : "Self-Supervised Neural System for Reactive Navigation",p.2080,Fig. 4, (see ref.14)

Figure 8 : Food behind barrier ( Back )

Ref : "Artificial Life",p.294, (see ref. 13)

Figure 9 : Wide Field-of-View Stereo from http://robotics.jpl.nasa.gov/tasks/ugv/homepage.html ( Back )

Figure 10 : Local terrain height map merged with the global terrain height map, from "Visual terrain matching for a Mars rover", Donald B. Gennery, Jet Propulsion Laboratory, 1989. ( Back )

Figure 11 : Microrover Testbed "Go-For" ( Back )

Ref : http://robotics.jpl.nasa.gov/groups/rv/homepage.html

7. References

[1] The exploration of the solar system, David Morrison, Journal of the British Interplanetary Society, Vol. 41, Jan/Feb 1988, pp. 41-47. Usefulness: 6, Readability: 9.

[2] Use of Martian resources in a controlled ecological life support system (CELSS), David T. Smernoff and Robert D. MacElroy, Journal of the British Interplanetary Society, Vol. 42, April 1989, p. 179. Usefulness: 5, Readability: 8.

[3] The resources of Mars for human settlement, Thomas R. Meyer, Journal of the British Interplanetary Society, Vol. 42, April 1989, p. 147. Usefulness: 7, Readability: 8.

[4] Exploration of Mars, C. P. McKay, Journal of the British Interplanetary Society, Vol. 42, April 1989, Editorial. Usefulness: 5, Readability: 9.

[5] Robotic vehicles for planetary exploration, Brian Wilcox, Larry Matthies, Donald Gennery, Proceedings of the 1992 International Conference on Robotics and Automation, Nice, France, May 1992. Usefulness: 9, Readability: 9.

[6] A Mars rover for the 1990's, Brian H. Wilcox, Journal of the British Interplanetary Society, Vol. 40, 1987, pp. 484-488. Usefulness: 10, Readability: 9.

[7] Visual terrain matching for a Mars rover, Donald B. Gennery, Jet Propulsion Laboratory, 1989. Usefulness: 4, Readability: 3.

[8] Path Planning and Execution Monitoring for a Planetary Rover, Erann Gat, Marc G. Slack, David P. Miller, R. James Firby, Jet Propulsion Laboratory, 1990. Usefulness: 8, Readability: 7.

[9] Internalised Plans: a representation for action resources, D. W. Payton, Proceedings of the Workshop on Representation and Learning in an Autonomous Agent, November 1988.

[10] Path Planning through time and space in dynamic domains, M. G. Slack, Proceedings of the 10th International Joint Conference on Artificial Intelligence, pp. 1067-1070, 1987.

[11] Reactive Navigation through Rough Terrain: Experimental Results, David P. Miller, Jet Propulsion Laboratory, 1992. Usefulness: 10, Readability: 10.

[12] Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System, R. A. Brooks, Journal of the British Interplanetary Society, Vol. 42, 1989, pp. 478-485. Usefulness: 9, Readability: 9.

[13] Artificial Life, Steven Levy, pp. 273-308. Usefulness: 9, Readability: 9. Comments: chapter on real artificial life.

[14] Self-Supervised Neural System for Reactive Navigation, James L. Crowley, Artur Dubrawski, 1994. Usefulness: 9, Readability: 9.

[15] Autonomous Planetary Rover (V.A.P.): On Board Perception System and Stereovision by Correlation Approach, L. Boissier, B. Hotz, C. Proy, 1992. Usefulness: 4, Readability: 5.

[16] Wide Field-of-View Stereo, Todd Litwin, http://robotics.jpl.nasa.gov/tasks/ugv/homepage.html. Usefulness: 4, Readability: 10.

[17] Path Tracking, Obstacle Avoidance and Position Estimation by an Autonomous, Wheeled Planetary Rover, D. N. Green. Usefulness: 7, Readability: 9.

[18] MFEX: Microrover Flight Experiment Control Subsystem, http://robotics.jpl.nasa.gov/tasks/mfex/homepage.html. Usefulness: 8, Readability: 9.

[19] Gnat Robots (and how they will change robotics), A. M. Flynn, IEEE Micro Robots and Teleoperators Workshop, November 1987.


8. Sources of Information on the WEB

This section provides some good sites for information about Mars exploration. A list of the sites is given below, with a small representative extract from each site.