Active Visual Navigation Simulation (directory ETLRobot/): A simulation of map-building for a Nomad-like robot measuring features visually with a stereo active head (and the robot version of the software currently used to drive a real robot in our laboratory). This section also includes discussion of some of the functionality available in the system as a whole, such as the support for waypoint navigation, previously known features and manually setting the initial state.
Active Visual Navigation in 3D on Non-Flat Terrain Simulation (directory ETLRobot/): An extension of the system above to the case where the robot moves on undulating terrain and must estimate its location in 3D, using active vision in combination with a roll/pitch accelerometer sensor.
Scene Data Analysis: A program (currently implemented only for the Active Visual Navigation in 3D system) for analysing data from real and simulated experiments, producing numerical and postscript graphical output suitable for results in papers, etc.
Active Vision Double Robot Simulation (directory ETLDouble/): Simulator/real software for a two-robot system, with one vision robot and a blind slave robot. The two robots' locations are estimated within a single motion model, which permits the rest of the software to be used in its standard configuration. The vision robot is able to make observations of the blind robot to aid its position estimation.
Single Camera in 3D (directory Camera/): Both simulation and implementation using real images for vision-based localisation using a single camera (currently using only prior-known features).
1D Simulator (directory OneD/): A simulator with simple 1D vehicle movement and feature measurement models which makes the properties of SLAM very easy to see.
To run the program:
If this binary does not work on your system due to library issues, you may have to compile the sources yourself (or contact me for help). If you have a non-Linux UNIX system, it is probable that minor changes will need to be made before successful compilation.
The following is an explanation of the program's operation, and suggestions for producing interesting behaviour.
Making Measurements of Features
Moving the Robot
Clicking on "Go Using Robot Control Parameters" makes the robot move one time step (of duration settable by another slider) according to the currently set control parameters v, and delta_S and delta_T. However, simulated random variations are incorporated to produce the amount by which the yellow true vehicle position moves in this step. In the real world, factors such as wheel slippage would mean that the robot would never move in exactly the way commanded. At the end of each step, a simulated measurement is taken of each of the selected features. For convenience, the number of steps of movement occurring with one click of "Go" can be changed with "Increase/Decrease Steps" (although this is limited to one if the number of selected features is more than one, reflecting the situation with our active vision-equipped robot which can track only one feature at a time continuously).
Growth of Uncertainty
Limiting Uncertainty Using Feature Measurements
Tracking Just One Feature and Active Fixation Switching
The action which will eliminate this drift is to select another feature instead and track that one for a while. In our active system, this kind of rapid switching of attention to limit uncertainty is a fundamental behaviour, and similar to the way that the human eye is always in motion during navigation tasks.
Demonstrating Long-Term Map Building and Using
mv scene_features2D scene_features2D_safe
This puts a more complex scene point distribution into place in the simulation (the file scene_features2D is loaded as the program starts). Restart the program "scenesim", initialise just one feature close to the robot, and track it while moving towards another feature. Deselect the first one, then initialise the second and track that while moving towards a third. In this way, try to make a "circuit" of the available features (which are laid out in a supposedly "looped-corridor-like" configuration), coming back in the direction of the very first feature while only ever acquiring a new feature for a short period of tracking before moving on to another new one.
What you should see is that a rather large amount of drift in the robot location estimate has occurred. Acquiring and tracking a new, unknown feature provides no new information on the robot's position in the world frame. Further, the estimates of the world locations of the most recently acquired features will be quite uncertain, since the robot's location was itself uncertain when they were initialised.
The interesting thing now is to reselect the first feature initialised. This is like completing a loop of a rectangular corridor in a building and coming back to where you started. Because of the coupling of estimates in the covariance matrix, making this measurement not only immediately makes the robot position estimate accurate again, but also improves the estimates of all the other features tracked in between during this closing of the loop.
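The mechanism behind this is the coupling maintained in the full joint covariance: in a Kalman-filter style update over the combined robot-plus-features state, the gain has rows for every state element, so a single measurement of the re-found feature corrects not only the robot estimate but every feature estimate correlated with it. A minimal sketch of such an update (illustrative Python, not the Scene source):

import numpy as np

def joint_update(x, P, z, h, H, R):
    # x: full state (robot + all features), P: full covariance,
    # z: measurement of one feature, h: predicted measurement,
    # H: Jacobian of the measurement w.r.t. the full state, R: measurement noise.
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # gain has a row for every state element
    x_new = x + K @ (z - h)                 # so all correlated estimates are corrected
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new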
Deselect all features and make a long blind movement. Then select a known feature and see how the robot estimate locks back into place. Sometimes two or more features will have to be measured in turn here to fully recover a good estimate. It is of course possible to cause the algorithm to fail in such extreme cases as this due to the unmodelled non-linearities in the system, but you have to try quite hard.
Monitoring the Map State
Automatic Feature Selection
Setting the Initial State
Previously Known Features
mv known_features2D_safe known_features2D
These features will be inserted into the map with zero covariance and thus are "perfect" landmarks. Arbitrary numbers of perfect and normally self-detected features can be used together. When some known features are specified in the map, note the effect this has on the choice of initial state and covariance for the robot, as defined in the file "initial_state": if these known features have defined world positions with zero covariance, can we really be sure, when placing the robot at its initial or zero position in the world, that its position relative to these features is set perfectly? It makes more sense to give the robot a realistic uncertainty in its initial position (maybe a couple of centimetres of potential error in position and a few degrees in orientation). In this way, the world frame is now being defined by the known feature positions, rather than by the initial position of the robot, as we could safely do when nothing at all was known in advance about the world.
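Conceptually, this amounts to starting the joint state with zero-covariance blocks for the perfect landmarks and a small but non-zero block for the robot. The sketch below is purely illustrative (it does not reproduce the actual formats of "initial_state" or "known_features2D", and the numbers are made up):

import numpy as np

robot_x0 = np.array([0.0, 0.0, 0.0])                        # robot at the world origin
robot_P0 = np.diag([0.02**2, 0.02**2, np.radians(3.0)**2])  # ~2cm position, ~3 degree orientation uncertainty
known = np.array([[1.0, 2.0],
                  [3.0, -1.0]])                             # hypothetical perfectly-known feature positions

x = np.concatenate([robot_x0, known.ravel()])               # joint state: robot followed by features
P = np.zeros((x.size, x.size))                              # zero covariance for the known features...
P[:3, :3] = robot_P0                                        # ...but realistic uncertainty for the robot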
To run the program:
The simulator for this system is operated in mostly the same way as the single robot one above; however, setting non-zero control parameters and clicking on "Go Using Robot Control Parameters" produces a very different result. The simulated robot (shown in yellow) moves according to the control parameters, but its orientation begins to change and it moves up and down as motion across undulating terrain of unknown and changing orientation is simulated (the degree of undulation in the slope is determined by a single parameter representing the standard deviation of curvature). Note that this simulation is simplified in that there is no persistent terrain model generated in advance and underlying the random orientation changes which occur: random slope changes occur locally at the current robot position. This means that the same part of the "terrain" can be traversed more than once and have a different orientation --- nevertheless the model serves its purpose in this simulation. The robot's position is now of course represented in three dimensions, and its state vector has eight components (x, y, z, q0, qx, qy, qz, S): x, y, z are its cartesian position in 3D; q0, qx, qy, qz are the components of a quaternion used to represent 3D orientation; S is the relative steering angle representing the heading direction of the steering wheels with respect to the robot's turret.
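For reference, a sketch of this state layout, together with the re-normalisation of the quaternion part which is standard practice for quaternion-based states (an assumption about the implementation rather than something read from its code):

import numpy as np

# Illustrative layout of the eight-component robot state described above:
# x_v = [x, y, z, q0, qx, qy, qz, S]
x_v = np.array([0.0, 0.0, 0.0,  1.0, 0.0, 0.0, 0.0,  0.0])

def renormalise_quaternion(x_v):
    # Only unit quaternions represent pure rotations, so after a filter update
    # the quaternion part of the state is re-normalised to unit length.
    x_v = x_v.copy()
    q = x_v[3:7]
    x_v[3:7] = q / np.linalg.norm(q)
    return x_v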
If no sensor measurements are used beyond the use of wheel odometry implicit at all times in this simulation, the estimated robot position in green does not track the true position at all well: without any further information, this estimate can do no better than assume the average case, that the terrain is flat, and the estimated robot moves in a plane, quickly diverging from the true position.
A first level of improvement can be attained through the use of a roll/pitch orientation sensor assumed to be installed on the robot. Clicking "Measure Roll/Pitch" simulates a noisy measurement from this sensor and updates the estimate accordingly. Making occasional roll/pitch measurements can greatly improve localisation, but these measurements only reveal information about the robot's angle of lean to the left/right (roll) and front/back (pitch). There is no direct information about the horizontal orientation (yaw) or about cartesian position. The robot can be made to move while making repeated automatic roll/pitch measurements by using the "Go Using Roll/Pitch" button.
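For illustration, the roll and pitch implied by the quaternion part of the state can be computed with the standard formulas below (ZYX Euler convention; the convention and the noise level are assumptions, not taken from the simulator), with Gaussian noise added to mimic the sensor:

import numpy as np

def simulated_roll_pitch(q, sigma=np.radians(1.0)):
    # q = (q0, qx, qy, qz), a unit quaternion; returns a noisy (roll, pitch) pair.
    q0, qx, qy, qz = q
    roll = np.arctan2(2.0 * (q0 * qx + qy * qz), 1.0 - 2.0 * (qx * qx + qy * qy))
    pitch = np.arcsin(np.clip(2.0 * (q0 * qy - qz * qx), -1.0, 1.0))
    return roll + sigma * np.random.randn(), pitch + sigma * np.random.randn()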
The other way to improve 3D position estimation is of course to build and use a map of scene features. This can be done in the same way as in the 2D navigation case described in detail above: clicking the right button on a point in the 3D viewer initialises a feature, and clicking it again toggles its status as a selected feature. Selected features will be measured at every movement step. "Auto-Select Feature" can be used to automatically select one visible feature for immediate measurement. The "Navigate to Waypoint" buttons make the robot perform multiple navigation steps to reach a target waypoint, defined either in the file waypoints3D or using the Waypoint Browser, while using auto-select at each movement step to select features for measurement (note that these waypoints function in a 2D sense: the robot will ignore all but the first and third components of these waypoint entries, corresponding to x and z position, and steer in such a way that, depending on the unknown slopes, it will arrive at a position either directly above or below the waypoint).
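The 2D interpretation of waypoints can be summarised by a tiny helper like the following (hypothetical names and tolerance; only the first and third waypoint components are used):

def waypoint_reached(robot_xyz, waypoint_xyz, tolerance=0.1):
    # Only the x (first) and z (third) components are compared, so the robot
    # stops directly above or below the waypoint whatever the terrain height.
    dx = waypoint_xyz[0] - robot_xyz[0]
    dz = waypoint_xyz[2] - robot_xyz[2]
    return (dx * dx + dz * dz) ** 0.5 < tolerance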
It is interesting to compare performance when using vision both with and without the use of the roll/pitch sensor. With vision but no roll/pitch measurements, the level of uncertainty in the system is high at all times, since whenever the robot moves its position uncertainty grows significantly due to the lack of knowledge about slope changes: the burden of reducing the uncertainty lies solely with vision. In fact, when this software was tested with a real robot (see the paper about 3D SLAM on my home page), it was found that this high level of uncertainty led to unacceptably large visual search regions and a high level of mismatches. Performance is much better when using vision and roll/pitch in combination: an example of sensor fusion.
To use the program, you first need to generate some data using the scenesim3D program:
Simply click "Output State
Data to File" at every position of the robot where you want to save a snapshot
of the map and vehicle state (and covariance).
sceneanalyse3D n state0
where n is the number of snapshot states you wish to view (this will be the total number of times you clicked the button "Output State to File", and can also be determined from the file state0 by looking at the first of the two integers which precede the floating point numbers representing map data: this is the snapshot step number, starting from 0, so add 1 to the final step number in the file).
Within sceneanalyse3D, you can change the currently viewed step with the control at the top of the form. Clicking "Output Postscript" outputs a postscript picture to file ps_scene.ps. Use the other buttons on the form before pressing this button to adjust the parameters of the picture: the size, length of axes, etc.
To run the program:
The simulator for this system runs in the same way as the single robot one above. The first robot can move around and carry out map-building as usual. A second set of controls exists for the second robot, which normally has only odometry to estimate its location. However, an additional control button "Make Observation of Second Robot" simulates a measurement by the vision robot of the location of a beacon marker attached to the blind robot. It can be seen that making this observation dramatically improves the second robot's position estimate.
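Because both robots are represented in one joint state vector and covariance matrix, this relative observation couples their estimates, which is why it improves the blind robot's estimate so dramatically. A minimal planar sketch of such a relative-position measurement function (the joint state layout and names are assumptions for illustration, not the software's own):

import numpy as np

def predicted_observation_of_second_robot(x_joint):
    # Assumed joint state layout: [xA, yA, phiA, xB, yB, phiB].
    # The predicted measurement is robot B's position expressed in robot A's frame.
    xA, yA, phiA, xB, yB, _ = x_joint
    dx, dy = xB - xA, yB - yA
    c, s = np.cos(phiA), np.sin(phiA)
    return np.array([c * dx + s * dy,     # forward offset of B as seen from A
                     -s * dx + c * dy])   # lateral offset of B as seen from A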
Currently, this is not a SLAM implementation but one of localisation using known features (and can be thought of as inside-out model-based tracking). We include both a simulator and a real implementation with an image sequence where the locations of a collection of features have been measured by hand. It can be seen in both cases that the position of the camera is estimated well until the features in the map go out of view according to visibility criteria.
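For a rough idea of the measurement model involved, the predicted image position of one of the hand-measured world points under a pinhole camera might look like the sketch below (illustrative Python; the calibration values are placeholders, not those used with the real image sequence). Comparing such predictions with the detected feature positions is what drives the localisation update:

import numpy as np

def project_known_point(R, cam_pos, X_world, fu=500.0, fv=500.0, u0=160.0, v0=120.0):
    # R rotates world-frame vectors into the camera frame and cam_pos is the
    # camera centre in the world; fu, fv, u0, v0 are placeholder calibration values.
    X_cam = R @ (np.asarray(X_world) - np.asarray(cam_pos))
    u = u0 + fu * X_cam[0] / X_cam[2]
    v = v0 + fv * X_cam[1] / X_cam[2]
    return np.array([u, v])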
To run the simulator program:
To run the program with real images, you will first need to install the image sequence available here: seq.tar.gz
To run the program:
The robot has only velocity control as it moves along a linear path (like a train on a railway line). Its true motion deviates from the commands with motion noise which is proportional to the distance travelled. Features (perhaps some signs at the side of the tracks!) are measured in one dimension too, with a sensor that is assumed to have constant measurement noise (it can measure distant features with the same uncertainty as close ones, which is perhaps unrealistic but may fit some sensor types such as radar).
One interesting thing to try is to make some blind robot motions, then select a group of features and make many repeated measurements of all of them. The variances in the locations of the robot and all of these features in the world frame, as well as all the covariances between these quantities, tend towards the same constant value.
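This convergence is easy to reproduce numerically. The following is a small illustrative script (made-up feature positions and noise levels, written in Python rather than taken from the OneD simulator code) in which the robot makes some blind moves and then repeatedly measures three features; the printed variances approach a common constant value:

import numpy as np

np.random.seed(0)
sigma_move = 0.05                       # motion noise std. dev. per unit distance
sigma_meas = 0.10                       # constant measurement noise std. dev.
feat_true = np.array([3.0, 5.0, 8.0])   # hypothetical true 1D feature positions

# Joint state [robot, f1, f2, f3]: the robot starts perfectly known at 0,
# and the features are initialised from one noisy measurement each.
robot_true = 0.0
x = np.concatenate([[0.0], feat_true + sigma_meas * np.random.randn(3)])
P = np.zeros((4, 4))
P[1:, 1:] = np.eye(3) * sigma_meas**2

def move(dist):
    global robot_true
    robot_true += dist + sigma_move * abs(dist) * np.random.randn()
    x[0] += dist
    P[0, 0] += (sigma_move * abs(dist))**2

def measure(i):
    global x, P
    z = (feat_true[i] - robot_true) + sigma_meas * np.random.randn()
    H = np.zeros((1, 4)); H[0, 0] = -1.0; H[0, 1 + i] = 1.0
    S = H @ P @ H.T + sigma_meas**2
    K = P @ H.T / S
    x = x + (K * (z - (x[1 + i] - x[0]))).ravel()
    P = (np.eye(4) - K @ H) @ P

for _ in range(5):
    move(1.0)                           # some blind motion first
for _ in range(200):
    for i in range(3):
        measure(i)                      # many repeated measurements of all features
print(np.diag(P))                       # robot and feature variances become nearly equal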