Dense visual perception for robot navigation

My research is at the intersection of two fields: computer vision and robotics. In particular, I am interested in how cameras can provide a robot with detailed 3D information about its immediate surroundings and nearby objects, suitable for local perception, obstacle avoidance and navigation.

In my project I am working towards bringing a fully dense, parallelisable vision approach to practical robot SLAM and navigation. I see the potential for complete and robust solutions using a small number of low-cost, automatically calibrated cameras mounted on a robot.


  • 3DV 2016

    Monocular, Real-Time Surface Reconstruction using Dynamic Level of Detail

    Jacek Zienkiewicz, Akis Tsiotsios, Andrew Davison, Stefan Leutenegger
    International Conference on 3D Vision (3DV), 2016


    We present a scalable, real-time capable method for robust surface reconstruction that explicitly handles multiple scales. As a monocular camera browses a scene, our algorithm processes images as they arrive and incrementally builds a detailed surface model. While most existing reconstruction approaches rely on volumetric or point-cloud representations of the environment, we perform depth-map and colour fusion directly into a multi-resolution triangular mesh that can be adaptively tessellated using the concept of Dynamic Level of Detail. Our method relies on least-squares optimisation, which enables a probabilistically sound and principled formulation of the fusion algorithm. We demonstrate that our method is capable of obtaining high-quality close-up reconstructions, as well as capturing overall scene geometry, while being memory and computationally efficient.
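    As a rough illustration of the level-of-detail idea (my own simplified sketch, not the algorithm from the paper; the cell sizes and level count are assumed constants), one can pick a tessellation level for a mesh patch from its viewing distance, so that close-up surface receives finer triangles:

    ```python
    import math

    # Illustrative sketch only: choose a dynamic level of detail for a mesh
    # patch from viewing distance, so nearby surface is tessellated finely.
    # finest_cell_m and max_level are assumed constants, not from the paper.
    def lod_level(distance_m, finest_cell_m=0.005, max_level=6):
        """Return a tessellation level: 0 = coarsest, max_level = finest.
        Each level halves the triangle edge length; pick the level whose
        cell size roughly matches the desired resolution at this distance."""
        # assume the desired cell size grows linearly with distance
        target_cell = finest_cell_m * max(distance_m, finest_cell_m)
        level = max_level - int(round(math.log2(target_cell / finest_cell_m)))
        return min(max(level, 0), max_level)
    ```

    With these constants, a patch seen from 1 m gets the finest level, while one 64 m away falls back to the coarsest.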

  • IROS 2016

    Real-Time Height Map Fusion using Differentiable Rendering

    Jacek Zienkiewicz, Andrew Davison, Stefan Leutenegger
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016


    We present a robust real-time method which performs dense reconstruction of high-quality height maps from monocular video. By representing the height map as a triangular mesh, and using an efficient differentiable rendering approach, our method enables rigorous incremental probabilistic fusion of standard locally estimated depth and colour into an immediately usable dense model. We present results for the application of free-space and obstacle mapping by a low-cost robot, showing that detailed maps suitable for autonomous navigation can be obtained using only a single forward-looking camera.
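    The incremental probabilistic fusion step can be illustrated, in much-simplified form, as a precision-weighted average per grid cell (a sketch under assumed notation; the paper fuses through a differentiable renderer of the triangular mesh, not cell by cell):

    ```python
    import numpy as np

    # Sketch: incremental probabilistic fusion of noisy per-cell height
    # measurements into a grid height map via precision weighting.
    # Class and variable names are mine, not from the paper.
    class HeightMap:
        def __init__(self, shape):
            self.h = np.zeros(shape)   # fused height estimate per cell
            self.w = np.zeros(shape)   # accumulated precision (1 / variance)

        def fuse(self, z, var):
            """Fuse a new height observation z with variance var."""
            w_new = 1.0 / var
            # precision-weighted mean of old estimate and new observation
            self.h = (self.w * self.h + w_new * z) / (self.w + w_new)
            self.w = self.w + w_new
            return self.h
    ```

    Fusing two equally uncertain observations of 1.0 and 2.0, for example, yields the expected estimate of 1.5 with doubled precision.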

  • JFR 2015

    Extrinsics Autocalibration for Dense Planar Visual Odometry

    Jacek Zienkiewicz, Andrew Davison
    Journal of Field Robotics (JFR), 2015


    A single downward-looking camera can be used as a high precision visual odometry sensor in a wide range of real-world mobile robotics applications. In particular, a simple and computationally efficient dense alignment approach can take full advantage of the local planarity of floor surfaces to make use of the whole texture available rather than sparse feature points. In this paper we present a detailed analysis and highly practical solutions for auto-calibration of such a camera's extrinsic orientation and position relative to a mobile robot's coordinate frame. We show that two degrees of freedom, the out-of-plane camera angles, can be auto-calibrated in any conditions; and that bringing in a small amount of information from wheel odometry or another independent motion source allows rapid, full and accurate 6 DoF calibration. Of particular practical interest is the result that this can be achieved to almost the same level even without wheel odometry, using only widely applicable assumptions about nonholonomic robot motion and the forward/backward direction of its movement. We show accurate, rapid and robust performance of our auto-calibration techniques for varied camera positions over a range of low-textured real surfaces both indoors and outdoors.
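    The geometry underlying dense planar odometry is the standard plane-induced homography: for a plane n^T X = d in the first camera frame and relative motion X2 = R X1 + t, pixels in the two views are related by H = K (R + t n^T / d) K^{-1}. A minimal sketch (function name and test values are mine):

    ```python
    import numpy as np

    # Plane-induced homography between two views of a planar floor.
    # Convention assumed here: plane n.T @ X = d in camera-1 coordinates,
    # points transform as X2 = R @ X1 + t, K is the shared intrinsic matrix.
    def planar_homography(K, R, t, n, d):
        return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    ```

    Warping every pixel of one frame through H and minimising the photometric error over the motion parameters is what lets dense alignment exploit all floor texture rather than sparse features.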

  • BMVC 2013

    Dense, Auto-Calibrating Visual Odometry from a Downward-Looking Camera

    Jacek Zienkiewicz, Robert Lukierski, Andrew Davison
    British Machine Vision Conference (BMVC), Bristol, UK, 9-13 September 2013


    We present a technique whereby a single camera can be used as a high precision visual odometry sensor in a range of practical settings using simple, computationally efficient techniques. Taking advantage of the local planarity of common floor surfaces, we use real-time dense alignment of a 30Hz video stream as the camera looks down from a fast-moving robot, making use of the whole texture available rather than sparse feature points. Our key novelty, and crucial to the practicality of this approach, is rapid and automatic calibration for 6DoF camera extrinsics relative to the robot frame. Our experiments show robust performance over a range of low-textured real surfaces.
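    In much-simplified form (a sketch assuming a pure 2D pixel translation rather than the full planar warp estimated in the paper), one Gauss-Newton step of dense photometric alignment looks like:

    ```python
    import numpy as np

    # One Gauss-Newton step of dense image alignment, simplified to a 2D
    # translation: linearise I2(x + p) ~ I2(x) + grad(I2) . p and solve
    # for the p minimising the summed squared photometric residuals.
    def gauss_newton_translation_step(I1, I2):
        gy, gx = np.gradient(I2)                      # image gradients
        r = (I1 - I2).ravel()                         # photometric residuals
        J = np.stack([gx.ravel(), gy.ravel()], 1)     # Jacobian wrt (dx, dy)
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)    # least-squares solve
        return dp                                     # estimated (dx, dy)
    ```

    Because every pixel contributes a residual, the estimate averages over all available floor texture, which is what makes the dense approach robust on low-textured surfaces.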

  • Patent 2012

    Laser scanner to measure distance (1)

    Sebastian Pastor, Thomas Schopp, Helmut Themel, Jacek Zienkiewicz
    EP Patent 2182377, 2012

  • Patent 2012

    Laser scanner to measure distance (2)

    Sebastian Pastor, Thomas Schopp, Jacek Zienkiewicz
    EP Patent 2182378, 2012