Hello, and welcome to my research website. I am a Royal Academy of Engineering Research Fellow at Imperial College London, where I also lead the robot manipulation team in the Dyson Robotics Lab. My research is at the intersection of computer vision, robotics, and machine learning.

I study how we can teach robots to understand the images they see of the world. The aim is to equip robots with capabilities such as recognising, picking up, and manipulating objects, and performing useful tasks by learning hand-eye coordination skills. Recently, I have become particularly involved with deep learning and reinforcement learning for controlling robots directly from raw images, with an emphasis on large-scale training in simulation.

After receiving a BA and an MEng from Cambridge University, I joined Imperial College in 2009 to begin my PhD. Following my PhD, I spent a year at UCL as a postdoc, before returning to Imperial College in 2014 to help set up the Dyson Robotics Lab and take up a Dyson Fellowship. In 2017, I was awarded a 5-year Royal Academy of Engineering Research Fellowship to work at Imperial College on my project "Empowering Next-Generation Robots with Dexterous Manipulation: Deep Learning via Simulation".

For students interested in projects with me, please see my teaching page, and for any other enquiries, please get in touch. In 2017, I chaired the 2nd UK Robot Manipulation Workshop at Imperial College, which brought together 130 researchers from across the UK, and we are currently preparing the 3rd Workshop to be held later in 2018. So if you would like to get involved in any capacity, please let me know!



Representative Publications

Below are some of my publications which best represent my recent research. (See here for all publications.)


Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task

Stephen James, Andrew J. Davison, and Edward Johns. In CoRL 2017.
We trained a deep neural network to map images to motor velocities, controlling a robot arm through a multi-stage task: locating and grasping a cube, then locating a basket and dropping the cube into it. The key novelty is that the controller was trained only in simulation, without any real-world data at all, and simulation-to-real transfer was achieved by randomising various properties of the simulator. This allows the robot to operate robustly in the real world under a range of conditions, such as dynamic illumination changes and object clutter.
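To give a flavour of the domain randomisation idea, here is a minimal Python sketch of how a simulated scene might be re-sampled before each training episode. The parameter names, ranges, and scene properties are illustrative assumptions, not the values or simulator interface used in the paper.

```python
import numpy as np

def sample_randomised_scene(rng: np.random.Generator) -> dict:
    """Sample one randomised configuration for a simulated scene.

    All parameter names and ranges below are illustrative guesses,
    not the settings used in the paper.
    """
    return {
        # Object and table appearance drawn uniformly at random.
        "cube_colour": rng.uniform(0.0, 1.0, size=3),
        "basket_colour": rng.uniform(0.0, 1.0, size=3),
        "table_texture_id": int(rng.integers(0, 50)),
        # Light position and intensity jittered around nominal values.
        "light_position": np.array([0.0, 0.0, 2.0]) + rng.normal(0.0, 0.3, size=3),
        "light_intensity": rng.uniform(0.5, 1.5),
        # Camera pose perturbed slightly around its nominal mount.
        "camera_position": np.array([0.5, 0.0, 0.8]) + rng.normal(0.0, 0.02, size=3),
        # Object placement randomised within the workspace.
        "cube_position_xy": rng.uniform([-0.2, -0.2], [0.2, 0.2]),
        "basket_position_xy": rng.uniform([-0.2, -0.2], [0.2, 0.2]),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each training episode would apply a freshly sampled configuration
    # to the simulator before rendering images for the controller.
    for episode in range(3):
        scene = sample_randomised_scene(rng)
        print(episode, scene["light_intensity"], scene["cube_colour"])
```

Because the controller only ever sees randomised renderings, the real world effectively becomes just another variation of the training distribution.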



Deep Learning a Grasp Function for Grasping Under Gripper Pose Uncertainty

Edward Johns, Stefan Leutenegger, and Andrew J. Davison. In IROS 2016.
We developed a method to train a CNN to predict a dense "grasp function" over a depth image of an object, where each point on the function indicates the quality of a grasp executed at that point. Given a gripper with known pose uncertainty, the grasp function can then be smoothed with respect to this uncertainty to achieve robust grasping. All data was generated in simulation by attempting multiple grasps on 1000 synthetic objects and recording the successful ones.
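As a rough sketch of the smoothing step (not the paper's exact formulation), the predicted grasp function can be convolved with the gripper's pose-uncertainty distribution, so that the selected grasp maximises the expected quality rather than the point-wise quality. Here the uncertainty is assumed, for illustration, to be an isotropic Gaussian expressed in pixel units.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def robust_grasp_point(grasp_quality: np.ndarray, pose_sigma_px: float) -> tuple:
    """Pick a grasp point that is robust to gripper pose uncertainty.

    grasp_quality: 2D array, the predicted grasp quality at each pixel of the
                   depth image (in practice this would come from the CNN).
    pose_sigma_px: standard deviation of the gripper's positional uncertainty,
                   expressed in pixels (an illustrative parameterisation).
    """
    # Convolving the grasp function with the Gaussian pose-uncertainty
    # distribution gives the expected grasp quality at each nominal target.
    expected_quality = gaussian_filter(grasp_quality, sigma=pose_sigma_px)
    # Choose the point with the highest expected quality under the uncertainty.
    return np.unravel_index(np.argmax(expected_quality), expected_quality.shape)

if __name__ == "__main__":
    # Toy example: a sharp high-quality peak vs a broad moderate-quality region.
    q = np.zeros((100, 100))
    q[20, 20] = 1.0            # isolated peak, fragile under pose error
    q[60:80, 60:80] = 0.5      # broad region, robust under pose error
    print(robust_grasp_point(q, pose_sigma_px=0.5))  # stays near the sharp peak
    print(robust_grasp_point(q, pose_sigma_px=5.0))  # shifts to the broad region
```

With low uncertainty the sharp peak wins, but as uncertainty grows the broad, forgiving region becomes the better choice, which is exactly the behaviour the smoothing is meant to capture.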



Pairwise Decomposition of Image Sequences for Active Multi-View Recognition

Edward Johns, Stefan Leutenegger, and Andrew J. Davison. In CVPR 2016 (oral).
We proposed a 3D object recognition method which can operate over arbitrary camera trajectories around an object. This was achieved by representing an image sequence as a set of image pairs, training a CNN to predict the class of each pair, and then aggregating the results over all pairs. Furthermore, we showed that a CNN can also be trained to predict the next best viewpoint to move the camera to, in order to recognise the object with as few images as possible.
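To illustrate the pairwise aggregation idea, here is a minimal sketch in which the trained pairwise CNN is replaced by a placeholder function, and per-pair log-probabilities are summed over all pairs as one plausible aggregation scheme (the paper's exact aggregation may differ).

```python
import itertools
import numpy as np

NUM_CLASSES = 10

def pairwise_cnn(view_a: np.ndarray, view_b: np.ndarray) -> np.ndarray:
    """Stand-in for the trained pairwise CNN: returns class log-probabilities
    for one pair of views. Here it is just a deterministic random placeholder."""
    seed = abs(hash((view_a.tobytes(), view_b.tobytes()))) % (2**32)
    logits = np.random.default_rng(seed).normal(size=NUM_CLASSES)
    return logits - np.log(np.sum(np.exp(logits)))  # log-softmax

def classify_sequence(views: list) -> int:
    """Aggregate pairwise predictions over all view pairs in the sequence."""
    total = np.zeros(NUM_CLASSES)
    for view_a, view_b in itertools.combinations(views, 2):
        # Summing log-probabilities over pairs amounts to multiplying the
        # per-pair class likelihoods, treating each pair as independent evidence.
        total += pairwise_cnn(view_a, view_b)
    return int(np.argmax(total))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sequence = [rng.random((64, 64)) for _ in range(5)]  # 5 views of an object
    print(classify_sequence(sequence))
```

Decomposing the sequence into pairs means the recogniser never depends on a fixed number or ordering of views, which is what allows it to handle arbitrary camera trajectories.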