RAEng Research Fellow, Imperial College (2018-present)
I am currently a Royal Academy of Engineering Research Fellow, working on my project "Empowering Next-Generation Robots with Dexterous Manipulation: Deep Learning via Simulation". The aim of this project is to investigate how training robot controllers in large-scale simulations can generate sufficiently diverse data to enable robots to generalise to novel objects and scenes, beyond the laboratory and in everyday environments. This involves studying simulation-to-real transfer learning, together with methods for combining simulated and real-world data, with broad applications across grasping, planning, and end-to-end control.
Dyson Research Fellow, Imperial College (2014-2017)
I was a founding member of Imperial's Dyson Robotics Lab with Andrew Davison, a new venture to bridge the gap between academic research and real-world applications in domestic robotics. Following the release of the 360 Eye vacuum-cleaning robot, the group has been working with Dyson to develop state-of-the-art methods in navigation, mapping, object recognition and manipulation. My particular focus was on active vision for object recognition, and machine learning for robotic grasping. I became particularly interested in applying deep learning methods to robot control problems, towards tightly-coupled controllers expressing both perception of the world and an understanding of its dynamics.
Postdoc, University College London (2013-2014)
I spent a year working on machine learning methods for computer vision with Dr. Gabriel Brostow at University College London. Here, I developed methods for involving humans in the loop for image recognition, including an interactive system which could automatically teach humans to distinguish between similar objects, such as Chinese characters, or unusual animal species with fine-grained differences. This involved a collaboration with Dr. Kirsty Kemp at the Zoological Society of London, where we targeted the automated analysis of images captured along the seabed to study the effects of overfishing.
PhD, Imperial College London (2009-2013)
My PhD was supervised by Prof. Guang-Zhong Yang at the Hamlyn Centre at Imperial College London, where I focused on scene recognition, image retrieval, and topological localisation techniques for mobile robot localisation. Beyond this, I was also directly involved in broader research projects, including human-robot interaction, robot navigation, data fusion for mobile robots, and pervasive sensing in "intelligent" environments. My PhD thesis, "Generative Methods for Scene Association with Pairwise Constraints", proposed a vision-based, qualitative localisation system for a mobile robot, based on learning a joint distribution over appearance and geometry for pairs of local features. Furthermore, the generative nature of the method allows scene models to be updated over time, enabling continual learning of the environment's appearance under both short-term and long-term dynamic changes.
BA and MEng, Cambridge University (2003-2007)
I did my BA and MEng in Engineering at Cambridge University, specialising in Electrical and Information Engineering, including computer vision, machine learning, signal processing, and control. In my MEng project, "Mobile Robot Navigation with a Minimal Set of Beacons", I built a microcontroller-based robot and developed a method for visual localisation using two light beacons mounted on the ceiling.