If you are studying for an MEng or MSc at Imperial College, and would like to do your individual project in robotics, machine learning, or computer vision, then I would like to hear from you! I'll encourage fun and open-ended projects rather than pre-fabricated routine ones, so if that sounds like your sort of thing, then drop me an email. Generally, my projects involve deep learning or reinforcement learning, particularly with applications in robotics, and top students can expect to be working on state-of-the-art research -- potentially with a publication at the end of the project. Take a look at the projects below to get an idea of what you might be working towards. My available projects will be posted on Imperial's website as with all other projects, but I am also open to proposals of your own, so please do get in touch if you would like to discuss a particular interest of yours.


Current Students


Linkun Geng

MSc 2017-2018

Ludovico Lazzaretti

MSc 2017-2018

Shikun Liu (co-supervised with Andrew Davison)

MRes 2017-2018

SiCong Li

MEng 2017-2018


Past Students


Jonathan Heng

MSc 2016-2017
Transfer Learning for Robot Control with Generative Adversarial Networks
Jonathan studied transfer learning to enable a robot trained in one domain (e.g. a simulator) to operate in another (e.g. the real world). His particular solution was to use a Generative Adversarial Network (GAN) to translate images observed by the robot from one domain to the other. Click on the image to the right to download his thesis.
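The rough idea can be sketched as follows: a generator translates simulator images towards the real-image domain, while a discriminator tries to distinguish translated images from genuine real-world images. This is only an illustrative PyTorch sketch, not Jonathan's code; the network sizes, 64x64 RGB resolution, and optimiser settings are assumptions.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(            # simulator image -> "real-looking" image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(        # image -> probability it comes from the real domain
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(sim_batch, real_batch):
    """One adversarial update; both batches are (N, 3, 64, 64) tensors in [-1, 1]."""
    # Discriminator: real-domain images labelled 1, translated simulator images labelled 0.
    fake = generator(sim_batch)
    d_loss = bce(discriminator(real_batch), torch.ones(real_batch.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(sim_batch.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make translated images look real to the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(sim_batch.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```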



Jiaxi Liu

MSc 2016-2017
Learning how to Grasp Objects with a Robotic Gripper using Deep Reinforcement Learning
Jiaxi investigated training a grasping system using reinforcement learning, with demonstrations used to speed up learning. In this case, the demonstrations were generated by sampling grasps with UC Berkeley's Dex-Net 2.0 system, and these were used to populate a replay buffer that assists a DQN algorithm. Click on the image to the right to download his thesis.
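A minimal sketch of this idea (not the thesis code) is to fill the replay buffer with demonstration transitions before DQN training begins, so that early updates are dominated by successful grasps. Here `env` and `sample_demo_grasp` are hypothetical placeholders for a grasping simulator and a Dex-Net-style grasp planner.

```python
import random
from collections import deque

replay_buffer = deque(maxlen=100_000)

def seed_buffer_with_demos(env, sample_demo_grasp, num_demos=1_000):
    """Roll out planner-generated grasps and store their transitions."""
    for _ in range(num_demos):
        state = env.reset()
        action = sample_demo_grasp(state)            # demonstration action
        next_state, reward, done, _ = env.step(action)
        replay_buffer.append((state, action, reward, next_state, done))

def sample_minibatch(batch_size=64):
    """Ordinary uniform sampling; DQN updates then proceed as usual,
    initially drawing mostly demonstration transitions."""
    return random.sample(replay_buffer, batch_size)
```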



Diego Mendoza Barrenechea

MSc 2016-2017
Curriculum Learning for Robot Manipulation using Deep Reinforcement Learning
Diego explored the use of curriculum learning for robotics, i.e. beginning with a simple task and gradually making it more challenging. He studied this on tasks such as reaching for and pushing a cube with a robot arm trained with reinforcement learning, using curricula such as gradually increasing the number of joints the robot can control. Click on the image to the right to download his thesis.
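One such curriculum could be sketched as below: the agent starts by controlling only the last joint, and further joints are unlocked as its success rate improves. The threshold, joint count, and environment interface are illustrative assumptions rather than Diego's exact setup.

```python
def active_joints(success_rate, current, total_joints=6, threshold=0.8):
    """Unlock one more joint once the recent success rate passes the threshold."""
    if success_rate >= threshold and current < total_joints:
        return current + 1
    return current

def apply_curriculum(env, action, num_active, total_joints=6):
    """Zero out the joints the curriculum has not yet unlocked, then step the environment."""
    full_action = [0.0] * total_joints
    full_action[-num_active:] = action[-num_active:]
    return env.step(full_action)
```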



Miklos Kepes

MEng 2016-2017
Teaching Multi-fingered Robotic Hands to Grasp Simple Objects using Tactile Feedback-driven Deep Reinforcement Learning
Miklos investigated training multi-fingered robotic hands to pick up objects, using deep reinforcement learning. In simulation, several modifications to the DQN algorithm were developed, with the state represented by finger joint angles and tactile sensor readings, and the network outputting position commands for each finger of a three-fingered hand. Click on the image to the right to download his thesis.
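A minimal sketch of such a Q-network is shown below; the numbers of joints, tactile readings, and discretised position commands per finger are assumptions for illustration, not Miklos's exact architecture.

```python
import torch
import torch.nn as nn

NUM_JOINT_ANGLES = 9      # e.g. three joints per finger, three fingers (assumed)
NUM_TACTILE = 12          # assumed number of tactile sensor readings
ACTIONS_PER_FINGER = 5    # discretised position commands per finger (assumed)
NUM_FINGERS = 3

q_network = nn.Sequential(
    nn.Linear(NUM_JOINT_ANGLES + NUM_TACTILE, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    # One Q-value per (finger, position command) pair.
    nn.Linear(128, NUM_FINGERS * ACTIONS_PER_FINGER),
)

state = torch.randn(1, NUM_JOINT_ANGLES + NUM_TACTILE)        # dummy state vector
q_values = q_network(state).view(NUM_FINGERS, ACTIONS_PER_FINGER)
action_per_finger = q_values.argmax(dim=1)                     # greedy command for each finger
```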



Rad Ploshtakov

MEng 2016-2017
Evaluating Transfer Learning Methods for Robot Control Policies
Rad studied how to transfer robot control policies from one domain (e.g. a simulation) to another domain (e.g. the real world). He evaluated a number of different transfer learning methods on a simple 2D target-reaching task. Click on the image to the right to download his thesis.



Stephen James

MEng 2015-2016
3D Simulated Robot Manipulation Using Deep Reinforcement Learning
Stephen explored how to train a robot arm to pick up an object, using extensions of Deep Q-Learning. The approach was to train the robot in simulation, and then transfer the learned policies over to a real robot. Stephen is now continuing his work as a PhD student in my lab. Click on the image to the right to download his thesis.



Christophe Steininger

MEng 2015-2016
Genetic Algorithms with Deep Learning for Robot Navigation
Christophe investigated ways in which neural networks can be trained with genetic algorithms. His test case was robot navigation, where the task is to cover as much of a room as possible within a given time, as a robot vacuum cleaner would. Christophe now works as a developer at Amazon. Click on the image to the right to download his thesis.
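The general recipe can be sketched as follows: each genome is a flat vector of network weights, fitness is the fraction of the room covered in a rollout, and the next generation is built from mutated copies of the best genomes. The population size, mutation scale, and the `evaluate_coverage` function are assumed placeholders, not Christophe's actual configuration.

```python
import numpy as np

POP_SIZE, NUM_WEIGHTS, ELITE, SIGMA = 50, 300, 10, 0.05

def evolve(evaluate_coverage, generations=100):
    """Evolve a population of flat weight vectors with elitism and Gaussian mutation."""
    population = np.random.randn(POP_SIZE, NUM_WEIGHTS)
    for _ in range(generations):
        fitness = np.array([evaluate_coverage(w) for w in population])
        elite = population[np.argsort(fitness)[-ELITE:]]          # keep the best genomes
        # Refill the population with Gaussian-mutated copies of the elite.
        children = elite[np.random.randint(ELITE, size=POP_SIZE - ELITE)]
        children = children + SIGMA * np.random.randn(*children.shape)
        population = np.concatenate([elite, children])
    best = population[np.argmax([evaluate_coverage(w) for w in population])]
    return best
```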