The Applications of AI seminars run in Autumn and Spring terms, on Wednesday afternoons, in the Department of Computing, Imperial College London.

The Applications of AI seminars will be given in the Huxley Building, room 145.

Talks are on technical or ethical/social aspects of the application of AI. These can range from a purely technical presentation on new methods in machine learning applied to particular problems, to more philosophical discussion on the moral status of intelligent agents and bias in statistical learning algorithms—and everything in between.

Please email Robert Craven for further information.

Upcoming talks

November 13th 2019, 2pm: Matthew Williams

Title: Applying AI in a clinical environment: regulatory, clinical and practical hurdles, and routes forward.

The recent growth in deep learning, and its success in some specialised domains such as medical imaging, has led to a great deal of interest in the medical uses of AI, with some over-hyped claims circulating. Yet the clinical deployment of such systems remains limited.

In this talk I will discuss our current work on wearable devices combined with ML-based analysis, and use it as a worked example to explain why progress has been so slow. In particular, I will look at historical examples and current regulatory and ethical issues. I will conclude with some remarks on what we might do—from both the clinical and computing sides—to accelerate progress.

Matt Williams is a consultant clinical oncologist, specialising in the management of brain tumours. He holds a PhD in Computer Science from UCL, and leads the Computational Oncology group at Imperial, where he works on the application of computational and mathematical methods to clinical problems.

For more information on Matthew's research, see his webpage.

January 22nd, 2pm: Karina Vold

Title: Using AI to extend human cognition: between artifacts and social cognition.

How can we use AI to improve human cognition? Under the Extended Mind thesis, the functional contributions of tools can become so essential for our cognition that they are on a par with our brains: our cognitive processes can literally 'extend' into the tools. The literature around the Extended Mind typically focusses on relatively simple artefacts, such as writing utensils or a walking stick. But Hernández-Orallo and I (2019) have recently argued that AI presents the possibility for a new range of comparatively more sophisticated cognitive capabilities to be extended, including executive functions such as emotional regulation, mind-modelling, and metacognition. In this talk I will compare both the risks and opportunities of 'AI extenders' to those of traditionally discussed cases of artefacts and socially extended cognition.

Karina Vold is based at the Leverhulme Centre for the Future of Intelligence, where she is exploring the topics of AI and human personhood, different models of agency, and ethical questions about the use of AI. She has also conducted research projects at The Alan Turing Institute. Her background is in philosophy and political science, and she has a PhD in philosophy from McGill University.

For more information on Karina's research, see her pages here or here.

January 29th, 2pm: Richard Evans

Title: Machine Apperception.

What does Kant have to teach modern machine learning practitioners about their craft? What can an abstruse philosophical text that is over two hundred years old have to teach us about contemporary issues in machine learning? The answer, I claim, is rather more than you might think. In particular, I will argue that the synthetic a priori truths expounded in the Critique of Pure Reason are exactly the domain-independent inductive biases we need to achieve data efficiency, strong generalisation, and interpretability. I shall show, in a range of experiments, how a Kant-inspired AI architecture, the Apperception Engine, is able to solve tasks that are out of reach of contemporary neural network architectures.

Richard Evans is a research scientist at DeepMind, specialising in unsupervised learning, explainable AI, program synthesis, and Kant. Previously, he was the founder and lead AI architect of Little Text People, an AI startup that was acquired in 2012.

For more information on Richard's research, see his website.

February 5th, 3pm: Katie Russell

Title: TBC.

Abstract: TBC.

Dr Katie Russell is Head of Data and Analytics at OVO Energy.

February 12th, 2pm: Brent Mittelstadt

Title: TBC.

Abstract: TBC.

For information on Brent's research, see his website.