AI systems are increasingly used to make consequential decisions in a wide range of domains, including banking and healthcare. As a result, several methods and tools have been developed to explain the decisions of these systems, with the aim of delivering explainable AI (XAI) solutions suitable for a human-centred world.

While explanations can improve the interpretability of AI decisions, they have also been shown to lack robustness: for example, a method may produce completely different explanations for similar events. This has troubling implications, as a lack of robustness may indicate that explanations do not capture the underlying decision-making process of a system and thus cannot be trusted.

This workshop aims to bring together researchers from academia and industry working in XAI to explore the limits of current explainability methods and to discuss the role that robustness may play in delivering trustworthy explanations. The workshop will consist of invited talks, selected presentations, and group discussions.

Invited Speakers

Rafael Calvo
Saumitra Mishra
Nicola Paoletti
Antonia Creswell
Programme

Time (GMT+1) Talk
09:15 - 09:30 Registration
09:30 - 09:45 Welcome
09:45 - 10:30 Invited talk: Rafael Calvo
10:30 - 10:40 Supporting the Value-Sensitive Participatory Design of AI-Based Systems, Malak Sadek
10:40 - 10:50 Do users care about the robustness of AI explanations? A study proposal, Bence Palfi
10:50 - 11:00 Break
11:00 - 11:45 Invited talk: Saumitra Mishra
11:45 - 11:55 Formalising the Robustness of Counterfactual Explanations for Neural Networks, Junqi Jiang
11:55 - 12:05 On interactive explanations as non-monotonic reasoning, Guilherme Paulino-Passos
12:05 - 12:30 Group discussion
12:30 - 13:30 Lunch
13:30 - 14:15 Invited talk: Nicola Paoletti
14:15 - 14:25 Sonification in Explainable Artificial Intelligence: An Example Study on COVID-19 Detection from Audio, Alican Akman
14:25 - 14:35 Towards a Theory of Faithfulness: Faithful Explanations of Differentiable Classifiers over Continuous Data, Xiang Yin
14:35 - 14:45 Break
14:45 - 15:30 Invited talk: Antonia Creswell
15:30 - 15:40 Using logical reasoning to create faithful explanations for NLI models, Joe Stacey
15:40 - 15:50 A framework for evaluating the cognitive capabilities of AI systems, Ryan Burnell
15:50 - 16:20 Group discussion
16:20 - 16:30 Concluding remarks

Location

Room 308, Huxley Building,
180 Queen's Gate,
South Kensington Campus,
Imperial College London,
London, SW7 2RH, UK.

Participation

Participation in the workshop is free, subject to availability.

If you would like to give a short presentation on the workshop's topics, please submit a title and a short abstract by Sep 14th, 3:00 pm, using the link below. To attend without presenting, please submit a short description of your research interests and relevant previous research by the same deadline, using the same link.

We plan to notify presenters and attendees by Sep 16th. Lunch and refreshments will be provided for registered attendees (whether presenting or not).

Submit on EasyChair

Organisers