The recent surge of attention to the various concerns (computational, ethical, or otherwise) raised by the widespread advancement of AI systems has shone a spotlight on explainability.

This workshop will explore the roles that causality and persuasion may play in addressing these concerns, and user trust in particular, in order to deliver explainable AI (XAI) solutions suitable for an AI-supported but human-centred world. Causality is of interest due to the need for explanations to be faithful to the underlying (machine-generated) models. Persuasion is of interest from the viewpoint of the users receiving explanations: while these explanations need to be persuasive to their target users, there is a risk of manipulation.

This workshop aims to bring together researchers from academia and industry working in XAI, causality and persuasion. The workshop will consist of invited and selected presentations and discussions. We welcome presentations on either ongoing or consolidated (e.g. published) work.

Programme

The programme is as follows (all times are GMT+1).

Time | Speaker | Talk
09:40 - 09:45 | Francesca Toni & Daniele Magazzeni | Welcome
09:45 - 10:00 | Pietro Baroni | Explanations from Gods of Olympus to black-box models: a layman perspective
10:00 - 10:15 | Simone Stumpf | When explanations might do more harm than good
10:15 - 10:30 | Thomas Spooner | Counterfactual Explanations and The Unique Challenges of Regression Models
10:30 - 10:35 | Nico Potyka | On the Relationship between Neural Networks and Quantitative Argumentation Frameworks
10:35 - 10:40 | Dong Huynh, Niko Tsakalakis, Sophie Stalla-Bourdillon & Luc Moreau | PLEAD: Provenance-based Approach to Constructing Explanations for Automated Decisions
10:40 - 10:45 | Leonid Chindelevitch, Hooman Zabeti, Max Libbrecht, Nafiseh Sedaghat & Amir Hosein Safari | Explainability vs. interpretability in genotype-phenotype predictions: the case of antimicrobial resistance
10:45 - 11:00 | Christos Bechlivanidis | Explanation and Blame in Smart Product Failure
11:00 - 11:05 | Matija Franklin | Towards an evaluatory psychological framework of XAI explanations
11:05 - 11:10 | Kristina Milanovic & Jeremy Pitt | The Impact of Preconceived Expectations of Artificial Intelligence on Human Decision Making
11:10 - 11:15 | Michael Yeomans, Julia Minson, Hanne Collins & Francesca Gino | Conversational Receptiveness: Improving Engagement with Opposing Views
11:15 - 11:20 | Gerard Canal | THuMP project --- Enhancing trust in Human-Machine Partnerships
11:20 - 11:30 | - | Discussion
11:30 - 11:45 | Lara Kirfel, Alice Liefgreen & Dave Lagnado | How actionability shapes people’s perceptions of counterfactual explanations in automated decision-making
11:45 - 12:00 | Hana Chockler | Causes and Explanations in Practical Applications
12:00 - 12:05 | Francis Rhys Ward | Reducing Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios: A Causal Influence Diagram Perspective
12:05 - 12:15 | - | Discussion
12:15 - 12:30 | Anthony Hunter | Computational Persuasion and XAI
12:30 - 12:45 | Andrea Celli | Algorithmic Bayesian Persuasion: Results and Open Challenges
12:45 - 13:00 | - | Discussion

Location

This workshop will be held virtually.

Participation

If you would like to give a short presentation on the workshop's topics, please submit a title and a short abstract by June 30th at the following link. We plan to notify presenters by July 2nd.

Submit a title and abstract

If you would like to participate without giving a presentation, please register below by July 5th.

Attend without presenting

Organisers