Argumentative Explanations in AI

Half-day tutorial at KR 2020

Francesca Toni & Antonio Rago

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic, but its research landscape is currently very fragmented. A general-purpose, systematic approach that addresses the two challenges of explainability and anthropomorphisation in concert, so as to form the basis of an AI-supported but human-centred society, is critical to the success of XAI.

Our tutorial will focus on how argumentation can serve as the driving force of explanations in three different ways: by building explainable systems from scratch with argumentative foundations, by extracting argumentative reasoning from general AI systems, or by extracting it directly from the data underlying such systems. We will provide a comprehensive review of the methods in the literature for extracting argumentative explanations.

The tutorial is aimed at any KR researcher interested in how KR can contribute to the timely field of XAI. It will be self-contained, with basic background on argumentation and XAI provided.

Tutorial Outline

Part 0 (30 mins): Introduction to XAI and Background on (Abstract, Bipolar, Gradual) Argumentation

Part 1 (60 mins): Building Explainable Systems with Argumentative Foundations
In this part we consider systems which are purpose-built for providing explanations, with argumentative reasoning capabilities interwoven in their methods.

Part 2 (60 mins): Extracting Argumentative Explanations from General AI Systems
In this part we will show how extracting argumentative abstractions of AI systems permits a dialectical understanding of a prediction or model. In these cases, the argumentation mechanism acts as an explanation wrapper around a range of models.

Part 3 (30 mins): Extracting Argumentative Explanations (for Predictions) from Data
In this final part we will show how argumentative explanations for predictions can be extracted from data alone, without interfering with or approximating the model.
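As a taste of the abstract argumentation background covered in Part 0, the following is a minimal sketch (our illustration, not part of the tutorial materials) of computing the grounded extension of an abstract argumentation framework, by iterating the characteristic function from the empty set to its least fixed point:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    arguments: a set of argument names.
    attacks: a set of (attacker, target) pairs.
    Iterates the characteristic function F(S) = {a | every attacker of a
    is itself attacked by some member of S} starting from the empty set,
    until a fixed point (the grounded extension) is reached.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        defended = {a for a in arguments
                    if all(attackers[b] & s for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# Example: a attacks b, b attacks c.
# a is unattacked, so it is accepted; a defeats b, which reinstates c.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Unattacked arguments enter the extension first, and each iteration adds the arguments they defend; for the example above the grounded extension is {a, c}.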

Resources

(To appear)

Contact



Antonio Rago
Francesca Toni