Argumentative Explanations in AI

Spotlight tutorial at ECAI 2020

Antonio Rago & Francesca Toni

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic, but its research landscape is currently very fragmented. A general-purpose, systematic approach that addresses the two challenges of explainability and anthropomorphisation in concert, forming the basis of an AI-supported but human-centred society, is critical to the success of XAI.

We will provide an extensive introduction to the ways in which argumentation can be used to address these two challenges in concert, via the extraction of bespoke, human-like explanations for AI systems. We will review recent literature in which this has been achieved in one of two ways: by building explainable systems with argumentative foundations from scratch, or by extracting argumentative reasoning from general AI systems.

We will motivate and explain a topic of emerging importance, namely Argumentative Explanations in AI, which is itself a novel synthesis of two distinct lines of AI work. Argumentation is an established branch of knowledge representation that has historically been well represented in the ECAI community, while XAI is arguably one of the most pressing current concerns for the AI community in general.

This tutorial is aimed at any AI researcher interested in how argumentation can contribute to the timely field of XAI. It will be self-contained, with basic background on argumentation and XAI provided.

Tutorial Outline

The tutorial will be split into two main parts, preceded by a short introduction (Part 0); each main part outlines one family of methods for producing argumentative explanations in AI. Relevant references for further reading are indicated in the table.

Part 0 (10 mins): Introduction to XAI and Background on (Abstract, Bipolar, Gradual) Argumentation

Some relevant papers are indicated in the table; for a quick flavour of the argumentation formalisms, see the sketch below.
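The sketch below is only a minimal illustration, not material from any of the referenced papers: it encodes an abstract argumentation framework in the sense of Dung (a set of arguments plus an attack relation) and computes its grounded extension as the least fixpoint of the characteristic function. The three-argument framework is invented for the example.

```python
# Minimal sketch, not taken from the tutorial materials: an abstract argumentation
# framework (a set of arguments plus an attack relation), with its grounded
# extension computed by iterating the characteristic function to a fixpoint.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks)."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    def defended_by(s):
        # Arguments each of whose attackers is attacked by some member of s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers(a))}

    extension = set()
    while True:
        nxt = defended_by(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Toy framework invented for illustration: a attacks b, b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))   # {'a', 'c'}: a is unattacked and it defends c
```

The bipolar and gradual variants covered in Part 0 additionally allow supports between arguments and assign arguments numerical strengths rather than a binary accepted/rejected status.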

Part 1 (40 mins): Building Explainable Systems with Argumentative Foundations

In this part we consider systems that are purpose-built to provide explanations, with argumentative reasoning capabilities interwoven into their methods. Such methods include those covered by the references in the table; a toy sketch of the general idea follows.
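The sketch below is only an invented illustration of the general shape such systems can take, not a rendering of any particular surveyed method: a toy recommender reaches its decision by evaluating hand-written arguments and attacks, so its explanation is simply the accepted arguments read back in natural language. The arguments, the attack relation and the simple acceptability check are all assumptions made for the example.

```python
# Illustrative sketch only, not one of the surveyed methods: a toy recommender
# that is argumentative by construction, so its explanation is simply the set
# of arguments that survive the attacks against them, read back to the user.

# Hand-written arguments for/against recommending a film, plus an attack relation.
arguments = {
    "A1": "Recommend: the user liked other films by this director.",
    "A2": "Do not recommend: the film is outside the user's preferred genres.",
    "A3": "Recommend: the user recently rated this genre highly.",
}
attacks = {("A2", "A1"), ("A3", "A2")}   # A2 attacks A1; A3 attacks A2

def accepted(arguments, attacks):
    """Arguments all of whose attackers are themselves attacked (a deliberately simple check)."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}
    return {a for a in arguments if all(attackers(b) for b in attackers(a))}

winners = accepted(arguments, attacks)          # {'A1', 'A3'}
pro = {"A1", "A3"}                              # arguments that favour recommending
decision = "recommend" if winners & pro else "do not recommend"

print("Decision:", decision)
for a in sorted(winners):                       # the explanation is the accepted arguments
    print("Because:", arguments[a])
```

Because the arguments here are the system's actual reasoning, no separate post-hoc explanation step is needed; this is what distinguishes purpose-built argumentative systems from the extraction-based approaches of Part 2.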

Part 2 (40 mins): Extracting Argumentative Explanations from General AI Systems

In this part we will show how the extraction of argumentative abstractions of AI systems permits a dialectical understanding of a prediction or model. In these cases, the argumentation mechanism acts as an explanation wrapper around a range of models. Such methods include those covered by the references in the table; a toy sketch of the wrapper idea follows.
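The sketch below is only an invented illustration of the wrapper idea, not any specific method from the literature: the signed feature contributions of an arbitrary trained model are recast as a bipolar argumentation framework, with positively contributing features supporting the prediction argument and negatively contributing ones attacking it, and a deliberately simple gradual semantics aggregates them into a dialectical strength. The feature names, contribution values and aggregation function are all assumptions made for the example.

```python
# Illustrative sketch only; the aggregation below is a toy gradual semantics,
# not any specific semantics from the literature. An arbitrary trained model's
# signed feature contributions are recast as a bipolar argumentation framework:
# positively contributing features support the prediction argument, negatively
# contributing ones attack it, and a simple aggregation gives its strength.

def extract_baf(contributions):
    """Split signed feature contributions into supporters and attackers of the prediction."""
    supporters = {f: c for f, c in contributions.items() if c > 0}
    attackers = {f: -c for f, c in contributions.items() if c < 0}
    return supporters, attackers

def gradual_strength(base_score, supporters, attackers):
    """Toy gradual semantics: supporters raise and attackers lower the strength, clipped to [0, 1]."""
    strength = base_score + sum(supporters.values()) - sum(attackers.values())
    return max(0.0, min(1.0, strength))

# Hypothetical contributions for one loan-approval prediction (e.g. linear weights or attribution scores).
contribs = {"income": 0.30, "age": 0.10, "missed_payments": -0.25}
supporters, attackers = extract_baf(contribs)
print("Supporters of the prediction:", supporters)      # {'income': 0.3, 'age': 0.1}
print("Attackers of the prediction:", attackers)        # {'missed_payments': 0.25}
print("Dialectical strength:", gradual_strength(0.5, supporters, attackers))   # 0.65
```

In the surveyed literature, the wrapped models, the way arguments are extracted and the semantics used to evaluate the resulting frameworks all vary; the references in the table give the concrete instantiations.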

Resources

Slides