The lack of explainability in AI, e.g. in machine learning or recommender systems, is one of the field's most pressing issues, especially given the ever-increasing integration of AI techniques into everyday systems used by experts and non-experts alike. The need for explainability arises for a number of reasons: an expert may require transparency to justify the outputs of an AI system, especially in safety-critical settings such as self-driving cars, while a non-expert may place more trust in an AI system that provides basic explanations, for example for the movies a recommender system suggests.

This workshop will bring together doctoral, early-stage and experienced researchers working in all areas of AI where there is either a need for explainability or potential for providing explainability. The workshop will consist of invited talks, presentations from members of Imperial College London's scientific community, and discussions regarding, amongst others, the format, purpose and identification of explanations in various AI settings. We welcome presentations of two kinds:

  • short presentations (circa 15 minutes) on ongoing work
  • long presentations (circa 30 minutes) on consolidated (e.g. published) work

    Invited Speakers

    Richard Evans (DeepMind, Google) - "Learning Explanatory Rules from Noisy Data"

    Hajime Morita (Fujitsu) - "Explainable AI that Can be Used for Judgment with Responsibility"

    Christos Bechlivanidis (University College London) - "Concreteness and abstraction in everyday explanation"

    Euan Matthews (ContactEngine) - "The Practicalities of Explanation"

    Programme

    Time          | Speaker                                             | Talk
    09:30 - 09:45 | Welcome                                             | -
    09:45 - 10:00 | Francesca Toni (DoC, Imperial College London)       | Introduction
    10:00 - 10:40 | Richard Evans (DeepMind, Google)                    | Invited Talk - "Learning Explanatory Rules from Noisy Data"
    10:40 - 11:10 | Stephen Muggleton (DoC, Imperial College London)    | "Ultra-strong machine learning - comprehensibility of programs learned with ILP"
    11:10 - 11:30 | Coffee Break                                        | -
    11:30 - 12:00 | Alessio Lomuscio (DoC, Imperial College London)     | "An approach to reachability analysis for feed-forward ReLU neural networks"
    12:00 - 12:40 | Hajime Morita (Fujitsu)                             | Invited Talk - "Explainable AI that Can be Used for Judgment with Responsibility"
    12:40 - 13:30 | Lunch                                               | -
    13:30 - 14:00 | Christos Bechlivanidis (University College London)  | Invited Talk - "Concreteness and abstraction in everyday explanation"
    14:00 - 14:30 | Seth Flaxman (Maths & DSI, Imperial College London) | "Predictor Variable Prioritization in Nonlinear Models: A Genetic Association Case Study"
    14:30 - 15:00 | Erisa Karafili (DoC, Imperial College London)       | "Argumentation-based Security for Social Good"
    15:00 - 15:20 | Coffee Break                                        | -
    15:20 - 15:50 | Kristijonas Cyras (DoC, Imperial College London)    | "Explaining Predictions from Data Argumentatively"
    15:50 - 16:10 | Antonio Rago (DoC, Imperial College London)         | "Argumentation-Based Recommendations: Fantastic Explanations and How to Find Them"
    16:10 - 16:30 | Yannis Demiris (EEE, Imperial College London)       | "Multimodal Explanations in Human Robot Interaction"
    16:45 - 17:00 | Discussion/Closing Remarks                          | -

    Location

    Room 308, Huxley Building,
    180 Queen's Gate,
    South Kensington Campus,
    Imperial College London,
    London, SW7 2RH, UK.

    Participation

    Participation in the workshop is free, subject to availability.

    To attend and give a presentation, please submit via EasyChair:

    • for long presentations: a short description of consolidated (e.g. published) work - please include a link to a publication if applicable;
    • for short presentations: a short description of ongoing work.

    To attend without presenting, please submit a short description of your research interests and previous (relevant) research, also via EasyChair.

    The deadline for submissions is 6th April 2018. Notifications will be sent on a rolling basis as submissions are received.

    Organisers & Sponsors