Invited Speakers
Giuseppe De Giacomo; Sapienza Università di Roma (Italy)

Reactive Synthesis, Planning and Reinforcement Learning in the Finite Trace Setting
Reactive Synthesis, Planning in Nondeterministic Domains, and Reinforcement Learning are all about synthesizing an agent behavior/strategy/policy that satisfies certain specifications. However, one crucial distinction is that in Planning and in Reinforcement Learning we consider the world specification (the domain) separately from the task specification (the goal). This distinction is important in AI, since the world model seldom changes, while the goal that the agent has to accomplish changes unceasingly as the agent acts in the world. In this talk we discuss, compare and generalize these forms of synthesis in the context of linear-time specifications over finite traces (LTLf/LDLf).
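To give a flavor of the kind of task specification involved (a hypothetical example, not taken from the talk), an LTLf goal over finite traces might combine a reachability objective with a safety constraint:

```latex
% LTLf goal: eventually reach the target, while never entering a failure state;
% the formula is evaluated over a finite trace of agent actions/observations.
\varphi \;=\; \Diamond\,\mathit{target} \;\wedge\; \Box\,\neg\mathit{fail}
```

Here $\Diamond$ ("eventually") and $\Box$ ("always") are the standard temporal operators, read over finite rather than infinite traces.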
Jane Hillston; University of Edinburgh (UK)

Statistical Abstraction for Multi-scale Spatio-temporal Systems
Spatio-temporal systems often exhibit multi-scale behaviours, in which the response to a physical environment drives internal processes which in turn influence spatial behaviour. However, such systems present formidable challenges for computational modelling and analysis. I will present a prototypical scenario in which spatially distributed agents decide their movement based on external inputs and a fast-equilibrating internal computation: bacterial chemotaxis. Illustrated through this example, I will propose a generally applicable strategy of model abstraction, based on statistically abstracting the internal system using Gaussian Processes, a powerful class of non-parametric regression techniques from Bayesian Machine Learning.
Joint work with Michalis Michaelides and Guido Sanguinetti.
Krishnamurthy Dvijotham; Google DeepMind (UK)

Scalable verification of deep learning
Prediction systems based on deep learning are being deployed in a variety of applications where reasoning about their behavior under perturbations of the input (arising from sensor noise or adversarial manipulation of the input) is of prime importance. Thus, the problem of verifying input-output properties of deep learning models needs to be addressed. However, traditional techniques from formal methods cannot easily be scaled to apply to large and complex systems with millions of neurons. In this talk, I will present techniques developed at DeepMind and in the machine learning community more broadly, aimed at scaling verification of input-output properties of modern, complex deep learning systems. I will also discuss the state of progress in the field and what has been achieved to date, and conclude by addressing the challenges that remain.
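One of the simplest scalable techniques in this space is interval bound propagation: push elementwise input bounds through the network layer by layer, yielding a sound (if loose) enclosure of all outputs reachable under the perturbation. The following is a minimal NumPy sketch for a tiny affine+ReLU network with made-up weights; production verifiers use much tighter relaxations and operate on networks with millions of neurons:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate interval bounds [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the interval
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network with fixed (made-up) weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

# Bound the output around x0 under an L-infinity perturbation of radius 0.1.
x0, eps = np.array([0.5, 0.5]), 0.1
lo, hi = x0 - eps, x0 + eps
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
# Every output of the network on the perturbed inputs lies in [lo, hi],
# so an input-output property can be certified by checking these bounds.
```

The trade-off is speed versus tightness: each layer costs only a couple of matrix products, which is what makes interval-style bounds scale where exact formal methods do not.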