Invited Speakers and Tutorials

Keynote Speaker: 

J. Alan Robinson 

Computational Logic: Memories of the Past and Challenges for the Future 
Invited Speakers: 

Krzysztof Apt  
A Denotational Semantics for First-Order Logic and the Alma-0 Language  
Melvin Fitting  
Databases and Higher Types  
David Page  
ILP: Just Do It  
David Poole  
Logic, Knowledge Representation and Bayesian Decision Theory  
Leslie Valiant  
Robust Logics  
Moshe Y. Vardi  
Constraint Satisfaction and View Integration  
Victor Vianu  
Topological Queries in Spatial Databases 
Tutorial Speakers: 

Peter Flach  
Knowledge Representation for Inductive Logic Programming  
Michael Hanus  
Functional Logic Programming  
Manuel Hermenegildo  
The CIAO Logic Programming Environment  
Michael Kohlhase  
Using Deduction Techniques for Natural Language Understanding  
Aart Middeldorp  
Term Rewriting and Narrowing  
Stephen Muggleton  
Applications of Inductive Logic Programming  
Ilkka Niemelä  
Stable Model Semantics: From Theory to Implementations and Applications  
Andreas Podelski  
Constraints for Program Analysis and Model Checking  
Vitor Santos Costa  
High Performance Logic Programming Systems  
Pascal Van Hentenryck  
The OPL Optimization Programming Language  
Toby Walsh  
Phase Transition Behaviour  
Michael Wooldridge  
The Logic of Rational Agency 
Computational Logic: Memories of the Past and Challenges for the Future
J. Alan Robinson
The development of computational logic since the introduction of Frege's modern logic in 1879 is presented in some detail. The rapid growth of the field and its proliferation into a wide variety of subfields is noted and is attributed to a proliferation of subject matter rather than to a proliferation of logic itself. Logic is stable and universal, and is identified with classical first-order logic. Other logics are here considered to be first-order theories, syntactically sugared in notationally convenient forms. From this point of view, higher-order logic is essentially first-order set theory. The paper ends by presenting several challenging problems which the computational logic community now faces and whose solution will shape the future of the field.
A Denotational Semantics for First-Order Logic and the Alma-0 Language
We elaborate the view that first-order logic can be seen as a computational mechanism over arbitrary interpretations. In this talk we introduce a denotational semantics for first-order logic. This work follows an earlier paper with Marc Bezem on operational semantics of first-order formulas over arbitrary interpretations. By allowing an assignment of a non-ground term to a variable, we introduce logical variables into this framework.
The semantics combines a number of well-known ideas from the areas of semantics of imperative programming languages and logic programming. The soundness result relates this semantics with the truth definition in a particularly simple way.
This approach to programming has been realized in the implemented programming language Alma-0, which extends imperative programming by features that support declarative programming.
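As a rough illustration of this computational reading of first-order formulas (a toy sketch, not the semantics of the talk or of Alma-0; the formula encoding and the `holds` function are invented for this example), one can evaluate formulas over a finite interpretation by threading variable assignments through the connectives and quantifiers:

```python
# Toy evaluator: formulas are nested tuples, the interpretation maps
# relation names to sets of tuples over a finite 'domain'. Quantifiers
# range over the domain; atomic formulas assume their args are variables.

def holds(formula, interp, env):
    """Evaluate `formula` over the finite interpretation `interp`
    under the variable assignment `env`."""
    op = formula[0]
    if op == 'rel':                      # atomic: ('rel', name, [vars])
        _, name, args = formula
        return tuple(env[a] for a in args) in interp[name]
    if op == 'and':
        return all(holds(f, interp, env) for f in formula[1:])
    if op == 'or':
        return any(holds(f, interp, env) for f in formula[1:])
    if op == 'not':
        return not holds(formula[1], interp, env)
    if op == 'exists':                   # ('exists', var, body)
        _, var, body = formula
        return any(holds(body, interp, {**env, var: d})
                   for d in interp['domain'])
    if op == 'forall':                   # ('forall', var, body)
        _, var, body = formula
        return all(holds(body, interp, {**env, var: d})
                   for d in interp['domain'])
    raise ValueError(f'unknown operator {op!r}')
```

For instance, over the domain {1, 2, 3} with edge relation {(1,2), (2,3), (3,1)}, the formula ∀x∃y edge(x, y) evaluates to true, computed purely by search over assignments.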
Databases and Higher Types
Generalized databases will be examined, in which attributes can be sets of attributes, or sets of sets of attributes, and other higher type constructs. A precise semantics will be developed for such databases, based on a higher type modal/intensional logic.
ILP: Just Do It
Inductive logic programming (ILP) is built on foundations laid by research in other areas of computational logic. These foundations include:
1. Implementations, such as fast deductive systems,
2. Theoretical results (beyond the soundness and completeness of resolution), such as Lee's subsumption theorem and the undecidability of implication between two definite clauses, and
3. Lessons for the art of real-world applications, such as how best to represent domain knowledge in logic.

In spite of these foundations, at 10 years of age ILP now faces a number of new challenges brought on by exciting application opportunities. The purpose of this talk is to interest researchers from other areas of computational logic in contributing their special skill sets to help ILP meet these challenges. For example, within many bioinformatics applications, ILP needs a synthesis with probabilistic techniques to capture relationships that hold probabilistically rather than deterministically. For applications from information extraction to physical modeling, it has been demonstrated that ILP can benefit from incorporating multiple types of reasoners, in an analogous manner to constraint logic programming. For problems as diverse as natural language processing and drug design that have massive search spaces, ILP potentially can benefit from stochastic search as in many of the leading planning and propositional reasoning systems. The talk will address these and other recently identified directions for ILP, discussing recent work, drawing links to other areas of computational logic, and seeking to attract new researchers at each step.
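Subsumption between clauses, central to the theoretical foundations mentioned above, is decidable (unlike implication between definite clauses) and can be tested by brute force. The sketch below uses a hypothetical encoding, invented for this example: a clause is a set of (sign, predicate, args) literals, and uppercase-initial symbols are variables. It checks whether clause `c` θ-subsumes clause `d` by enumerating substitutions over the terms occurring in `d`:

```python
from itertools import product

def variables(clause):
    """Variables of a clause (uppercase-initial symbols, Prolog-style)."""
    return sorted({a for _, _, args in clause for a in args
                   if a[0].isupper()})

def terms(clause):
    """All argument symbols occurring in a clause."""
    return sorted({a for _, _, args in clause for a in args})

def apply_subst(clause, theta):
    """Apply substitution theta (var -> term) to every literal."""
    return {(s, p, tuple(theta.get(a, a) for a in args))
            for s, p, args in clause}

def theta_subsumes(c, d):
    """True iff some substitution theta makes c.theta a subset of d.
    Brute force: try every mapping of c's variables to d's terms."""
    vs = variables(c)
    cands = terms(d) or ['a']        # fallback constant if d has no terms
    for vals in product(cands, repeat=len(vs)):
        if apply_subst(c, dict(zip(vs, vals))) <= d:
            return True
    return False
```

For example, p(X, Y) θ-subsumes {p(a, b), q(a)} via {X/a, Y/b}, but p(X, X) does not, since no term substitution produces a literal of that clause.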
Logic, Knowledge Representation and Bayesian Decision Theory
In this talk I give a brief overview of recent work on uncertainty in AI, and relate it to logical representations. Bayesian decision theory and logic are both normative frameworks for reasoning that emphasize different aspects of intelligent reasoning. Belief networks (Bayesian networks) are representations of independence that form the basis for understanding much of the recent work on reasoning under uncertainty, evidential and causal reasoning, decision analysis, dynamical systems, optimal control, reinforcement learning and Bayesian learning. The independent choice logic provides a bridge between logical representations and belief networks that lets us understand these other representations and their relationship to logic, and shows how they can be extended to first-order rule-based representations. This talk discusses what the representations of uncertainty can bring to the computational logic community and what the computational logic community can bring to those studying reasoning under uncertainty.
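To make the belief-network side concrete, here is a minimal sketch of exact inference by enumeration in the classic rain/sprinkler/wet-grass network. The probability tables are hypothetical numbers invented for this example, not figures from the talk:

```python
from itertools import product

# Network: Rain -> WetGrass <- Sprinkler.
# Conditional probability tables (hypothetical values).
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
P_WET = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    """Joint probability of one full assignment, using the network
    factorization P(R) * P(S) * P(W | R, S)."""
    pw = P_WET[(r, s)]
    return P_RAIN[r] * P_SPRINKLER[s] * (pw if w else 1.0 - pw)

def posterior_rain_given_wet():
    """P(Rain=True | WetGrass=True) by summing out Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True)
              for r, s in product((True, False), repeat=2))
    return num / den
```

With these numbers the posterior probability of rain given wet grass comes out around 0.74, illustrating how the independence structure turns a global query into a small sum of products.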
Robust Logics
It has long been recognized that cognitive phenomena exhibit both inductive and deductive aspects. The processes of induction and deduction have been studied systematically, though separately, in the frameworks of computational learning and computational logic. Since cognitive computations appear to perform these processes in combination, a single framework is required within which the two can be discussed simultaneously. Robust logics are designed to serve just that purpose. They are based on the view that a knowledge base can be made robust only if each assertion in it is verifiable empirically against, and learnable from, real-world observations. The challenge then is to reconcile this with the advantages offered by conventional logics, in particular a sound basis for deduction. Robust logics are designed to bridge this gap while retaining computational feasibility. In this framework, both the computational work and the accuracy of learning and deduction are polynomially controlled. This therefore offers a computationally feasible approach to some of the traditional problems of artificial intelligence that further seeks to avoid the main sources of brittleness encountered by traditional logic-based systems.
Constraint Satisfaction and View Integration
A large class of problems in AI and other areas of computer science can be viewed as constraint-satisfaction problems. This includes problems in machine vision, belief maintenance, scheduling, temporal reasoning, type reconstruction, graph theory, and satisfiability. In general, the constraint-satisfaction problem is NP-complete, so searching for tractable cases is an active research area. It turns out that constraint satisfaction has an intimate connection with the view-integration problem in database theory. In view integration, we attempt to answer a query posed to a database only on the basis of information available through views. An important question is when tractable view-based query processing is possible. The tight connection with constraint satisfaction indicates, however, that characterizing the tractability of view-based query processing is a rather difficult problem.
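As a concrete reference point for the constraint-satisfaction framework, the generic sketch below (not tied to the talk; all names are invented for this example) solves binary CSPs by chronological backtracking, with graph colouring as the canonical instance:

```python
def consistent(assignment, constraints):
    """Check all constraints whose two variables are both assigned.
    `constraints` is a list of ((u, v), predicate) pairs."""
    return all(pred(assignment[u], assignment[v])
               for (u, v), pred in constraints
               if u in assignment and v in assignment)

def solve_csp(variables, domains, constraints, assignment=None):
    """Chronological backtracking search over a binary CSP.
    Returns a satisfying assignment dict, or None if none exists."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        trial = {**assignment, var: value}
        if consistent(trial, constraints):
            result = solve_csp(variables, domains, constraints, trial)
            if result is not None:
                return result
    return None                          # exhausted: backtrack
```

Three-colouring a triangle graph succeeds, while two-colouring it fails, mirroring the feasibility/infeasibility distinction that the complexity results in the talk are about.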
Topological Queries in Spatial Databases
Handling spatial information is required by many database applications, and each poses different requirements on query languages. In many cases the precise size of the regions is important, while in other applications we may only be interested in the topological relationships between regions: intuitively, those that pertain to adjacency and connectivity properties of the regions, and are therefore invariant under homeomorphisms. Such differences in scope and emphasis are crucial, as they affect the data model, the query language, and performance.
This talk focuses on queries targeted towards topological information for two-dimensional spatial databases, where regions are specified by polynomial inequalities with integer coefficients. We focus on two main aspects:
(i) languages for expressing topological queries, and
(ii) the representation of topological information.
In regard to (i), we study several languages geared towards topological queries, building upon well-known topological relationships between pairs of planar regions proposed by Egenhofer. In regard to (ii), we show that the topological information in a spatial database can be precisely summarized by a finite relational database, which can be viewed as a topological annotation to the raw spatial data. All topological queries can be answered using this annotation, called the topological invariant. This yields a potentially more economical evaluation strategy for such queries, since the topological invariant is generally much smaller than the raw data. We examine in detail the problem of translating topological queries against the spatial database into queries against the topological invariant. The languages considered are first-order on the spatial database side, and fixpoint and first-order on the topological invariant side. In particular, it is shown that fixpoint expresses precisely the PTIME queries on topological invariants. This suggests that topological invariants are particularly well-behaved with respect to descriptive complexity.
This talk is based on joint work with C.H. Papadimitriou, D. Suciu and L. Segoufin.
Knowledge Representation for Inductive Logic Programming
Inductive Logic Programming combines inductive learning with a logical representation formalism. The notion of individual (example, database) is crucial in most forms of inductive learning. There are several ways to represent an individual in first- and higher-order logic: as a closed term, as a (Herbrand) interpretation, or as a (sub)database. This tutorial reviews these individual-centred representations and discusses transformations between them, such as flattening a term-based representation into a database representation. The main issue addressed in this tutorial is the close connection that exists between such individual-centred representations and the semi-propositional attribute-value representations widely used in machine learning. This provides a clear perspective on exactly how ILP differs from attribute-value learning, and helps to understand the key steps in ILP algorithms. Finally, I will briefly review other knowledge representation issues in ILP, such as negation and abduction.
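The flattening transformation mentioned above can be illustrated in miniature. The sketch below uses a hypothetical encoding invented for this example (ground terms as (functor, args) tuples, constants as strings): each compound subterm becomes a relational fact over a fresh identifier, turning a term-based individual into a small database:

```python
import itertools

def flatten(term, counter=None, facts=None):
    """Flatten a ground term into relational facts.

    Each compound subterm f(t1, ..., tn) yields a fact
    (f, id, id1, ..., idn), where ids name subterms and constants
    serve as their own ids. Returns (root_id, facts)."""
    if counter is None:
        counter = itertools.count()
        facts = []
    if isinstance(term, str):            # constant
        return term, facts
    functor, args = term
    arg_ids = [flatten(a, counter, facts)[0] for a in args]
    tid = f"t{next(counter)}"            # fresh identifier for this subterm
    facts.append((functor, tid, *arg_ids))
    return tid, facts
```

For example, the term f(a, g(b)) flattens to the facts g(t0, b) and f(t1, a, t0), with t1 identifying the individual as a whole.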
Functional Logic Programming
Functional logic programming languages are an attempt to combine the best features of functional languages (reduction of nested expressions, higher-order functions, lazy evaluation, polymorphic type systems) and logic languages (logical variables, partial data structures, goal solving, built-in search) in a seamless way. Thanks to this combination, they provide new programming techniques (e.g., demand-driven search) and better structuring facilities, and they avoid impure features of logic languages like Prolog.
The aim of this tutorial is to provide an introduction to functional logic languages and their programming techniques. It will be demonstrated that functional languages can be easily extended to cover features for logic and concurrent programming. Concrete examples will be shown by the use of Curry, a multi-paradigm declarative language developed by an international initiative to define a standard in this area. Beyond the standard features for functional and logic programming, Curry supports techniques for encapsulating and programming search strategies, programming with various constraint domains, and concurrent and distributed programming. The advantages of integrated functional logic languages will be demonstrated by several applications, including GUI and Internet programming.
The CIAO Logic Programming Environment
We present a tutorial overview of Ciao, a next-generation logic programming environment which is unique in several ways: Ciao is a complete (ISO-)Prolog system, but its novel modular design allows both restricting and extending the language. This makes it possible to work with fully declarative subsets of Prolog and also with syntactic and semantic extensions which can be activated separately on each program module. Its design also facilitates global program analysis, static debugging, and optimization. We review some of Ciao's extensions: assertions (types, modes, determinacy, cost, ...), functions, records, higher-order, constraints, objects, persistence, alternative control rules, concurrency (threads/engines), distributed execution (agents), parallel execution, etc. We also review the use of the compiler and overall environment while programming both in the large and in the small: modular program development, separate/incremental compilation, documenting programs with assertions, static/dynamic debugging, external interfaces, producing small executables, linking regimes, scripts, etc.
A large number of people have contributed to the development of Ciao, including F. Bueno, D. Cabeza, M. Carro, M. Garcia de la Banda, M. Hermenegildo, P. Lopez, and G. Puebla.
Using Deduction Techniques for Natural Language Understanding
The tutorial surveys a recent trend in computational linguistics that uses deduction techniques for natural language understanding. As natural language processing applications are booming and the field is becoming a commercial sector with a multi-billion-dollar turnover, this connection promises to become a significant market for computational logic in general, and automated deduction techniques in particular.
We start out by explaining how NL understanding tasks such as
- common ground maintenance,
- determining discourse structure and coherence,
- semantic disambiguation,
- resolution of definite descriptions, anaphora, and presuppositions, or
- the reconstruction of linguistically underspecified material
are influenced by world knowledge. As a consequence, the whole process of semantic and pragmatic analysis has to be modelled as an inference process. We will take a look at the application of first-order automated theorem proving and model generation techniques in all of these processes, and investigate the necessary integration and communication between deduction systems and the linguistic modules. Not very surprisingly, it will turn out that while a coarse-grained integration of whole deduction systems can already yield significant improvements to NL understanding systems, a tighter integration of deduction and linguistic analysis leads to better results.
Term Rewriting and Narrowing
Term rewriting and narrowing are important computational models with applications in algebra, software engineering, declarative programming, and theorem proving. In term rewriting, computation is achieved by directed equations and pattern matching. In narrowing, pattern matching is replaced by unification. The aim of this tutorial is to provide an introduction to term rewriting and narrowing. The tutorial is organized as follows. After presenting a few motivating examples, we explain the basic concepts and results in term rewriting and narrowing: abstract rewriting, equational reasoning, termination techniques, confluence criteria, completion, soundness and completeness issues, strategies, and modularity. The tutorial concludes with a selection of more specialized topics as well as more recent developments in term rewriting: advanced termination techniques (dependency pairs), conditional rewriting, rewriting modulo, tree automata techniques, and higher-order rewriting.
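The basic rewriting notions can be made concrete in a few lines. The sketch below is a toy, not the tutorial's formalism; the term encoding (tuples for compound terms, lowercase strings for variables) and the Peano-addition rule set are invented for illustration. It normalizes terms by innermost rewriting with directed equations and pattern matching:

```python
def match(pattern, term, subst=None):
    """Match a pattern against a term; lowercase strings are variables,
    compound terms are (functor, [args]) tuples. Returns a substitution
    dict, or None on failure."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.islower():   # variable
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if isinstance(pattern, str) or isinstance(term, str):
        return subst if pattern == term else None        # constants
    (pf, pargs), (tf, targs) = pattern, term
    if pf != tf or len(pargs) != len(targs):
        return None
    for p, t in zip(pargs, targs):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(term, subst):
    """Apply a substitution to a term."""
    if isinstance(term, str):
        return subst.get(term, term)
    f, args = term
    return (f, [substitute(a, subst) for a in args])

def rewrite(term, rules):
    """Normalize `term` by innermost rewriting with `rules`, a list of
    (lhs, rhs) pairs. Assumes the rewrite system is terminating."""
    if not isinstance(term, str):
        f, args = term
        term = (f, [rewrite(a, rules) for a in args])     # args first
    for lhs, rhs in rules:
        s = match(lhs, term)
        if s is not None:
            return rewrite(substitute(rhs, s), rules)     # apply at root
    return term                                           # normal form
```

With the rules add(0, y) → y and add(s(x), y) → s(add(x, y)), the term add(s(s(0)), s(0)) normalizes to s(s(s(0))); replacing matching by unification in the same scheme is the step from rewriting to narrowing.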
Applications of Inductive Logic Programming
This tutorial will provide a) a general introduction to logical aspects of ILP, b) an overview of the major areas of ILP applications, c) a small number of in-depth case studies, and d) some general directions for ILP research based on limitations brought out by the case studies. For each case study I will examine in detail not only the degree of success of ILP systems compared to other statistical and machine learning approaches, but also the theoretical factors which either led to success or hampered the application of ILP. In discussing general directions I will try to indicate novel areas for ILP application which have been ignored due to technical (especially logical) limitations of existing approaches. This tutorial is aimed at providing inspiration and a starting point to the interested outsider who would like to either supervise or carry out research in ILP.
Stable Model Semantics: From Theory to Implementations and Applications
The stable model semantics provides an interesting constraint-oriented interpretation of logic programs, where the rules of a program are seen as constraints on an answer set for the program. This leads to a novel logic programming paradigm, answer set programming, where the underlying idea is to encode an application problem using logic program rules so that the solutions to the problem are captured by the stable models of the rules. What makes the approach particularly interesting is the emergence of automated reasoning tools supporting stable model computation and the rapid improvement of their performance in recent years. This has led to the possibility of applying the paradigm to various areas including combinatorial problems, satisfiability checking, constraint satisfaction, planning, model checking, and product configuration.
In the tutorial the current state of the art in implementing and applying the stable model semantics is reviewed. We start by explaining the stable model semantics and its basic properties. We discuss the application methodology for stable models and review areas where the approach is being applied. We examine available systems implementing stable model computation and consider their implementation techniques, primarily focusing on search methods, pruning techniques, heuristics, and handling of rules with variables. We also present experimental results illustrating the current level of performance of systems computing stable models.
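The definition of stable models can be made concrete with a naive checker. The sketch below is exponential and suitable only for tiny propositional programs; the encoding (rules as (head, positive-body, negative-body) triples) is invented for illustration. It enumerates candidate atom sets and tests each against its Gelfond-Lifschitz reduct:

```python
from itertools import combinations

def least_model(rules):
    """Least Herbrand model of a definite program: rules are
    (head, body) pairs with body a set of atoms."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    """Enumerate stable models of a normal program by brute force.
    Rules are (head, pos, neg) triples with pos/neg sets of atoms.
    A candidate M is stable iff M equals the least model of the
    Gelfond-Lifschitz reduct of the program w.r.t. M."""
    models = []
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            m = set(cand)
            # Reduct: drop rules whose negative body intersects M,
            # then delete the remaining negative literals.
            reduct = [(h, pos) for h, pos, neg in program
                      if not (neg & m)]
            if least_model(reduct) == m:
                models.append(m)
    return models
```

For the classic program {p :- not q. q :- not p.} this yields the two stable models {p} and {q}, the even/odd-loop behaviour that answer set solvers handle with dedicated search and pruning techniques.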
Constraints for Program Analysis and Model Checking
Many problems in analyzing imperative, logic, functional or concurrent programs and in model checking for finite-state or infinite-state systems are constraint-solving problems. Several abstract domains in the abstract interpretation framework are based on constraints. We will investigate different approaches in program analysis and in model checking with respect to the fundamental properties of the underlying constraint systems.
High Performance Logic Programming Systems
The tutorial presents the major performance issues for sequential and parallel logic programming systems.
We first survey the evolution of sequential implementations since the original Marseille Prolog implementation. Focus is then given to the WAM-based techniques that are the basis for most Prolog systems. We next discuss further developments, such as improvements in emulator design and compilation to native code or to C.
A modern logic programming system includes more than just a standard WAM engine. We briefly survey other major issues in implementing a real logic programming system, such as the module system and the garbage collector. Moreover, these systems introduce several extensions over traditional Prolog systems, such as coroutining, constraints, and tabulation. We discuss how these extensions can be implemented and how they impact performance.
Parallel logic programming is an alternative approach to obtaining high performance. We discuss the key ideas in And/Or-parallel systems, concentrating on the major techniques used for the shared-memory systems that implement Or-parallelism or And-parallelism, such as Aurora, Muse, &Prolog, DDAS, &ACE, PARLOG and KLIC, Andorra-I, SBA, ACE, and FIRE.
The OPL Optimization Programming Language
OPL is a modeling language for solving mathematical programming and combinatorial optimization problems. It is the first modeling language to combine high-level algebraic and set notations from modeling languages with a rich constraint language and the ability to specify search procedures and strategies, which are the essence of constraint programming. In addition, OPL models can be controlled and composed using OPL Script, a script language that simplifies the development of applications that solve sequences of models, several instances of the same model, or a combination of both, as in column-generation applications. This tutorial surveys the functionalities and applications of OPL and OPL Script.
Phase Transition Behaviour
We are all familiar with phase transition behaviour in physics. Ice melts. Water freezes. Steam condenses. So what has this got to do with computation? In this tutorial, I'll show how a large body of recent work has demonstrated that it has a lot to do with computation, and especially with our understanding of what makes computational problems "hard". I'll describe how you can observe phase transition behaviour in your favourite computational problem. And I'll attempt to answer some of the following questions. Is phase transition behaviour just limited to NP-hard problems? Can phase transition behaviour be observed in problems with structure, as opposed to just purely random problems? What about phase transition behaviour in real problems? What can we learn from studying phase transition behaviour? Can we devise better algorithms based upon knowledge gained from observing phase transitions? And what open questions face this research field? This is an exciting new research area, and I can promise lots of neat graphs.
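The experiments behind such graphs are easy to reproduce in miniature. The sketch below is a toy (all names invented for this example), using brute-force satisfiability checking, so only tiny instances are feasible: it estimates the fraction of satisfiable random 3-SAT instances at a given clause-to-variable ratio, which drops sharply near the empirically observed crossover around 4.27:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT instance: each clause picks 3 distinct variables,
    each negated with probability 1/2. A literal is (var, sign)."""
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in vs])
    return clauses

def satisfiable(n_vars, clauses):
    """Brute-force check over all 2^n assignments (tiny n only)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] == sign for v, sign in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(n_vars, ratio, trials=30, seed=0):
    """Fraction of satisfiable random instances at the given
    clause-to-variable ratio, over `trials` samples."""
    rng = random.Random(seed)
    n_clauses = int(ratio * n_vars)
    sat = sum(satisfiable(n_vars, random_3sat(n_vars, n_clauses, rng))
              for _ in range(trials))
    return sat / trials
```

Sweeping `ratio` from roughly 1 to 8 for a fixed small `n_vars` and plotting `sat_fraction` reproduces, in miniature, the characteristic sharp drop from almost-always-satisfiable to almost-never-satisfiable.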
The Logic of Rational Agency
The metaphor of rational agency is increasingly used within computer science. Just as we find it useful in many applications to conceptualise the world as consisting of a collection of interacting but essentially passive objects, so for many others it is convenient to view it as consisting of a collection of interacting, purposeful, rational agents. There are two main paradigms for developing theories of rational agents. The first borrows the tools of economics, game theory, and decision theory. The second borrows the tools of computational logic, and uses them to axiomatise the properties of rational agents and multi-agent systems. In this tutorial, I will survey work in the application of computational logic to the problem of reasoning about rational agents. In particular, I will show how notions such as belief, desire, and intention can be defined and axiomatised in a logical framework, and then build upon these definitions to show how more complex concepts such as communication and cooperation can be axiomatised.