Ray Reiter Department of Computer Science, University of Toronto (currently visiting King's College, London)
Friday, 26th June 1998.
Abstract: I shall describe the specification and implementation of a family of planning systems in the situation calculus. These planners are conceptually very simple; they reason forward from the initial database until they encounter a situation that satisfies the planning goal. By itself, such a search strategy would be hopelessly inefficient for all but the simplest of plans. However, I pursue the intuitions of Kibler and Morris, and later, of Bacchus and Kabanza, that if suitable domain specific knowledge is available to the planner for pruning away bad branches of the search tree, very good performance can be obtained.
I have implemented four planners based on this idea:
These are all deductive planners, in the spirit of Green's 1969 definition. They are implemented in Golog. Preliminary experience with the blocks world has been very encouraging, but other planning domains have yet to be explored.
Further details in Chapter 8 of "Knowledge in Action: Logical Foundations for Describing and Implementing Dynamical Systems", http://www.cs.toronto.edu/~cogrobo
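As a rough illustration of the search strategy sketched in the abstract (not Reiter's Golog implementation; all names here are invented for the example), the following Python fragment does plain forward search from an initial state and relies on a user-supplied, domain-specific prune test to cut away bad branches of the search tree:

    from collections import deque

    def forward_plan(initial, goal, actions, prune, max_depth=20):
        """Breadth-first forward search; prune(state, plan) encodes the
        domain-specific control knowledge that discards bad branches."""
        frontier = deque([(initial, [])])
        seen = {initial}
        while frontier:
            state, plan = frontier.popleft()
            if goal(state):
                return plan
            if len(plan) >= max_depth:
                continue
            for name, applicable, apply_ in actions:
                if not applicable(state):
                    continue
                nxt = apply_(state)
                if nxt in seen or prune(nxt, plan + [name]):
                    continue            # bad branch: cut it
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
        return None

    # Toy usage: reach 10 from 0 with +1/+3 steps; the "control knowledge"
    # prunes any branch that overshoots the goal.
    actions = [("inc1", lambda s: True, lambda s: s + 1),
               ("inc3", lambda s: True, lambda s: s + 3)]
    print(forward_plan(0, lambda s: s == 10, actions, prune=lambda s, p: s > 10))

Without the prune argument the same loop is exactly the blind forward search the abstract calls hopelessly inefficient; all of the leverage comes from the domain-specific test.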
Alessandra Di Pierro Herbert Wiklicky Department of Computer Science, City University
Friday, 12th June 1998.
Abstract: A program in Concurrent Constraint Programming (CCP) can, at a certain point of its execution, make a choice among several possible continuations. In the classical model this choice is completely nondeterministic. We propose a slight alteration of the basic model by allowing the choice to be driven by a given probability distribution on the possible alternative paths. Operationally, the execution of an agent is now modeled in analogy to what probability theory calls a ``random walk on a graph'', where in our case the graph is provided by the transition system defining the operational semantics of the language. We present a denotational semantics for this stochastic execution model of CCP, which assigns to each agent a linear operator (matrix) in the operator algebra on the free (real) vector space generated by the constraint system underlying the language. A fixpoint construction is also presented, which allows us to deal with recursively defined procedure calls. The resulting model is equivalent to the set of all constraints computed in both finite and infinite computations, together with their associated probabilities.
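A minimal numerical sketch of the ``random walk on a graph'' reading of probabilistic choice (my own toy example, not the authors' construction; the configuration names and probabilities are invented):

    import numpy as np

    # Hypothetical configurations of a tiny CCP agent: after the probabilistic
    # choice the store is extended with c1 (prob 0.7) or c2 (prob 0.3); the two
    # resulting configurations are terminal (self-loops).
    states = ["start", "store+c1", "store+c2"]
    P = np.array([[0.0, 0.7, 0.3],    # the probabilistic choice
                  [0.0, 1.0, 0.0],    # terminal: stay put
                  [0.0, 0.0, 1.0]])

    dist = np.array([1.0, 0.0, 0.0])  # start with certainty in "start"
    for _ in range(5):                # a few steps of the random walk
        dist = dist @ P
    print(dict(zip(states, dist)))    # {'start': 0.0, 'store+c1': 0.7, 'store+c2': 0.3}

Each execution step multiplies the current distribution over configurations by the row-stochastic matrix P, a finite-dimensional shadow of the linear operators mentioned in the abstract.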
Michael Norrish Computer Laboratory, University of Cambridge.
Friday 1st May, 1998
Abstract: Expressions in the programming language C have such an under-specified semantics that one would expect them to be non-deterministic. However, with the help of a mechanised formalisation, we have shown that the semantics' additional constraints actually result in a large class of C expressions having only one possible behaviour.
Donal Syme Computer Laboratory, University of Cambridge
Friday 20th March, 1998
Abstract: This talk will describe a machine-checked proof of the type soundness of a subset of the Java language called JavaS. A formal semantics for this subset has been developed by Drossopoulou and Eisenbach, and they have previously published an outline of a type soundness proof. The work presented here complements theirs by correcting and clarifying details, and it demonstrates the utility of formal, machine checking when exploring a large and detailed proof based on operational semantics. The work also `independently rediscovered' a significant error in the Java Language Specification. The proof has been developed using the author's DECLARE system, and serves as a case study in the application of `declarative' proof techniques to a major property of an operational system. An updated description of this work can be found in the report at http://www.cl.cam.ac.uk/users/drs1004/java.ps
Jacques Fleuriot Computer Laboratory, University of Cambridge
Friday 13th March, 1998
Abstract: Sir Isaac Newton's Philosophiae Naturalis Principia Mathematica was first published in 1687 and laid much of the foundation for profound changes in modern science. Despite the influence of the work, the elegance of the geometrical techniques used by Newton is little known, since the demonstrations of most of the theorems set out in it are usually given using calculus. This talk describes my attempts to reproduce mechanically Newton's geometrical reasoning using formal tools I have developed in Isabelle. I will start by giving an overview of Isabelle, the generic theorem prover that has provided the logical framework within which I have tried to mechanise aspects of Newton's Principia. I will then describe how Newton's geometry goes beyond the traditional boundaries of Euclidean geometry through the presence of both motion and infinitesimals. I will also describe the constructions leading to a hyperreal number system that contains infinitesimal numbers. I will show that this can be used, together with a powerful geometric theory also developed in Isabelle, to introduce the concept of an ``infinitesimal'' geometry in which quantities can be infinitely small. This approach, I believe, reveals new properties of this geometry that only hold because infinitesimal elements are allowed. My work uses these properties to prove lemmas and propositions from the Principia that deal with the motion of bodies.
Simon Parsons Queen Mary and Westfield College, London
Friday 6th March, 1998
Abstract: The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents).
Elizabeth Pollitzer Imperial College, London
Friday 27th February, 1998
Abstract: When designing computer systems that require human interaction, we need to understand how effective a proposed interface design is in terms of facilitating better performance. After a very brief introduction to the various models proposed in HCI to support interface and interaction design, and their various strengths and weaknesses, I will show how the Event Calculus can be used to represent and reason about user and interface interactive behaviours. This approach offers the important advantages of automation and of simulation of the proposed designs for testing purposes. More tentatively, I will also suggest that the Event Calculus may be usefully deployed to support the analysis of models of biological processes. As an example I will use recent research efforts to extend a model for the mechanisms of RNA chain elongation to RNA chain initiation. A plausible model for elongation has to provide information about initiation events.
Guido Governatori CIRFID, University of Bologna, Italy
Friday 20th February, 1998
Abstract: In this paper we present a theorem proving method for computing nonmonotonic consequence relations in a conditional logic setting. The method is based on the usual possible worlds semantics for conditional logics. The label formalism introduced in previous works to account for the semantics of normal modal logics is easily adapted to the semantics of conditional logics by simply indexing labels with formulas. The inference rules are provided by the propositional system KE+---a tableau-like analytic proof system devised to be used both as a refutation and a direct method of proof---enlarged with suitable elimination rules for the conditional connective. The resulting algorithmic framework is able to compute cumulative consequence relations insofar as they can be expressed as conditional implications. Surprisingly enough, there have been few attempts to provide automated deduction methods in conditional logic suitable for investigating nonmonotonic consequence relations in a computational setting. The methods described by Groeneboer and Delgrande and by Lammarre are valuable steps in this direction. The method presented in this paper can be viewed as a further, more direct step in the same direction.
Murray Shanahan Queen Mary and Westfield College, London
Friday 6th February, 1998
Abstract: This talk describes a system which executes a sense-plan-act cycle. The heart of the system is a resource-bounded abductive event calculus planner which does hierarchical planning by decomposing compound events, producing actions in progression (as opposed to regression) order. As the definitions of compound events can include standard programming constructs like recursion, sequence and choice, one way to view the resulting system is as an interpreter which dynamically assembles code fragments to take account of ongoing events.
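The flavour of decomposing compound events, defined with sequence, choice and recursion, into primitive actions in progression (left-to-right) order can be pictured with a small interpreter sketch; the event names and the representation of the library are mine, not Shanahan's planner:

    import random

    # Compound events as a tiny grammar: a definition is either a sequence
    # ("seq", [...]) or a choice ("choice", [...]); anything not in the
    # library counts as a primitive action.
    library = {
        "clear_table": ("choice", ["done", ("seq", ["move_one", "clear_table"])]),
        "move_one":    ("seq", ["pickup", "putdown"]),
    }

    def expand(event):
        if isinstance(event, str) and event not in library:
            return [event]                       # primitive action
        node = library[event] if isinstance(event, str) else event
        kind, parts = node
        if kind == "choice":
            return expand(random.choice(parts))  # commit to one alternative
        actions = []
        for step in parts:                       # "seq": expand each step in order
            actions.extend(expand(step))
        return actions

    print(expand("clear_table"))   # e.g. ['pickup', 'putdown', 'done']

A real abductive planner would interleave this expansion with the event calculus theory and with incoming observations rather than committing to random choices; the progression ordering of the emitted primitive actions is the point being illustrated.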
Marcello D'Agostino Dipartimento di Scienze Umane, Universita' di Ferrara, Italy
Friday 23rd January 1998
Abstract: WinKE is a new interactive theorem proving assistant based on the KE calculus, an improvement of the tableau method. It is part of a joint project involving Imperial College, the University of Berlin and the University of Ferrara. It has been developed to support the teaching of logic and reasoning to undergraduate students. WinKE's process of constructing a proof tree is as faithful as possible to the pen-and-paper procedure. Running under Windows 95, it is easy to learn and visually satisfying. Important features include a comfortable GUI, on- and off-line proof checking, automated deduction, hints, bookkeeping facilities, editing of problem files, and many more. The software supports and complements an introductory textbook on classical logic, but can also be used on its own. This talk will give a brief introduction to the KE calculus and a demonstration of WinKE's interface and functionality.
Pierangelo Dell'Acqua Computing Science Department, Uppsala University, Sweden
Friday 16th January 1998
Abstract: We argue, by means of a number of examples, that a formalism equipped with metalogic and reflection capabilities allows us to easily model (by means of the metalevel) and implement (by means of reflection) the desired degree of flexibility in a query-answering system interacting with a data/knowledge base. We also try to show that a main ingredient in building a really flexible system is the possibility of modeling metalogical agents, which are able to interact and to communicate in various different ways. This is because it is useful to view a query-answering system as a collection of agents including, in particular, some external (typically human) agents, in order to make the system behave more rationally and flexibly towards users.
Howard Williams Heriot-Watt University, Edinburgh
Friday 5th December 1997
Abstract: One important thread within advanced database systems research is the notion of rule-based database systems. The emergence of deductive databases in the 1980s provided one form of rule-based database. However, while this type of database system provides more advanced functionality, it suffers from other limitations of relational database systems such as the lack of data structures. The realisation that the object-oriented approach is complementary to the deductive one and that the two can be combined to produce deductive object-oriented databases with all the benefits of both represents an important breakthrough for rule-based database systems.
An alternative to the deductive rule approach is the active rule approach. Active rules are more powerful than deductive rules but lack the advantages of the sound theoretical foundation of the latter. The two ideas can be combined to produce an active DOOD provided that the integration is treated with care.
This presentation will focus on work done at Heriot-Watt in producing a DOOD (ROCK&ROLL) and an active extension to it (RAP).
Wilfried Meyer-Viol Imperial College, London
Friday 28th November 1997
Abstract: In this talk we will describe the overall structure and the dynamics of a parser which incrementally creates a representation, a term in some formal language, while traveling through a natural language string in a left-to-right, word-by-word fashion. In a recursively defined formal language, formulas can be represented in tree format; the representation used in this talk renders them as linked binary trees whose nodes are decorated by labelled lambda terms corresponding to the constituents of the sentence. Note that the trees output by the parser described here are not syntactic trees, but trees of (sub)terms of some recursively defined representation language. The interpretation process is formalized within an LDS framework in which labels guide the parser in the goal-directed process of constructing labelled formulas in a logical language. These labelled formulas, or declarative units, consist of a sequence of labels followed by a content formula. The formula of a declarative unit represents the content of the words supplied in the course of a parse; the labels annotate this content with linguistic features and control information guiding the direction of the parse process. Dynamically, the formal model is the parser viewed as a transition system. The states of the transition system correspond to state descriptions of the parser, and the transitions of the system correspond to inference steps between state descriptions. In this sense we represent parsing as deduction. The process is goal-directed in the sense that there are final states which are required to correspond to the construction of a sentence interpretation.
The novelty of the parser is that, unlike other parsing formalisms in which the system is defined relative to some independent set of syntactic axioms, the definitions comprising the parser themselves define a natural language system.
Phan Minh Dung Asian Institute of Technology, Bangkok
Friday 21st November 1997
Abstract: In recent years there has been a shift in AI from the classical paradigm of rational agents to the notion of reactive agents that have an intelligent, ongoing interaction with dynamic and uncertain environments. The emphasis on an agent's interaction with its environment is an important change from the traditional theories of representation and control.
The correctness of an agent's behavior depends on its capability to deliver timely responses to relevant changes in its environment. A common view of capability in the literature is that an agent is said to be capable of executing an action A if it is physically possible for the agent to do so, provided that nature is kind enough not to interfere. It is clear that this viewpoint cannot model the agent's capability in a possibly uncooperative or hostile environment.
In this paper we will develop a branching time logic for specifying an agent's capability in a possibly uncooperative or hostile multiagent environment. We then develop a method for verifying the correctness of reactive plans. The semantics of our logic is grounded in terms of states and actions of automata-like agents.
Hans Jürgen Ohlbach Imperial College, London
Friday 7th November 1997
Abstract: I shall present a technique for augmenting a given logical system with a Boolean component. The atomic decomposition technique proposed reduces reasoning about the Boolean component in the combined system to reasoning in the pure basic system only.
A typical instance of this scheme is a linear programming system which is to be augmented with reasoning about cardinalities of sets, or other functions mapping sets to integers. The sets and their set-theoretic relationships are axiomatized with propositional logic. Atomic decomposition then reduces reasoning about numerical attributes of these sets to arithmetic equation solving.
The same idea works with almost any logical system meeting some very elementary conditions.
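A small sketch of the flavour of atomic decomposition (my own example, not Ohlbach's system): every Boolean combination of the set symbols A and B is a union of the four atoms (Venn regions), so a statement about cardinalities becomes plain arithmetic over the atom sizes.

    from itertools import product

    symbols = ["A", "B"]
    # An atom is one Venn region, identified by which symbols it lies inside.
    atoms = list(product([False, True], repeat=len(symbols)))   # (in A?, in B?)

    def regions(expr):
        """Atoms making up a Boolean set expression, given as a predicate."""
        return {a for a in atoms if expr(dict(zip(symbols, a)))}

    def card(expr, atom_size):
        """Cardinality of a set expression = sum of the sizes of its atoms."""
        return sum(atom_size[a] for a in regions(expr))

    # Hypothetical atom sizes: |A\B| = 4, |A n B| = 2, |B\A| = 3, |outside| = 10.
    atom_size = {(True, False): 4, (True, True): 2, (False, True): 3, (False, False): 10}

    in_union = lambda v: v["A"] or v["B"]
    in_inter = lambda v: v["A"] and v["B"]
    in_A     = lambda v: v["A"]
    in_B     = lambda v: v["B"]
    # |A u B| = |A| + |B| - |A n B| holds because both sides count the same atoms.
    assert card(in_union, atom_size) == card(in_A, atom_size) + card(in_B, atom_size) - card(in_inter, atom_size)
    print(card(in_union, atom_size))   # 9

In the scheme described above the atom sizes would not be given constants but unknowns handed to an arithmetic (e.g. linear programming) solver; the decomposition step is what turns the set-theoretic statements into equations over those unknowns.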
Nicola Olivetti University of Torino, Italy
Friday 24th October 1997
Abstract: I present a uniform paradigm for automated deduction and proof theory in non-classical logics, based on a goal-directed style of deduction. The proof systems I present can be seen as a generalization of the conventional (Horn) logic programming proof procedure to several families of non-classical logics.
The proof systems I present are strongly analytical and satisfy a basic property of cut-admissibility. The study of goal-directed provability seems promising in the following respects:
The examples of proof systems I will present range from substructural, to modal and intermediate logics.
The results I present are part of my joint work with Dov Gabbay and can also be seen as a development of Gabbay's theory of Labelled Deduction Systems (LDS).
Maarten Marx Imperial College
Friday 17th October 1997
Abstract: We discuss two strategies for finding decidable versions of first order logic and show that they are really two sides of the same coin. The semantic strategy consists of allowing more first order models by restricting the set of assignments to a subset of all assignments. This move is closely related to Henkin's generalised models for second order logic and extensively studied in the seventies by Henkin, Monk, Tarski, Andreka and Nemeti.
Recently Johan van Benthem proposed the so-called (Loosely) Guarded Fragment of first order logic, a fragment in which every quantification is "guarded" or relativised by a (certain conjunction of) atomic formula(s). This fragment is decidable and rich enough to interpret several modal logics, among others arrow logic and temporal logics of Since/Until.
We show the close connection between these two approaches, and obtain upper complexity bounds for the guarded fragment. An algorithm for deciding satisfiability of loosely guarded formulas is currently being implemented here at Imperial by Stefan Schlobach.
The title of this talk is due to Szabolcs Mikulas.
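For reference, the usual shape of guarded quantification (the standard definition, stated in my notation rather than the speaker's): quantifiers occur only relativised by an atomic guard that contains all the free variables of the quantified subformula,

    \exists \bar{y}\,\bigl(\alpha(\bar{x},\bar{y}) \wedge \psi(\bar{x},\bar{y})\bigr)
    \qquad\text{and}\qquad
    \forall \bar{y}\,\bigl(\alpha(\bar{x},\bar{y}) \rightarrow \psi(\bar{x},\bar{y})\bigr)

where \alpha is an atomic formula containing every variable that occurs free in \psi; the loosely guarded fragment relaxes \alpha to a suitable conjunction of atoms.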
Adam Bockrath Sussex University
Friday 10th October 1997
Abstract: If Brooks' layered control architecture is correctly misunderstood, it suggests an analogous framework for logic programs. Just as Brooksian control systems provide a practical way forward in the absence of a modular theory of dynamics, such a framework may help us construct more efficient logic programs. More importantly, it may point to some of the questions we must answer if we are to better understand modularity in logic programming.
This talk will set out a clear misunderstanding of layered control architectures and relate this to the problem of modular logic programming.
Koji Tanaka University of Queensland, Australia
Friday 26th September 1997
Abstract: In this talk, I apply a paraconsistent logic to Grove's sphere semantics, which is a model for the AGM theory of belief revision. Firstly, I construct a sphere semantics based on a paraconsistent logic. Secondly, I examine the soundness of the paraconsistent sphere semantics with respect to the AGM postulates. Thirdly, I discuss some differences between the classical (AGM) approach and the paraconsistent approach. I then argue that a theory of belief revision based on a paraconsistent logic is simple, elegant, and of universal use.
Odinaldo Rodrigues Imperial College, London
Friday 19th September 1997
Abstract: The talk focuses on the problem of prioritised belief revision. We present a method for dealing with belief bases equipped with a partial ordering representing priorities associated with sentences in the base. This method separates the problem of checking satisfiability from the problem of reasoning about the priorities. For the first, the proposed solution is called the matrix representation of a set of sentences. From this representation, it is possible to extract subsets of the base that are consistent. The second problem is solved by defining an auxiliary ordering which determines which of these consistent subsets of the base are most plausible with respect to the initial partial ordering provided.
Chitta Baral University of Texas
Tuesday 26th August 1997
Abstract: We give a formal characterization of reactive control using action theories. In the process we formalize the notion of a reactive control program being correct with respect to a given goal, a set of initial states, and action theories about the agent/robot and about the exogenous actions. We give sufficiency conditions that guarantee correctness and use them to give an automatic method for constructing provably correct control modules. We then extend our approach to action theories and control modules that have specialized sensing actions and that encode conditional plans. Finally, we relate our theory to the implementation of our mobile robot Diablo, which finished third and scored 285 out of 295 points in the AAAI 96 mobile robot contest.
This talk is based on two papers, "Relating Theories of Action and Reactive Control", Co-author: T. Son (submitted for publication), and "Representing actions: Laws, Observations and Hypothesis", Co-authors: Michael Gelfond and Ale Provetti (in the Journal of Logic Programming, May 1997). Both papers are available on the WWW: http://cs.utep.edu/chitta/chitta.html
Jacinto Da'vila Imperial College, London
Friday 27th June 1997
Abstract: In 1985, Brooks stormed the field of Artificial Intelligence with his rejection of representation (and presumably logic) for the construction of robots. Kowalski has proposed a logical description of an agent that could explain how an agent reasons with some (logical) representation and, at the same time, displays the reactive behaviour that Brooks emphasizes.
To test Kowalski's proposal, we have been using it to guide the implementation of different types of artificial agents. In this talk, we discuss how to use Kowalski's cycle predicate as the "locus of control" of a small "Khepera" robot acting as a pathfinder. The reasoning mechanism of the robot (a sort of embedded planner) and the special kind of logic programs that control the robot are also discussed.
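A bare-bones rendering of an observe-think-act cycle of this general kind (a sketch of the idea only, not Kowalski's cycle predicate and not the Khepera controller; all class and method names are invented):

    import time

    class Environment:
        """Toy one-dimensional corridor: the robot must reach position 3."""
        def __init__(self):
            self.position = 0
        def sense(self):
            return {"position": self.position}
        def execute(self, action):
            self.position += 1 if action == "forward" else 0

    class Agent:
        """Trivially 'reasons' that it should move forward until the goal holds."""
        def __init__(self, goal=3):
            self.goal, self.belief = goal, None
        def assimilate(self, obs):
            self.belief = obs["position"]
        def think_one_step(self):
            return False                  # nothing more to infer in this toy agent
        def select_action(self):
            return "forward" if self.belief < self.goal else None

    def cycle(agent, env, steps=10, budget=0.01):
        """Interleave observation, resource-bounded reasoning, and action."""
        for _ in range(steps):
            agent.assimilate(env.sense())
            deadline = time.monotonic() + budget
            while time.monotonic() < deadline and agent.think_one_step():
                pass                      # a bounded amount of reasoning per cycle
            action = agent.select_action()
            if action:
                env.execute(action)

    env = Environment()
    cycle(Agent(), env)
    print(env.position)   # 3

The per-cycle bound on reasoning is what keeps such an agent reactive while it deliberates.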
J P Lodge Imperial College, London
Friday 6th June 1997
Abstract: Many sample problems in Deontic and Preference Logics involve only two variables, usually independent, and therefore four possible worlds (assuming full knowledge). One important aspect of 'solving' these problems is to establish the most appropriate ordering between these four worlds, and from this ordering, to determine the consequences.
However, even for these simple problems, there are often many orderings which might apply, and these often give rise to different consequences. In addition, several logic systems (Standard Deontic Logic, Boutilier, and Pearl) disagree on which ordering is the 'most appropriate'.
I present a number of these simple problems to illustrate the differences between the systems, and give intuitive solutions where possible.
This is very much work-in-progress, and I do not reach any firm conclusions, other than that the established systems appear to have some weaknesses.
Friday 23rd May 1997
Abstract: In 1969 Cordell Green presented his seminal description of planning as theorem proving with the situation calculus. In this talk I will endeavour to reinstate the ideal of planning via theorem proving in a more modern guise. In particular, I will show that if we adopt the event calculus as our logical formalism and employ abductive logic programming as our theorem proving technique, then the computation performed mirrors closely that of a hand-coded partial order planning algorithm. Furthermore, if we extend the event calculus in a natural way to accommodate compound actions, then using exactly the same abductive theorem prover we obtain a hierarchical planner.
The planner I will describe has been developed as part of a Cognitive Robotics project, the aim of which is to build robots whose architectures are based on the manipulation of sentences of logic. In addition to describing the planner itself, I will outline the other components of the system which will hopefully be deployed on our small fleet of miniature robots.
Tony Kakas University of Cyprus
and
Rob Miller Queen Mary and Westfield College, London
Friday 18th April 1997
Abstract: We describe the Language E, a model-based, 'action description language' version of the Event Calculus. We show how Language E domain descriptions can be used as specifications for Event Calculus style logic programs. These logic programs avoid problems with previous versions of the Event Calculus caused by the implicit completion of information about what holds at some initial timepoint. We show that Gelfond and Lifschitz's Language A is a special case of the Language E, and hence that these programs can also be used to compute consequences of Situation Calculus domain descriptions. We describe how both the Language E and its Event Calculus implementation can be extended to deal with ramifications, providing intuitive and computationally realisable solutions to many well known problems in the literature, including the 'electric circuit' example recently described by Michael Thielscher. This work has a number of potential applications, such as planning and database updates.
The paper associated with this talk is available over the World Wide Web:
Christoph "George" Jung DFKI Saarbruecken GmbH, Germany
Thursday 3rd April 1997
Abstract: Increasing efforts in (Distributed) Artificial Intelligence to reconsider unifying, intelligent architectures underline the importance of integrating the deliberative and reactive facilities of agents. We present a hybrid agent architecture, InteRRaP, that applies the meta-functionality of planning to rather procedural patterns of behaviour, and we discuss the application of fine-grained concurrency. The metaphor of cognitive processes communicating via signals provides a reasonable basis for resource control constructs (virtual resources) inspired by the bounded rationality concept in AI and cognitive science.
Tom Maibaum Imperial College
Friday 14th March 1997
Abstract: Following Goguen's dictum that putting widgets together to form superwidgets should be modelled using colimits of 'configuration diagrams' in appropriate categories, we show how this principle can be applied to temporal/modal (action) logics. The setting elucidates issues related to the frame problem, open versus closed models of components, emergent properties, and required nondeterminism. We will discuss some of these issues.
Michael Thielscher Technische Hochschule Darmstadt, Germany
Wednesday 12th March 1997
Abstract: In formal theories for reasoning about actions, the qualification problem denotes the problem of accounting for the many conditions which, albeit unlikely to occur, may prevent the successful execution of an action. To overcome a crucial difficulty with globally minimizing these so-called abnormal action disqualifications, we propose to incorporate causality by treating the proposition that an action is qualified as a fluent which is initially assumed away by default but is otherwise potentially indirectly affected by the execution of actions. Our formal account of the qualification problem includes the proliferation of explanations for surprising disqualifications and also accommodates so-called miraculous disqualifications. In addition I will sketch a provably correct version of the fluent calculus which involves default rules to address abnormal disqualifications of actions.
The papers associated with this talk are available over the World Wide Web: http://aida.intellektik.informatik.th-darmstadt.de/~mit.
Friday 7th March 1997
Abstract: Action theories serve as a formalism to represent and reason about domains in which the execution of actions plays a central role. This is the case, for example, if one wants to model an agent who interacts with its environment, i.e., the part of the world it is able to affect. Most importantly, an autonomous agent needs precise knowledge of the effects of its actions in order to act in a purpose-oriented way and so achieve pre-determined goals. The latter requires drawing the right conclusions from this knowledge in particular situations, in which the agent has acquired partial knowledge about the current state of the environment and has a certain goal in mind. The most distinguishing aspect of action theories is that the specification of actions and their effects shall be as intuitive and natural as possible. This raises specific challenges, of which the most famous are, in historical order, the Frame Problem, the Qualification Problem, and the Ramification Problem. In my talk I will introduce these three problems and present and discuss various approaches, up to state-of-the-art solutions. In particular, I will show how recent insight into the central role of causality has led to considerable progress in the whole field.
Subrata Das Imperial College
Friday 28th February 1997
Abstract: A generic architecture for autonomous agents is presented. In common with other current proposals the agent is capable of reacting to and reasoning about events which occur in its environment, executing actions and plans in order to achieve goals in the environment. In addition the agent can make decisions under uncertainty, including decisions about competing beliefs and alternative actions. The framework is grounded in a non-classical decision model; this supports many more capabilities than classical decision theory but under restricted conditions is compatible with it. The model is embodied in a well-defined knowledge representation language, R2L, which explicitly supports the central concepts of decisions and plans, and associated constructs of goals, arguments and commitments. The language provides a sound basis for building knowledge based agents for practical applications including safety-critical ones. This is illustrated with examples of medical applications.
J P Lodge Imperial College
Friday 21st February 1997
Abstract: This paper builds on a simple but powerful variant of logic programming, first presented by Kakas, Mancarella and Dung, called 'Logic Programming without Negation as Failure' (LPwNF), which uses explicit negation and priorities between rules but does not include negation as failure in its object-level language. I present a transformation from normal logic programming and extended logic programming into LPwNF, show how these transformed programs can be placed into an assumption-based framework developed by Bondarenko, Dung, Kowalski and Toni, and show that, in this framework, a large number of argumentation semantics are preserved by the transformation. This equivalence of semantics shows that LPwNF is at least as powerful as Normal Logic Programming and Extended Logic Programming. The assumption-based framework is then extended to cover a wider range of LPwNF programs, involving complex priority structures. Finally, some preliminary findings concerning defeasible priorities are discussed.
Peter McBrien King's College, London
Friday 14th February 1997
Abstract: Although temporal databases have received considerable attention as a topic for research, little work in the area has paid attention to the concurrency control mechanisms that might be employed in temporal databases. This talk follows on from last week's talk given by Marcelo Finger, and describes how the notion of the current time - also called NOW - in valid-time databases can cause standard serialisation theory to give what are at least unintuitive results, if not actually incorrect results. The talk will then present two modifications to standard serialisation theory which correct the behaviour to give what we term 'perceivably instantaneous transactions'; transactions where serialising T_1 and T_2 as [T_1,T_2] always implies that the current time seen by T_1 is less than or equal to the current time seen by T_2.
The paper associated with this talk is available over the World Wide Web: http://www.dcs.kcl.ac.uk/staff/pjm/now-transactions.ps.gz.
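A toy check of the 'perceivably instantaneous' condition (my own illustration of the property quoted above; the formal definitions are in the paper): given the value of NOW observed by each transaction, a proposed serial order is acceptable only if it never places a transaction that saw a later NOW before one that saw an earlier NOW.

    def perceivably_instantaneous(serial_order, now_seen):
        """serial_order: transaction ids in the proposed equivalent serial order;
        now_seen: the value of NOW observed by each transaction.  Accept only a
        non-decreasing sequence of NOW values along the serial order."""
        times = [now_seen[t] for t in serial_order]
        return all(a <= b for a, b in zip(times, times[1:]))

    now_seen = {"T1": 10, "T2": 12}
    print(perceivably_instantaneous(["T1", "T2"], now_seen))   # True
    print(perceivably_instantaneous(["T2", "T1"], now_seen))   # False: T2 saw a later NOW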
Marcelo Finger University of São Paulo, Brasil
Friday 7th February 1997
Abstract: This talk examines the several choices one faces in defining the value of the 'current-time' in valid-time transactions and why none of them can technically be called "the best". We show that all of them bring with them some sort of semantic or technical inconvenience. Of course, we have our own prejudices, so we try to argue which of the possible semantics should be philosophically preferred, and how to cope with its shortcomings. The notion of the 'current-time' is frequently found in work on temporal databases, and is usually accessed via a special interpreted variable called "now". Whilst this variable has intuitive and simple semantics with respect to the processing of queries in temporal databases, the semantics of the variable inside transactions has not been studied. With the growing maturity of temporal databases, and the need to construct production-quality temporal DBMSs to demonstrate the usefulness of the technology, it is necessary that a complete study be made of transaction processing in temporal databases. This work aims to meet this objective by (1) detailing the alternatives that exist in the evaluation of "now", (2) studying the problems of using "now" in current transaction processing systems, and (3) providing a formal framework for the processing of temporal transactions, with rules in that framework for avoiding problems in evaluating "now".
The paper associated with this talk is available over the World Wide Web: ftp://ftp.ime.usp.br/pub/mfinger/SBBD96.ps.gz.
Keywords: Temporal databases, valid-time, transaction-time, transaction processing.
Carlos Duarte Imperial College
Friday 31st January 1997
Abstract: Actors have been regarded as a promising model for open distributed systems. Although the operational semantics of actor programs has already been studied in some recent work, means of reasoning about the behaviour of communities of interconnected actors at a high level of abstraction are still lacking. In this paper we argue that a proof-theoretic semantics would be better suited to this purpose. We present an axiomatisation of the kernel primitives of Actors in the style of abstract data types, showing how to reason from specifications of actor communities and how to compose them within the framework of temporal logics of objects.
The paper associated with this talk is available over the World Wide Web: http://theory.doc.ic.ac.uk/~cd7/reports.html.
Jacinto Da'vila Imperial College
Friday 24th January 1997
Abstract: In this talk we discuss the limitations of Kowalski's cycle predicate, the basic component of the formalization of an agent architecture that interleaves observation, reasoning and execution of actions. We present an improved version of "cycle" and its subsidiary predicates as the specification of GLORIA, a General-purpose, Logic-based, Open, Reactive and Intelligent Agent. We then show alternative ways of programming GLORIA to behave as an ELEVATOR controller.
Rob Miller Queen Mary and Westfield College
Friday 17th January 1997
Abstract: The main purpose of this talk is to show how, at least from a semantic or conceptual point of view, and with a few simple modifications, recent versions of the Event Calculus can support both a deductive and an abductive view of planning in linear time. Specifically, two of Reiter's recent criticisms of planning with the Event Calculus will be discussed; (i) the criticism that it is necessary to use 'meta-level' concepts such as abduction to formulate the notion of a plan in the context of the Event Calculus, and (ii) the criticism that planning with the Event Calculus will involve some kind of (computationally intractable) dynamic consistency checking.
The paper associated with this talk is available over the World Wide Web: http://www.dcs.qmw.ac.uk/~rsm/abstract14.html.
Ber Permpoontanalarp Imperial College
Monday 16th December 1996
Abstract: The situation calculus is a non-modal formalism for reasoning about properties which can be initiated by actions and persist over time. Dynamic logic, on the other hand, is a modal logic which was initially proposed for reasoning about programs but has later been proposed for reasoning about actions as well. Superficially, the two formalisms look completely different. A closer examination reveals many similarities, yet some essential differences. In this talk, we shall establish relationships between Reiter's situation calculus and deterministic dynamic logic in the context of reasoning about action. By establishing the relationships between the two formalisms, several limitations and problems of each framework can be identified. We shall then propose a unifying framework for both formalisms, which retains their advantages but removes their limitations and problems.
Alexander Bolotov Manchester Metropolitan University
Monday 9th December 1996
Abstract: We develop a clausal resolution method for the branching-time temporal logic CTL. The three key elements of this method are: a specific normal form for CTL formulae, called separated normal form; the concept of step resolution, which is essentially a form of classical resolution; and the concept of temporal resolution. Transformations of arbitrary CTL formulae into the normal form are based on the known fixed-point characterisations of temporal operators. Once in the normal form, step resolution operations can be applied. In addition, the special structure of the normal form allows us to represent temporal loops simply and thus provides the opportunity of applying temporal resolution. The correctness of the resolution method for CTL is shown. In addition, this temporal resolution approach is adapted to the case of extended CTL (ECTL), and the possibility of developing it also for CTL* - the most powerful logic of this class - is examined.
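For orientation, the fixed-point characterisations in question are the standard CTL unfoldings (textbook identities, not the paper's normal-form rules themselves), for example:

    \mathbf{A}(p\,\mathcal{U}\,q) \equiv q \lor (p \land \mathbf{AX}\,\mathbf{A}(p\,\mathcal{U}\,q))
    \mathbf{E}(p\,\mathcal{U}\,q) \equiv q \lor (p \land \mathbf{EX}\,\mathbf{E}(p\,\mathcal{U}\,q))
    \mathbf{AG}\,p \equiv p \land \mathbf{AX}\,\mathbf{AG}\,p \qquad \mathbf{EF}\,p \equiv p \lor \mathbf{EX}\,\mathbf{EF}\,p

Repeated unfolding with such identities is the standard route from the other temporal operators to clauses built around the next-step operators, the shape that step resolution works on.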
Shekhar Pradhan Dept. of Philosophy, Central Missouri State Univ., USA and Dept. of Computer Science, Univ. of Maryland, College Park, USA
Monday 2nd December 1996
Abstract: Distributed information systems and multi-agent systems answer queries and solve problems by pooling together information from different sources. Since this information is drawn from diverse sources there is the likelihood that such information may contain logical as well as non-logical inconsistencies. We develop a semantic framework for reasoning with information, encoded in the form of logic programs, containing logical, semantic and evidential conflicts. We use this semantic framework to provide a new semantics for extended logic programs and compare our semantics with the answer set semantics of Gelfond and Lifschitz.
Monday 25th November 1996
Abstract: We explain two very similar methods for proving (un)decidability in modal logic. The not so well-known mosaic method is a handy instrument for proving decidability. The better known tiling problem can be used for undecidability proofs. We will work in the setting of two-dimensional temporal logic. We show that adding the universal modality to a logic can boost the complexity of the decision problem from NP-complete to undecidable. The logic we give is an example of an undecidable modal logic which is complete with respect to a universal Horn definable class of frames.
Dov Gabbay Imperial College
Monday 18th November 1996
Abstract: We introduce a generalised quantifier of the form (Qx)A(x,u1,...,uN), where the range of x depends on (u1,...,uN) taken as a sequence. We study the properties of this quantifier and compare it with the more traditional generalised quantifier considered in the current literature. We also present a reduction of generalised quantifiers into certain many dimensional propositional temporal logics, based on the modal logic K. In fact we can further translate these quantifiers into quantified modal predicate logic QK (based on modal K).
Graham White Queen Mary and Westfield College
Monday 4th November 1996
Abstract: Linear logic is a logic of consumable resources: as such it has already shown its usefulness in dealing with computational problems such as updating. The real world analogue of such computational problems is known as the frame problem: I show that it is possible to use the linear logic programming language Lygon to give solutions for the frame problem which are theoretically elegant and computationally efficient. The paper associated with this talk is available over the WWW in compressed postscript form: ftp://www.dcs.qmw.ac.uk/pub/applied_logic/graham/metadesign.ps.gz
Murray Shanahan Queen Mary and Westfield College
Monday 28th October 1996
Abstract: This talk and the associated paper describe a logic-based formalism which combines techniques for reasoning about actions with standard mathematical techniques for modelling dynamic systems using the differential calculus. The formalism inherits a robust solution to the frame problem which can handle concurrency, non-determinism, domain constraints and narrative. It also incorporates a mechanism for reasoning about the boundary conditions associated with systems of differential equations defined over various intervals. This mechanism overcomes a number of drawbacks of previous systems.
Brian Knight Greenwich University
Monday 21st October 1996
Abstract: This talk introduces a discrete formalism for temporal reasoning about actions and change which enriches the ontology of the situation calculus by providing an explicit representation of time and action/event occurrences. The formalism allows the expression of truth values for given fluents over various times, including non-decomposable points/moments and decomposable intervals. Two major problems which beset most existing interval-based theories of action and change, i.e., the so-called dividing instant problem and the intermingling problem, are absent from this new formalism. The dividing instant problem is overcome by excluding the concept of the ending points of intervals, and the intermingling problem is bypassed by characterising the fundamental time structure as a well-ordered discrete set of non-decomposable times (points and moments), from which decomposable intervals are constructed. A comprehensive characterisation of the relationship between the negation of fluents and the negation of the involved sentences is formally provided. The formalism enjoys a satisfactory expression of the delay effects of actions, which remains a problematic issue in existing extensions to the situation calculus.
Senanu Etrey Imperial College
Monday 24th June 1996
Abstract: Part 1. We can think of the World Wide Web as a means of storing referential units and the resources that they refer to. (This means it could be termed a database.) These resources come in the form of many different media types. The purpose of this storage medium is to facilitate our ability to gain access to up-to-date information via different instances of referring units. These are termed URIs, universal resource identifiers. However, these referring units are constantly changing, and so are their referents (as well as their internal structure). We want these referring units and their referents to change automatically over time, as opposed to their users having to alter them manually.
Part 2. Looked at this way, what we want to have is not just a static store of referring units and their references. We want an active temporal database. We can put this another way. We now want to store the referring units (URIs) and their referents alongside temporal labels which mark out the temporal interval during which a referring unit and/or its referent can be used, and provide the associated information provision service.
Marek Sergot (joint work with David Evans) Imperial College
Monday 17th June 1996
This talk will follow on from David Evans' presentation earlier in the year, picking up on some of the points that arose in the subsequent discussion. The talk will be self-contained, however: attendance at the earlier presentation is not a prerequisite.
Abstract: The concept of persistence (of the truth of propositions over time) is central to much current work in temporal reasoning. A variety of different formulations have been described in the literature, employing frame axioms, default reasoning techniques, and special purpose semantics in various combinations.
Our aim in this talk is to investigate the properties of some simple notions of persistence independently of the formalisms in which they are expressed. We present an abstract semantic framework in which different assumptions about persistence can be located and compared. Persistence is viewed in terms of extending a partial temporal model of a standard kind: given the truth values of some propositions at some points in a temporal structure, persistence is represented as a rule for extending this valuation to provide the truth values of the propositions at other points in the temporal structure; the extended (partial) valuation, not necessarily unique, is the closure of the initial valuation under the given rule of persistence. The link to the formalisms typically employed in temporal reasoning is via the stable model semantics of logic programs; the paper can also be regarded as an investigation of properties of certain classes of logic programs, and, by association, of certain sets of axioms in other formalisms.
We focus on two main classes of persistence rules, the `stratified' and the `stable'. We investigate some of their properties - consistency, completeness, uniqueness - and some relationships between them. The final part of the talk is devoted to a discussion of change. Combinations of assertional and `persistence disturbing' components can be used to generate a range of primitives for representing different kinds of change.
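As a very small illustration of persistence as closure of a partial valuation (my own toy rule, not one of the talk's stratified or stable rules): truth values given at some time points are propagated forward to later points at which the proposition's value is still undefined.

    def close_under_forward_persistence(valuation, horizon):
        """valuation maps (proposition, time) to True/False at the points where a
        value is given.  Extend it by the naive rule: a value persists to the next
        time point unless that point already has a given value.  Here the closure
        is unique; for other persistence rules it need not be."""
        extended = dict(valuation)
        changed = True
        while changed:                                  # iterate to a fixpoint
            changed = False
            for (prop, t), value in list(extended.items()):
                if t + 1 <= horizon and (prop, t + 1) not in extended:
                    extended[(prop, t + 1)] = value
                    changed = True
        return extended

    initial = {("alive", 0): True, ("alive", 3): False}   # a partial temporal model
    print(close_under_forward_persistence(initial, 5))
    # alive is True at 0, 1, 2 and False at 3, 4, 5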
Sanjay Modgil Imperial College
Monday 3rd June 1996
Abstract: I will present a natural deduction proof theory for a non-monotonic logic in the framework of Gabbay's labelled deductive systems. The logic allows meaningful reasoning in the presence of inconsistency. A modal semantics will be described, and the process of selecting preferred consequences will be defined in terms of a choice among possible worlds. Belief Change operators will be defined, which I argue more adequately satisfy persistence criteria, in comparison with existing formalisms.
Mark Reynolds King's College, London
Monday 20th May 1996
Abstract: The talk will describe the application of an interesting general technique to various two-dimensional temporal logics and some similar modal logics. The technique, the ``mosaic method'' was initially used by Istvan Németi in the area of algebraic logic. It can be used to prove axiomatic completeness and decidability results.
The logics all involve the interplay of modality - temporal or otherwise - along two dimensions. Such logics may arise by directly composing two temporal dimensions, by considering temporal logic based on intervals of time, or by combining a temporal logic with a logic of possibility. Similar logics arise in many contexts, for example from trying to find a modal approximation to the first-order logic of two variable symbols.
The talk will survey these logics and their uses before describing an application of the mosaic method to decidability.
The paper associated with this talk is available on the WWW:
Mark Reynolds A Decidable Temporal Logic of Parallelism. Available as compressed postscript.
Odinaldo Rodrigues Imperial College
Monday 11th March 1996
Abstract: We point out a simple but hitherto ignored link between the theory of updates and counterfactuals and classical modal logic: update is a classical existential modality, counterfactual is a classical universal modality, and the link between the two (called the Ramsey rule) is simply the link between two inverse accessibility relations of a classical Kripke model.
The paper associated with this talk is available over the World Wide Web: http://theory.doc.ic.ac.uk/~otr/papers/inv-tark96.html.
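In symbols, the link can be rendered as follows (standard notation, mine rather than the paper's): write w R_B v when v is a possible result of updating w by B; then update is the existential modality along the inverse relation, the counterfactual is the universal modality along the relation itself, and the Ramsey rule is the resulting adjunction:

    \|A \circ B\| = \{\, v \mid \exists w\,( w \in \|A\| \text{ and } w\,R_B\,v )\,\}
    \|B > C\| = \{\, w \mid \forall v\,( w\,R_B\,v \Rightarrow v \in \|C\| )\,\}
    A \circ B \models C \quad\Longleftrightarrow\quad A \models B > C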
Ian Hodkinson Imperial College
Monday 19th February 1996
Abstract: We consider the complexity of decision procedures for linear time temporal logic. The main known result is that of Sistla & Clarke (early 80s) - that the problem for Until and Since over the natural numbers is PSPACE-complete. We attempt to determine the complexity for Until and Since over linear time. This is work in progress.
Jacinto Davila Imperial College
Monday 12th February 1996
Abstract: To program an "intelligent" (reactive, rational) agent, one would like the agent to have the same kind of "reasoning structures" (language, ideas, models of the world) that one has access to when asking a person to do something. This talk presents a solution that fulfils these requirements. A well-known programming language is selected as the kernel of a new language, which is then provided with a more expressive syntax and a semantics given as a logic program. The new programming language, called REACTIVE PASCAL, is part of a "specification platform" that can be based on either the Situation Calculus or the Event Calculus. Some mechanisms for common-sense reasoning are directly available to the programmer/designer, who can then complete a "background theory" describing the world in which the agent will operate. The combination of REACTIVE PASCAL programs and a background theory enables the agent to do temporal reasoning, such as that required to solve the Frame Problem, and to do general planning.
Rob Miller Imperial College
Monday 5th February 1996
Abstract: In this talk I will start by giving a brief overview of 'action description languages'. The first of these, the 'Language A', was proposed by Gelfond and Lifschitz, partly as a mechanism for comparing and evaluating different formalisms for Reasoning about Actions. Since then, extensions or variants of the Language A have been developed to deal with ramifications, non-determinism, narrative information, etc. All are similar in that they employ a Situation-Calculus-like ontology and semantics.
In the second part of the talk, I will argue that Gelfond and Lifschitz's methodology may equally be applied within an Event-Calculus-like, or 'narrative', ontology. I will describe a 'Language E' for specifying the effects of events occurring within a given structure of time. I will explore how E, with its very restricted syntax and simple set-theoretic semantics, might be used to partially or naively characterise notions such as that of an observation, an explanation, and a plan.
The paper associated with this talk is available over the World Wide Web: http://www.dcs.qmw.ac.uk/~rsm/abstract9.html.
The transparencies for the talk are also available: as a compressed postscript file and as a dvi file.
Monday 29th January 1996
Abstract: Any model of the world a robot constructs on the basis of its sensor data is necessarily both incomplete, due to the robot's limited window on the world, and uncertain, due to sensor and motor noise. This talk proposes a logic-based framework in which such models are constructed through an abductive process whereby sensor data is explained by hypothesising the existence, locations, and shapes of objects. Symbols appearing in the resulting explanations acquire meaning through the theory, and yet are grounded by the robot's interaction with the world. The proposed framework draws on existing logic-based formalisms for representing action, continuous change, space, and shape, but a novel solution to the frame problem is employed. Noise is treated as a kind of non-determinism, and is dealt with by a consistency-based form of abduction.
The paper associated with this talk is available over the World Wide Web: http://www.dcs.qmw.ac.uk/~mps/robotics_long.ps.Z. A short version is also available: http://www.dcs.qmw.ac.uk/~mps/robotics_short.ps.Z.
Kostis Dryllerakis Imperial College
Tuesday 19th December 1995
Abstract: The talk discusses the application of Euclid to scientific modelling problems. In particular, we focus on the modelling of systems made up of interacting components, where existing mathematical models are available for the individual components and where a model of the system as a whole can be obtained by specifying how these components interact. The examples chosen for illustration are electrical circuits, where time-dependent and continuous change properties play a central role. Euclid is a programming language belonging to the logic programming paradigm. Its syntax, query mechanism and operational semantics are very similar to those of the CLP(X) family of languages. Euclid improves on existing formalisms by integrating knowledge from a number of domains frequently encountered in scientific modelling problems, including real numbers, intervals, vectors, and symbolic functions. The uses of Euclid are illustrated by showing how it may be applied to the representation of electrical circuits.
An extended abstract for this talk is available over the World Wide Web: http://www-lp.doc.ic.ac.uk/_lp/Dryllerakis/euclid.ps.gz.
Tuesday 5th December 1995
Abstract: In this talk I will show how the Situation Calculus can be extended to deal both with 'narratives' and with domains containing real-valued parameters, whose actual values may vary continuously between the occurrences of actions. In particular, I will show how to represent a domain where action occurrences may be `triggered' at instants in time when certain parameters reach particular values. Its formalisation requires the integration of several types of default reasoning. Hence I will describe how Baker's circumscriptive solution to the frame problem can be extended to reflect the assumptions that by default a given action does not occur at a given time point, that by default a given set of parameter values does not trigger a given action, and that by default a given action occurrence does not result in a discontinuity for a given parameter. Regarding the minimisation of discontinuities, the example illustrates how circumstances can arise where, at a particular time point, discontinuities in some parameters can be `traded' for discontinuities in others. I will argue that, in general, in such cases extra domain-specific information is necessary in order to eliminate anomalous models of the domain.
The paper associated with this talk is available over the World Wide Web: http://www.dcs.qmw.ac.uk/~rsm/abstract10.html.
The transparencies for the talk are also available: as a compressed postscript file and as a dvi file.