# 2006 - Technical Reports

## VARIABLES AS RESOURCE IN HOARE LOGICS

Matthew Parkinson, Richard Bornat, Cristiano Calcagno, 9pp

Hoare logic is bedevilled by complex and unmemorable side conditions on the use of variables. We define a logic free of side conditions, and show that it admits translations of proofs in Hoare logic, thereby showing that nothing is lost. Our work draws on ideas from separation logic: program variables are treated as resource and separated with *, rather than as logical variables in disguise. For clarity we exclude a treatment of the heap.
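
The kind of side condition the abstract refers to can be seen in the classical assignment axiom and the rule of constancy of standard Hoare logic; the following are the textbook rules, not the report's own proof system:

```latex
\[
\frac{}{\{P[E/x]\}\; x := E\; \{P\}}
\qquad\qquad
\frac{\{P\}\; C\; \{Q\}}{\{P \land R\}\; C\; \{Q \land R\}}
\]
% The rule of constancy carries the side condition that C modifies
% no variable free in R; treating variables as resource separated
% with * removes the need for such conditions.
```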

## DERIVING EVENT-BASED TRANSITION SYSTEMS FROM GOAL-ORIENTED REQUIREMENTS MODELS

Emmanuel Letier, Jeff Kramer, Jeff Magee, Sebastian Uchitel, 10pp

Goal-oriented methods are increasingly popular for elaborating software requirements. They offer systematic support for incrementally building intentional, structural, and operational models of the software and its environment. Event-based transition systems, on the other hand, are convenient formalisms for modelling and reasoning about software behaviours at the architectural level. The paper combines these two approaches by presenting a technique for translating formal specifications of software operations, built according to the KAOS goal-oriented method, into event-based transition systems analysable by the LTSA toolset. The translation involves moving from a declarative, state-based, timed, synchronous formalism typical of requirements modelling languages to an operational, event-based, untimed, asynchronous one typical of architecture description languages. The derived model is used for the formal analysis and animation of KAOS operation models in LTSA. The translation process provides insights into the two complementary formalisms and raises questions about the use of synchronous temporal logic for requirements specification.

## DESIGNING EFFECTIVE POLICIES FOR MINIMAL AGENTS

Krysia Broda, Christopher J. Hogger, 39pp

A policy for a minimal reactive agent is a set of condition-action rules used to determine its response to perceived environmental stimuli. When the policy pre-disposes the agent to achieving a stipulated goal we call it a teleo-reactive policy. This paper presents a framework for constructing and evaluating teleo-reactive policies for one or more minimal agents, based upon discounted-reward evaluation of policy-restricted subgraphs of complete situation-graphs. The main feature of the method is that it exploits explicit and definite associations of the agent's perceptions with states. The combinatorial burden that would potentially ensue from such associations can be ameliorated by suitable use of abstractions. The framework allows one to plan for a number of agents by focusing upon the behaviour of a single representative of them. It allows for varied behaviour to be modelled, including communication between agents. Simulation results presented here indicate that the method affords a good degree of scalability and predictive power.
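
The condition-action structure described above can be sketched as an ordered rule list evaluated against the agent's current perception; the grid-world perceptions and action names below are hypothetical, chosen only to illustrate the idea:

```python
# A minimal sketch of a teleo-reactive policy: an ordered list of
# condition-action rules, evaluated top to bottom against the agent's
# perception; the first rule whose condition holds fires.

def make_policy(rules):
    """rules: list of (condition, action) pairs, highest priority first."""
    def policy(perception):
        for condition, action in rules:
            if condition(perception):
                return action
        return "wait"  # default when no rule applies
    return policy

# Hypothetical foraging agent: it perceives only whether food is
# visible and whether it is adjacent (partial observability).
policy = make_policy([
    (lambda p: p["food_adjacent"], "grab"),
    (lambda p: p["food_visible"],  "approach"),
])

print(policy({"food_adjacent": True,  "food_visible": True}))   # grab
print(policy({"food_adjacent": False, "food_visible": True}))   # approach
print(policy({"food_adjacent": False, "food_visible": False}))  # wait
```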

## OPTIMIZING MINIMAL AGENTS THROUGH ABSTRACTION

Krysia Broda, Christopher J. Hogger, 12pp

Abstraction is a valuable tool for dealing with scalability in large state space contexts. This paper addresses the design, using abstraction, of good policies for minimal autonomous agents applied within a situation-graph-framework. In this framework an agent's policy is some function that maps perceptual inputs to actions deterministically. A good policy disposes the agent towards achieving one or more designated goal situations, and the design process aims to identify such policies. The agents to which the framework applies are assumed to have only partial observability, and in particular may not be able to perceive fully a goal situation. A further assumption is that the environment may influence an agent's situation by unpredictable exogenous events, so that a policy cannot take advantage of a reliable history of previous actions. The Bellman discount measure provides a means of evaluating situations and hence the overall value of a policy. When abstraction is used, the accuracy of the method can be significantly improved by modifying the standard Bellman equations. This paper describes the modification and demonstrates its power through comparison with simulation results.
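
The standard discounted Bellman evaluation on which the abstract builds can be sketched as follows; the report's modification for abstraction is not reproduced here, and the situation graph below is purely illustrative:

```python
# Standard Bellman discounted-reward evaluation of a fixed policy on a
# deterministic situation graph: iterate V(s) = R(s) + gamma * V(next(s)).

GAMMA = 0.9

# Each situation maps to (reward, successor under the policy's action).
graph = {
    "A":    (0.0, "B"),
    "B":    (0.0, "goal"),
    "goal": (1.0, "goal"),  # absorbing goal situation
}

def evaluate(graph, gamma=GAMMA, iterations=500):
    """Iterate the Bellman update to (numerical) convergence."""
    values = {s: 0.0 for s in graph}
    for _ in range(iterations):
        values = {s: r + gamma * values[nxt] for s, (r, nxt) in graph.items()}
    return values

v = evaluate(graph)
# V(goal) = 1 / (1 - gamma) = 10, V(B) = gamma * 10 = 9, V(A) = 8.1
```

The geometric discount makes the iteration contract with factor `gamma`, so a few hundred sweeps suffice for double-precision convergence on small graphs.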

## SPECIFYING NORM-GOVERNED COMPUTATIONAL SOCIETIES

Alexander Artikis, Marek Sergot, Jeremy Pitt, 39pp

Electronic markets, dispute resolution and negotiation protocols are three types of application domains that can be viewed as open agent societies. Key characteristics of such societies are agent heterogeneity, conflicting individual goals and unpredictable behaviour. Members of such societies may fail to, or even choose not to, conform to the norms governing their interactions. It has been argued that systems of this type should have a formal, declarative, verifiable, and meaningful semantics. We present a theoretical and computational framework being developed for the executable specification of open agent societies. We adopt an external perspective and view societies as instances of normative systems. In this paper we demonstrate how the framework can be applied to specifying and executing a contract-net protocol. The specification is formalised in two action languages, the C+ language and the Event Calculus, and executed using respective software implementations, the Causal Calculator and the Society Visualiser. We evaluate our executable specification in the light of the presented case study, discussing the strengths and weaknesses of the employed action languages for the specification of open agent societies.
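
For readers unfamiliar with the Event Calculus, the core domain-independent axioms look as follows (the society-specific `Initiates`/`Terminates` clauses of the report are not reproduced here):

```latex
\begin{align*}
\mathit{HoldsAt}(f,t) \leftarrow{} & \mathit{InitiallyTrue}(f) \land \neg\mathit{Clipped}(0,f,t)\\
\mathit{HoldsAt}(f,t) \leftarrow{} & \mathit{Happens}(a,t_1) \land \mathit{Initiates}(a,f,t_1)
  \land t_1 < t \land \neg\mathit{Clipped}(t_1,f,t)\\
\mathit{Clipped}(t_1,f,t_2) \leftrightarrow{} & \exists a,t\,
  [\mathit{Happens}(a,t) \land t_1 \le t < t_2 \land \mathit{Terminates}(a,f,t)]
\end{align*}
% A fluent f holds at time t if it held initially, or was initiated by
% an earlier event, and has not since been terminated ("clipped").
```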

## COMPUTATIONAL STUDY OF THE GDPO DUAL PHASE-1 ALGORITHM

Istvan Maros, 17pp

Maros's GDPO algorithm for phase-1 of the dual simplex method possesses some theoretical features that have potentially huge computational advantages. This paper gives an account of a computational analysis of GDPO. Experience from a systematic study involving 48 problems shows that the predicted performance advantages can materialize to a large extent, making GDPO an indispensable tool for dual phase-1.

## TRACE MODELLING FOR ABDUCTION BASECALLING

David J Thornley, 10pp

DNA sequencing using the fluorescence-based Sanger method comprises interpretation of a sequence of signal peaks of varying size whose colour indicates the presence of a base. We have established that the ability to predict the variations effectively makes available novel error correction information which will improve sequencing efficacy. Our experiments so far have used basic models of the Sanger reaction chemistry and machine learning techniques. These have enabled us to make base calls using context information alone, specifically ignoring the peak data at the base-calling position. The 80% success rate of our blind experiments is striking, and will be improved by a more accurate model of trace behaviour. To this end, and to integrate the information into mainstream basecalling, we wish to develop an enzyme kinetics model susceptible to calibration of its component rates such that trace data can be accurately predicted. We describe DNA sequencing trace data, outline the trace prediction problem requirements on the model, and discuss model construction and calibration issues.

## ANISOTROPIC MULTIDIMENSIONAL SAVITZKY GOLAY KERNELS FOR SMOOTHING, DIFFERENTIATION AND RECONSTRUCTION

David J Thornley, 12pp

The archetypal Savitzky-Golay convolutional filter matches a polynomial to evenly spaced data and uses this to measure smoothed derivatives. We synthesize a scheme in which heterogeneous, anisotropic, linearly separable basis functions combine to provide a general smoothing, derivative measurement and reconstruction function for point clouds in multiple dimensions, using a linear operator in the form of a convolution kernel. We use a matrix pseudo-inverse for examples, but note that QR factorization is more stable when free weighting is introduced.
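
The pseudo-inverse route the abstract mentions can be sketched in one dimension: fit a quadratic to five evenly spaced samples by least squares and read off the fitted value at the centre, which yields the classical Savitzky-Golay smoothing weights. This is the textbook construction, not the report's anisotropic multidimensional scheme:

```python
# Derive a 1-D Savitzky-Golay smoothing kernel as row 0 of the
# pseudo-inverse (A^T A)^{-1} A^T of the Vandermonde matrix A.
# Pure Python via Gauss-Jordan on the normal equations; as the
# abstract notes, QR factorization is preferable for stability.

def sg_smoothing_kernel(xs, degree):
    n, m = len(xs), degree + 1
    A = [[x ** k for k in range(m)] for x in xs]
    # Augment the normal matrix A^T A with A^T for Gauss-Jordan.
    M = [[sum(A[i][j] * A[i][k] for i in range(n)) for k in range(m)]
         + [A[r][j] for r in range(n)] for j in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]          # partial pivoting
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    # Smoothed value at the (centred) origin is the constant coefficient,
    # so the kernel is row 0 of (A^T A)^{-1} A^T.
    return M[0][m:]

kernel = sg_smoothing_kernel([-2, -1, 0, 1, 2], degree=2)
# Classical quadratic 5-point weights: (-3, 12, 17, 12, -3) / 35
```

Differentiation kernels follow the same pattern: row k of the pseudo-inverse, scaled by k!, gives the k-th smoothed derivative at the centre.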

## EFFICIENT RE-INDEXING OF AUTOMATICALLY ANNOTATED IMAGE COLLECTIONS USING KEYWORD COMBINATION

Alexei Yavlinsky, Stefan Rüger, 15pp

This report presents a framework for improving the image index obtained by automated image annotation. Within this framework, the technique of keyword combination is used for fast image re-indexing based on initial automated annotations. It aims to tackle the challenges of limited vocabulary size and low annotation accuracies resulting from differences between training and test collections. It is useful for situations when these two problems are not anticipated at the time of annotation. We show that based on example images from the automatically annotated collection, it is often possible to find multiple keyword queries that can retrieve new image concepts which are not present in the training vocabulary, and improve retrieval results of those that are already present. We demonstrate that this can be done at a very small computational cost and at an acceptable performance tradeoff, compared to traditional annotation models. We present a simple, robust, and computationally efficient approach for finding an appropriate set of keywords for a given target concept. We report results on TRECVID 2005, Getty Image Archive, and Web image datasets, the last two of which were specifically constructed to support realistic retrieval scenarios.
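
The idea of retrieving an out-of-vocabulary concept by combining existing keywords can be sketched as below; the combination rule (a plain mean) and the toy index are illustrative assumptions, not the report's actual model:

```python
# Re-indexing by keyword combination: images carry automatically
# assigned keyword probabilities, and a target concept absent from the
# training vocabulary is scored by combining the probabilities of a
# small set of existing keywords chosen to describe it.

def combined_score(annotations, keywords):
    """Mean annotation probability over the combining keywords."""
    return sum(annotations.get(k, 0.0) for k in keywords) / len(keywords)

def rank_images(index, keywords):
    return sorted(index,
                  key=lambda img: combined_score(index[img], keywords),
                  reverse=True)

# Hypothetical index: "beach" is not in the training vocabulary, so we
# approximate it by combining "sand", "sea" and "sky".
index = {
    "img1": {"sand": 0.8, "sea": 0.7, "sky": 0.6},
    "img2": {"grass": 0.9, "sky": 0.5},
    "img3": {"sand": 0.5, "sea": 0.9, "sky": 0.8},
}
ranking = rank_images(index, ["sand", "sea", "sky"])
```

Because the combination runs over precomputed annotation scores rather than image features, re-indexing for a new concept costs only one pass over the index.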

## PRELIMINARY PROCEEDINGS OF THE 13TH INTERNATIONAL WORKSHOP ON EXPRESSIVENESS IN CONCURRENCY (EXPRESS'06)

Roberto Amadio, Iain Phillips, 104ppThis is the preliminary version of the proceedings of the 13th International Workshop on Expressiveness in Concurrency (Express'06), held in Bonn on 26 August 2006. It contains abstracts for invited talks by Robin Milner (joint with the Infinity and SOS workshops) and Hagen Völzer, together with seven contributed papers by the following authors: Diletta Cacciagrano, Flavio Corradini and Catuscia Palamidessi; Xu Wang and Marta Kwiatkowska; Ahmed Bouajjani, Jan Strejcek and Tayssir Touili; Vincent Danos, Jean Krivine and Pawel Sobocinski; Daniele Gorla; Lucy Saunders-Evans and Glynn Winskel; Luís Caires and Hugo Torres Vieira. The final version will appear in Electronic Notes in Theoretical Computer Science.

## PROGRAM VERIFICATION WITH INTERACTING ANALYSIS PLUGINS

Nathaniel Charlton, 37pp

In this paper we propose and argue for a modular framework for interprocedural program analysis, where multiple program analysis tools are combined in order to exploit the particular advantages of each. This allows for "plugging together" such tools as are required by each verification task and makes it easy to integrate new analyses. Our framework automates the sharing of information between plugins using a first order logic with transitive closure, in a way inspired by the open product of Cortesi et al.
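
The interaction pattern can be sketched as a fixed-point loop over plugins that exchange logical facts through a shared store; the fact syntax and the two toy plugins below are hypothetical, not the report's actual interface:

```python
# Interacting analysis plugins: each plugin reads the shared fact set
# and may contribute new facts; the framework iterates until no plugin
# can add anything (a fixed point).

def run_framework(plugins, initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for plugin in plugins:
            new = plugin(facts) - facts
            if new:
                facts |= new
                changed = True
    return facts

# Toy "type" plugin and a toy nullness plugin that consumes its output;
# the fixed-point loop lets null_plugin benefit from type_plugin even
# though null_plugin happens to run first.
def type_plugin(facts):
    return {"type(x, list)"} if "assign(x, new_list)" in facts else set()

def null_plugin(facts):
    return {"nonnull(x)"} if "type(x, list)" in facts else set()

facts = run_framework([null_plugin, type_plugin], {"assign(x, new_list)"})
```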

We describe a prototype implementation of our framework, which performs static assertion checking on a simple language for heap-manipulating programs. This implementation includes plugins for three existing approaches - predicate abstraction, 3-valued shape analysis and a decidable pointer analysis - and for a simple type system. We demonstrate through a detailed example the increase in precision that our approach can provide. Finally we discuss the design decisions we have taken, in particular the tradeoffs involved in the choice of language by which the plugins communicate, and identify some future directions for our work.

## POLYNOMIAL-TIME UNDER-APPROXIMATION OF WINNING REGIONS IN PARITY GAMES

Adam Antonik, Nathaniel Charlton, and Michael Huth, 23pp

We propose a pattern for designing algorithms that run in polynomial time by construction and under-approximate the winning regions of both players in parity games. This approximation is achieved by the interaction of finitely many aspects governed by a common ranking function, where the choice of aspects and ranking function instantiates the design pattern. Each aspect attempts to improve the under-approximation of winning regions or decrease the ranking function by simplifying the structure of the parity game. Our design pattern is incremental as aspects may operate on the residual game of yet undecided nodes. We present several aspects and one higher-order transformation of our algorithms --- based on efficient, static analyses --- and illustrate the benefit of their interaction as well as their relative precision within pattern instantiations. Instantiations of our design pattern can be applied for local model checking and as pre-processors for algorithms whose worst-case running time is exponential.

## AN EXECUTABLE SPECIFICATION OF A FORMAL ARGUMENTATION PROTOCOL

Alexander Artikis, Marek Sergot and Jeremy Pitt, 48pp

We present a specification, in the action language C+, of Brewka's reconstruction of a theory of formal disputation originally proposed by Rescher. The focus is on the procedural aspects rather than the adequacy of this particular protocol for the conduct of debate and the resolution of disputes. The specification is structured in three separate levels, covering (i) the physical capabilities of the participant agents, (ii) the rules defining the protocol itself, specifying which actions are `proper' and `timely' according to the protocol and their effects on the protocol state, and (iii) the permissions, prohibitions, and obligations of the agents, and the sanctions and enforcement strategies that deal with non-compliance. Also included is a mechanism by which an agent may object to an action by another participant, and an optional `silence implies consent' principle. Although comparatively simple, Brewka's protocol is thus representative of a wide range of other more complex argumentation and dispute resolution procedures that have been proposed. Finally, we show how the `Causal Calculator' implementation of C+ can be used to animate the specification and to investigate and verify properties of the protocol.

Keywords: argumentation, disputation, protocol, norm, multi-agent system, specification, action language

## MODELLING AND ANALYSIS OF PKI-BASED SYSTEMS USING PROCESS CALCULI

Benjamin Aziz and Geoff Hamilton, 31pp

In this technical report, we present a process algebra aimed at modelling PKI-based systems. The new language, SPIKY, extends the spi-calculus by adding primitives for the retrieval of certified/uncertified public keys as well as private keys belonging to users of the PKI-based system. SPIKY also formalises the notion of process ownership by PKI users, which is necessary in controlling the semantics of the key retrieval capabilities. We also construct a static analysis for SPIKY that captures the property of term substitutions resulting from message-passing and PKI/cryptographic operations. This analysis is shown to be safe and computable. Finally, we use the analysis to define the term secrecy and peer participation properties for a couple of examples of authentication protocols.

Keywords: Process Calculi, Static Analysis, PKI Protocols, Security

## A STATIC ANALYSIS OF THE APPLIED PI CALCULUS

Benjamin Aziz, 21pp

We present in this technical report a non-uniform static analysis for detecting the term-substitution property in systems specified in the language of the applied pi calculus. The analysis implements a denotational framework that has previously introduced analyses for the pi calculus and the spi calculus. The main novelty of this analysis is its ability to deal with systems specified in languages with non-free term algebras, like the applied pi calculus, where non-identity equations may relate different terms of the language. We demonstrate the applicability of the analysis to one famous security protocol, which uses non-identity equations, namely the Diffie-Hellman protocol.
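
The non-identity equation at the heart of the Diffie-Hellman example, (g^a)^b = (g^b)^a, can be illustrated directly; the parameters below are toy values for illustration only, not suitable for real use:

```python
# The Diffie-Hellman exchange: each peer publishes g^x mod p and raises
# the other's public value to its own private exponent. The two results
# agree because of the equation (g^a)^b = (g^b)^a, which a free term
# algebra cannot express.

p = 0xFFFFFFFB  # 2**32 - 5, a small prime modulus (illustrative only)
g = 5           # generator (illustrative only)

a, b = 123456, 654321      # private exponents of the two peers
A = pow(g, a, p)           # public value sent by the first peer
B = pow(g, b, p)           # public value sent by the second peer

shared_1 = pow(B, a, p)    # first peer's view of the shared secret
shared_2 = pow(A, b, p)    # second peer's view: equal by the equation
```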

Keywords: Applied Pi Calculus, Static Analysis, Security, Diffie-Hellman