2010 - Technical Reports
Number / Report Title
2010/1 Undecidability of propositional separation logic and its neighbours
James Brotherston, Max Kanovich, 10pp
2010/2 High-Performance Event Processing with Information Security
Matteo Migliavacca, Ioannis Papagiannis, David M. Eyers, Brian Shand, Jean Bacon, Peter Pietzuch, 14pp
2010/3 Distributed Fault Tolerant Controllers
Leonardo Mostarda, Rudi Ball, Naranker Dulay, 12pp
2010/4 Cost-Effective Solution to Synchronised Audio-Visual Data Capture using Multiple Sensors
Jeroen Lichtenauer, Jie Shen, Michel Valstar, Maja Pantic, 39pp
2010/5 Shinren: Non-monotonic Trust Management for Distributed Systems
Changyu Dong, Naranker Dulay, 29pp
2010/6 Object Capabilities and Isolation of Untrusted Web Applications
Sergio Maffeis, John C. Mitchell, Ankur Taly, 29pp
2010/7 Session Typed Parameterised Communication Patterns
Andi Bejleri, 33pp
2010/8 Locality Refinement
Thomas Dinsdale-Young, Philippa Gardner, and Mark Wheelhouse, 40pp
2010/9 Non-elementary speed up for model checking synchronous perfect recall
Mika Cohen and Alessio Lomuscio, 6pp
2010/10 Local Reasoning about Mashups
Philippa Gardner, Gareth Smith and Adam Wright, 1pp
2010/11 SQPR: Stream Query Planning with Reuse
Evangelia Kalyvianaki, Wolfram Wiesemann, Quang Hieu Vu, Daniel Kuhn, Peter Pietzuch, 13pp
2010/12 Secure Cross-Domain Data Sharing Architecture for Crisis Management
Vaibhav Gowadia, Enrico Scalavino, Emil C. Lupu, Dmitry Starostin, Alexey Orlov, 18pp
2010/13 A Simple and Efficient Supervised Method for Spatially Weighted PCA in Face Image Analysis
Carlos E. Thomaz, Gilson A. Giraldi, Joaquim F. P. da Costa and Duncan F. Gillies, 28pp
2010/14 Safe Parallel Programming with Session Java
Olivier Pernet, Nicholas Ng, Raymond Hu, Nobuko Yoshida, and Yiannos Kryftis, 35pp
2010/15 On Asynchronous Session Semantics
Dimitrios Kouzapas, Nobuko Yoshida, Raymond Hu, and Kohei Honda, 47pp
2010/16 Verification of temporal-epistemic properties of security protocols: a bounded model checking implementation
A. Lomuscio, W. Penczek, M. Szreter, 00pp
James Brotherston, Max Kanovich, 10pp (Report 2010/1)
Separation logic has proven an adequate formalism for the analysis of programs that manipulate memory (in the form of pointers, heaps, stacks, etc.). In this paper, we consider the purely propositional fragment of separation logic, as well as a number of closely related substructural logical systems. We show that, surprisingly, all of these propositional logics are undecidable. In particular, we solve an open problem by establishing the undecidability of Boolean BI.
Matteo Migliavacca, Ioannis Papagiannis, David M. Eyers, Brian Shand, Jean Bacon, Peter Pietzuch, 14pp (Report 2010/2)
In finance and healthcare, event processing systems handle sensitive data on behalf of many clients. Guaranteeing information security in such systems is challenging because of their strict performance requirements in terms of high event throughput and low processing latency. We describe DEFCon, an event processing system that enforces constraints on event flows between event processing units. DEFCon uses a combination of static and runtime techniques for achieving light-weight isolation of event flows, while supporting efficient sharing of events. Our experimental evaluation in a financial data processing scenario shows that DEFCon can provide information security with an order of magnitude lower processing latency compared to a current approach.
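The flow constraints described above can be pictured with a minimal information-flow check in the style of decentralised information flow control. This is a hypothetical simplification for illustration only, not DEFCon's actual model or API: events carry confidentiality tags, and a processing unit may receive an event only if it is cleared for every tag the event carries.

```python
def can_read(unit_clearance, event_labels):
    """Illustrative DIFC-style flow check (hypothetical, not DEFCon's API):
    a processing unit may receive an event only if its clearance covers
    every confidentiality tag attached to the event."""
    return event_labels <= unit_clearance

# A unit cleared for both clients may read a clientA-tagged event...
print(can_read({"clientA", "clientB"}, {"clientA"}))   # True
# ...but a unit cleared only for clientA may not read a joint event.
print(can_read({"clientA"}, {"clientA", "clientB"}))   # False
```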
Leonardo Mostarda, Rudi Ball, Naranker Dulay, 12pp (Report 2010/3)
Distributed applications are often built from sets of distributed components that must be co-ordinated in order to achieve some global behaviour. The common approach is to use a centralised controller for co-ordination, or occasionally a set of distributed entities. Centralised co-ordination is simpler but introduces a single point of failure and poses problems of scalability. Distributed co-ordination offers greater scalability, reliability and applicability but is harder to reason about and requires more complex algorithms for synchronisation and consensus among components. In this paper we present a system called GOANNA that can automatically generate a correct, scalable and fault-tolerant distributed implementation from a finite state machine (FSM) specification of the global behaviour of interacting components. GOANNA can be used as a backend for different tools as well as an implementation platform in its own right.
Jeroen Lichtenauer, Jie Shen, Michel Valstar, Maja Pantic, 39pp (Report 2010/4)
Applications such as surveillance and human behaviour analysis require high-bandwidth recording from multiple cameras, as well as from other sensors. In turn, sensor fusion has increased the required accuracy of synchronisation between sensors. Using commercial off-the-shelf components may compromise quality and accuracy, because it is difficult to handle the combined data rate from multiple sensors, the offset and rate discrepancies between independent hardware clocks, the absence of trigger inputs or outputs in the hardware, as well as the different methods for timestamping the recorded data. To achieve accurate synchronisation, we centralise the synchronisation task by recording all trigger or timestamp signals with a multi-channel audio interface. For sensors that don't have an external trigger signal, we let the computer that captures the sensor data periodically generate timestamp signals from its serial port output. These signals can also be used as a common time base to synchronise multiple asynchronous audio interfaces. Furthermore, we show that a consumer PC can currently capture 8-bit video data with 1024x1024 spatial and 59.1 Hz temporal resolution, from at least 14 cameras, together with 8 channels of 24-bit audio at 96 kHz. We thus improve the quality/cost ratio of multi-sensor data capture systems.
Changyu Dong, Naranker Dulay, 29pp (Report 2010/5)
The open and dynamic nature of modern distributed systems and pervasive environments presents significant challenges to security management. One solution may be trust management, which utilises the notion of trust in order to specify and interpret security policies and make decisions on security-related actions. Most trust management systems assume monotonicity, where additional information can only result in an increase in trust. The monotonic assumption oversimplifies the real world by not considering negative information, and thus cannot handle many real-world scenarios. In this paper we present Shinren, a novel non-monotonic trust management system based on bilattice theory and the any-world assumption. Shinren takes into account negative information and supports reasoning with incomplete information, uncertainty and inconsistency. Information from multiple sources such as credentials, recommendations, reputation and local knowledge can be used and combined in order to establish trust. Shinren also supports prioritisation, which is important in decision making and in resolving the modality conflicts that are caused by non-monotonicity.
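The bilattice idea underlying non-monotonic trust can be sketched with Belnap's four-valued logic, the simplest bilattice: each trust statement carries a pair (evidence for, evidence against), so negative information is first-class rather than absorbed into "less trust". This is an illustrative toy, not Shinren's actual data model.

```python
# Belnap's four-valued bilattice (illustrative sketch, not Shinren's API).
# A value is a pair (evidence_for, evidence_against).
NONE = (False, False)   # no information yet
TRUE = (True, False)    # only supporting evidence
FALSE = (False, True)   # only negative evidence
BOTH = (True, True)     # conflicting evidence from different sources

def combine(a, b):
    """Knowledge-order join: accumulate evidence from two sources.
    Unlike a monotonic trust score, a negative report is never lost."""
    return (a[0] or b[0], a[1] or b[1])
```

Note how a recommendation (TRUE) combined with a bad local experience (FALSE) yields BOTH, i.e. detected inconsistency, rather than silently averaging out.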
Sergio Maffeis, John C. Mitchell, Ankur Taly, 29pp (Report 2010/6)
Andi Bejleri, 33pp (Report 2010/7)
Communication patterns describe simple and elegant structured interactions in communication-based applications. They appear in many parallel computing architectures, parallel algorithms, data exchange protocols and web services. Communication patterns help programmers to design more efficient, structured, modular and understandable architectures, but they do not provide any automatic code validation.
We study this problem using global session types, a type theory that describes structured interactions from a global point of view. We then augment the syntax of global types with parameters that abstract the number of participants and an iterative construct that builds instances of parameterised communication patterns. Our formal system allows programmers to represent parameterised communication patterns by global types and then validate the code by type-checking.
Thomas Dinsdale-Young, Philippa Gardner, and Mark Wheelhouse, 40pp (Report 2010/8)
We study refinement in the setting of local reasoning. In particular, we explore general translations that preserve and that break locality.
Mika Cohen and Alessio Lomuscio, 6pp (Report 2010/9)
We analyse the time complexity of the model checking problem for a logic of knowledge and past time in synchronous systems with perfect recall. Previously established bounds are k-exponential in the size of the system for specifications with k nested knowledge modalities. We show that the upper bound for positive (respectively, negative) specifications is polynomial (respectively, exponential) in the size of the system irrespective of the nesting depth.
Philippa Gardner, Gareth Smith and Adam Wright, 1pp (Report 2010/10)
Evangelia Kalyvianaki, Wolfram Wiesemann, Quang Hieu Vu, Daniel Kuhn, Peter Pietzuch, 13pp (Report 2010/11)
When users submit new queries to a distributed stream processing system (DSPS), a query planner must allocate physical resources, such as CPU cores, memory and network bandwidth, from a set of hosts to queries. Allocation decisions must provide the correct mix of resources required by queries, while achieving an efficient overall allocation to scale in the number of admitted queries. By exploiting overlap between queries and reusing partial results, a query planner can conserve resources but has to carry out more complex planning decisions. In this paper, we describe SQPR, a query planner that targets DSPSs in data centre environments with heterogeneous resources. SQPR models query admission, allocation and reuse as a single constrained optimisation problem and solves an approximate version to achieve scalability. It prevents individual resources from becoming bottlenecks by re-planning past allocation decisions, and supports different allocation objectives. As our experimental evaluation in comparison with a state-of-the-art planner shows, SQPR makes efficient resource allocation decisions, even under high resource utilisation, with acceptable overheads.
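The core allocation problem can be illustrated with a toy greedy placement under capacity constraints. This is a deliberately simplified sketch for intuition: SQPR actually solves a richer constrained optimisation covering admission, multiple resource types and reuse, and none of the names below come from the paper.

```python
def plan(queries, hosts):
    """Toy query-to-host placement under a single CPU capacity constraint
    (illustrative only; SQPR's real model is a joint optimisation).
    queries: {query_name: cpu_demand}; hosts: {host_name: cpu_capacity}."""
    free = dict(hosts)                     # remaining capacity per host
    placement = {}
    # Place the most demanding queries first to reduce fragmentation.
    for q, demand in sorted(queries.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)     # least-loaded host
        if free[host] < demand:
            raise ValueError(f"cannot admit query {q}")
        free[host] -= demand
        placement[q] = host
    return placement

print(plan({"q1": 4, "q2": 3}, {"h1": 5, "h2": 4}))
# {'q1': 'h1', 'q2': 'h2'}
```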
Vaibhav Gowadia, Enrico Scalavino, Emil C. Lupu, Dmitry Starostin, Alexey Orlov, 18pp (Report 2010/12)
Crisis management requires rapid sharing of data among organizations responding to the crisis. Existing crisis management practices rely on ad hoc or centralized data sharing based on agreements written in natural language. The ambiguity of natural language specifications often leads to errors and can hinder data availability. Therefore, it is desirable to develop automatic data sharing systems. The need to share data during crises presents additional challenges, such as evaluation of security constraints in different administrative domains and in situations with intermittent network connectivity. We compare two different architectural approaches to develop secure data sharing solutions. The first approach assumes reliable network connectivity, while the second approach works in ad hoc networks. We then suggest a unified architecture that caters for both scenarios.
Carlos E. Thomaz, Gilson A. Giraldi, Joaquim F. P. da Costa and Duncan F. Gillies, 28pp (Report 2010/13)
Principal Component Analysis (PCA) is an example of a successful unsupervised statistical dimensionality reduction method, which is especially useful in small sample size problems. Despite the well-known attractive properties of PCA, the traditional approach does not incorporate prior information extracted from knowledge of a specific domain. The development of techniques that bring together dimensionality reduction and prior knowledge can be performed in the framework of supervised learning methods, like Fisher Discriminant Analysis. Semi-supervised methods can also be applied if only a small number of labeled samples is available. In this paper, we propose a simple and efficient supervised method that allows PCA to incorporate domain knowledge explicitly and generates an embedding space that inherits its optimality properties for dimensionality reduction. The method relies on discriminant weights given by separating hyperplanes to generate the spatially weighted PCA. Several experiments using 2D frontal face images and different data sets have been carried out to illustrate the usefulness of the method for dimensionality reduction, classification and interpretation of face images.
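One plausible reading of the weighting idea can be sketched in a few lines: scale each feature (pixel) by a per-feature discriminant weight before running standard PCA, so discriminative regions dominate the principal directions. This is an illustrative sketch under that assumption, not the paper's exact formulation; the function name and interface are invented here.

```python
import numpy as np

def spatially_weighted_pca(X, w, k):
    """Sketch of spatially weighted PCA (illustrative reading, not the
    paper's exact method). X: (n_samples, n_features) data matrix;
    w: (n_features,) non-negative discriminant weights, e.g. derived from
    a separating hyperplane; k: number of components to keep."""
    Xw = (X - X.mean(axis=0)) * w              # centre, then weight features
    _, _, Vt = np.linalg.svd(Xw, full_matrices=False)
    return Vt[:k]                              # top-k principal directions
```

With uniform weights `w = 1` this reduces to ordinary PCA on the centred data, which is a useful sanity check.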
Olivier Pernet, Nicholas Ng, Raymond Hu, Nobuko Yoshida, and Yiannos Kryftis, 35pp (Report 2010/14)
The session-typed programming language Session Java (SJ) has proved to be an effective tool for distributed programming, promoting productivity and compile-time safety. This paper investigates the use of SJ for session-typed parallel programming, and introduces new language primitives for chained iteration and multi-channel communication. These primitives allow the efficient coordination of parallel computation across multiple processes, thus enabling SJ to express the complex communication topologies often used by parallel algorithms with static safety guarantees. We demonstrate that the new primitives yield clearer and safer code for pipeline, ring and mesh topologies, and through implementations of a parallel Jacobi method and an n-Body simulation. We then present a semantics and session typing system including the new primitives, and prove type soundness and deadlock-freedom for our parallel algorithm implementations. Finally, we benchmark several implementations of the n-Body simulation on a hybrid computing cluster, demonstrating the performance gains due to the new primitives. The SJ implementation is also substantially faster than an MPJ Express implementation used as reference.
Dimitrios Kouzapas, Nobuko Yoshida, Raymond Hu, and Kohei Honda, 47pp (Report 2010/15)
This paper studies a behavioural theory of the π-calculus with session types under the fundamental principles of the practice of distributed computing: asynchronous communication which is order-preserving inside each connection (session), augmented with asynchronous inspection of events (message arrivals). A new theory of bisimulations is introduced, distinct from either standard asynchronous or synchronous bisimilarity, accurately capturing the semantic nature of session-based asynchronously communicating processes augmented with event primitives. The bisimilarity coincides with the reduction-closed barbed congruence. We examine its properties and compare them with existing semantics. Using the behavioural theory, we verify that the program transformation of multithreaded into event-driven session-based processes, using Lauer-Needham duality, is type and semantics preserving. Our benchmark results demonstrate the potential of the session-type based translation as a semantically transparent optimisation technique.
Verification of temporal-epistemic properties of security protocols: a bounded model checking implementation
A. Lomuscio, W. Penczek, M. Szreter, 00pp (Report 2010/16)
This report will appear shortly.