1994 - Technical Reports
D.A. Marriott, 44pp. Report: 1994/1
Given the increasing size and complexity of distributed systems, there is a growing need for a policy specification framework in which management policies for possibly automated managers can be expressed in a flexible manner. The requirements and concepts for such a framework are examined, and particular reference is made to applications to the security of distributed systems. Relevant work is investigated, and conclusions are drawn. The outstanding research issues are identified and a summary is given of the aim of future work.
S.C. Cheung, J. Kramer, 36pp. Report: 1994/2
Behaviour analysis of complex concurrent systems has led to the search for enhanced reachability analysis techniques which support compositionality and which control the state explosion problem. While compositionality has been achieved, state explosion is still a problem. Indeed, this problem may even be exacerbated, as a locally minimised subsystem may contain many states and transitions forbidden by its context or environment. This limits the practicality of compositional reachability analysis. This report presents a method for controlling subsystem state explosion using context constraints. Context constraints, specified as interface processes, are restrictions imposed by the environment on subsystem behaviour. Minimising a subsystem together with its context constraints produces a simplified machine that describes the behaviour of the subsystem as constrained by its context. This machine can also be used as a substitute for the original subsystem in the subsequent steps of the compositional reachability analysis. Interface processes capturing context constraints can be specified by users or constructed automatically using a simple algorithm. The global behaviour generated using the method is shown to be observationally equivalent to that generated by compositional reachability analysis without context constraints. The concepts in the report are illustrated with a client/server system. The improvements in analysis performance are confirmed by experimental results.
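As a rough illustration of how an interface process can prune a subsystem's state space, the sketch below composes a two-slot buffer with a context that strictly alternates put and get; the processes, names and encoding are invented for this illustration and are not taken from the report (which treats the general compositional setting).

```python
def reachable(p, q, init=(0, 0)):
    """Reachable states of the fully synchronised product of two
    labelled transition systems (every action is shared), found by
    a straightforward graph search."""
    seen, frontier = {init}, [init]
    while frontier:
        sp, sq = frontier.pop()
        for a, sp2 in p[sp].items():
            if a in q[sq]:            # both sides must offer the action
                nxt = (sp2, q[sq][a])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# A two-slot buffer subsystem: states count the stored items.
buffer = {0: {'put': 1}, 1: {'put': 2, 'get': 0}, 2: {'get': 1}}

# Interface process: the context strictly alternates put and get,
# so the buffer can never fill both slots.
interface = {0: {'put': 1}, 1: {'get': 0}}

# The unconstrained buffer has 3 states; under the context
# constraint only buffer states 0 and 1 remain reachable.
print(sorted(reachable(buffer, interface)))  # [(0, 0), (1, 1)]
```

The constrained machine can then stand in for the buffer in later composition steps, which is the source of the performance gains the abstract describes.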
R. Miller, 72pp. Report: 1994/3
We extend the Situation Calculus to deal with narrative information, that is, information about the actual occurrence of actions in time, and show how Baker's solution to the frame problem may be adapted for narrative theories. We define a translation of narrative Situation Calculus theories into logic programs and discuss the soundness and completeness of the translations. We discuss the relationship between this narrative Situation Calculus implementation and the Event Calculus. We show how the narrative Situation Calculus can be further extended to allow concurrent and partially overlapping actions.
K. Karagianidis, 81pp. Report: 1994/4
Computer users in the research/scientific and business communities require ubiquitous and timely access to new information pertaining to their professional interests. We present an architecture for building wide-area information services capable of satisfying these needs. The design is specifically geared towards efficiently supporting mobile client computers. A set of new methods is proposed for disseminating information in large-scale environments and caching that information at the mobile clients.
K. Broda, 261pp. Report: 1994/5
This report contains the 20 papers that were accepted to the Third Workshop on Theorem Proving with Analytic Tableaux and Related Methods which took place in Abingdon, near Oxford, U.K., May 4-6, 1994. The first two workshops were held in Lautenbach, near Karlsruhe, Germany (1992) and Marseille, France (1993).
The workshop brought together researchers interested in the mechanisation of reasoning with tableaux and related systems such as model elimination, connection method and sequent calculi. Invited talks were given by Bob Kowalski and Peter Schmitt.
A. Finkelstein (Editor), 0pp. Report: 1994/6
S. Easterbrook, A. Finkelstein, J. Kramer, B. Nuseibeh, 19pp. Report: 1994/7
Support for Concurrent Engineering must address the 'multiple perspectives problem' - many actors, many representation schemes, diverse domain knowledge and differing development strategies, all in the context of distributed asynchronous development. Central to this problem is the issue of managing consistency between the various elements of an emerging design. In this paper, we argue that striving to maintain complete consistency at all points in the development process is unnecessary, and that an approach based on tolerance and management of inconsistency can be adopted instead. We present a scenario which highlights a number of important issues raised by this approach, and we describe how these issues are addressed in our framework of distributed ViewPoints. The approach allows an engineering team to develop inconsistent ViewPoints, and to establish relationships between them incrementally. The framework provides mechanisms for expressing consistency relationships, checking that individual relationships hold, and resolving inconsistencies if necessary.
S.C. Cheung, J. Kramer, 29pp. Report: 1994/8
Automated behaviour analysis is a valuable technique in the development and maintenance of distributed systems. In this paper, we present a tractable dataflow analysis technique for the detection of unreachable states and actions in distributed systems. The technique follows an approximate approach described by Reif & Smolka, but delivers a more accurate result in assessing unreachable states and actions. The higher accuracy is achieved by the use of two concepts: action dependency and history sets. Although the technique does not exhaustively detect all possible errors, it detects non-trivial errors with a worst-case complexity quadratic in the system size. It can be automated and applied to systems with arbitrary loops and non-deterministic structures. The technique thus provides practical and tractable behaviour analysis for preliminary designs of distributed systems. This makes it an ideal candidate for an interactive checker in software development tools. The technique is illustrated with case studies of a pump control system and an erroneous distributed program. Results from a prototype implementation are presented.
A. Edalat, 32pp. Report: 1994/9
We present a domain-theoretic framework for measure theory and integration of bounded real-valued functions with respect to bounded Borel measures on compact metric spaces. The set of normalised Borel measures of the metric space can be embedded into the maximal elements of the normalised probabilistic power domain of its upper space. Any bounded Borel measure on the compact metric space can then be obtained as the least upper bound of an ω-chain of linear combinations of point valuations (simple valuations) on the upper space, thus providing a constructive set-up for these measures. We use this setting to define a new notion of integral of a bounded real-valued function with respect to a bounded Borel measure on a compact metric space. By using an ω-chain of simple valuations, whose lub is the given Borel measure, we can then obtain increasingly better approximations to the value of the integral, similar to the way the Riemann integral is obtained in calculus by using step functions. We show that all the basic results in the theory of Riemann integration can be extended in this more general setting. Furthermore, with this new notion of integration, the value of the integral, when it exists, coincides with the Lebesgue integral of the function. An immediate area for application is in the theory of iterated function systems with probabilities (fractals) on compact metric spaces, where we obtain a simple approximating sequence for the integral of a real-valued continuous function with respect to the invariant measure.
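The approximation scheme can be sketched as follows; the notation is assumed for illustration, not taken from the report. If the measure is the lub of an ω-chain of simple valuations, each supported on finitely many compact sets, then lower and upper sums over those sets play the role of step functions in the Riemann integral:

```latex
% Assumed notation: \mu = \bigsqcup_n \mu_n with simple valuations
% \mu_n = \sum_i r_i^{(n)}\,\delta_{b_i^{(n)}} on the upper space,
% where each b_i^{(n)} is a compact subset of the space X.
\begin{align*}
  S^{\ell}(f,\mu_n) &= \sum_i r_i^{(n)} \inf_{x \in b_i^{(n)}} f(x),
  \qquad
  S^{u}(f,\mu_n) = \sum_i r_i^{(n)} \sup_{x \in b_i^{(n)}} f(x),\\
  \int f \, d\mu &= \sup_n S^{\ell}(f,\mu_n) = \inf_n S^{u}(f,\mu_n)
  \quad \text{when the two bounds meet (e.g.\ for continuous } f\text{).}
\end{align*}
```

Refining the chain shrinks the compact sets, so the lower and upper sums squeeze the value of the integral from both sides, exactly as step functions do in the classical Riemann construction.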
M. Nuttall, 48pp. Report: 1994/10
This report details a survey of systems providing process or object migration. After an introduction to the topic, a number of systems are covered in detail. Four chapters deal with systems providing migration over unmodified UNIX, those providing 'classical' process migration and containing modified or new kernels, migration over modern microkernel-based operating systems, and finally those providing more general 'object' migration.
The coverage of each system contains a general overview, a detailed examination of the migration subsystem, and load balancing support, if any. Migration support is sectioned into an overview, an examination of the migration mechanisms, the effects of migration on inter-task communication, and details of any environment and associated resource transfer.
The summary provides comments on the major design issues identified within the report. It notes that solutions running over raw UNIX are the least complete, and that some level of kernel modification is generally necessary in order to provide good migration support. Microkernel architectures are shown to be useful to the developers of a migration subsystem due to their modularity, which tends to ease the encapsulation of process state. The 'object' paradigm is similarly shown to aid both encapsulation and post-migration communication.
Finally, it is noted that very few research topics seem to remain in the construction of a process migration system, since the majority of design issues have now been addressed. No one system is found to provide a complete range of features, however.
B. Nuseibeh, 24pp. Report: 1994/11
This paper is an introductory review of method-based software development. It begins by exploring the role of software engineering methods in the development of complex software systems. In particular, the relationship between methods and CASE tools is examined, and the role of meta-CASE technology in this context is discussed. Requirements engineering is identified as a critical software development activity, and is used as a vehicle for demonstrating a number of method-based development approaches. The deployment of methods that support multiple views during requirements engineering is advocated, and method integration is identified as critical to the development of integrated software systems in this setting.
A. Edalat, 14pp. Report: 1994/12
We present finite algorithms for decoding an iterated function system (IFS), an IFS with probabilities, and a recurrent IFS on the digitised screen, which generate black-and-white and colour images with application in fractal image compression. These algorithms are polynomial in the number of maps in the IFS, with the degree of the polynomial depending on the contractivity of the IFS and the resolution of the screen. They produce a good quality image up to several times faster than the state-of-the-art algorithms, for which the number of iterations in the decoding has to be fixed in advance by trial and error. We also formulate a new algorithm to compute the expected value of a continuous function with respect to the stationary distribution of a recurrent IFS.
A. Edalat, 19pp. Report: 1994/13
We use domain-theoretic models of iterated function systems (IFS) on compact metric spaces to formulate various new algorithms in the field of IFSs and fractal image generation. An IFS is modelled by the Plotkin power domain of the upper space, and an IFS with probabilities, or more generally a recurrent IFS, by the probabilistic power domain of the upper space. We obtain polynomial algorithms to generate the attractor of a hyperbolic IFS, the invariant measure of a hyperbolic IFS with probabilities, and the stationary distribution of a recurrent hyperbolic IFS on the digitised screen. They are the first finite algorithms of their kind in the state of the art. We also use the generalised Riemann integral to obtain a deterministic algorithm to compute the expected value of a continuous function with respect to the invariant measure of a hyperbolic IFS with probabilities or the stationary distribution of a hyperbolic recurrent IFS.
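To convey the flavour of a finite, power-domain-style attractor computation on a digitised screen, the sketch below iterates a set-valued map over pixel sets until a fixed point is reached. The resolution, the particular contractions (a Sierpinski-style IFS), and the integer encoding are all assumptions made for this illustration; the report's algorithms are more general.

```python
N = 64  # screen resolution (illustrative choice)

def apply_map(pixels, f):
    """Image of a pixel set under one contraction of the IFS."""
    return {f(x, y) for (x, y) in pixels}

# Three contractions of ratio 1/2 on an N x N integer grid,
# giving a digitised Sierpinski-triangle IFS.
maps = [
    lambda x, y: (x // 2, y // 2),
    lambda x, y: (x // 2 + N // 2, y // 2),
    lambda x, y: (x // 2, y // 2 + N // 2),
]

# Start from the whole screen and iterate F(S) = union of the f(S).
# Because F is monotone and the screen is finite, the decreasing
# chain of pixel sets reaches a fixed point in finitely many steps:
# the digitised attractor.
current = {(x, y) for x in range(N) for y in range(N)}
while True:
    nxt = set().union(*(apply_map(current, f) for f in maps))
    if nxt == current:
        break
    current = nxt
```

The finite-termination guarantee is what distinguishes this style of algorithm from random-iteration ("chaos game") renderings, where the number of iterations must be guessed in advance.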
Z.N. Grant-Duff, P.G. Harrison, 17pp. Report: 1994/14
We show how homomorphisms offer a direct transformation rule between the SIMD and dynamic MIMD computational models and derive a model transformer combinator. We argue that a skeleton based parallel programming system, where parallel programs are derived by systematic synthesis, should incorporate a homomorphism skeleton and the model transformer. A methodology for the identification of homomorphisms in high level functional language specifications is presented and their role illustrated with a series of examples.
S. Drossopoulou, S. Karathanos, 26pp. Report: 1994/15
We introduce ST&T, a static type system for Smalltalk. ST&T is safe (i.e., no type correct program will produce a dynamic type error), requires less type checking of methods than other systems, and allows more programs to be considered type correct. In ST&T, subclasses are subtypes, the type of a method is a set of signatures, and it is possible to express a functional relation between argument types and result type.
We have developed an inference system describing our approach and have demonstrated the practicability of these ideas in a prototype implementation.
N. Yialelis, M. Sloman, 32pp. Report: 1994/16
This report discusses the authentication issue from an abstract point of view and presents the early design of an authentication service for distributed systems that makes use of the notion of the Domain, as described in the SysMan project. Domains are used to specify access control policy in terms of groups of objects (users, files, servers, etc.). The enforcement of access control policy that is specified in terms of domains requires authentication of object memberships of domains. Conventional authentication systems (for example Kerberos) deal with the authentication of identities. As authentication of domain membership exhibits a number of peculiarities, there is a need to design a system that can deal efficiently with this kind of authentication. We propose a domain authentication system that is based on the use of symmetric cryptography and utilises trusted authentication servers with minimal state by virtue of server replication. This report, however, does not discuss the details of the form of certificates or the cryptographic algorithms that can be used. Rather, it presents the philosophy of the system and examines how it can be used to verify domain memberships, authenticate users and support delegation of access rights.
A. Edalat, 16pp. Report: 1994/17
We show that the linear reward-penalty scheme for binary-state stochastic learning automata can be modelled by the dynamics of an iterated function system on a probabilistic power domain, and we compute the expected value of any continuous function in the learning process. We then present a domain-theoretic framework for the distribution of synaptic couplings in the general class of forgetful neural nets in which pattern learning takes place by a local iterative scheme. We obtain algorithms to compute the decay of the embedding strength of the stored patterns, which is used in computing the storage capacity of the network.
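The IFS connection can be seen in a minimal simulation sketch of the classical linear reward-penalty scheme for a two-action automaton; the penalty probabilities and learning rate below are illustrative assumptions, and this is the textbook update rule rather than the report's domain-theoretic treatment.

```python
import random

def l_rp_step(p, a, penalty_prob, rng):
    """One update of p = Pr(choose action 0). Each branch applies an
    affine map to p, so the learning dynamics form an iterated
    function system on [0, 1] with state-dependent probabilities."""
    action = 0 if rng.random() < p else 1
    penalised = rng.random() < penalty_prob[action]
    if action == 0:
        # reward: move p towards 1; penalty: shrink p towards 0
        p = (1 - a) * p if penalised else p + a * (1 - p)
    else:
        # symmetric updates for the probability 1 - p of action 1
        p = a + (1 - a) * p if penalised else (1 - a) * p
    return p

rng = random.Random(0)
p = 0.5
for _ in range(10_000):
    p = l_rp_step(p, a=0.01, penalty_prob=[0.1, 0.9], rng=rng)
# action 0 is rarely penalised, so p drifts towards favouring it
print(p)
```

Tracking the distribution of p under these four affine maps, rather than a single sample path, is where the probabilistic power domain machinery of the report comes in.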
P.G. Harrison, A. de C. Pinto, 214pp. Report: 1994/18
The 4th Qmips Workshop, held at Imperial College, London, focused on the application of quantitative methods to the analysis of the performance of parallel computer systems. The main topics related to load balancing, scheduling and optimisation, but current research on the enabling theory was also presented.
The papers presented here fall into three categories:
- Optimisation and Scheduling
- Load Balancing
- Work in progress, which included a study of unreliable systems, an analytical model for a distributed cache coherency protocol and an analysis of the M/G/1 queue with negative customers.