1993 - Technical Reports
H.F. Chau, 17pp. Report: 1993/1
This paper describes a proof search system for a non-classical logic (a modal concatenation logic, a substructural logic) based on Labelled Deductive Systems (LDS). The logic concerned is treated as a case study. It has some unusual features which combine resource-sensitivity (as in linear logic, the Lambek Calculus or relevance logics) with modality (as in intensional, temporal or epistemic logics), and may have useful applications in AI and natural language processing. We present axiomatic and LDS-style proof theories (with a two-dimensional label structure) and a semantics for the logic. Soundness and completeness results are proved. We show that, for non-classical logic theorem proving, LDS is more flexible than the existing methods and is directly suitable for mechanisation. It also bridges the gap between proof theory and implementation. This paper also supports Gabbay's claim that LDS is a good framework for handling logics which combine different features, such as modality and resource restriction. We believe our approach can be extended to any variant which combines substructurality and modality.
A. Finkelstein, D. Gabbay, A. Hunter, J. Kramer, B. Nuseibeh, 15pp. Report: 1993/2
The development of most large and complex systems necessarily involves many people - each with their own perspectives on the system defined by their knowledge, responsibilities, and commitments. To address this we have advocated distributed development of specifications from multiple perspectives. However, this leads to problems of identifying and handling inconsistencies between such perspectives. Maintaining absolute consistency is not always possible. Often this is not even desirable, since it can unnecessarily constrain the development process and can lead to the loss of important information. Indeed, since the real world forces us to work with inconsistencies, we should formalise some of the usually informal or extra-logical ways of responding to them. This is not necessarily done by eradicating inconsistencies but rather by supplying logical rules specifying how we should act on them. To achieve this, we combine two lines of existing research: the ViewPoints framework for perspective development, interaction and organisation, and a logic-based approach to inconsistency handling. This paper presents our technique for inconsistency handling in the ViewPoints framework using simple examples.
S.C. Cheung, J. Kramer, 26pp. Report: 1993/3
Each process in a distributed program or design can be modelled as a flow graph, where nodes represent program statements and directed edges represent control flows. This paper describes a flow analysis method to detect unreachable statements by examining the control flows and communication patterns in a collection of flow graphs. The method can be applied to programs with loops, non-deterministic structures and synchronous communication. It is tractable, with quadratic complexity in terms of program size. The method follows an approach described in [Reif 90] but delivers a more accurate result in assessing the reachability of statements. The higher accuracy is achieved by the use of three techniques: statement dependency, history sets and statement re-reachability. The method has been implemented on SUN workstations. A pump control application for a mining environment is used to illustrate the method.
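For comparison, plain single-graph reachability can be sketched as below (an illustrative Python fragment with names of our own choosing; the report's method additionally checks that communication patterns across the collection of graphs can actually synchronise, which is what makes its results more accurate):

```python
from collections import deque

def reachable(succ, entry):
    # Breadth-first reachability over one flow graph: a statement is
    # (potentially) reachable if some path of control-flow edges leads
    # to it from the entry node. `succ` maps a node to its successors.
    seen = {entry}
    work = deque([entry])
    while work:
        n = work.popleft()
        for m in succ.get(n, []):
            if m not in seen:
                seen.add(m)
                work.append(m)
    return seen
```

Statements outside the returned set are unreachable by control flow alone; the interesting cases the report addresses are statements that this over-approximation keeps but that can never be reached because their communications can never complete.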
S.C. Cheung, 13pp. Report: 1993/4
Compositional techniques have been proposed to control the state explosion problem in reachability analysis. However, this goal is not often achieved by straightforward compositional techniques because locally minimised subsystems may contain a large number of states that are unreachable in the global context of the whole system. This paper presents an effective method to alleviate this problem by enhancing compositional techniques with the concept of context constraints.
Context constraints are restrictions on subsystem behaviour asserted by the environment. The constraints are specified as interfaces either supplied by designers or constructed by tools. An efficient interface construction algorithm is presented. Introduction of interfaces allows a subsystem to be locally analysed and minimised in the context of its environment. A reduced substitute is generated as a result of the local analysis. This substitute can then safely replace the original subsystem and simplify the subsequent analysis of the whole system. Simple correctness criteria are identified to ensure the preservation of system properties after an introduction of interfaces. The techniques and concepts are demonstrated by a clients/server example.
Z. Schreiber, 24pp. Report: 1993/5
Process calculi are commonly used to specify and verify communication protocols. Once verified, it is desirable to be able to implement a protocol or other concurrent design using automatic assistance and in a provably correct way.
Previously, the author has introduced a representation for a process calculus program in the form of a sequential graph with parameters. A formally proved mapping from CCS programs to this representation has also been presented. Here we present a translation from the representation to a subset of C which is given a formal semantics. The translation is formally defined and proved correct. Together with the earlier work, a verified translation from CCS to C is obtained.
While other work has already been done on such translations, this is the first translation with a correctness proof, and is therefore more in the spirit of formal methods. Further, we introduce a flexible approach to resolving non-determinism which gives this translation a practical advantage.
J. Darlington, A.J. Field, P.G. Harrison, P.H.J. Kelly, D.W.N. Sharp, Q. Wu, R.L. While, 15pp. Report: 1993/6
Programming parallel machines is notoriously difficult. Factors contributing to this difficulty include the complexity of concurrency, the effect of resource allocation on performance and the current diversity of parallel machine models. The net result is that effective portability, which depends crucially on the predictability of performance, has been lost.
Functional programming languages have been put forward as solutions to these problems, because of the availability of implicit parallelism. However, performance will be generally poor unless the issue of resource allocation is addressed explicitly, diminishing the advantage of using a functional language in the first place.
We present a methodology which is a compromise between the extremes of explicit imperative programming and implicit functional programming. We use a repertoire of higher-order parallel forms, skeletons, as the basic building blocks for parallel implementations and provide program transformations which can convert between skeletons, giving portability between differing machines. Resource allocation issues are documented for each skeleton/machine pair and are addressed explicitly during implementation in an interactive, selective manner, rather than by explicit programming.
D.B. Howe, G.L. Burn, 10pp. Report: 1993/7
Lazy evaluation has many desirable features for the functional programmer but, although there are now several efficient implementations, it is still true that performance can be improved by transforming lazy evaluation into call-by-value.
Strictness analysis tells us which argument expressions we can evaluate in advance without changing the termination properties of the program. We have used the results of the evaluation transformer brand of strictness analysis to transform code produced by GHC, Glasgow University's Haskell compiler based on the Spineless Tagless G-Machine.
We compare how the STG Machine executes lazy code and the code resulting from several different transformations - black hole updates, in-line boxing, omitting updates and passing unboxed values. Performance results are presented for some program fragments.
R.L. While, A.J. Field, 10pp. Report: 1993/8
We describe a technique for incorporating Baker's incremental garbage collection algorithm into the Spineless Tagless G-machine on stock hardware. This algorithm eliminates the stop/go execution associated with bulk copying collection algorithms, allowing the system to place an upper bound on the time taken to perform a store operation. The implementation is based on the manipulation of code-pointers and is considerably more efficient than previous software implementations of Baker's algorithm.
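A rough sketch of the idea behind Baker's algorithm (an illustrative Python model with a hypothetical object representation, not the code-pointer implementation for the Spineless Tagless G-machine described here): the mutator is only ever allowed to see to-space objects, so every pointer read evacuates its target, and scavenging is done a bounded amount at a time rather than in one stop-the-world pass.

```python
class Obj:
    def __init__(self, name, fields=()):
        self.name = name
        self.fields = list(fields)   # references to other Obj
        self.forward = None          # forwarding pointer once evacuated

class Heap:
    def __init__(self, roots):
        self.roots = list(roots)
        self.to_space = []           # evacuated objects, in copy order
        self.scan = 0                # Cheney-style scan pointer

    def copy(self, obj):
        # Evacuate obj to to-space if not already done; return its copy.
        if obj.forward is None:
            new = Obj(obj.name, obj.fields)
            obj.forward = new
            new.forward = new        # a to-space object forwards to itself
            self.to_space.append(new)
        return obj.forward

    def flip(self):
        # Begin a collection: evacuate only the roots.
        self.to_space, self.scan = [], 0
        self.roots = [self.copy(r) for r in self.roots]

    def read_barrier(self, obj, i):
        # Baker's invariant: the mutator never sees a from-space object,
        # so each field read evacuates its target first.
        tgt = self.copy(obj.fields[i])
        obj.fields[i] = tgt
        return tgt

    def scan_step(self, k=1):
        # Incremental scavenging: do at most k objects' worth of work,
        # bounding the pause added to any single store/allocation.
        while k > 0 and self.scan < len(self.to_space):
            obj = self.to_space[self.scan]
            obj.fields = [self.copy(f) for f in obj.fields]
            self.scan += 1
            k -= 1
        return self.scan >= len(self.to_space)   # collection finished?
```

The bounded `scan_step` is what eliminates stop/go execution: the cost of a collection is smeared across ordinary heap operations instead of being paid all at once.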
P.G. Harrison, A.de C. Pinto, 36pp. Report: 1993/9
An approximate algorithm is developed for the performance analysis of buffered, packet-switched, asynchronous networks with no feedback. To illustrate our approach we focus on banyan networks, which are important in parallel computer architectures and telecommunication systems, and our algorithm is tailored to this problem. The networks we consider are organised in a finite number of stages, through each of which a task passes successively in its transmission. The components which forward messages in each stage are modelled independently as small sub-networks of queues; in the case of banyan networks these components are crossbars. Blocking can occur on several upstream paths preceding a full component, and the inter-stage blocking effect is taken into account iteratively, blocking probabilities being computed in terms of the results of a previous step. The networks we consider may be non-homogeneous in that different servers may have different rates and the arrival processes need not be identical. We derive the queue length probability distribution of each switch, from which performance measures such as blocking probability, mean buffer occupancy and mean transmission time follow.
A.J. Field, R.L. While, 19pp. Report: 1993/10
In this paper we explore the use of functional programming languages in the design and implementation of discrete-event simulation programs. We show, in particular, how language features such as polymorphism, higher-order functions and lazy evaluation can be used to develop powerful, re-usable library modules for, amongst others, event management, simulation monitoring and statistical analysis. We give examples of simulation programs developed within the proposed framework; these illustrate the conciseness of the functional notation and at the same time demonstrate how the expressive
B. Strulo, 41pp. Report: 1993/11
We present a process algebra or programming language, based on CCS, which may be used to describe discrete event simulations with parallelism. It has extensions to describe the passing of time and probabilistic choice, either discrete, between a countable number of processes, or continuous to choose a random amount of time to wait. It has a clear operational semantics and we give approaches to denotational semantics given in terms of an algebra of equivalences over processes. It raises questions about when two simulations are equivalent and what we mean by non-determinism in the context of the specification of a simulation. It also exemplifies some current approaches to adding time and probability to process algebras.
B. Strulo, P.G. Harrison, 29pp. Report: 1993/12
We present a process algebra or programming language, based on CCS, which may be used to describe discrete event simulations with parallelism. It has extensions to describe the passing of time and probabilistic choice, either discrete, between a countable number of processes, or continuous to choose a random amount of time to wait. It has a clear operational semantics and we discuss equivalences over this operational semantics that imply equations that can be used to work formally with processes. It raises questions about when two simulations are equivalent and what we mean by non-determinism in the context of the specification of a simulation. It also exemplifies some current approaches to adding time and probability to process algebras.
S.C. Cheung, J. Kramer, 14pp. Report: 1993/13
Compositional techniques have been proposed to control the combinatorial state explosion problem in reachability analysis. However, this goal is not often achieved by straightforward applications of the technique because locally minimised subsystems may contain a large number of states that are unreachable in a global context of the whole system. This paper presents an effective method to alleviate this problem by enhancing the compositional technique with a concept of context constraints.
Context constraints, specified as interfaces, are restrictions on subsystem behaviour asserted by the environment. Introduction of interfaces allows a subsystem to be locally minimised in the context of its environment. The contextual local minimisation produces a reduced substitute which can safely replace the original subsystem and simplify subsequent analysis of the whole system. Contextual local minimisations of disjoint subsystems can be performed independently. An efficient interface construction algorithm is presented. The enhanced compositional technique is demonstrated by a clients/server system implementing a round-robin protocol.
S.C. Cheung, J. Kramer, 14pp. Report: 1993/14
Behavioural analysis is an important technique for developing and maintaining well-behaved concurrent systems. Concurrent systems can be specified as Labelled Transition Systems (LTS). A global LTS of a system is often constructed using reachability analysis. However, the LTS so constructed is generally incomprehensible and expensive to compute. These two problems can be substantially alleviated by an integrated analysis method. The method integrates dataflow and contextual compositional analysis techniques. The integration meets the requirements for an effective analysis method. A clients/server system is used as an illustration.
P.G. Harrison, R.L. While, 11pp. Report: 1993/15
We apply algebraic transformation techniques to synthesise Horner's rule for polynomial evaluation. Horner's rule is then transformed into a pipeline by the application of further axioms. The syntheses demonstrate the power of the algebraic style, in which inductive proof is replaced by constructive unfolding and folding of standard higher-order functions defined on lists.
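Horner's rule itself can be written as a single fold over the coefficient list. A small Python rendering for concreteness (the report works in an algebraic style with higher-order functions on lists, not in Python):

```python
from functools import reduce

def horner(coeffs, x):
    # coeffs = [a0, a1, ..., an] represents a0 + a1*x + ... + an*x^n.
    # Horner's rule nests the multiplications: a0 + x*(a1 + x*(... + x*an)),
    # expressed here as a fold over the reversed coefficient list.
    return reduce(lambda acc, c: c + x * acc, reversed(coeffs), 0)
```

It is transformations of exactly this kind of fold, by unfolding and folding its defining equations, that the report uses to synthesise the rule and then a pipelined version of it.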
S.C. Cheung, J. Kramer, 14pp. Report: 1993/16
Local analysis is the process of examining the behaviour of a subsystem in a concurrent program. Each subsystem in a program is constrained by its context (i.e. its environment). If the context is ignored in local analysis, the analysis of a subsystem's behaviour may include many traces that the context in fact forbids. These forbidden traces are unnecessary for analysing the subsystem's behaviour in the program, and their removal can greatly simplify the analysis.
In this paper, we describe an elegant method to include context constraints in the local analysis. This method can be well integrated with a compositional analysis technique to support a constructive approach for concurrent software development. Such an integration leads to better control of the combinatorial state explosion problem found in reachability and compositional analysis and also facilitates early detection of anomalous behaviour in a concurrent program.
Context constraints can be specified as an interface which satisfies three correctness criteria laid down by an interface theorem. The satisfaction of these criteria permits the interface to be freely introduced into a program. A reduced substitute is generated in a local analysis. The substitute can then replace the original subsystem and simplify subsequent compositional analysis of the whole system. The method is supported by an efficient interface construction algorithm. The techniques and concepts are illustrated with a clients/server example implementing a round-robin protocol.
G.L. Burn, 12pp. Report: 1993/17
Abstract interpretation is a methodology for developing provably correct program analyses. Uses of such analyses include optimising compilers and program verification.
In this introductory paper, we discuss key issues in the abstract interpretation of higher-order languages, including some open problems. Intuitions, examples and open problems are given instead of a list of theorems.
R.L. Crole, 12pp. Report: 1993/18
This work expounds the notion that (structured) categories are syntax free presentations of type theories, and shows some of the ideas involved in deriving categorical semantics for given type theories. It is intended for someone who has some knowledge of category theory and type theory, but who does not fully understand some of the intimate connections between the two topics. We begin by showing how the concept of a category can be derived from some simple and primitive mechanisms of monadic type theory. We then show how the notion of a category with finite products can model the most fundamental syntactical constructions of (algebraic) type theory. The idea of naturality is shown to capture, in a syntax free manner, the notion of substitution, and therefore provides a syntax free coding of a multiplicity of type theoretical constructs. Using these ideas we give a direct derivation of a cartesian closed category as a very general model of simply typed λ-calculus with binary products and a unit type. This article provides a new presentation of some old ideas. It is intended to be a tutorial paper aimed at audiences interested in elementary categorical type theory.
C. Hankin, 10pp. Report: 1993/19
Graph rewriting systems are the generalisation of term rewriting systems from (finite) trees to graphs. They provide the basis for an abstract treatment of graph reduction, a well-established technique for the implementation of declarative languages. Given this last observation, it is sensible to develop tools for the compile-time analysis of graph rewrite programs; it is to be expected that opportunities for optimisation which are difficult to detect in the source program might be exposed at this level. This paper summarises the work that we have done on semantics-based static analysis of programs represented by Term Graph Rewriting Systems.
S. Vickers, 18pp. Report: 1993/20
We present an introduction to geometric logic and the mathematical structures associated with it, such as categorical logic and toposes. We also describe some of its applications in computer science including its potential as a logic for specification languages.
S. Abramsky, 13pp. Report: 1993/21
We propose Interaction Categories as a new paradigm for the semantics of computation, in place of the standard paradigm of categories qua universes for denotational semantics.
M. Dawson, 12pp. Report: 1993/22
The logician J.-Y. Girard has proposed the calculus LU, which unifies Classical, Intuitionistic and Linear logics in a single proof-theoretic presentation. In this paper we show how we have used our logic environment to explore LU. Unlike other 'generic' logic environments, presenting a logic in ICLE preserves its structure, so that proof-theoretic properties such as cut-elimination and consistency can be retained.
A. Edalat, 18pp. Report: 1993/23
We introduce domain theory in the computation of dynamical systems, iterated function systems (fractals) and measures.
A. Edalat, 15pp. Report: 1993/24
We introduce IP-categories by enriching the notion of I-category (information category) such that every inclusion morphism has a right adjoint or projection. Categories of information systems for various domains in semantics are all examples of IP-categories. We show that a weak notion of substructure relation between two Scott information systems induces general adjunctions between them. An IP-category with a zero object has the self-dual property that its opposite is again an IP-category. In a complete IP-category with a zero object, limits and colimits of chains of projections and inclusions coincide. As a consequence of the self-duality, a simple characterisation of minimal invariant objects of contravariant and mixed functors on IP-categories is obtained. IP-categories are closed under taking the Karoubi envelope. We use the arrow category of an effectively given IP-category to solve domain equations in categories of continuous information systems effectively.
L. Errington, C. Hankin, T. Jensen, 11pp. Report: 1993/25
We describe an axiomatic semantics for the parallel programming language Gamma based on a framework developed by Brookes. This framework is an extension of Floyd-Hoare logic that allows compositional reasoning about the interleaved execution of Gamma programs. We present a proof system for deriving valid assertions about Gamma programs and examine its practical use through an example.
J. Fiadeiro, T. Maibaum, 22pp. Report: 1993/26
The structural property of π-institutions which requires consequence to be preserved under changes of language is weakened. The proposed weakly structural π-institutions encompass logics in which consequence does depend on the choice of non-logical symbols, by associating locality conditions with signatures or signature morphisms. They also enable new logics to be defined by reusing existing ones, extending and adapting them in order to build formalisms that better fit the applications whose specification they are intended to support.
S. Gay, R. Nagarajan, 11pp. Report: 1993/27
Abramsky has recently proposed Interaction Categories as a new paradigm for the semantics of sequential and parallel computation. Working with the category Sproc of synchronous processes, which is a key example of an Interaction Category, we study synchronous dataflow as part of a programme of gaining experience in the use of Interaction Categories. After making some general points about representing dataflow in Sproc, we present a detailed model of the synchronous dataflow language SIGNAL. We demonstrate that dataflow is a model of concurrency which can be easily treated in a typed framework, and that the structure of Interaction Categories is appropriate for describing real concurrent languages.
R. Heckmann, 12pp. Report: 1993/28
If a strong monad M is used to define the denotational semantics of a functional language with computations, a product operation ×: MX × MY → M(X × Y) is needed to define the semantics of pairing. Every strong monad is equipped with two standard products, which correspond to left-to-right and right-to-left evaluation. We study the algebraic properties of these standard products in general. Then we define alternative products with similar properties for strict and parallel evaluation in the special case of strong monads in DCPO which are obtained as free constructions w.r.t. various theories of non-deterministic computations, including both classical and probabilistic theories.
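The two standard products can be seen concretely with an output-logging monad (an illustrative Python encoding of a writer-style monad, with names of our own choosing; the report works with strong monads in general, not this example). The left-to-right product sequences the effect of the first component first; the right-to-left product does the reverse, and the two agree exactly when the effects commute:

```python
def unit(x):
    # A computation is a pair (value, output log).
    return (x, [])

def bind(m, f):
    # Sequence computation m, then feed its value to f; logs accumulate.
    v, log = m
    v2, log2 = f(v)
    return (v2, log + log2)

def prod_lr(mx, my):
    # Left-to-right product: evaluate mx, then my, pair the results.
    return bind(mx, lambda x: bind(my, lambda y: unit((x, y))))

def prod_rl(mx, my):
    # Right-to-left product: evaluate my first.
    return bind(my, lambda y: bind(mx, lambda x: unit((x, y))))
```

Both products return the same value pair; only the order of the recorded effects differs, which is exactly the distinction between the two evaluation orders.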
M. Huth, 12pp. Report: 1993/29
We study frameworks for the equivalence of abstract state-transition systems represented as posets. A basic notion of equivalence is proposed. A least fixed point operator transforms basic equivalences into strong equivalences (Lagois connections), which makes Lagois connections into a category. In the absence of divergence, the two notions of equivalence coincide. We generalise these notions by adding a logical level to express divergence more precisely; then both generalised notions of equivalence coincide even in the presence of divergence.
S. Kent, 12pp. Report: 1993/30
This paper proposes an extension of modal action logics, which typically make the assumption that an action is atomic, to include durative actions. These logics have been developed to support the formal specification of information systems: we argue, with particular reference to object-oriented systems, that assuming atomicity is too restrictive to express many kinds of temporal constraint. In consequence, we propose that actions be regarded as durative, and encode this by assuming that an action occurs over a sequence of atomic transitions, or interval, rather than a single transition. With this as a prerequisite, the paper goes on to redefine and extend the operators of atomic action logics to fit the durative case.
M. Kwiatkowska, 14pp. Report: 1993/31
We consider a connection between fairness and Π sets of functions made recently by Darondeau, Nolte, Priese and Yoccoz, and extend their work to cater for effective asynchronous transition systems with concurrency structure represented by a Mazurkiewicz independence relation.
M. Kwiatkowska, I. Phillips, 17pp. Report: 1993/32
As part of an effort to give "truly concurrent" semantics to process algebra, we refine the failures with divergence model for CSP by adding conflict information. We show that most of the CSP laws are preserved, the exception being the expansion law and the idempotency of choice.
S. Liebert, 9pp. Report: 1993/33
The language CCSs is obtained by adding an extra operator to CCS to detect stability. One application of this operator is in testing the equivalence of processes. CCSs arose out of a method of testing which records the refusal of some action in a finite time, as well as successes. Refusal tests record processes refusing actions and then continuing with other actions. This paper discusses refusal testing and other applications of CCSs. A complete axiomatisation is developed for testing equivalence on CCSs processes.
I. Mackie, L. Roman, S. Abramsky, 12pp. Report: 1993/34
In this extended abstract we present an internal language for symmetric monoidal closed (autonomous) categories, analogous to the typed lambda calculus as an internal language for cartesian closed categories. The language we propose is the term assignment for the multiplicative fragment of Intuitionistic Linear Logic, which possesses exactly the right structure for an autonomous theory. We prove that this language is an internal language and show, as an application, the coherence theorem of Kelly and Mac Lane, which becomes straightforward to state and prove. We then hint at some further applications of this language; a fuller treatment will be given in the full paper.
J. Muylaert-Filho, G.L. Burn, 13pp. Report: 1993/35
Compiler writers have found that the continuation passing style (cps) translation exposes various code optimisations. We show that the same is true for semantically based program analysis techniques: performing a cps-translation on a program before analysing it may expose more information than can be found by analysing the original program.
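For readers unfamiliar with the translation, here is a direct-style function beside a cps-translated form (an illustrative Python example, not taken from the report). In the cps version every intermediate result and the order of evaluation are named explicitly, which is the structure that both optimisers and analyses can exploit:

```python
def fact(n):
    # Direct style: the pending multiplications are implicit in the
    # call stack.
    return 1 if n == 0 else n * fact(n - 1)

def fact_cps(n, k):
    # CPS: the continuation k reifies "what happens with the result",
    # so every control transfer is an explicit call.
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))
```

Both compute the same function; `fact_cps(n, lambda r: r)` recovers the direct-style answer.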
I. Phillips, 5pp. Report: 1993/36
We extend the work of Baeten, Bergstra & Klop by showing that every recursively enumerable transition system can be expressed in process algebra up to weak bisimulation.
M. Ryan, 11pp. Report: 1993/37
We describe some ideas and results about the following problem: Given a set, a family of "preference relations" on the set, and a "priority" among those preference relations, which elements of the set are best? That is, which elements are most preferred by a consensus of the preference relations which takes account of their relative priority? The problem is posed in a deliberately general way, to capture a wide variety of examples.
Our main result gives sufficient conditions for the existence of 'best' elements for an important instance of the problem: preference relations are pre-orders, the priority among them is a partial order, and the definition of best elements uses a generalisation of lexicographic ordering.
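A naive rendering of the simplest instances may help fix the definitions (illustrative Python with names of our own choosing, not the paper's formulation): best elements of a single pre-order, and a two-relation priority chain resolved lexicographically, where the secondary relation settles ties left by the primary one.

```python
def best(elements, leq):
    # x is best (maximal) under pre-order leq if everything at least as
    # good as x is also at most as good as x.
    return [x for x in elements
            if all(leq(y, x) for y in elements if leq(x, y))]

def lex_best(elements, primary, secondary):
    # Two-relation lexicographic consensus: take the elements best under
    # the primary relation, then the best of those under the secondary.
    return best(best(elements, primary), secondary)
```

The paper's generality goes well beyond this sketch: the preference relations form a family, the priority among them is only a partial order, and the lexicographic ordering is generalised accordingly.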
P. Taylor, 12pp. Report: 1993/38
The behaviour and interaction of finite limits (products, pullbacks and equalisers) and colimits (coproducts and coequalisers) in the category of sets is illustrated in a "hands on" way by giving the interpretation of a simple imperative language in terms of partial functions between sets of states. We show that the interpretation is a least fixed point and satisfies the usual proof rule for loop invariants.
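The least-fixed-point reading of a loop can be demonstrated by Kleene iteration over a finite state set (an illustrative Python sketch of the standard construction, not the paper's categorical account): start from the nowhere-defined partial function and repeatedly unfold the loop body until the approximations stabilise.

```python
def while_sem(b, c, states, max_iter=1000):
    # Denotation of `while b do c` as a partial function on `states`,
    # modelled as a dict (undefined = key absent).  w_0 is empty; each
    # round extends w_k by one more unfolding of the loop.  On a finite
    # state space this stabilises at the least fixed point.
    w = {}
    for _ in range(max_iter):
        w_new = {}
        for s in states:
            if not b(s):
                w_new[s] = s            # guard false: loop exits, state unchanged
            elif c(s) in w:
                w_new[s] = w[c(s)]      # one body step, then the old approximation
        if w_new == w:
            return w
        w = w_new
    return w
```

States on which the loop diverges simply never acquire an entry, which is exactly the partiality the least fixed point captures.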
I. Ulidowski, 12pp. Report: 1993/39
The aim of this paper is to investigate four implementable formats of rules in the setting of Transition System Specifications (TSS), in the presence of silent actions and divergence. For each of the formats, the coarsest equivalence is proposed that refines trace equivalence and is a congruence for the format.
C. Hankin, D. Le Metayer, 19pp. Report: 1993/40
Two major formal frameworks have been proposed for static analysis of functional languages: abstract interpretation and type inference. A lot of work has been done to characterise formally the correctness and the power of abstract interpretation. However the development of algorithms has not kept pace with the theoretical developments. This is now a major barrier that is preventing the inclusion of the most advanced techniques in compilers. The majority of the effort on improving the efficiency of abstract interpretation has concentrated on frontiers-based algorithms or widening techniques. The former still has unacceptable performance for some commonly occurring higher-order programs. The latter is a general approach for accelerating convergence in fixed point computations which, in the finite case, leads to some loss in accuracy.
O. Gotel, A. Finkelstein, 27pp. Report: 1993/41
In this paper, we investigate and discuss the underlying nature of the requirements traceability problem. Our work is based on empirical studies carried out with over a hundred practitioners and an evaluation of current support for requirements traceability. We introduce the distinction between pre-requirements specification traceability and post-requirements specification traceability, to demonstrate why an all-encompassing solution to the problem is unlikely, and to provide a framework through which to understand its multifaceted nature. We report how the majority of the problems attributed to poor requirements traceability are mainly due to the lack of (or inadequate) pre-RS traceability and explain the fundamental need for improvements here. In the remainder of the paper, we present an analysis of the main barriers confronting such improvements in practice, identify relevant areas in which advances have been (or can be) made, and make recommendations for further research.
PROVING THE CORRECTNESS OF COMPILER OPTIMISATIONS BASED ON A GLOBAL PROGRAM ANALYSIS: A STUDY OF STRICTNESS ANALYSIS
G.L. Burn, D. Le Metayer, 31pp. Report: 1993/42
A substantial amount of work has been devoted to the proof of correctness of various program analyses but much less attention has been paid to the correctness of compiler optimisations based on these analyses. In this paper we tackle the problem in the context of strictness analysis for lazy functional languages. We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis.
S. Green, 38pp. Report: 1993/43
A number of recent approaches to requirements engineering suggest that a new paradigm is emerging based upon the use of goals. To gain insight into these approaches, and thus into the emerging paradigm, five of them were surveyed, and four of the five were also applied to a common case-study: "the meeting scheduler problem". Knowledge gained from the surveys and case-studies enabled both the main contributions of each approach to goal-oriented requirements engineering, and the similarities and differences between the approaches, to be identified. In addition the knowledge facilitated the sketching of an alternative approach based upon a synthesis of the "best" features of the approaches surveyed.
S.C. Cheung, J. Kramer, 20pp. Report: 1993/44
Behaviour analysis is a valuable aid for the design and maintenance of well- behaved distributed systems. Dataflow and reachability analysis are two orthogonal but complementary behaviour analysis techniques. Individually, each of these techniques is often inadequate for the analysis of large-scale distributed systems. On the one hand, dataflow analysis algorithms, while tractable, may not be sufficiently accurate to provide meaningful detection of errors. On the other hand, reachability analysis, while providing exhaustive analysis, may be computationally too expensive for complex systems. In this report, we present a method to integrate a dataflow and a reachability analysis technique to provide a flexible and effective means for analysing the design and implementation of distributed systems. Using a case study, we examine the potential of the method at preliminary and final design stages. We also describe some effective measures taken to improve the adequacy of the individual analysis techniques using concepts of action dependency and context constraints. A prototype supporting the method has been built and its performance is given in the report. A real-life example of a distributed track control system is used as a case study in our discussion.
A. Edalat, 12pp. Report: 1993/45
A new algorithm, which we here call the probabilistic power domain algorithm, is introduced; it generates the image corresponding to an Iterated Function System (IFS) with probabilities, a technique used in fractal image compression. It is shown that this algorithm has significant advantages compared to the two previously known algorithms, the random iteration algorithm and the greyscale photocopy algorithm, which are used in fractal image compression with measures. The advantages consist of the following:
- The probabilistic power domain algorithm terminates in finite time on any digitised screen without needing to fix a number of iterations in advance.
- There is a simple complexity analysis for the algorithm.
- The algorithm produces a good quality image up to several times faster than the other two algorithms.
A similar algorithm, which we here call the Plotkin power domain algorithm, is also introduced which generates black and white images coded by an IFS. It shares the advantages of the probabilistic power domain algorithm.
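For reference, the random iteration algorithm mentioned above is the classical "chaos game" (an illustrative Python sketch with parameters of our own choosing, not the power domain algorithms introduced in the report): repeatedly apply a randomly chosen affine map from the IFS and record the orbit, which accumulates on the attractor.

```python
import random

def random_iteration(maps, probs, n=10000, seed=0):
    # Each map is an affine transformation (a, b, c, d, e, f) acting as
    # (x, y) -> (a*x + b*y + e, c*x + d*y + f); maps are drawn with the
    # given relative probabilities.  Returns the orbit as a point list.
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f = rng.choices(maps, weights=probs)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points
```

Note the weakness the report targets: there is no intrinsic stopping criterion, so the number of iterations must be fixed in advance, unlike the probabilistic power domain algorithm, which terminates in finite time on any digitised screen.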
A. Edalat, 24pp. Report: 1993/46
We introduce domain theory into dynamical systems, iterated function systems (fractals) and measure theory in order to provide a constructive framework for these subjects. For a discrete dynamical system given by the action of a continuous map f: X → X on a metric space X, we study the extended dynamical systems (VX, Vf), (UX, Uf) and (LX, Lf), where V, U and L are respectively the Vietoris space, the upper space and the lower space functors. From the point of view of computing the attractors of (X, f), it is natural to study these systems: if (X, f) is chaotic on a compact attractor, then so is (UX, Uf). When X is locally compact, UX will be ω-continuous and can be given an effective structure. We show how strange attractors of iterated function systems (fractals) and Julia sets are obtained effectively as fixed points of deterministic functions on UX or fixed points of non-deterministic functions on CUX, where C is the convex (Plotkin) power domain. We show that the set, M(X), of finite Borel measures on X can be embedded in PUX, where P is the probabilistic power domain. This provides an effective framework for measure theory. We then prove that the invariant measure of a hyperbolic iterated function system with probabilities can be obtained as the unique fixed point of an associated continuous function on PUX.
I. Hodkinson, 25pp. Report: 1993/47
These notes are of lectures given to the temporal logic group in the Department of Computing at Imperial College in 1990. They cover:
- linear and branching time automata
- expressive power
- games, decidability
- non-deterministic v deterministic automata.
A. Bennett, S. Cox, P. Kelly, H. Glaser, P. Hartel, 0pp. Report: 1993/48