1998 - Technical Reports
A. Russo, B. Nuseibeh, J. Kramer, 24pp, Report: 1998/1
This paper describes our experiences in restructuring multi-perspective requirements specifications in order to identify and analyse inconsistencies and manage change. A partial, heterogeneous and reasonably large requirements specification from a NASA project was analysed and decomposed into a structure of "viewpoints", where each viewpoint encapsulates partial requirements of some system components described in the specification. Relationships between viewpoints were identified which included not only the interactions explicitly stated in the requirements but also some implicit and potentially problematic inter-dependencies. The restructuring process and a first informal analysis of the resulting relationships enabled the detection of inconsistencies and the definition of some interesting domain-dependent consistency rules. We believe that this restructuring into viewpoints also facilitated requirements understanding through partitioning, and requirements maintenance and evolution through explicit identification of the inter-viewpoint relationships.
D. Giannakopoulou, J. Magee, J. Kramer, 17pp, Report: 1998/2
The liveness characteristics of a system are intimately related to the notion of fairness. However, the task of explicitly modelling fairness constraints is complicated in practice. To address this issue, we propose to check LTS (Labelled Transition System) models under a strong fairness assumption, which can be relaxed with the use of action priority. The combination of the two provides a novel and practical way of dealing with fairness. The approach is presented in the context of a class of liveness properties termed progress, for which it yields a particularly efficient model-checking algorithm. Progress properties cover a wide range of interesting properties of systems, while presenting a clear intuitive meaning to users.
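The terminal-set view behind such progress checks can be sketched as follows (a toy Python sketch with our own naming, not the paper's algorithm or the LTSA tool's implementation): under a strong fairness assumption an infinite execution eventually settles inside a terminal strongly connected component of the reachable state space, so a progress property on an action holds exactly when that action labels some transition in every reachable terminal component.

```python
def tarjan_sccs(states, trans):
    # Standard Tarjan SCC computation.
    # trans maps state -> list of (action, next_state) pairs.
    index, low, comps = {}, {}, []
    stack, on_stack, counter = [], set(), [0]

    def connect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for _, w in trans.get(v, []):
            if w not in index:
                connect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in states:
        if v not in index:
            connect(v)
    return comps


def progress_holds(action, trans, initial):
    # Restrict attention to states reachable from the initial state.
    seen, todo = {initial}, [initial]
    while todo:
        v = todo.pop()
        for _, w in trans.get(v, []):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    # Progress fails iff some terminal component never performs the action
    # (a deadlock state counts as a terminal component with no actions).
    for comp in tarjan_sccs(seen, trans):
        terminal = all(w in comp for s in comp for _, w in trans.get(s, []))
        occurs = any(a == action for s in comp for a, _ in trans.get(s, []))
        if terminal and not occurs:
            return False
    return True


# Example: a repeatedly tossed coin. Under strong fairness both outcomes
# occur infinitely often, so progress holds for "heads" and for "tails".
coin = {0: [("toss", 1), ("toss", 2)], 1: [("heads", 0)], 2: [("tails", 0)]}
```

For the coin model both `progress_holds("heads", coin, 0)` and `progress_holds("tails", coin, 0)` succeed, whereas a system that can drift into a component where an action never occurs fails the check for that action.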
D. Wragg, S. Drossopoulou, S. Eisenbach, 21pp, Report: 1998/3
The Java language description is unusual in that it defines the effect of interleaving separate compilation and source code modifications. In Java, certain source code modifications, such as adding a method to a class, are defined as binary compatible. The Java language description does not require the re-compilation of programs importing classes or interfaces which were modified in binary compatible ways, and it claims that successful linking and execution of the altered program is guaranteed. In this paper we show that Java binary compatibility does not actually guarantee successful linking and execution. We then suggest a framework in which we formalize the requirement of safe linking and execution without re-compilation and we propose a more modest definition of binary compatibility. We prove for a substantial subset of Java, that our definition guarantees safe linking and execution.
S. Drossopoulou, D. Yang, 24pp, Report: 1998/4
We describe Permco, a permissive approach which allows covariant overriding of instance variables and arbitrary overriding of instance methods.
Subclasses are considered subtypes. Instance method access is treated by considering the types of all methods defined for the class of the receiver and all its possible subclasses. Instance variable access is treated by considering the types of the instance variable in all possible subclasses. Thus, Permco is permissive at the point of redefinition and restrictive at the point of call or access. Closed types describe values which may belong to a class but not to its subclasses, and allow a more precise description of types. Result types may depend on receiver or argument types.
This paper introduces Permco in terms of an example and compares it with related approaches. It then demonstrates its soundness through a subject reduction theorem.
S. Drossopoulou, D. Wragg, S. Eisenbach, 18pp, Report: 1998/5
Separate compilation allows the decomposition of programs into units that may be compiled separately, and linked into an executable. Traditionally, separate compilation was equivalent to the compilation of all units together, and modification and re-compilation of one unit required re-compilation of all importing units.
Java suggests a more flexible framework, in which the linker checks the integrity of the binaries to be combined. Certain source code modifications, such as addition of methods to classes, are defined as binary compatible. The language description guarantees that binaries of types (i.e. classes or interfaces) modified in binary compatible ways may be re-compiled and linked with the binaries of types that imported and were compiled using the earlier versions of the modified types.
However, this is not always the case: some of the changes considered by Java as binary compatible do not guarantee successful linking and execution. In this paper we study the concepts around binary compatibility. We suggest a formalization of the requirement of safe linking and execution without re-compilation, investigate alternatives, demonstrate several of its properties, and propose a more restricted definition of binary compatible changes. Finally, we prove for a substantial subset of Java, that this restricted definition guarantees error-free linking and execution.
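The failure mode can be illustrated with a deliberately tiny model (Python, our own simplification, not Java's actual linker or the paper's formalization): each binary records the methods its type declares, the methods it actually implements, and the symbolic references it makes. Adding a method to an interface counts as binary compatible in the Java language description, and the link-time check below still succeeds after such a change, yet invoking the new method on an old implementor fails at run time (Java raises AbstractMethodError in the analogous situation).

```python
def link_ok(binaries):
    # Link-time check: every symbolic reference must resolve to a
    # declared method in the referenced type -- nothing more is checked.
    return all(m in binaries[t]["declares"]
               for b in binaries.values()
               for t, m in b["refs"])


def invoke(binaries, cls, method):
    # Run-time dispatch: the method must actually be implemented.
    return "ok" if method in binaries[cls]["implements"] else "AbstractMethodError"


# Old world: interface I declares m, class C implements it, Main calls I.m.
old = {
    "I":    {"declares": {"m"},  "implements": set(),  "refs": []},
    "C":    {"declares": set(),  "implements": {"m"},  "refs": []},
    "Main": {"declares": set(),  "implements": set(),  "refs": [("I", "m")]},
}

# A "binary compatible" change: method n is added to I, and Main is
# re-compiled to call it, while the old binary of C is kept unchanged.
new = {
    "I":    {"declares": {"m", "n"}, "implements": set(), "refs": []},
    "C":    old["C"],
    "Main": {"declares": set(), "implements": set(),
             "refs": [("I", "m"), ("I", "n")]},
}
```

In this model `link_ok(new)` still returns `True`, but `invoke(new, "C", "n")` yields the run-time error: linking succeeds while execution fails, which is the gap the report's restricted definition of binary compatibility is designed to close.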
A. Field, P. Kelly, 0pp, Report: 1998/6
O. Rodrigues, A. Russo, 17pp, Report: 1998/7
In this report we present a translation of Belnap's four-valued logic into classical first-order logic. Soundness and completeness of the translation with respect to Belnap's notion of entailment are proved. Example derivations are also given. These results provide the basis for developing a belief revision approach for Belnap's logic in terms of standard AGM belief revision operators for classical logic.
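The flavour of such a translation can be sketched with the standard "signed" encoding, in which a four-valued atom p is split into two classical atoms, "p is told true" and "p is told false" (this is the well-known technique; whether it coincides with the report's exact translation is an assumption on our part).

```python
# Belnap's four values encoded as (told-true, told-false) pairs of
# classical booleans: None (no information), True, False, Both (conflict).
NONE, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)


def neg(v):
    # Negation swaps the positive and negative classical components.
    t, f = v
    return (f, t)


def conj(v, w):
    # "Told true" needs both conjuncts told true; "told false" needs either.
    return (v[0] & w[0], v[1] | w[1])


def disj(v, w):
    # Dual of conjunction.
    return (v[0] | w[0], v[1] & w[1])


def leq_truth(v, w):
    # The truth ordering of Belnap's logical lattice: more told-true,
    # less told-false. Entailment is preservation along this ordering.
    return v[0] <= w[0] and v[1] >= w[1]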
M.M. Lehman, J.F. Ramil, 7pp, Report: 1998/8
However completely specified it may be, COTS software integrated into a real-world system becomes of type E, even though, were it fully and absolutely specified, it would satisfy the definition of an S-type system. Thus, the laws of software evolution that apply to E-type systems are also relevant in the COTS context. This paper examines the wider implications of this fact and, in particular, that such systems must undergo continuing evolution. Managerial implications of the laws of software evolution in the context of COTS are also briefly highlighted.
A PRIMAL-DUAL INTERIOR POINT ALGORITHM THAT USES AN EXACT AND DIFFERENTIABLE MERIT FUNCTION TO SOLVE GENERAL NLP PROBLEMS
I. Akrotirianakis, B. Rustem, 27pp, Report: 1998/9
A primal-dual interior point algorithm for solving general nonlinear programming problems is presented. The algorithm solves the perturbed optimality conditions by applying a quasi-Newton method, where the Hessian of the Lagrangian is replaced by a positive definite approximation. An approximation of Fletcher's exact and differentiable merit function together with line-search procedures is incorporated into the algorithm. The line-search procedures are used to modify the length of the step so that the value of the merit function is always reduced. Different step-sizes are used for the primal and dual variables. The search directions are guaranteed to be descent directions for the merit function, which is thus used to guide the algorithm to an optimum solution of the constrained optimisation problem. The monotonic decrease of the merit function at each iteration ensures the global convergence of the algorithm. Finally, preliminary numerical results demonstrate the efficient performance of the algorithm for a variety of problems.
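The basic mechanics of following perturbed optimality conditions can be sketched on a one-variable toy problem (our own illustration, far simpler than the paper's method: the objective is quadratic so no quasi-Newton approximation is needed, and a fixed fraction-to-boundary rule stands in for the merit-function line searches, though the primal and dual step sizes are kept separate as in the paper).

```python
def solve(mu=1.0, x=1.0, z=1.0):
    # Toy problem: minimise (x - 2)^2 subject to x >= 0, multiplier z > 0.
    # Perturbed optimality conditions:  2(x - 2) - z = 0  and  x*z = mu.
    for _ in range(60):
        r1 = 2.0 * (x - 2.0) - z
        r2 = x * z - mu
        # Newton step on the perturbed system; Jacobian is [[2, -1], [z, x]].
        det = 2.0 * x + z
        dx = (-x * r1 - r2) / det
        dz = (z * r1 - 2.0 * r2) / det
        # Separate primal and dual step sizes: a fraction-to-boundary rule
        # keeps x and z strictly positive (a stand-in for the paper's
        # merit-function line searches).
        ax = 1.0 if dx >= 0 else min(1.0, -0.995 * x / dx)
        az = 1.0 if dz >= 0 else min(1.0, -0.995 * z / dz)
        x += ax * dx
        z += az * dz
        mu *= 0.2  # drive the perturbation parameter to zero
    return x, z
```

As mu is driven to zero the iterates converge to the unconstrained minimiser x = 2 with multiplier z = 0, since the inequality constraint is inactive there.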
D. Chalmers, M. Sloman, 38pp, Report: 1998/10
The specification and management of Quality of Service (QoS) is important in networks and distributed computing systems, particularly to support multimedia applications. The advent of portable lap-top computers, palmtops and Personal Digital Assistants with integrated communication capabilities facilitates mobile computing. This paper is a survey of QoS concepts and techniques for mobile distributed computing environments. The QoS attributes typically specified and negotiated for general communication systems are described as well as some QoS models. A brief overview is given of some practical systems described in the literature. The design issues relating to both mobile and nomadic computing are explained and then the specific QoS issues related to mobile and nomadic systems are discussed. The conclusion summarises the important issues relating to supporting QoS for mobile systems.
M.M. Lehman, J.F. Ramil, P.D. Wernick, 10pp, Report: 1998/11
The FEAST/1 project is investigating the hypothesis that the software process is a multi-level feedback system. Using a number of complementary approaches to model, analyse and interpret metric and other data records of the evolution of systems from various application areas, the project has made significant advances in detecting the effect of feedback and related dynamics in their evolution. Results obtained to date, reinforced by theoretical reasoning, support the view that software process improvement must focus on the global process including its feedback mechanisms. In addition to technical aspects of the process, organisation, management, marketing, user support and other factors must also be considered. This paper discusses the conceptual framework within which this question is raised, presents preliminary evidence to support tentative conclusions and indicates how the issue might be further investigated.
I. Maros, G. Mitra, 19pp, Report: 1998/12
We undertake a computational analysis of the algorithmic components of the sparse simplex (SSX) method and consider the performance of alternative SSX solution strategies. We then combine these alternative strategies within an asynchronous control structure of a parallel algorithm, implemented on a distributed memory multiprocessor machine. Our experimental results not only show (limited) speedup and robust performance but also give new insight into the internal operation of the SSX and its adaptation to a parallel platform.
S. Drossopoulou, S. Eisenbach, D. Wragg, 6pp, Report: 1998/13
Java binary compatibility prescribes conditions under which modification and re-compilation of classes does not necessitate re-compilation of further classes importing the modified classes. Binary compatibility is a novel concept for language design.
We argue that the description of the term binary compatibility in the Java language specification allows for many possible interpretations. We discuss the various interpretations and their ramifications, and suggest the interpretation we consider best.
A. S. d'Avila Garcez, K. Broda, D. Gabbay, 0pp, Report: 1998/14
In this report, we investigate the problem of symbolic knowledge extraction from trained neural networks, and present a new extraction method. Although neural networks have shown very good performance in terms of learnability, generalizability and speed in many application domains, one of their main drawbacks is their inability to explain the underlying reasoning that justifies a given answer. As a result, their use in many application areas, for instance safety-critical domains, has been limited. The so-called "explanation capability" of neural networks can be achieved by extracting symbolic knowledge from them, using "rule extraction" methods.
We start by discussing some of the main problems of knowledge extraction methods. In an attempt to ameliorate these problems, we identify, in the case of neural networks, a partial ordering on the input vector space. A number of pruning rules and simplification rules that interact with this ordering are defined. These rules are used in our extraction algorithm in order to help reduce the input vector search space during a pedagogical knowledge extraction from trained networks. They are also very useful in helping to reduce the number of rules extracted, which provides clarity and readability to the rule set. We show that, in the case of regular networks, the extraction algorithm is sound and complete.
We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We observe that non-regular networks contain regularities in their subnetworks. As a result, the extraction method for regular networks can still be applied, but now in a decomposable fashion. The problem, however, is how to combine the sets of rules extracted from each subnetwork into the final rule set. We propose a solution to this problem that preserves the soundness of the extraction algorithm, although completeness must be dropped.
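The pedagogical style of extraction, together with ordering-based pruning, can be sketched for the simplest case (a toy in Python: we assume a network that is monotone in its binary inputs, which is only a rough stand-in for the report's notion of regularity, and the single pruning rule shown is much cruder than the report's).

```python
from itertools import product


def extract_rules(net, n):
    """Pedagogical extraction for a black-box `net` over n binary inputs,
    assumed monotone: query input vectors in order of increasing number
    of active inputs, prune any vector that lies above an already
    accepted one in the componentwise ordering, and keep the minimal
    accepted vectors as rules (each rule reads as a conjunction of the
    inputs set to 1)."""
    rules = []
    for k in range(n + 1):
        for v in product((0, 1), repeat=n):
            if sum(v) != k:
                continue
            # Pruning: v is subsumed by an extracted rule, so by
            # monotonicity the network's answer is already known.
            if any(all(vi >= ri for vi, ri in zip(v, r)) for r in rules):
                continue
            if net(v):
                rules.append(v)
    return rules
```

For a majority-of-three "network" (`lambda v: sum(v) >= 2`) this yields exactly the three minimal rules x1∧x2, x1∧x3 and x2∧x3; under the monotonicity assumption the extracted rule set computes the same function as the network, the toy analogue of the soundness and completeness result for regular networks.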