Department of Computing
Research Report 2000/2002

Departmental Technical Reports

2001


The following ISSNs (International Standard Serial Numbers) have been assigned to our series by the British Library:

  Departmental Technical Reports (Print): ISSN 1469-4166
  Departmental Technical Reports (Online): ISSN 1469-4174



2001/1 POLYMORPHIC INTERSECTION TYPE ASSIGNMENT FOR REWRITE SYSTEMS WITH ABSTRACTION AND β-RULE
Steffen van Bakel, Franco Barbanera, Maribel Fernández, 40pp
2001/2 A GENERALIZED DUAL PHASE-2 SIMPLEX ALGORITHM
Istvan Maros, 26pp
2001/3 A GENERAL PRICING SCHEME FOR THE SIMPLEX METHOD
Istvan Maros, 17pp
2001/4 DETECTING IMPLIED SCENARIOS IN MSCs USING LTSA
Sebastian Uchitel, Jeff Kramer and Jeff Magee, 11pp
2001/5 THE CONVEX HULL IS COMPUTABLE!
Abbas Edalat, Elham Kashefi, André Lieutier, 16pp
2001/6 "SMALL SAMPLE SIZE": A METHODOLOGICAL PROBLEM IN BAYES PLUG-IN CLASSIFIER FOR IMAGE RECOGNITION
Carlos E. Thomaz and Duncan F. Gillies, 25pp
2001/7 AN ABDUCTIVE APPROACH FOR ANALYSING EVENT-BASED REQUIREMENTS SPECIFICATIONS
Alessandra Russo, Rob Miller, Bashar Nuseibeh, Jeff Kramer, 37pp
2001/8 A PARADIGM FOR THE BEHAVIOURAL MODELLING OF SOFTWARE PROCESSES USING SYSTEM DYNAMICS
M. M. Lehman, J. F. Ramil, G. Kahen, 11pp
2001/9 MACROSCOPIC STRUCTURE AND PHYSIOLOGY OF THE NORMAL AND DISEASED HEART
Sharmeen Masood, Guang-Zhong Yang, 32pp

2001/1

POLYMORPHIC INTERSECTION TYPE ASSIGNMENT FOR REWRITE SYSTEMS WITH ABSTRACTION AND β-RULE
Steffen van Bakel, Franco Barbanera, Maribel Fernández, 40pp

We define two type assignment systems for first-order rewriting extended with application, lambda-abstraction, and β-reduction, using a combination of intersection types and second-order polymorphic types. The first system is the general one, for which we prove subject reduction and strong normalisation of typeable terms. The second is a decidable subsystem of the first, obtained by restricting to rank 2 (intersection and quantified) types. For this system we define, using an extended notion of unification, a notion of principal typing which is more general than ML’s principal type property, since the types of the free variables of terms are also inferred.
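
For orientation, a classic example of the extra power of intersection types (standard background, not an excerpt from the report) is self-application, which has no ML type but is typeable with an intersection at rank 2:

    \lambda x.\, x\, x \;:\; (\sigma \cap (\sigma \rightarrow \tau)) \rightarrow \tau

Here the two occurrences of x are used at the two members of the intersection, σ → τ and σ, something a single ML type cannot express.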

2001/2

A GENERALIZED DUAL PHASE-2 SIMPLEX ALGORITHM
Istvan Maros, 26pp

Real-life linear programming (LP) problems include all types of variables and constraints. Current versions of the primal simplex method are well prepared to handle such problems efficiently. At the same time, the usefulness of the dual simplex method was thought to be limited to the standard problem, though it could be the ideal algorithm in many other cases. For instance, most solution methods for Mixed Integer Programming (MIP) problems require the repeated solution of closely related continuous LP problems. It is typical that the optimal basis of a node problem is dual feasible for its child problems. In such a situation the dual simplex algorithm (DSA) is undoubtedly the best solution method. The LP relaxation of MIP problems contains many bounded variables and, realistically, other types of variables may also be present. This necessitates an implementation of the DSA that can handle variables of arbitrary type. The paper presents an algorithm called BSD for the efficient handling of all types of variables. The distinguishing features of this method are: (i) in one iteration it can make progress equivalent to many traditional dual iterations; (ii) using proper data structures it can be implemented very efficiently, so that an iteration requires hardly more work than the traditional pivot method; (iii) its effectiveness increases as more upper-bounded variables are present; (iv) it has inherently better numerical stability because it creates large flexibility in finding a pivot element; (v) it excels at coping with degeneracy, as it can bypass dual degenerate vertices more easily than the traditional pivot procedures. The power of the method is demonstrated through examples.
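
As a rough Python illustration of feature (i) — a minimal sketch of the general idea with invented names, not the paper's BSD algorithm — a generalized dual ratio test can walk through the breakpoints of the piecewise-linear dual objective, flipping each bounded variable it passes, and stop only when a further step would no longer improve the dual objective:

    def generalized_dual_ratio_test(breakpoints, slope):
        """breakpoints: (ratio, slope_decrease, variable) triples, one per
        eligible nonbasic variable; slope: initial positive rate of dual
        improvement. Returns the entering variable and the flipped ones."""
        flipped = []
        for ratio, slope_decrease, var in sorted(breakpoints, key=lambda t: t[0]):
            if slope - slope_decrease <= 0:
                return var, flipped      # improvement would stop: var enters
            slope -= slope_decrease      # still improving past this breakpoint:
            flipped.append(var)          # flip var between its bounds, carry on
        raise ValueError("dual unbounded; the primal problem is infeasible")

Each flipped variable corresponds to one iteration of a traditional dual method, so a single such pass can make the progress of many classical pivots.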

2001/3

A GENERAL PRICING SCHEME FOR THE SIMPLEX METHOD
Istvan Maros, 17pp

In the simplex method for linear programming, pricing refers to the step of checking the reduced costs of nonbasic variables. If they are all of the ‘right sign’, the current basis (and solution) is optimal. If not, this procedure selects a candidate vector that looks profitable for inclusion in the basis. While theoretically the choice of any profitable vector will lead to a finite termination (provided degeneracy is handled properly), the number of iterations until termination depends very heavily on the actual choice, which is determined by the selection rule applied. Pricing has long been an area for heuristics that help make a better selection. As a result, many different and sophisticated pricing strategies have been developed, implemented and tested. So far, none of them is known to dominate all others in all cases. Therefore, advanced simplex solvers need to be equipped with many strategies, so that the most suitable one can be activated for each individual problem instance.

In this paper we present a general pricing scheme. It creates great flexibility in pricing and is controlled by three parameters. With different settings of these parameters, many of the known strategies can be reproduced as special cases. At the same time, the framework makes it possible to define new strategies or variants of them. The scheme is equally applicable to general and network simplex algorithms.
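
A minimal Python sketch of what such a parameterized scheme might look like, with hypothetical parameters (the report's actual three parameters may differ): scan controls how many clusters of columns are examined per call, cand how many profitable candidates are returned, and start where the scan resumes.

    def price(reduced_costs, clusters, start, scan, cand, tol=1e-9):
        """Scan `scan` clusters of column indices beginning at cluster `start`
        (wrapping around) and return the `cand` most profitable candidates,
        plus the cluster at which the next call should resume."""
        found = []
        n = len(clusters)
        for k in range(scan):
            for j in clusters[(start + k) % n]:
                if reduced_costs[j] < -tol:        # profitable (minimization)
                    found.append(j)
        found.sort(key=lambda j: reduced_costs[j]) # most negative first
        return found[:cand], (start + scan) % n

With scan equal to the number of clusters and cand = 1 this degenerates to Dantzig-style full pricing; a small scan gives partial pricing, and cand > 1 gives multiple pricing.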

2001/4

DETECTING IMPLIED SCENARIOS IN MSCs USING LTSA
Sebastian Uchitel, Jeff Kramer and Jeff Magee, 11pp

Scenario-based specifications such as Message Sequence Charts (MSCs) are becoming increasingly popular as part of a requirements specification. Scenarios describe how system components, the environment and users, working concurrently, interact in order to provide system-level functionality. Each scenario is a partial story which, when combined with other scenarios, should yield a complete system description. However, it is not always possible to build a set of components that provides exactly the same system behaviour as that described by a set of scenarios. Implied scenarios may appear as a result of unexpected component interaction.

In this paper, we present an algorithm that builds a behaviour model that describes the closest possible implementation for a specification based on basic and high-level MSCs. We also present a technique for detecting and providing feedback on the existence of implied scenarios. We have integrated these procedures into the Labelled Transition System Analyser, which allows for model checking and animation of the behaviour model.
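
The following toy Python sketch (illustrative only, not the LTSA implementation) shows why implied scenarios arise: the parallel composition of per-component behaviour can admit traces that no individual scenario specified.

    def compose(a, b, init_a, init_b, alpha_a, alpha_b):
        """Parallel composition of two LTSs, each given as a dict
        state -> {action: next_state}, synchronising on shared actions and
        interleaving the rest. Returns the composed transition relation."""
        shared = alpha_a & alpha_b
        init = (init_a, init_b)
        trans, todo, seen = {}, [init], {init}
        while todo:
            sa, sb = todo.pop()
            moves = {}
            for act, na in a.get(sa, {}).items():
                if act not in shared:
                    moves[act] = (na, sb)            # component a moves alone
                elif act in b.get(sb, {}):
                    moves[act] = (na, b[sb][act])    # synchronised move
            for act, nb in b.get(sb, {}).items():
                if act not in shared:
                    moves[act] = (sa, nb)            # component b moves alone
            trans[(sa, sb)] = moves
            for nxt in moves.values():
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return trans

Any trace of the composed system that is not a trace of the scenario specification is an implied scenario.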

2001/5

THE CONVEX HULL IS COMPUTABLE!
Abbas Edalat, Elham Kashefi, André Lieutier, 16pp

Despite a huge number of algorithms and articles published on robustness issues relating to the convex hull of a finite number of points in n-dimensional Euclidean space, the question of the computability of the convex hull, important as it is, has never been addressed in the literature. In this paper, we use the domain-theoretic computable solid modelling framework to show that the convex hull of a finite number of computable points in n-dimensional Euclidean space is indeed computable.
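
A toy two-dimensional Python illustration of the flavour of the framework (not the paper's domain-theoretic construction): if each input point is only known to lie inside a small box, the hull of all box corners is a computable outer approximation that converges to the true hull as the boxes shrink.

    def cross(o, a, b):
        """Twice the signed area of triangle (o, a, b)."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def hull(points):
        """Andrew's monotone-chain convex hull of 2-D points."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def chain(seq):
            out = []
            for p in seq:
                while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                    out.pop()
                out.append(p)
            return out
        lower = chain(pts)
        upper = chain(reversed(pts))
        return lower[:-1] + upper[:-1]       # joined without duplicating ends

    def outer_hull(boxes):
        """boxes: ((x_lo, x_hi), (y_lo, y_hi)) enclosures, one per point.
        The hull of all corners encloses every possible true hull."""
        corners = [(x, y)
                   for (xl, xh), (yl, yh) in boxes
                   for x in (xl, xh) for y in (yl, yh)]
        return hull(corners)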

2001/6

"SMALL SAMPLE SIZE": A METHODOLOGICAL PROBLEM IN BAYES PLUG-IN CLASSIFIER FOR IMAGE RECOGNITION
Carlos E. Thomaz and Duncan F. Gillies, 25pp

New technologies in the form of improved instrumentation have made it possible to take detailed measurements of recognition patterns. This increase in the number of features or parameters for each pattern of interest does not necessarily generate better classification performance. In fact, in problems where the number of training samples is smaller than the number of parameters, i.e. “small sample size” problems, not all parameters can be estimated, and traditional classifiers often used to analyse lower-dimensional data deteriorate. The Bayes plug-in classifier has been successfully applied to discriminate high-dimensional data. This classifier is based on similarity measures that involve the inverse of the sample group covariance matrices. However, these matrices are singular in “small sample size” problems. Thus, several other methods of covariance estimation have been proposed, in which the sample group covariance estimate is replaced by covariance matrices of various forms.

In this report, some of these approaches are reviewed and a new covariance estimator is proposed. The new estimator does not require an optimisation procedure; instead, it uses an eigenvector-eigenvalue ordering process to select information from the projected sample group covariance matrices whenever possible, and from the pooled covariance otherwise. The effectiveness of the method is demonstrated by experimental results.
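
One plausible, hypothetical reading of such an ordering process, sketched in Python with NumPy (the report's actual selection rule may differ): project the group and pooled covariances onto a shared eigenvector basis and keep, in each direction, whichever carries more variance.

    import numpy as np

    def blended_covariance(S_group, S_pooled):
        """Hypothetical reading of the ordering process: in the eigenbasis
        of the mixture, keep the larger of the group/pooled variances."""
        _, vecs = np.linalg.eigh(S_group + S_pooled)   # shared eigenvectors
        var_group = np.einsum("ij,jk,ki->i", vecs.T, S_group, vecs)
        var_pooled = np.einsum("ij,jk,ki->i", vecs.T, S_pooled, vecs)
        vals = np.maximum(var_group, var_pooled)       # per-direction choice
        return vecs @ np.diag(vals) @ vecs.T

Because every selected eigenvalue is at least the corresponding pooled variance, the result stays nonsingular whenever the pooled matrix is, which is the property needed by the Bayes plug-in classifier.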

2001/7

AN ABDUCTIVE APPROACH FOR ANALYSING EVENT-BASED REQUIREMENTS SPECIFICATIONS
Alessandra Russo, Rob Miller, Bashar Nuseibeh, Jeff Kramer, 37pp

We present a logic-based approach for analysing event-based requirements specifications given in terms of a system's reaction to events and safety properties. The approach uses an event-based logic, called the Event Calculus, to represent such specifications declaratively. Building on this formalism, the approach uses an abductive reasoning mechanism for analysing safety properties. Given a system description and a safety property, the abductive mechanism is able to identify a complete set of counterexamples (if any exist) to the property, in terms of symbolic "current" states and associated event-based transitions. If it fails to find such an answer, this establishes the validity of the safety property with respect to the system description. The approach is supported by a decision procedure that (i) always terminates and (ii) facilitates the analysis of such properties even in the presence of incomplete domain knowledge, where initial conditions are not completely specified. A case study of an automobile cruise control system specified in SCR is used to illustrate our approach. The technique described is implemented using existing tools for abductive logic programming.
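
A toy Python stand-in for this kind of analysis, substituting exhaustive search for abduction and inventing a tiny cruise-control-flavoured domain (none of it taken from the report):

    from itertools import product

    EVENTS = ("engine_on", "engine_off", "set_cruise")

    def step(state, event):
        """Effect axioms: which fluents each event initiates or terminates.
        engine_off deliberately fails to terminate 'cruising' -- the bug the
        search below is meant to expose."""
        s = set(state)
        if event == "engine_on":
            s.add("engine")
        elif event == "engine_off":
            s.discard("engine")
        elif event == "set_cruise" and "engine" in s:
            s.add("cruising")
        return frozenset(s)

    def safe(state):
        """Safety property: never cruising with the engine off."""
        return not ("cruising" in state and "engine" not in state)

    def counterexamples(initial_states, k):
        """All (start state, trace) pairs of length <= k ending in violation."""
        bad = []
        for s0 in initial_states:
            for n in range(1, k + 1):
                for trace in product(EVENTS, repeat=n):
                    s = s0
                    for e in trace:
                        s = step(s, e)
                    if not safe(s):
                        bad.append((s0, trace))
        return bad

For example, counterexamples([frozenset()], 3) reports traces such as ('engine_on', 'set_cruise', 'engine_off'); an empty result over all candidate start states would correspond to establishing the property.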

2001/8

A PARADIGM FOR THE BEHAVIOURAL MODELLING OF SOFTWARE PROCESSES USING SYSTEM DYNAMICS
M. M. Lehman, J. F. Ramil, G. Kahen, 11pp

Industrial software evolution processes are, in general, complex feedback systems. Recognition of this opens up opportunities to achieve sustained process improvement. The system dynamics approach was used in the FEAST projects to model the behaviour of key process and product attributes, yielding the foundations of a paradigm for building dynamic models of software evolution processes. It is based on top-down model refinement of a high-level abstracted view of the process. The resulting models are relatively simple compared to others in the literature. Their simplicity, however, facilitates understanding, progressive refinement, subsequent validation, and industrial take-up.

The paper illustrates the paradigm with a model of a process that highlights a particular issue. The results of executing this model indicate how one may analyse and eventually optimise the division of resources between progressive activity, such as functional enhancement, and anti-regressive activity, such as complexity control. The approach and model are based on empirical observations and interpretation of the evolution phenomena. When appropriately refined and validated to reflect a particular process, such a model can more generally be used for release planning and process improvement. The approach, together with its models, provides the basis for management technology and tools to achieve sustainable long-term evolution of complex software and systems.
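
An illustrative toy model in the spirit described, sketched in Python (not the FEAST model; all rates are invented): a fixed workforce is split between progressive work, which adds features but also complexity, and anti-regressive work, which reduces complexity; complexity in turn damps productivity.

    def simulate(split, steps=100, dt=1.0):
        """split: fraction of effort on anti-regressive work. Returns
        (features, complexity) after `steps` Euler integration steps."""
        features, complexity, effort = 0.0, 0.0, 10.0
        for _ in range(steps):
            productivity = 1.0 / (1.0 + complexity)    # complexity drag
            progressive = (1.0 - split) * effort
            anti_regressive = split * effort
            features += dt * progressive * productivity
            complexity += dt * (0.05 * progressive - 0.2 * anti_regressive)
            complexity = max(complexity, 0.0)
        return features, complexity

Sweeping split over [0, 1] exposes the trade-off the report analyses: too little anti-regressive work lets complexity throttle output, while too much starves functional enhancement.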

2001/9

MACROSCOPIC STRUCTURE AND PHYSIOLOGY OF THE NORMAL AND DISEASED HEART
Sharmeen Masood, Guang-Zhong Yang, 32pp

This paper outlines the macroscopic anatomy and physiology of the heart, linking the micro- and macroscopic structure of cardiac muscle fibre to its function during contraction and dilatation. The properties of cardiac muscle cells and the process of contraction at a cellular level are described. The macroscopic structure of the myocardium is further explained as one muscle band wound into a double twist, which helps to elucidate the muscle architecture and structure of the ventricles. Ventricular dynamics are also described as the twisting and untwisting of this muscle band to produce shortening and lengthening. Myocardial perfusion and the causes of disease are discussed. Coronary artery disease and its effect on contractility are then described, and ways of measuring contractility are introduced.
