2000 - Technical Reports
PONDER: A LANGUAGE FOR SPECIFYING SECURITY AND MANAGEMENT POLICIES FOR DISTRIBUTED SYSTEMS - THE LANGUAGE SPECIFICATION
N. Damianou, N. Dulay, E. Lupu, M. Sloman, 39pp
Report: 2000/1
This document defines a declarative, object-oriented language for specifying policies for the security and management of distributed systems. The language includes constructs for specifying the following basic policy types: authorisation policies that define permitted actions; event-triggered obligation policies that define actions to be performed by manager agents; refrain policies that define actions that subjects must refrain from performing; and delegation policies that define what authorisations can be delegated and to whom.
Filtered actions extend authorisations and allow the transformation of input or output parameters to be defined. Constraints specify limitations on the applicability of policies while meta-policies define semantic constraints on permitted policies. Policy groups define a scope for related policies to which a common set of constraints can apply. Roles define a group of policies relating to positions within an organisation. Relationships define a group of policies pertaining to the interactions between a set of roles. Management structures define a configuration of role instances as well as the relationships between them.
This document defines the grammar for the various types of policies in EBNF and provides simple examples of the constructs.
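As a toy illustration of how two of the basic policy types interact (not the Ponder grammar itself, and with simplified names and structure assumed for the sketch), a positive authorisation grants an action while a negative authorisation or refrain policy withholds it:

```python
from dataclasses import dataclass

# Illustrative sketch only: Ponder's real policies have subjects,
# targets, constraints, and events; this models just the permit/deny
# interplay between authorisation and refrain policies.

@dataclass
class Authorisation:
    positive: bool      # auth+ permits, auth- forbids
    subject: str
    target: str
    actions: set

@dataclass
class Refrain:
    subject: str
    actions: set        # actions the subject must refrain from performing

def permitted(policies, subject, target, action):
    """Permitted iff some positive authorisation grants the action and
    no negative authorisation or refrain policy withholds it."""
    granted = any(p.positive and p.subject == subject and
                  p.target == target and action in p.actions
                  for p in policies if isinstance(p, Authorisation))
    denied = any((isinstance(p, Authorisation) and not p.positive and
                  p.subject == subject and p.target == target and
                  action in p.actions) or
                 (isinstance(p, Refrain) and p.subject == subject and
                  action in p.actions)
                 for p in policies)
    return granted and not denied

policies = [
    Authorisation(True, "/NetworkAdmin", "/switches", {"enable", "disable"}),
    Refrain("/NetworkAdmin", {"disable"}),
]
print(permitted(policies, "/NetworkAdmin", "/switches", "enable"))   # True
print(permitted(policies, "/NetworkAdmin", "/switches", "disable"))  # False
```

The refrain policy overrides the authorisation for "disable", reflecting the language's separation of what is permitted from what subjects must not do.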
M M Lehman, 4pp
Report: 2000/2
This preliminary introduction, extracted from work in progress, is intended to illustrate an approach currently under development. If successfully and convincingly completed, the result should make a significant contribution, providing software engineering technology with the theoretical foundations and framework needed to support further major process and technology improvement. Expressing the theory in an appropriate formalism will represent a still further advance. The present development is a first, essential, step to achieve this outcome.
J F Ramil, 5pp
Report: 2000/3
The achievement of accurate software cost estimation based on only a few factors is a long-standing goal in software engineering. Work in this area is exemplified by a number of algorithmic approaches that have been proposed over the years, of which COCOMO is one of the most frequently quoted. Evidence emerging from observational and simulation studies suggests that feedback mechanisms play an important role in determining software process behaviour, its dynamics and performance. Thus, the presence of feedback mechanisms, and in particular of feedback control, may have a significant influence on software project cost and interval performance, yet none of the current algorithmic cost estimation approaches appears, at least explicitly, to account for such influence. Why, in spite of this, do algorithmic approaches provide satisfactory estimates? Why do they work? This paper discusses some possible answers, which at present must be taken only as hypotheses, and provides suggestions for further investigation of the problem.
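For concreteness, the basic COCOMO model mentioned above estimates effort in person-months from size alone, using published mode constants (Boehm, 1981); a minimal sketch:

```python
# Basic COCOMO: effort = a * KLOC^b person-months, with the published
# constants for the three development modes. Size is the only input;
# feedback effects are not represented anywhere in the model.
COCOMO_MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b

# A 32 KLOC organic-mode project is estimated at roughly 91 person-months.
print(round(cocomo_effort(32, "organic"), 1))
```

That such size-only formulas can give usable estimates despite ignoring feedback is precisely the puzzle the paper examines.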
G Kahen, M M Lehman, J F Ramil, 13pp
Report: 2000/4
This paper argues that quantitative process models must be considered essential to support sustained improvement of E-type software evolution processes and summarises some of the experiences gained in the FEAST projects to date. Modelling guidelines are provided.
MODELLING AND ANALYSIS OF THE BOUNDED-RETRANSMISSION PROTOCOL: EXPERIENCE WITH DISCRETE TIME IN THE LTSA
D Giannakopoulou, 24pp
Report: 2000/5
The Bounded Retransmission Protocol is an industrial protocol designed for the transfer of large data files over unreliable communication lines. The protocol relies on specific assumptions on the timed behaviour of its components. This paper describes our experience with modelling and analysing the Bounded Retransmission Protocol using the LTSA. The LTSA uses labelled transition systems to specify behaviour, and compositional reachability analysis to incrementally generate, minimise, and analyse a system, based on its software architecture. The tool was not originally designed to deal with real-time applications. However, by modelling time as a discrete entity, the LTSA does not need to be extended in order to handle such systems. We discuss how the features of the tool can be exploited to model and analyse behaviours that involve time.
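The discrete-time idea can be illustrated with a toy transition system (not the paper's LTSA models): time passing is just another action, "tick", so an untimed reachability analysis can check timeout behaviour by counting ticks.

```python
# Toy sketch: a sender waits for an acknowledgement, counting ticks,
# and times out after MAX_WAIT ticks, triggering a retransmission.
# States are (phase, ticks_waited) pairs.
MAX_WAIT = 3

def step(state):
    phase, waited = state
    if phase == "waiting":
        if waited < MAX_WAIT:
            yield ("tick", ("waiting", waited + 1))
            yield ("ack", ("done", 0))
        else:
            yield ("timeout", ("retransmit", 0))
    elif phase == "retransmit":
        yield ("send", ("waiting", 0))

def reachable(start):
    """Plain untimed reachability over the labelled transition system."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for _, t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable(("waiting", 0))
# No reachable waiting state exceeds the tick bound.
print(all(w <= MAX_WAIT for p, w in states if p == "waiting"))  # True
```

Because ticks are explicit actions, the timing assumption becomes an ordinary safety property of the untimed model.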
Ioannis Akrotirianakis, Istvan Maros, Berc Rustem, 20pp
Report: 2000/6
A branch and cut algorithm is developed for solving 0-1 MINLP problems. The algorithm integrates Branch and Bound, Outer Approximation and Gomory Cutting Planes. Only the initial Mixed Integer Linear Programming (MILP) master problem is considered. At integer solutions Nonlinear Programming (NLP) problems are solved, using a primal-dual interior point algorithm. The objective and constraints are linearized at the optimum solution of those NLP problems and the linearizations are added to all the unsolved nodes of the enumeration tree. Also, Gomory cutting planes, which are valid throughout the tree, are generated at selected nodes. These cuts help the algorithm to locate integer solutions quickly and consequently improve the linear approximation of the objective and constraints maintained at the unsolved nodes of the tree. Numerical results show that the addition of Gomory cuts can reduce the number of nodes in the enumeration tree.
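The Gomory fractional cut used in such schemes can be sketched directly: from a simplex tableau row x_B + Σ_j a_j x_j = b with fractional right-hand side b, the inequality Σ_j frac(a_j) x_j ≥ frac(b) is valid for every integer-feasible point (a generic textbook construction, not this paper's implementation).

```python
import math

# Derive a Gomory fractional cut from one tableau row.
# coeffs maps nonbasic variable names to their row coefficients a_j;
# rhs is the (fractional) right-hand side b.
def gomory_cut(coeffs, rhs):
    frac = lambda v: v - math.floor(v)     # fractional part, also for negatives
    return {j: frac(a) for j, a in coeffs.items()}, frac(rhs)

cut_coeffs, cut_rhs = gomory_cut({"x1": 1.25, "x2": -0.5}, 3.75)
print(cut_coeffs, cut_rhs)  # {'x1': 0.25, 'x2': 0.5} 0.75
```

Note that frac(-0.5) = 0.5: taking fractional parts with floor makes the cut valid even for negative coefficients.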
Thomas von Schroeter, 4pp
Report: 2000/7
Three auto-regressive spectral estimation methods are experimentally tested with a view to musical applications: the Maximum Entropy method, Marple's MODCOVAR algorithm, and an efficient version of Prony spectral line estimation due to Cybenko. A performance analysis measuring the maximum relative error of their frequency estimates for a signal consisting of three sinusoids under variations of the model order (up to 20), signal length (60 to 200 samples) and noise level shows that unless the model order is close to 2/3 of the number of data points (i.e. when it is nearly ill-conditioned), Marple's algorithm gives by far the best results. In a separate experiment, Marple's algorithm was applied to recorded piano sounds; some preliminary results are shown which demonstrate its potential for fast multicomponent analysis.
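The flavour of AR/Prony-style line estimation can be shown in a minimal single-sinusoid case (a textbook simplification, not any of the three algorithms tested in the report): a noiseless cosine satisfies the recurrence x[n] = 2cos(w)·x[n-1] - x[n-2], so a least-squares fit of that recurrence recovers the frequency.

```python
import math

# Second-order linear-prediction frequency estimate for one sinusoid.
# Minimising sum_n (x[n] - c*x[n-1] + x[n-2])^2 over c gives
# c = sum x[n-1]*(x[n]+x[n-2]) / sum x[n-1]^2, with c = 2*cos(w).
def estimate_freq(x, fs):
    num = sum(x[n - 1] * (x[n] + x[n - 2]) for n in range(2, len(x)))
    den = sum(x[n - 1] ** 2 for n in range(2, len(x)))
    w = math.acos(num / (2 * den))
    return w * fs / (2 * math.pi)       # radians/sample -> Hz

fs, f0 = 8000.0, 440.0
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(100)]
print(round(estimate_freq(x, fs), 1))   # recovers 440.0 on noiseless data
```

Real multicomponent signals need higher model orders, which is where the conditioning issues noted above arise.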
S M Rüger, S E Gauch, 9pp
Report: 2000/8
Often users receive search results which contain a wide range of documents, only some of which are relevant to their information needs. To address this problem, ever more systems not only locate information for users, but also organise that information on their behalf. We look at two main automatic approaches to information organisation: interactive clustering of search results and pre-categorising documents to provide hierarchical browsing structures. To be feasible in real world applications, both of these approaches require accurate yet efficient algorithms. Yet, both suffer from the curse of dimensionality - documents are typically represented by hundreds or thousands of words (features) which must be analysed and processed during clustering or classification. In this paper, we discuss feature reduction techniques and their application to document clustering and classification, showing that feature reduction improves efficiency as well as accuracy. We validate these algorithms using human relevance assignments and categorisation.
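As a hedged illustration of one simple feature-reduction scheme (keep only the k most frequent terms; the paper's actual techniques may differ), documents can be reduced before computing similarities for clustering:

```python
import math
from collections import Counter

# Keep only the k terms with highest document frequency, then build
# term-frequency vectors over that reduced feature set.
def reduce_features(docs, k):
    df = Counter(t for d in docs for t in set(d.split()))
    keep = {t for t, _ in df.most_common(k)}
    return [Counter(t for t in d.split() if t in keep) for d in docs]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["search results clustering", "clustering of search results",
        "piano sound analysis"]
vecs = reduce_features(docs, k=4)
# The two search-related documents remain more similar to each other
# than to the unrelated one, despite the reduced feature set.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

With thousands of features, the same idea cuts the cost of each pairwise similarity, which is the efficiency argument made above.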
Alessio Lomuscio, Marek Sergot, 21pp
Report: 2000/9
We investigate an extension of the formalism of interpreted systems by Halpern and colleagues to model correct behaviour of agents. The semantical model allows for the representation of, and reasoning about, states of correct and incorrect functioning behaviour of the agents, and of the system as a whole. We axiomatise this semantic class by mapping it into a suitable class of Kripke models. The resulting logic is a stronger version of KD, the system often referred to as Standard Deontic Logic. We discuss these issues and present further directions of work related to epistemic logic.
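A small sketch of the standard frame condition behind KD (a general fact about deontic logic, not the paper's interpreted-systems construction): the D axiom corresponds to seriality of the Kripke accessibility relation, i.e. every world must see at least one world.

```python
# Check seriality of an accessibility relation over a finite frame:
# KD is sound and complete with respect to serial Kripke frames.
def is_serial(worlds, relation):
    return all(any((w, v) in relation for v in worlds) for w in worlds)

worlds = {"w1", "w2", "w3"}
serial = {("w1", "w2"), ("w2", "w2"), ("w3", "w1")}
broken = {("w1", "w2"), ("w2", "w2")}          # w3 has no successor
print(is_serial(worlds, serial), is_serial(worlds, broken))  # True False
```

In the non-serial frame, obligations at w3 would be vacuously satisfiable together with their negations, which is exactly what the D axiom rules out.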
Steffen van Bakel, Maribel Fernandez, 32pp
Report: 2000/10
This paper studies normalization of typeable terms and the relation between approximation semantics and filter models for Combinator Systems. It presents notions of approximants for terms, intersection type assignment, and reduction on type derivations; the last will be proved to be strongly normalizable. With this result, it is shown that, for every typeable term, there exists an approximant with the same type, and a characterization of the normalization behaviour of terms using their assignable types is given. Then the two semantics are defined and compared, and it is shown that the approximants semantics is fully abstract but the filter semantics is not.
Istvan Maros, Mohammad Haroon Khaliq, 36pp
Report: 2000/11
Developing optimization software that is capable of solving large and complex real-life problems is a huge effort. It is based on a deep knowledge of four areas: theory of optimization algorithms, relevant results of computer science, principles of software engineering, and computer technology. The paper highlights the diverse requirements of optimization software and introduces the ingredients needed to fulfill them. After a review of the hardware/software environment it gives a survey of computationally successful techniques for continuous optimization. It also outlines the perspective offered by parallel computing, and stresses the importance of optimization modeling systems. The inclusion of many references is intended to both give due credit to results in the field of optimization software and help readers obtain more detailed information on issues of interest.
Iain Phillips, 17pp
Report: 2000/12
It has long been recognised that ordinary process algebra has difficulty dealing with actions of different priority, such as for instance an interrupt action of high priority. Various solutions have been proposed. We introduce a new approach, involving the addition of "priority guards" to the summation operator of Milner's process calculus CCS. In our approach, priority is unstratified, meaning that actions are not assigned fixed levels, so that the same action can have different priority depending on where it appears in a program. An important feature is that, unlike in other unstratified accounts of priority in CCS (such as that of Camilleri and Winskel), we can treat inputs and outputs symmetrically. We introduce the new calculus, give examples, develop its theory (including bisimulation, equational laws and logics), and compare it with existing approaches.
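A toy sketch of the priority-guard idea (the representation and names here are illustrative assumptions, not the paper's syntax or semantics): a summand guarded by a set of actions U may fire only when no action in U is simultaneously on offer.

```python
# Each summand is (guard_set, action): the action may proceed only if
# no other offered action lies in its guard set. Guards are attached
# per summand, so the same action can be pre-empted in one place and
# unguarded in another -- priority without fixed levels.
def enabled(summands):
    offered = {a for _, a in summands}
    return [(g, a) for g, a in summands
            if not (g & (offered - {a}))]

# "interrupt" has an empty guard; "work" is guarded by "interrupt",
# so it is pre-empted whenever an interrupt is also on offer.
summands = [(set(), "interrupt"), ({"interrupt"}, "work")]
print([a for _, a in enabled(summands)])  # ['interrupt']
```

If no interrupt were on offer, the "work" summand would be enabled as usual, illustrating the unstratified, context-dependent character of the priorities.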
Istvan Maros, 25pp
Report: 2000/13
A dual phase-1 algorithm for the simplex method that handles all types of variables is presented. In each iteration it maximizes a piecewise linear function of dual infeasibilities in order to make the largest possible step towards dual feasibility with a selected outgoing variable. The new method can be viewed as a generalization of traditional phase-1 procedures. It is based on the multiple use of the expensively computed pivot row. With a small amount of extra work per iteration, the progress it can make is equivalent to that of many iterations of the traditional method. In addition to this main achievement it has some further important and favorable features, namely, it is very efficient in coping with degeneracy and numerical difficulties. Both theoretical and computational issues are addressed. Examples are also given that demonstrate the power and flexibility of the method.
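The core idea of the generalized ratio test can be sketched abstractly (an illustrative simplification, not the paper's algorithm): the infeasibility-reduction function is concave and piecewise linear in the dual step length, so instead of stopping at the first breakpoint one walks through sorted breakpoints while its slope stays positive.

```python
# Maximize a concave piecewise-linear function of the step length t.
# breakpoints: (t, slope_decrease) pairs; at each breakpoint the slope
# drops by the given amount. The maximizer is the first breakpoint at
# which the running slope becomes nonpositive.
def longest_improving_step(breakpoints, initial_slope):
    slope, best_t = initial_slope, 0.0
    for t, dec in sorted(breakpoints):
        if slope <= 0:
            break
        best_t = t
        slope -= dec
    return best_t

# A traditional phase-1 ratio test would stop at t = 1.0; walking the
# breakpoints continues to t = 3.0, where the slope first turns negative.
print(longest_improving_step([(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)], 6.0))
```

Each skipped breakpoint corresponds to a bound flip rather than a full iteration, which is why one long step can replace many traditional ones.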
Please note that Departmental Technical Report 2002/15 is an enhanced version of this paper.
Meir M Lehman, 16pp
Report: 2000/14
When first formulated in the early seventies, the laws of software evolution were, for a number of reasons, not widely accepted as relevant to software engineering practice. Over the years, they have gradually become recognised as providing useful inputs to understanding of the software process and have found their place in a number of software engineering curricula. Now eight in number, they have been supplemented by a Software Uncertainty Principle and a FEAST Hypothesis.
Based on all these and on the results of the recent FEAST/1 and current FEAST/2 research projects, this paper develops and presents some fifty rules for application in software system process planning and management and indicates tools available or to be developed to support their application. The listing is structured according to the laws that encapsulate the observed phenomena and that lead to the recommended procedure. Each sub-list is preceded by a textual discussion providing at least some of the justification for the recommended procedure. The text is fully referenced. This directs the interested reader to the literature that records observed behaviours, interpretations, models and metrics obtained from some three of the industrially evolved systems studied, and from which the recommendations were derived.
Goel Kahen, M M Lehman, 6pp
Report: 2000/15
FEAST, an ongoing study of the role of feedback in the software process, was prompted by various factors including the need to identify the mechanisms underpinning observed phenomena in a series of metrics-based studies of evolving software systems conducted during the seventies and eighties. Evidence to date indicates that feedback loop mechanisms play a significant role in determining the performance and dynamics of software processes. To improve understanding of the evolutionary behaviour of software systems and to exploit feedback in the context of process improvement it is necessary to improve knowledge of the origin and sources of feedback phenomena. This is also a prerequisite for a systematic definition of control and policy mechanisms for the management of such processes. This paper refers to some of the many dimensions that appear to relate to the issue of feedback in the global software process. It is argued that empirically assessing and modelling the presence and importance of those different dimensions in industrial software processes can bring significant progress to the FEAST investigation.
Goel Kahen, Meir M Lehman, Juan F Ramil, 16pp
Report: 2000/16
An approach and basic concepts for the study of the system dynamics of long-term software evolution processes are presented. The approach provides a generic context and framework that supports at least three crucial process areas requiring management decisions: resource allocation, release planning, and process performance monitoring. The report exemplifies the approach with an executable model. The latter reflects the global software process at a high level of abstraction and includes phenomenological observations derived from the laws of software evolution and the behaviours thereby implied. It incorporates concepts such as progressive (e.g., functional enhancement) and anti-regressive (e.g., complexity control) activities and enables the study of policies of human resource allocation to classes of activities. The example shows how the model permits assessment of the impact of alternative policies on various evolutionary attributes. It is part of and exemplifies the methods for software process modelling being developed and applied in the FEAST projects.
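A toy discrete-step sketch of the progressive/anti-regressive trade-off (all coefficients are illustrative assumptions, not FEAST model values): effort diverted to complexity control lowers complexity, which in turn raises the productivity of enhancement work.

```python
# Each release, a fixed effort budget is split between progressive
# (functional enhancement) and anti-regressive (complexity control)
# work. Growth per release is damped by accumulated complexity.
def simulate(anti_regressive_share, releases=20):
    size, complexity = 100.0, 1.0
    for _ in range(releases):
        progressive = 1.0 - anti_regressive_share
        size += 10.0 * progressive / complexity   # growth slowed by complexity
        complexity += 0.2 * progressive           # new function adds complexity
        complexity = max(1.0, complexity - 0.5 * anti_regressive_share)
    return size

# Under these assumed coefficients, diverting 20% of effort to
# anti-regressive work yields more total growth than diverting none.
print(simulate(0.2) > simulate(0.0))  # True
```

Varying the share exposes exactly the kind of policy question, i.e. how much effort to allocate to anti-regressive work, that the executable model is designed to explore.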
Frank Kriwaczek, Berç Rustem, 7pp
Report: 2000/17
In order to make a decision in the face of multiple objectives it is necessary to know the relative importance of the different objectives. Yet, it is often very difficult to specify a set of precise weights before possible alternative solutions are known. In this paper, we present an interactive, iterative method for arriving at an acceptable solution. The decision maker gradually discerns what is achievable and adjusts his aspirations and (implicitly) the specification of weights and trade-offs between his objectives, in the light of what he learns. To aid the decision maker's cognition and to allow him to express his wishes in a natural way, we employ an intuitive interface, based on the parallel coordinates method of displaying and specifying points in multidimensional space.
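The parallel-coordinates mapping itself is simple to sketch (a generic construction, not the paper's interface): a point in d-dimensional space becomes a polyline through d vertical axes, with each coordinate normalised to its own axis range.

```python
# Map a d-dimensional point to polyline vertices: axis i sits at
# x = i * axis_spacing, and the point's i-th coordinate is normalised
# to [0, 1] using that axis's (lo, hi) range.
def polyline(point, ranges, axis_spacing=1.0):
    return [(i * axis_spacing, (v - lo) / (hi - lo))
            for i, (v, (lo, hi)) in enumerate(zip(point, ranges))]

# Three objectives (e.g. cost, risk, return), each with its own scale.
ranges = [(0, 100), (0, 1), (-5, 5)]
print(polyline((50, 0.25, 0), ranges))
# [(0.0, 0.5), (1.0, 0.25), (2.0, 0.5)]
```

Because each objective has its own axis and scale, many candidate solutions can be overlaid as polylines, which is what lets the decision maker see achievable trade-offs at a glance.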
Christopher J Hogger, Frank Kriwaczek, Myrto Serafinidou, 9pp
Report: 2000/18
This report investigates a particular way in which enterprise portals can support the working life of their users. Through monitoring a user's behaviour and observing archetypal sequences of actions, the system constructs abstract templates of tools that assist him to carry out such sequences efficiently and easily in the future. We call these new tools "wizards" after the utilities introduced by Microsoft into their applications and operating systems, designed to guide users through potentially complex tasks such as constructing graphs in Excel or setting up the computer to access the Internet.
At the heart of the process lies a re-programmable rule-base that encodes the principles by which action records are to be assembled into intelligible generic patterns of working practice, and thence converted into templates of new wizard tools that provide direct support for those patterns.