1997 - Technical Reports
S. Vickers, 45pp, Report: 1997/1
It is shown how many techniques of categorical domain theory can be expressed in the general context of topical categories (where "topical" means internal in the category Top of Grothendieck toposes with geometric morphisms). The underlying topos machinery is hidden by using a geometric form of constructive mathematics, which enables toposes as "generalized topological spaces" to be treated in a transparently spatial way, and also shows the constructivity of the arguments. The theory of strongly algebraic (SFP) domains is given as a case study in which the topical category is Cartesian closed.
Properties of local toposes and of lifting of toposes (sconing) are summarized, and it is shown that the category of toposes has a fixpoint object in the sense of Crole and Pitts. This is used to show that for a local topos, all endomaps have initial algebras, and this provides a general context in which to describe fixpoint constructions including the solution of domain equations involving constructors of mixed variance. Covariance with respect to embedding-projection pairs or adjunctions arises in a natural way.
The paper also provides a summary of constructive results concerning Kuratowski finite sets, including a novel strong induction principle; and shows that the topical categories of sets, finite sets and decidable sets are not Cartesian closed (unlike the cases of finite decidable sets and strongly algebraic domains).
S. Vickers, 33pp, Report: 1997/2
We give a constructive localic account of the completion of quasimetric spaces. In the context of Lawvere's approach, using enriched categories, the points of the completion are flat left modules over the quasimetric space. The completion is a triquotient surjective image of a space of Cauchy sequences and can also be embedded in a continuous dcpo, the "ball domain". Various examples and constructions are given, including the lower, upper and Vietoris powerlocales, which are completions of finite powerspaces. The exposition uses the language of locales as "topology-free spaces".
A. Zisman, 0pp, Report: 1997/3
In this technical report we discuss a methodology and a support tool to assist the coordinator of a federation with the construction and evolution of hierarchical information structures. The definition of the terms composing the hierarchical information structures is based on the interests of the users and the applications, and the information that each database system shares with the other components. Therefore, the different group names refer to the types of databases participating in the federation. The other levels are related to the entity names, attribute names, class names, object names, and instances of the databases. The methodology consists of constructing a hierarchical information structure by incremental addition of the participating database systems. The support tool assists with the automation of some steps during construction and evolution of the structures.
C. Meszaros, 16pp, Report: 1997/4
Interior point methods, especially algorithms for linear programming problems, are sensitive to the presence of unconstrained (free) variables in the problem. While replacing a free variable by two nonnegative ones may cause numerical instabilities, handling it implicitly results in a semidefinite scaling matrix at each interior point iteration. In this paper we investigate the effects of regularizing this scaling matrix. Our analysis proves that the effect of the regularization can be easily monitored and corrected if necessary. We describe the regularization scheme mainly for the efficient handling of free variables, but a similar analysis can be made for the case when small scaling factors are raised to larger values to improve the numerical stability of the systems that define the search direction. We show the superiority of our approach over the variable replacement method on a set of test problems arising from a water management application.
D. Gabbay, R. Nossum, M. Thielscher, 4pp, Report: 1997/5
Agents situated in proactive environments act autonomously while the environment evolves alongside them, whether or not the agents carry out any particular actions. A formal framework for simulating and reasoning about this generalized kind of dynamic system is proposed. The capabilities of the agents are modeled by a set of conditional rules in a temporal-logical format. The environment itself is modeled by an independent transition relation on the state space. The temporal language is given a declarative semantics.
I. Maros, C. Meszaros, 14pp, Report: 1997/6
The introduction of a standard set of linear programming problems, to be found in NETLIB/LP/DATA, had an important impact on measuring, comparing and reporting the performance of LP solvers. Until recently the efficiency of new algorithmic developments has been measured using this important reference set. Presently, we are witnessing an ever-growing interest in the area of quadratic programming. The research community is somewhat troubled by the lack of a standard format for defining a QP problem, and also by the lack of a standard reference set of problems for purposes similar to that of LP. In the paper we propose a standard format and announce the availability of a test set of 138 collected QP problems.
C. Meszaros, 13pp, Report: 1997/7
An approach to determine primal and dual stepsizes in the infeasible-interior-point primal-dual method for convex quadratic problems is presented. The approach reduces the primal and dual infeasibilities in each step and allows different stepsizes. The method is derived by investigating the efficient set of a multiobjective optimization problem. Computational results are also given.
C. Meszaros, 30pp, Report: 1997/8
The purpose of this document is to describe a software package, called BPMPD, which implements the infeasible primal-dual interior point method for linear and quadratic programming problems. This manual describes how to prepare data to solve with the package, how to use BPMPD as a callable solver library and which algorithmic options can be specified by the user.
P.J. Potts, A. Edalat, 25pp, Report: 1997/9
Real numbers are usually represented by finite strings of digits belonging to some digit set. However, finite strings of digits can only represent a limited subset of the real numbers exactly because many real numbers have too many significant digits or are too large or too small. In the literature, there are broadly three frameworks for exact real computer arithmetic: infinite sequences of linear maps, continued fraction expansions and infinite compositions of linear fractional transformations. We introduce here a new, feasible and incremental representation of the extended real numbers, based on the composition of linear fractional transformations with either all non-negative or all non-positive integer coefficients.
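The matrix view of such transformations can be sketched as follows. This is our own illustration of the general idea, not the authors' representation: a one-dimensional linear fractional transformation x ↦ (ax + b)/(cx + d) is a 2×2 integer matrix, and composing transformations is matrix multiplication, which makes the representation incremental. All helper names are ours.

```python
# Sketch (assumed names, not the paper's code): LFTs as 2x2 integer matrices.
from fractions import Fraction

def compose(m, n):
    """Compose two LFTs ((a, b), (c, d)): apply n first, then m."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def apply_lft(m, x):
    """Apply the LFT m to a rational x, exactly."""
    (a, b), (c, d) = m
    return Fraction(a * x + b, c * x + d)

half = ((1, 0), (0, 2))    # x -> x/2   (all coefficients non-negative)
succ = ((1, 1), (0, 1))    # x -> x + 1 (all coefficients non-negative)

m = compose(succ, half)    # x -> x/2 + 1
print(apply_lft(m, 1))     # prints 3/2
```

Restricting every emitted matrix to all non-negative (or all non-positive) integer coefficients, as the abstract describes, is what keeps each further composition a refinement of the interval already determined.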
A Unified Compilation Style Labelled Deductive System for Modal and Substructural Logic using Natural Deduction
K. Broda, A. Russo, 48pp, Report: 1997/10
This paper describes a proof theoretic and semantic approach in which logics belonging to different families can be given common notions of derivability relation and semantic entailment. This approach builds upon Gabbay's methodology of Labelled Deductive Systems (LDS) and it is called the compilation approach for labelled deductive systems (CLDS). Two different logics are here considered, (i) the modal logic of elsewhere (known also as the logic of inequality) and (ii) the multiplicative fragment of substructural linear logic. A general natural deduction style proof system is given, in which the notion of a theory is defined as a (possibly singleton) structure of points, called a configuration, and a "general" model-theoretic semantic approach is described using a translation technique based on first-order logic. Then it is shown how both this proof theory and semantics can be directly applied to the logic of elsewhere and to linear logic, illustrating also that the same technique for proving soundness and completeness can be adopted in both logics. Finally, the proof systems for the logic of elsewhere and for linear logic are proved to correspond, under certain conditions, to standard Hilbert axiomatisation and standard sequent calculus respectively. Such results prove that the natural deduction proof systems described in this paper are proper generalisations of any proof system already developed for these two logics.
K. Broda, M. Finger, A. Russo, 33pp, Report: 1997/11
In this paper a uniform methodology to perform Natural Deduction over the family of linear, relevance and intuitionistic logics is proposed. The methodology follows the Labelled Deductive Systems (LDS) discipline, where the deductive process manipulates declarative units - formulas labelled according to a labelling algebra. In the system described here, labels are either ground terms or variables of a given labelling language and inference rules manipulate formulas and labels simultaneously, generating (whenever necessary) constraints on the labels used in the rules. A set of natural deduction style inference rules is given, and the notion of a derivation is defined which associates a labelled natural deduction style "structural derivation" with a set of generated constraints. Algorithmic procedures, based on a technique called resource abduction, are defined to solve the constraints generated within a derivation, and their termination conditions discussed. A natural deduction derivation is correct with respect to a given substructural logic, if, under the condition that the algorithmic procedures terminate, the associated set of constraints is satisfied with respect to the underlying labelling algebra. This is shown by proving that the natural deduction system is sound and complete with respect to the LKE tableaux system.
D. Gabbay, N. Olivetti, 203pp, Report: 1997/12
This report is the draft of a book about goal directed proof theoretical formulations of non-classical logics. It evolved from a response to the existence of two camps in the applied logic (computer science/artificial intelligence) community. There are those members who believe that the new non-classical logics are the most important ones for applications and that classical logic itself is now no longer the main workhorse of applied logic, and there are those who maintain that classical logic is the only logic worth considering and that within classical logic the Horn clause fragment is the most important one.
The book presents a uniform Prolog-like formulation of the landscape of classical and non-classical logics, done in such a way that the distinctions and movements from one logic to another seem simple and natural, and within it classical logic becomes just one among many. This should please the non-classical logic camp. It will also please the classical logic camp, since the goal directed formulation makes it all look like an algorithmic extension of Logic Programming. The approach also seems to provide very good computational complexity bounds across its landscape.
NB. The file available for downloading is an April 1999 draft of the forthcoming book.
P. Wernick, 14pp, Report: 1997/13
This paper considers an issue raised by Lehman at ISPW 9, where he stated that the software process is a 'learning' process. We examine the ramifications of that statement, and of the nature and impact of that learning, in the context of the process controlling the evolution of a software product over time and multiple releases. We present a high-level model of the software process which describes that process in terms of the gaining and storing of knowledge of a software product and its use and environment, and of the application of that knowledge in making changes to the product.
I. Akrotirianakis, B. Rustem, 30pp, Report: 1997/14
This paper presents a primal-dual interior point algorithm for solving general constrained non-linear programming problems. The initial problem is transformed to an equivalent equality constrained problem, with inequality constraints incorporated into the objective function by means of a logarithmic barrier function. Satisfaction of the equality constraints is enforced through the incorporation of an adaptive quadratic penalty function into the objective. The penalty parameter is determined using a strategy that ensures a descent property for a merit function. It is shown that the adaptive penalty does not grow indefinitely. The algorithm applies Newton's method to solve the first order optimality conditions of the equivalent equality problem. Global convergence of the algorithm is achieved through the monotonic decrease of a merit function. Locally the algorithm is shown to be quadratically convergent.
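The logarithmic barrier transformation mentioned in the abstract can be illustrated on a toy one-dimensional problem. This is a sketch of the standard technique only, not the paper's algorithm; all function names are ours, and the minimiser below is a generic golden-section search rather than the Newton step the paper uses.

```python
# Toy sketch (assumed names) of the log-barrier idea: the problem
#   minimize f(x)  subject to  g_i(x) <= 0
# is replaced by the unconstrained family
#   minimize f(x) - mu * sum_i log(-g_i(x)),
# whose minimisers approach the constrained optimum as mu -> 0.
import math

def barrier_objective(f, gs, mu):
    """Build the barrier function for objective f and constraints gs."""
    def phi(x):
        return f(x) - mu * sum(math.log(-g(x)) for g in gs)
    return phi

def minimise_1d(phi, lo, hi, iters=200):
    """Golden-section search on [lo, hi]; phi must be unimodal there."""
    r = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - r * (b - a), a + r * (b - a)
        if phi(c) < phi(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Example: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

for mu in (1.0, 0.1, 0.001):
    x = minimise_1d(barrier_objective(f, [g], mu), -3.0, 1.0 - 1e-9)
    print(mu, round(x, 3))   # approaches the constrained optimum x = 1
```

The barrier keeps every iterate strictly feasible, since phi blows up as any g_i(x) approaches 0 from below; the paper's contribution lies in combining this with an adaptive quadratic penalty for the equality constraints and Newton steps on the optimality conditions.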