Time | Delegate | Institution | Title and abstract | Draft paper |
0910-1010, 1 Dec | Setsuo Arikawa | Kyushu University | Click here | |
1010-1110, 1 Dec | Marcel Turcotte | Imperial Cancer Research Fund | Click here | |
1400-1500, 1 Dec | Derek Sleeman | University of Aberdeen | Click here | |
1500-1600, 1 Dec | David Page | University of Louisville | Click here | |
1630-1730, 1 Dec | Akihiro Yamamoto | Hokkaido University | Click here | |
1730-1830, 1 Dec | Ted Briscoe | University of Cambridge | Click here | |
0900-1000, 2 Dec | Luc Steels | Vrije Universiteit Brussel | Click here | |
1000-1100, 2 Dec | William Cohen | AT&T | Click here | |
1130-1230, 2 Dec | James Cussens | University of York | Click here | |
1400-1500, 2 Dec | Mike Georgeff | Australian AI Institute | Click here | |
1500-1600, 2 Dec | Alan Frisch | University of York | Click here | |
1630-1730, 2 Dec | Andrei Voronkov | University of Uppsala | Click here | |
1730-1830, 2 Dec | Luc De Raedt | Katholieke Universiteit Leuven | Click here | |
0900-1000, 3 Dec | Kurt Konolige | SRI International | Click here | |
1000-1100, 3 Dec | Javier Lerch | Carnegie Mellon University | Click here | |
1130-1230, 3 Dec | Stephen Muggleton | University of York | Click here | |
1400-1500, 3 Dec | Claude Sammut | University of New South Wales | Click here | |
1500-1600, 3 Dec | Dorian Suc | University of Ljubljana | Click here | |
1630-1800, 3 Dec | Wayne Wobcke | BT Labs | Click here | |
Speaker: Setsuo Arikawa
Title: Discovery Science: A new everlasting science
Abstract:
With the rapid advancement of computer technology, huge amounts of data
are regularly compiled in computers. These include data from experiments,
observations, business activities, etc., and their scale now exceeds
several hundred gigabytes. An emergent requirement of the current
social and scientific circumstances is to develop efficient computational
methods that enable the automatic discovery of scientific knowledge and
decision-making rules. To meet this requirement head-on, we have just
started a three-year project: a Grant-in-Aid for Scientific Research on
Priority Area "Discovery Science", sponsored by the Ministry of ESSC,
Japan. This project intends to develop new methods for knowledge
discovery, install network environments for knowledge discovery, and
establish Discovery Science as a new area of Computer Science. Systematic
research is planned, ranging over philosophy, logic, reasoning,
computational learning and system development. This lecture describes the
outline of the project, including how it has been prepared.
Speaker: Ted Briscoe
Title: The Acquisition of Grammar in an Evolving Population of Language
Agents
Abstract:
Human language acquisition, and in particular the acquisition of grammar,
is a partially-canalized, strongly-biased but robust and efficient procedure.
For example, children prefer to induce compositional rules (e.g. Wanner
and Gleitman, 1982) despite peripheral use of non-compositional constructions,
such as idioms, in every attested human language. And, most parameters
of grammatical variation set during language acquisition appear to have
default values retained in the absence of robust counter-evidence (e.g. Bickerton,
1984; Lightfoot, 1989). A variety of explanations have been offered for
the emergence of a partially-innate language acquisition device (LAD) with
such properties, such as exaptation of a spandrel (Gould, 1987), biological
saltation (Chomsky, 1972) or genetic assimilation (Pinker and Bloom, 1990).
But none provide a coherent account of both the emergence and maintenance
of a LAD in an evolving population.
The account offered here is that an embryonic LAD emerged via exaptation
of general-purpose (Bayesian) learning mechanisms (e.g. Staddon, 1983)
to a specifically-linguistic mental representation capable of expressing
mappings from the `language of thought' to `realizable' encodings of propositions
expressed in the language of thought. However, the selective pressure favouring
such an exaptation, and its subsequent maintenance and refinement, is only
coherent given a coevolutionary scenario in which a (proto)language supporting
successful communication within a population had already itself evolved
on a historical timescale (e.g. Hurford, 1987; Kirby, 1998; Steels, 1997)
and continued to coevolve with the LAD (e.g. Briscoe, 1997, in press).
This account is supported by the results of a number of computational simulations
of evolving populations of software agents acquiring and communicating
with coevolving structured languages.
The model behind the simulations suggests a new dynamic framework for the
study of communication systems in general, and human language in particular,
which both incorporates the insights gained from formalizing a language
as a static well-formed stringset (Chomsky, 1957) and extends them by embedding
this model in an evolving population of distributed language agents. The
practical implication of this framework for natural language processing
is that development of static hand-coded systems should be replaced by
development of autonomous software agents capable of adapting to their
linguistic environment.
Speaker: William Cohen
Title: Whirl: A Word-based Information Representation Language
Abstract:
We describe WHIRL, an "information representation language" that
synergistically combines properties of logic-based and text-based representation
systems. WHIRL is a subset of non-recursive Datalog that has been extended
by introducing an atomic type for textual entities, an atomic operation
for computing textual similarity, and a "soft" semantics---that is, inferences
in WHIRL are associated with numeric scores, and presented to the user
in decreasing order by score. We show that WHIRL strictly generalizes both
IR ranked retrieval, and logical deduction; that non-trivial queries concerning
large databases can be answered efficiently; that WHIRL can be used to
accurately integrate data from heterogeneous information sources, such
as those found on the Web; that WHIRL can be used effectively for inductive
classification of text; and finally, that WHIRL can be used to generate
extraction programs for structured documents semi-automatically.
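As an illustration, the core of WHIRL's soft semantics can be sketched as a "soft join" that scores pairs of tuples by textual similarity and ranks the inferences by score. This is a minimal sketch, not the WHIRL implementation: the TF-IDF weighting and the relation names here are my own assumptions.

```python
# Minimal sketch of a WHIRL-style "soft join" (not the actual WHIRL
# system): tuple pairs are scored by TF-IDF cosine similarity between
# their text fields and returned in decreasing order of score.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build unit-normalised TF-IDF weight vectors for token lists."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        v = {t: tf[t] * math.log(1 + n / df[t]) for t in tf}
        norm = math.sqrt(sum(w * w for w in v.values()))
        vecs.append({t: w / norm for t, w in v.items()})
    return vecs

def cosine(u, v):
    return sum(w * v.get(t, 0.0) for t, w in u.items())

def soft_join(rel_a, rel_b):
    """Join two relations on their text fields, ranked by similarity."""
    docs = [a.lower().split() for a in rel_a] + \
           [b.lower().split() for b in rel_b]
    vecs = tfidf_vectors(docs)
    va, vb = vecs[:len(rel_a)], vecs[len(rel_a):]
    pairs = [(cosine(u, v), a, b)
             for u, a in zip(va, rel_a) for v, b in zip(vb, rel_b)]
    return sorted(pairs, reverse=True)

movies = ["Star Wars", "The Empire Strikes Back"]
reviews = ["Review: Star Wars", "Review: Empire Strikes Back again"]
for score, m, r in soft_join(movies, reviews):
    print(f"{score:.2f}  {m!r} ~ {r!r}")
```

Because scores replace exact equality, near-matches such as "Empire Strikes Back" against "Empire Strikes Back again" still rank highly, which is what makes the approach useful for integrating heterogeneous Web sources.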
Speaker: Kurt Konolige
Title: Robots as Physical Agents
Abstract:
Indoor mobile robots are becoming reliable enough in navigation tasks
to consider working with teams of robots. Software agents are also becoming
increasingly popular as a means of fashioning complex systems that integrate
a variety of human interface, database, and reasoning components. In this
paper we develop a view of multi-robot systems in which the robots are
integrated as physical agents in a larger system that also includes software
agents. We argue that this approach allows for timely development and deployment
of complex systems that integrate physical, communicative, and inferential
capabilities. To illustrate the general ideas, we have implemented a multiple-robot
testbed using SRI's Open Agent Architecture (OAA) and Saphira robot control
system. This testbed addresses issues of robot communication, multiple-agent
planning, and human-robot interaction. In its current implementation, the
testbed ties together physical robots and a set of software agents on the
Internet to plan and act in coordination; users communicate with the robots
using a variety of multimodal inputs: pen, voice, and keyboard. The robust
capabilities of the OAA and Saphira enabled us to design and implement
a winning team of three robots for the AAAI robotics contest (Summer 1996)
in just six weeks.
Speaker: Claude Sammut
Title: The Role of Representation in Behavioural Cloning
Abstract:
Behavioural Cloning seeks to build explanations of human skill by inducing
rules of behaviour from performance traces. Finding appropriate representations
of the state of the world is crucial to building models that are accurate
and transparent. This paper discusses the role that relational representations
play in Behavioural Cloning. The domain of application in our case is learning
to pilot a simulated aircraft. The simulator used is capable of generating
symbolic descriptions of the visual scene viewed by the pilot. We discuss
the advantages and disadvantages of learning in such a rich domain and
also how high-level features can be constructed using relational learning.
Speaker: Derek Sleeman (Vincent Corruble)
Title: A knowledgeable system for knowledge discovery.
Abstract:
Traditionally, knowledge discovery systems tend to be domain-specific;
on the other hand, data-mining systems tend to be domain-independent and
very much data-driven. In the latter case, usually, the only domain-tuning
allowed is in the choice of the language used to describe the data and
the hypotheses. This leads to difficulties when applying the data-mining
methodology to tasks in complex domains, such as medicine, where the
hypotheses are often too numerous to be analysed fully; some of them
contradict basic domain knowledge; and many are useless because, despite being
correct, they are not novel. We describe a system for knowledge discovery
which is both domain independent and able to acquire through a collaborative
process some high-level domain knowledge. The system has two modules. The
first is a standard data-mining algorithm which generates associations.
The second allows the user to label some of the associations as either
consistent (wrt a domain theory), inconsistent (wrt a theory) or innovative.
Once these labels have been acquired, the system generalizes the labelled
patterns. Subsequently the system uses the generalized patterns to
classify the remaining associations, and it presents to the domain expert
only those which it believes are innovative.
We call this second module a knowledge-based filter, and we believe
it can be used in a wide variety of domains. (Clearly the generalization
process will be more effective if some domain-specific background knowledge
is available.)
Speaker: Dorian Suc, Ivan Bratko
Title: Symbolic and qualitative skill reconstruction
Abstract:
Controlling a complex dynamic system, such as a plane or a crane, usually
requires a skilled operator. Such a control skill is typically hard to
reconstruct through introspection. Therefore an attractive approach to
the reconstruction of control skill involves machine learning from operators'
control traces, also known as behavioural cloning.
In the most common approach to behavioural cloning, a controller is
induced as a direct mapping from system states to actions. Unfortunately,
such controllers usually lack typical elements of human control strategies,
such as subgoals or a desired trajectory, and do not replicate the robustness
of the human control skill. In this paper we apply the GoldHorn program
to induce a set of symbolic constraints from the control traces. These
constraints describe the operator's trajectory and are then used together
with a locally weighted regression model to determine the next action.
In a case study using the crane problem, this approach showed significant
improvements both in control performance and in the transparency of the
induced clones. Moreover, generalizing the trajectory into a qualitative
strategy shows the potential of such an approach.
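The locally weighted regression step mentioned above can be sketched as follows. This is an illustrative toy only, not GoldHorn or the paper's model: the one-dimensional state, the Gaussian kernel and the bandwidth are my own assumptions.

```python
# Illustrative sketch of locally weighted regression for predicting the
# next action from recorded (state, action) trace points: fit a linear
# model action = a + b*state, with each trace point weighted by a
# Gaussian kernel centred on the query state.
import math

def lwr_predict(trace, x_query, bandwidth=1.0):
    """Kernel-weighted least-squares prediction at x_query."""
    w = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2))
         for x, _ in trace]
    sw = sum(w)
    sx = sum(wi * x for wi, (x, _) in zip(w, trace))
    sy = sum(wi * y for wi, (_, y) in zip(w, trace))
    sxx = sum(wi * x * x for wi, (x, _) in zip(w, trace))
    sxy = sum(wi * x * y for wi, (x, y) in zip(w, trace))
    denom = sw * sxx - sx * sx
    if abs(denom) < 1e-12:            # degenerate: weighted mean fallback
        return sy / sw
    b = (sw * sxy - sx * sy) / denom  # local slope
    a = (sy - b * sx) / sw            # local intercept
    return a + b * x_query

# Toy trace: operator applies an action proportional to position error.
trace = [(x / 10, -0.5 * (x / 10)) for x in range(-20, 21)]
print(lwr_predict(trace, 0.8))        # close to -0.4
```

Because the fit is recomputed around each query state, the predictor follows the operator's trajectory locally rather than forcing one global model of the skill.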
Speaker: Marcel Turcotte (Mike Sternberg)
Title: Application of Inductive Logic Programming to Discover Rules
Explaining Protein Three-Dimensional Structure
Abstract:
For the last three decades, understanding protein structure has been
and still is a challenging problem for molecular biology. The problem is
to identify rules which relate the local structure to the complex three-dimensional
fold. There are now classification schemes for some 8000 three-dimensional
folds. To gain further insights into protein structure, Inductive Logic
Programming (ILP) has been applied to derive new principles governing the
formation of protein folds, such as common substructures and the relationship
between local sequence and tertiary structure.
Speaker: Akihiro Yamamoto
Title: Hypothesis Construction and Beyond It.
Abstract:
Hypothesis construction is a basic activity of both abductive and inductive
systems. It is the task of finding hypotheses H from a positive example E with the
support of a given background theory B when E cannot be proved from B. Some
hypothesis construction procedures find H by fixing incomplete proofs
of E from B. In this talk we compare the potential of several hypothesis
construction procedures of such type. Moreover, we discuss how we should
justify the generated hypotheses H by using network agents. We also show
the relation between abductive inference and inductive inference from the
viewpoint of fixing incomplete proofs.
Speaker: Andrei Voronkov
Title: Formalizing bottom-up and top-down proof search in one logical
calculus.
Abstract:
Automatic reasoning methods based on proof-search in sequent calculi
fall into two groups. The tableau-based methods implement
the top-down proof-search, while the inverse method implements the bottom-up
proof-search. We show how to design a single calculus for the bottom-up
and top-down proof-search for a number of logics.
Speaker: Alan M. Frisch
Title: Solving Constraint Satisfaction Problems with MV-Resolution
Abstract:
Though resolution and constraint satisfaction problems are two of the
most developed areas in artificial intelligence, almost no connections
between the two have been made. This talk explores the relationship
between resolution and constraint satisfaction problems. The exploration
results in the development of the MV-resolution rule of inference, a generalisation
of ordinary resolution that can be used to solve constraint satisfaction
problems. This talk proves that a clausal-form constraint satisfaction
problem is backtrack-free if its constraints are closed under MV-resolution.
This generalises the usual completeness result for resolution by telling
us not only what happens when an unsatisfiable set of clauses is closed
under resolution but also what happens when a satisfiable set of clauses
is closed under resolution.
Speaker: Luc Steels
Title: How Language Bootstraps Cognition
Abstract:
Where do perceptually grounded categories come from? Some researchers
claim that they are innate, explaining the rapid origins of such categories
in children even with a poor stimulus. Others claim they are learned,
explaining that categories are adapted to the environments and tasks humans
encounter. This talk presents a third `ecological' approach to category
formation.
I propose a system by which discrimination networks, capable of performing
categorial distinctions, spontaneously grow, relatively independently of
specific examples. The networks are pruned to eliminate distinctions that
are not relevant in the environment. The continuous growth-and-pruning
dynamics leads to constant adaptation and to anticipation of distinctions
even before any examples have been seen. I will show through software simulations and experiments
with physical robotic agents that the ecological approach leads to an adequate
categorial repertoire. Next I will show that linguistic interaction
can be a driving force that sparks a spiraling increase in the ontological
complexity of an agent. The ontology-formation mechanism can be coupled
to adaptive language games through which a shared lexicon spontaneously
self-organises. Again, examples from software simulations and experiments
with robotic agents are presented to demonstrate that this approach is
effective.
The main conclusion is that a complex adaptive systems approach to the
origins of ontologies and lexicons is possible and presents a viable alternative,
both to a nativist account and to an inductive, connectionist account.
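The growth dynamics of such discrimination networks can be sketched as a "discrimination game". This is my own toy simplification, not Steels' implementation: a single sensory channel, binary interval splitting, and no pruning step (the talk's networks also prune unused distinctions).

```python
# Toy discrimination game on one sensory channel in [0, 1): the agent's
# categories are intervals; a game succeeds when the topic object falls
# in an interval containing no context object, and a failure grows the
# repertoire by splitting the offending interval in half.
import random

class Discriminator:
    def __init__(self):
        self.intervals = [(0.0, 1.0)]    # one initial, maximally vague category

    def category(self, x):
        for lo, hi in self.intervals:
            if lo <= x < hi:
                return (lo, hi)

    def game(self, obj, context):
        """Succeed if obj's category excludes all context objects."""
        cat = self.category(obj)
        if all(self.category(c) != cat for c in context):
            return True
        lo, hi = cat                      # failure: refine this category
        mid = (lo + hi) / 2
        self.intervals.remove(cat)
        self.intervals += [(lo, mid), (mid, hi)]
        return False

random.seed(1)
d = Discriminator()
wins = 0
for _ in range(2000):
    obj = random.random()
    ctx = [random.random() for _ in range(3)]
    wins += d.game(obj, ctx)
print(len(d.intervals), wins)
```

The repertoire grows only where failures occur, so distinctions track the statistics of the scenes the agent encounters rather than any fixed set of training examples.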
Speaker: Javier Lerch
Title: Modeling Time Pressure and Individual Differences in a Real-Time
Dynamic Decision Making Task
Abstract:
This research investigates the impact of time pressure and individual
differences on learning in a Real-Time Dynamic Decision Making (RTDDM)
task. Our empirical results indicate that high time pressure
generates high cognitive load, inhibiting learning. The results also
show that high time pressure has a differential impact on the learning
of individuals with high or low Working Memory (WM) and rule-inference
capacity. We are in the process of building a cognitive model, based
on ACT-R, that explains these two phenomena. Our cognitive model simulates
learning by recognizing regularities in the decision task and building
"chunks" that guide decision making. This cognitive model explains the
impact of time pressure and WM capacity on learning by varying the number
of chunks acquired by the system under alternative time-pressure
conditions and individual differences.
Speaker: Wayne Wobcke
Title: Machine Intelligence Research at BT
Abstract:
In this talk, we examine the issues for Machine Learning posed by the
problem of developing human-centred intelligent systems. The systems we
have in mind are software systems that must continually adapt their performance
to the changing requirements and preferences of the user. Thus in this
respect, human-centred intelligent systems are similar to robotic systems
that must adapt to a continuously changing environment. We describe the
Intelligent Personal Assistant (IPA), an agent-based application recently
developed at BT Laboratories, that provides assistance with time,
information and communication management. We show how some of the
problems of designing adaptive software assistants have been solved within
the context of the IPA, and discuss the particular
types of learning that will need to be incorporated into the next version
of the system.
Speaker: Wayne Wobcke (Kevin Irwig)
Title: Multi-Agent Reinforcement Learning with Vicarious Rewards
Abstract:
Reinforcement learning is the problem faced by an agent that must learn
behaviour through trial-and-error interactions with a dynamic
environment. In a multi-agent setting, the problem is often further
complicated by the need to take into account the behaviour of other
agents in order to learn to perform effectively. Issues
of coordination and cooperation must be addressed; in general, it is not
sufficient for each agent to act selfishly in order to arrive at a globally
optimal strategy. In this work, we apply the Adaptive
Heuristic Critic (AHC) and Q-learning algorithms to agents in a simple
artificial multi-agent domain based on the Tileworld.
We experimentally compare the performance of the AHC and Q-learning
algorithms to each other as well as to a hand-coded greedy strategy.
The overall result is that AHC agents perform better than the others, particularly
when many other agents are present or the world is dynamic. We also
examine the notion of global optimality in this system, and present a simple
method of encouraging agents to learn cooperative behaviour, which we call
vicarious reinforcement. The main result of this work is that agents that
receive additional vicarious reinforcement perform better than selfish
agents, even though the task being performed here is not inherently cooperative.
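The vicarious-reinforcement idea can be sketched as a small modification to a tabular Q-learning update. This is an illustrative sketch, not the paper's implementation: the weighting scheme and the `vicarious` parameter are my own assumptions.

```python
# Sketch of tabular Q-learning with a vicarious reward term: each
# agent's reward is augmented with a fraction of the rewards earned by
# the other agents, encouraging cooperative behaviour.
def vicarious_q_update(q, agent, state, action, next_state, actions,
                       own_reward, others_rewards,
                       alpha=0.1, gamma=0.9, vicarious=0.5):
    """One Q-learning step for one agent, with vicarious reinforcement."""
    r = own_reward + vicarious * sum(others_rewards)
    best_next = max(q[agent].get((next_state, a), 0.0) for a in actions)
    old = q[agent].get((state, action), 0.0)
    q[agent][(state, action)] = old + alpha * (r + gamma * best_next - old)

q = {0: {}, 1: {}}                       # one Q-table per agent
actions = ["left", "right"]
# Agent 0 acts and earns 1.0; agent 1 simultaneously earned 0.5.
vicarious_q_update(q, 0, "s0", "right", "s1", actions,
                   own_reward=1.0, others_rewards=[0.5])
print(q[0][("s0", "right")])             # approximately 0.1 * 1.25
```

Setting `vicarious=0` recovers the selfish baseline, so the degree of cooperation can be varied along a single axis.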
Speaker: Mike Georgeff
Abstract:
As we all know, but seem not to have fully understood (at least in the
way physicists have), the world is complex and dynamic, a place where chaos
is the norm, not the exception. We also know that computational systems
have practical limitations, which limit the information they can access
and the computations they can perform. Conventional software systems
are designed for static worlds with perfect knowledge---we are instead
interested in environments that are dynamic and uncertain (or chaotic),
and where the computational system only has a local view of the world (i.e.,
has limited access to information) and is resource bounded (i.e., has finite
computational resources). These constraints have certain fundamental implications
for the design of the underlying computational architecture. In this talk,
I will attempt to show that Beliefs, Desires, Intentions, and Plans are
an essential part of the state of such systems.
Speaker: Luc De Raedt
Title: Relational Reinforcement Learning for Intelligent Agents
Abstract:
We present relational reinforcement learning, a technique
that combines reinforcement learning with relational learning or inductive
logic programming. Due to the use of a more expressive representation
language to represent states, actions and Q-functions, relational reinforcement
learning can be potentially applied to a new range of learning tasks. One
such task that we investigate is planning in the blocks world, where it
is assumed that the effects of the actions are unknown to the agent and
the agent has to learn a policy. Within this simple domain we show
that relational reinforcement learning solves some existing problems with
reinforcement learning. In particular, relational reinforcement
learning allows us to employ structural representations, make abstraction
of specific goals pursued and exploit the results of previous learning
phases when addressing new (more complex) situations. First results
on relational reinforcement learning have already been published elsewhere
(cf. references below).
In this presentation, however, we shall analyse relational reinforcement
learning from the viewpoint of intelligent agents. To this end we shall
discuss further experiments (in a simple office world) and argue that relational
reinforcement learning provides new possibilities for learning agents.
Džeroski, S., De Raedt, L. and Blockeel, H. Relational Reinforcement
Learning. In Proceedings of the 14th International Conference on Machine
Learning, Morgan Kaufmann, 1998.
Džeroski, S., De Raedt, L. and Blockeel, H. Relational Reinforcement
Learning. In Proceedings of the 8th International Conference on Inductive
Logic Programming, Lecture Notes in Artificial Intelligence, Springer
Verlag, 1998.
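The basic idea of the abstract, reinforcement learning over relational state descriptions, can be sketched in a toy blocks world. This is not the Q-RRL algorithm of the papers above (which induces logical regression trees over Q-values); it merely shows states and actions represented as sets of relational facts, with a plain Q-table and an assumed goal `on(a, b)`.

```python
# Toy Q-learning in a three-block blocks world where each state is a
# frozenset of on(Block, Support) facts and actions are move(Block, Dest).
import random

BLOCKS = ["a", "b", "c"]
GOAL = ("on", "a", "b")
START = frozenset({("on", "c", "a"), ("on", "a", "table"),
                   ("on", "b", "table")})

def clear(state, x):
    """x is clear if nothing is on it (the table is always clear)."""
    return x == "table" or all(f[2] != x for f in state)

def legal_moves(state):
    return [(b, d) for b in BLOCKS if clear(state, b)
            for d in BLOCKS + ["table"] if d != b and clear(state, d)]

def move(state, b, d):
    return frozenset({f for f in state if f[1] != b} | {("on", b, d)})

def episode(q, eps=0.2, alpha=0.3, gamma=0.9):
    state = START
    for _ in range(20):
        acts = legal_moves(state)
        act = (random.choice(acts) if random.random() < eps
               else max(acts, key=lambda a: q.get((state, a), 0.0)))
        nxt = move(state, *act)
        reward = 1.0 if GOAL in nxt else 0.0
        best = max(q.get((nxt, a), 0.0) for a in legal_moves(nxt))
        old = q.get((state, act), 0.0)
        q[(state, act)] = old + alpha * (reward + gamma * best - old)
        if reward:
            break
        state = nxt

random.seed(0)
q = {}
for _ in range(2000):
    episode(q)

# Greedy rollout from the start state using the learned Q-table.
state = START
for _ in range(10):
    if GOAL in state:
        break
    act = max(legal_moves(state), key=lambda a: q.get((state, a), 0.0))
    state = move(state, *act)
print(GOAL in state)
```

Because states are sets of facts rather than feature vectors, the same machinery extends to goals and abstractions expressed in the relational language, which is what the full relational reinforcement learning framework exploits.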