There is a sense of consciousness that can be understood in both computational and logical terms. It is the sense in which an agent is conscious when it is aware of what it is doing and why it is doing it. Whether or not the agent is conscious in this sense, the external manifestation of its behaviour is the same. However, if the agent is conscious of what it is doing, then its behaviour is deliberate and controlled. If it is not conscious, then its behaviour is automatic and instinctive.
The computational interpretation of consciousness as awareness is that, when an agent is conscious, its behaviour is generated by a high-level program, which manipulates symbols that have meaningful interpretations in the environment. However, when an agent is not conscious, then its behaviour is generated by a lower-level program or physical device, whose structure is ultimately determined by the physical characteristics of the agent’s body.
The logical interpretation is that, when an agent is conscious, its behaviour is generated by reasoning with goals and beliefs. When the agent is not conscious, then its behaviour is determined by lower-level input-output associations. These associations can be represented at different levels in turn, including both a logical, symbolic level and the lower, physical level of the agent’s body.
These two interpretations come together in computational logic.
Remember our last version of the underground example:
Goal: If there is an emergency then I get help.
Beliefs: A person gets help if the person alerts the driver.
A person alerts the driver if the person presses the alarm signal button.
There is an emergency if there is a fire.
There is an emergency if one person attacks another.
There is an emergency if someone becomes seriously ill.
There is an emergency if there is an accident.
There is a fire if there are flames.
There is a fire if there is smoke.
A passenger can use the goal and the beliefs
explicitly, reasoning forward from observations to recognise
when there is an emergency and to derive the goal of getting help, and then
reasoning backward, to get help by pressing the alarm signal button.
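To make the two directions of reasoning concrete, here is a minimal sketch in code, assuming a toy propositional encoding of the goal and beliefs above; the rule format, atom names and function names are invented for illustration, not part of the original text:

```python
# A toy encoding of the beliefs above as (conclusion, conditions) rules.
beliefs = [
    ("get help", ["alert driver"]),
    ("alert driver", ["press alarm button"]),
    ("emergency", ["fire"]),
    ("emergency", ["attack"]),
    ("emergency", ["serious illness"]),
    ("emergency", ["accident"]),
    ("fire", ["flames"]),
    ("fire", ["smoke"]),
]

def forward(observations):
    """Reason forward from observations to all of their consequences."""
    derived = set(observations)
    changed = True
    while changed:
        changed = False
        for conclusion, conditions in beliefs:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

def backward(goal, facts):
    """Reason backward from a goal to the actions that achieve it."""
    if goal in facts:
        return []                        # the goal already holds
    for conclusion, conditions in beliefs:
        if conclusion == goal:
            plan = []
            for sub_goal in conditions:
                sub_plan = backward(sub_goal, facts)
                if sub_plan is None:
                    break                # this rule fails; try another
                plan += sub_plan
            else:
                return plan
    # an atom with no rule counts here as an action the agent can perform
    return [goal] if goal == "press alarm button" else None

facts = forward({"flames"})              # flames -> fire -> emergency
if "emergency" in facts:                 # the maintenance goal is triggered
    print(backward("get help", facts))   # ['press alarm button']
```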
However, the passenger can generate the same behaviour using lower-level input-output associations or condition-action
rules, which can also be represented as goals in logical form:
Goals: If there are flames then I press the alarm signal button.
If there is smoke then I press the alarm signal button.
If one person attacks another then I press the alarm signal button.
If someone becomes seriously ill then I press the alarm signal button.
If there is an accident then I press the alarm signal button.
The new goals are more efficient than the
original goal and beliefs. They need only one step of forward reasoning to associate
the appropriate action as output with the relevant observations as input. In
this respect, they are like a program written in a lower-level language, which
is more efficient than a program with the same functionality written in a
higher-level language.
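In code, the lower-level form amounts to a single table lookup; a sketch, reusing the invented atom names from the earlier one:

```python
# The compiled goals as one-step input-output associations.
condition_action = {
    "flames": "press alarm button",
    "smoke": "press alarm button",
    "attack": "press alarm button",
    "serious illness": "press alarm button",
    "accident": "press alarm button",
}
print(condition_action["smoke"])   # one step from observation to action
```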
In fact, in this case, the two programs are written in the same logical language. However, the original program is written at a higher level, which requires not only greater computational resources, but also more sophisticated reasoning mechanisms.
In Computing, different levels of language and
different levels of representation in the same language have different advantages
and are complementary. Lower-level representations are more efficient. But higher-level
representations are more flexible, easier to develop, and easier to change.
In this example, the lower-level representation lacks the awareness, explicit in the higher-level representation, that the purpose of pressing the alarm signal button is to get help. If something goes wrong with the lower-level representation, for example if the button doesn’t work or the driver doesn’t get help, then the passenger might not realise there is a problem. Also, if the environment changes, and newer and better ways of dealing with emergencies are developed, then it would be harder to modify the lower-level representation to adapt to the change.
In Computing, both levels of representation are
useful. Typically, the higher-level representation is developed first,
sometimes not even as a program but as an analysis of the program requirements.[1] This
higher-level representation is then transformed, either manually or by
means of another program called a compiler, into a lower-level, more
efficiently executable representation.
The reverse process is also possible. Low-level programs can sometimes be decompiled into equivalent higher-level programs. This is useful if the low-level program needs to be changed, perhaps because the environment has changed or because the program has developed a fault. The higher-level representation can then be modified and recompiled into a new, improved, lower-level form.
However, the reverse process is not always
possible. Legacy systems, developed directly in low-level languages and
modified over a period of many years, may not have enough structure for anyone to
identify their goals unambiguously and to decompile them into high-level form. But
even then it may be possible to decompile them partially and approximate them
with higher-level programs. This process of rational reconstruction can help to
improve the maintenance of the legacy system, even when wholesale
reimplementation is not possible.
Some of the work of compiling a high-level program into a lower-level form can be done by performing in advance some of the computation that would otherwise be performed only when it is needed.
In the underground example, instead of waiting for an emergency to
happen, a compiler can anticipate the need to reduce the conclusion of the
top-level maintenance goal:
Goal: If there is an emergency then I get help.
by performing the necessary backward reasoning in advance. In two such inference
steps, the goal can be transformed into a declarative version of a
condition-action rule:
Goal: If there is an emergency then I press the alarm signal button.
Similarly, the compiler can anticipate the need to recognise
there is an emergency by performing the necessary forward reasoning in advance.
In two inference steps, corresponding to the two ways of recognising
there is a fire, the compiler can derive the new beliefs:
Beliefs: There is an emergency if there are flames.
There is an emergency if there is smoke.
In five further inference steps, corresponding to the five kinds of
emergency, the compiler obtains the simple input-output associations we saw
before:
Goals: If there are flames then I press the alarm signal button.
If there is smoke then I press the alarm signal button.
If one person attacks another then I press the alarm signal button.
If someone becomes seriously ill then I press the alarm signal button.
If there is an accident then I press the alarm signal button.
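A sketch of this compilation by reasoning in advance, under the same toy encoding as the earlier sketch: unfold replaces an atom by the bodies of the rules that conclude it, so repeated unfolding performs the backward and forward inference steps described above at compile time rather than at run time. The names are invented for illustration.

```python
beliefs = [
    ("get help", ["alert driver"]),
    ("alert driver", ["press alarm button"]),
    ("emergency", ["fire"]),
    ("emergency", ["attack"]),
    ("emergency", ["serious illness"]),
    ("emergency", ["accident"]),
    ("fire", ["flames"]),
    ("fire", ["smoke"]),
]

def unfold(atom):
    """All alternative reductions of an atom, one per rule concluding it;
    an atom that no rule concludes reduces only to itself."""
    alts = [conditions for conclusion, conditions in beliefs if conclusion == atom]
    return alts if alts else [[atom]]

# Two backward steps, performed in advance, reduce the conclusion:
# get help -> alert driver -> press alarm button
conclusion = "get help"
for _ in range(2):
    conclusion = unfold(conclusion)[0][0]

# Forward steps, performed in advance, unfold the condition 'emergency'
# into its five cases (the two ways of recognising a fire, plus the
# other three kinds of emergency).
conditions = [case[0] for alt in unfold("emergency") for case in unfold(alt[0])]

# The compiled input-output associations:
for condition in conditions:
    print(f"If {condition} then I {conclusion}.")
```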
This representation is as low as a
representation can go, while still remaining in logical form. However, it is
possible to go lower, if these associations are implemented by direct physical
connections between the relevant parts of the human sensory and motor systems.
This is like implementing software in hardware.
In Cognitive Psychology, a similar distinction has been made between deliberative
and intuitive
thinking. Deliberative thinking, which is self-aware and controlled,
includes logical thinking. Intuitive thinking, which is opaque and automatic, extends
perceptual processing to subconscious levels of thought. Cognitive
Psychologists have proposed various dual process models of human thinking in
which the two kinds of thinking interact.
The simplest interaction occurs when deliberative thinking migrates to
the intuitive level over the course of time, as for example when a person learns
to use a keyboard, play a musical instrument or drive a car. This migration
from deliberative to intuitive thinking is like compiling a high level program
into a lower-level program – not by reasoning in advance, but by reasoning
after the fact. After a combination of high-level, general-purpose procedures is
used many times over, the combination is collapsed into a lower-level shortcut.
The shortcut is a specialised procedure, which
achieves the same result as the more general procedures, only more efficiently.
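As a very loose analogy in code (the functions and the arithmetic are invented for illustration), the collapse resembles specialising a chain of general-purpose procedures into one precomputed shortcut:

```python
def step1(x): return x + 1        # three general-purpose procedures,
def step2(x): return x * 2        # deliberately chained together each
def step3(x): return x - 3        # time they are needed

def deliberative(x):
    """Re-derives the combination of general procedures on every call."""
    return step3(step2(step1(x)))

def intuitive(x):
    """The same combination collapsed into one specialised shortcut:
    (x + 1) * 2 - 3 simplifies to 2 * x - 1."""
    return 2 * x - 1

assert all(deliberative(x) == intuitive(x) for x in range(10))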
It is sometimes possible to go in the opposite
direction, reflecting upon subconscious knowledge and representing it in
conscious, explicit terms – for example when a linguist tries to construct a formal
grammar for a language. Whereas the native speaker of the language knows the
grammar tacitly and subconsciously, the linguist attempts to formulate an
explicit grammar for the language. Teachers can teach the resulting explicit
grammar to students. With sufficient practice, the students may eventually
compile the explicit grammar into their own subconscious, and learn to speak
the language more efficiently.
In the same way that many low-level programs cannot be completely decompiled, but can only be approximated by higher-level programs, it seems likely that much of our subconscious thinking can only be approximated by conscious thought. The formal grammars of the linguist are an example.
In human thinking, the two levels of thought can operate in tandem. In
the Kahneman-Tversky dual process model, the
intuitive, subconscious level “quickly proposes intuitive answers to judgement problems as they arise”, while the deliberative,
conscious level “monitors the quality of these proposals, which it may endorse,
correct, or override”[2]. The use of formal
grammar to monitor and correct the instinctive use of natural language is a
familiar example.
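A loose sketch of this interaction in code, with an invented toy problem standing in for a judgement problem; the stored associations, function names and example questions are all assumptions made for illustration:

```python
# Fast but fallible associations stand in for the intuitive level.
associations = {"2 + 2": "4", "7 x 8": "54"}

def intuitive(question):
    """Quickly proposes an answer from a stored association."""
    return associations.get(question)

def deliberative(question, proposal):
    """Monitors the proposal, endorsing it or overriding it by
    slower, explicit computation."""
    correct = str(eval(question.replace("x", "*")))  # toy arithmetic only
    return proposal if proposal == correct else correct

for q in associations:
    print(q, "->", deliberative(q, intuitive(q)))
# 2 + 2 -> 4    (the proposal is endorsed)
# 7 x 8 -> 56   (the proposal 54 is overridden)
```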
The general view of logic that emerges from these considerations is that logic is the higher-level language of thought, at which thoughts are deliberate, controlled and conscious. At this higher level, logical reasoning reduces goals to sub-goals, derives consequences of observations and infers consequences of candidate actions. This reasoning can be performed only when it is needed, or it can be performed in advance. When it is performed in advance, it transforms a higher-level representation of goals and beliefs into a more efficient, lower-level representation. At this lower level of representation, behaviour is instinctive, automatic and subconscious. These relationships can be pictured in this way:
[Figure: the levels of thought. At the highest, conscious level, forward reasoning derives achievement goals and consequences from maintenance goals, and backward reasoning reduces goals to intermediate sub-goals and candidate actions, whose consequences are derived by forward reasoning before the agent decides between them. Below the conscious-subconscious divide, perceptual processing turns input from the world into observations, motor processing turns actions into output to the world, and input-output associations link the two directly.]