It’s bad enough to be a Mars explorer and not to know that your purpose in life is to find life on Mars. But it’s a lot worse to be a wood louse and have nothing more important to do with your life than just follow the meaningless rules:
Goals:
If it’s clear ahead, then I move forward.
If there’s an obstacle ahead, then I turn right.
If I am tired, then I stop.
In fact, it’s even worse than meaningless.
Without food the louse will die, and
without children the louse’s genes will disappear. What is the point of just wandering around if it doesn’t bother to
eat and make babies?
Part of the problem is that the louse’s body isn’t giving it the
right signals - not making it hungry when it is running out of energy, and not
making it desire a mate when it should be having children. Its body also needs
to be able to recognise food and eat, and to recognise potential mates and
propagate.
So where does the louse go from here? If it got here by natural
evolution, then it has nowhere to go and is on the road to extinction.
But if it owes its life to some Grand Designer, then it can plead
with her to start all over again, this time working from the top down. The
Grand Designer would need to rethink the louse’s top-level goals, decide how to
reduce them to sub-goals, and derive a new specification of its input-output
behaviour.
Suppose the Grand Designer identifies these as the louse’s top-level goals:
Top-level goals:
The louse stays alive for as long as possible, and
the louse has as many children as possible.
Of course, a critic might well ask: What purpose do these goals
serve, and why these goals and not others? Perhaps staying alive is just a
sub-goal of having children. And perhaps having children is just one way of
promoting the survival of one’s genes. But eventually the critic would have to
stop. Otherwise he could continue asking such questions forever.
To reduce the louse’s top-level goals to sub-goals, the designer
needs to use her beliefs about the world, including her beliefs about the
louse’s bodily capabilities. Moreover, she can build upon her earlier design,
in which the louse moved around aimlessly, and give its movements a purpose.
She could use such beliefs as:
Beliefs:
The louse stays alive for as long as possible,
if whenever it is hungry then it looks for food and when there is food ahead it eats it, and
whenever it is tired then it rests, and
whenever it is threatened with attack then it defends itself.

The louse has as many children as possible,
if whenever it desires a mate then it looks for a mate and when there is a mate ahead it tries to make babies.

The louse looks for an object,
if whenever it is clear ahead then it moves forward, and
whenever there is an obstacle ahead and it isn’t the object then it turns right, and
when the object is ahead then it stops.

The louse defends itself
if it makes a pre-emptive attack.

Food is an object.
A mate is an object.
If the louse were as intelligent as the designer, then the designer
could just hand these beliefs and the top-level goal directly over to the louse
itself. The louse could then reason forwards and backwards, as the need arises,
and would be certain of achieving its goals, provided the designer’s beliefs
are actually true.
But the louse possesses neither the designer’s intellect nor her
gorgeous body and higher education. The designer, therefore, not only has to
identify the louse’s requirements, but also has to derive an input-output
specification that can be implemented in the louse, using its limited
physical and mental capabilities.
One way for the designer to do her job is to do the necessary
reasoning for the louse in advance. She can begin by reasoning backwards from
the louse’s top-level goals, to generate the next, lower level of sub-goals:
Sub-goals:
whenever the louse is hungry then it looks for food and when there is food ahead it eats it, and
whenever the louse is tired then it rests, and
whenever the louse is threatened with attack then it defends itself, and
whenever the louse desires a mate then it looks for a mate and when there is a mate ahead it tries to make babies.
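For readers who like to see this goal reduction performed mechanically, here is a minimal sketch in Python of the single step of backward reasoning involved: a goal is replaced by the conditions of a belief whose conclusion matches it. The representation of beliefs as a mapping from conclusions to conditions, and all the names in the code, are invented here purely for illustration.

    # A minimal sketch of one step of backward reasoning (goal reduction).
    # Beliefs are represented as a mapping from a conclusion to its conditions;
    # the representation and the names are invented here for illustration.
    beliefs = {
        "the louse stays alive for as long as possible": [
            "whenever the louse is hungry then it looks for food "
            "and when there is food ahead it eats it",
            "whenever the louse is tired then it rests",
            "whenever the louse is threatened with attack then it defends itself",
        ],
        "the louse has as many children as possible": [
            "whenever the louse desires a mate then it looks for a mate "
            "and when there is a mate ahead it tries to make babies",
        ],
    }

    def reduce_goal(goal):
        """Replace a goal by the conditions of a belief that concludes it;
        a goal with no matching belief is left as it is."""
        return beliefs.get(goal, [goal])

    # Reasoning backwards from the two top-level goals yields the sub-goals above.
    for top_level_goal in beliefs:
        for sub_goal in reduce_goal(top_level_goal):
            print(sub_goal)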
The English words “whenever” and “when” are different ways of
saying “if”, but they carry an additional, temporal dimension[1].
It would be a distraction to deal with such temporal issues here. For that
reason, it is useful to reformulate the sub-goals in more conventional logical
terms. At the same time, we can take advantage of the reformulation to
eliminate the ambiguity associated with the scope of the words “and when”:
Sub-goals:
If the louse is hungry then it looks for food, and
if the louse is hungry and there is food ahead then it eats it, and
if the louse is tired then it rests, and
if the louse is threatened with attack then it defends itself, and
if the louse desires a mate then it looks for a mate, and
if the louse desires a mate and there is a mate ahead then it tries to make babies.
Unfortunately, the designer’s work is not yet finished. Some of
the conclusions of the sub-goals include other goals (like looking for food,
defending itself, and looking for a mate) that need to be reduced to still
lower-level sub-goals[2].
Fortunately for the designer, this is easy work. It takes just a little
further backward reasoning and some logical simplification[3],
to derive a specification that a behaviourist would be proud of:
New Goals:
If the louse is hungry and it is clear ahead
then the louse moves forward.

If the louse is hungry and there is an obstacle ahead and it isn’t food
then the louse turns right.

If the louse is hungry and there is food ahead
then the louse stops and it eats the food.

If the louse is tired
then the louse rests.

If the louse is threatened with attack
then the louse makes a pre-emptive attack.

If the louse desires a mate and it is clear ahead
then the louse moves forward.

If the louse desires a mate and there is an obstacle ahead and it isn’t a mate
then the louse turns right.

If the louse desires a mate and there is an obstacle ahead and it is a mate
then the louse stops and it tries to make babies.
The new goals specify the louse’s input-output behaviour and can
be implemented directly as a production system without memory. However, they
are potentially inconsistent. If the louse desires a mate and is hungry at the
same time, then it may find itself in a situation, for example, where it has to
both stop and eat and also turn right and look for a mate simultaneously. To
avoid such inconsistencies, the louse would need to perform conflict
resolution.
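To make the idea concrete, here is a minimal sketch in Python of the new goals as a production system without memory; the dictionary of observations and all of its keys are invented for illustration. Each rule tests only the current observations and proposes an action, and with no conflict resolution a louse that is hungry and desires a mate at the same time can be offered incompatible actions, exactly as just described.

    # A minimal sketch of the New Goals as a production system without memory.
    # The "state" is just the current observations; nothing is remembered.
    def candidate_actions(s):
        actions = []
        if s["hungry"] and s["clear ahead"]:
            actions.append("move forward")
        if s["hungry"] and s["obstacle ahead"] and not s["food ahead"]:
            actions.append("turn right")
        if s["hungry"] and s["food ahead"]:
            actions.append("stop and eat")
        if s["tired"]:
            actions.append("rest")
        if s["threatened"]:
            actions.append("pre-emptive attack")
        if s["desires mate"] and s["clear ahead"]:
            actions.append("move forward")
        if s["desires mate"] and s["obstacle ahead"] and not s["mate ahead"]:
            actions.append("turn right")
        if s["desires mate"] and s["obstacle ahead"] and s["mate ahead"]:
            actions.append("stop and try to make babies")
        return actions

    # A hungry louse that also desires a mate, with food (an obstacle) ahead,
    # is offered both "stop and eat" and "turn right": the conflict in the text.
    observations = {"hungry": True, "desires mate": True, "tired": False,
                    "threatened": False, "clear ahead": False,
                    "obstacle ahead": True, "food ahead": True, "mate ahead": False}
    print(candidate_actions(observations))  # ['stop and eat', 'turn right']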
But if it’s too much to expect the louse to reason logically, it’s
probably also too much to expect the louse to perform conflict resolution. And
it’s certainly far too much to expect it to apply Decision Theory to weigh the
relative advantages of satisfying its hunger compared with those of satisfying
its longing to have a mate. The simplest solution is for the designer to make
these decisions for the louse, and to
build them into the specification:
If the louse is hungry and is not threatened with attack and it is clear ahead
then the louse moves forward.

If the louse is hungry and is not threatened with attack and there is an obstacle ahead and it isn’t food and it doesn’t desire a mate
then the louse turns right.

If the louse is hungry and is not threatened with attack and there is food ahead
then the louse stops and eats the food.

If the louse is tired and is not threatened with attack and is not hungry and does not desire a mate
then the louse rests.

If the louse is threatened with attack
then the louse makes a pre-emptive attack.

If the louse desires a mate and is not threatened with attack and it is clear ahead
then the louse moves forward.

If the louse desires a mate and is not threatened with attack and is not hungry and there is an obstacle ahead and it isn’t a mate
then the louse turns right.

If the louse desires a mate and is not threatened with attack and there is a mate ahead
then the louse stops and tries to make babies.

If the louse desires a mate and is hungry and is not threatened with attack and there is an obstacle ahead and it isn’t a mate and it isn’t food
then the louse turns right.
The new specification is a collection of
input-output associations that give highest priority to reacting to an attack,
lowest priority to resting when tired, and equal priority to mating and eating.
Now the only situation in which a conflict can arise is if there is a mate and
food ahead at the same time. Well, you can’t always worry about everything.
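One way to picture the effect of these built-in decisions is the following sketch in Python (with names again invented for illustration), in which the designer’s priorities are realised instead by the order of the rules: the first rule whose conditions hold determines the action, so the explicit “is not threatened with attack” and “is not hungry” conditions of the written specification never need to be tested. This first-match ordering is one possible implementation of the same priorities, not the only one, and it also happens to break the remaining mate-versus-food tie in favour of eating.

    # A sketch of the prioritised specification, with the designer's priorities
    # realised by rule order: the first applicable rule determines the action.
    def act(s):
        if s["threatened"]:
            return "pre-emptive attack"              # highest priority
        # Eating is tried before mating here, so if both food and a mate happen
        # to be ahead at once, this ordering settles the one remaining conflict.
        if s["hungry"] and s["food ahead"]:
            return "stop and eat"
        if s["desires mate"] and s["mate ahead"]:
            return "stop and try to make babies"
        if (s["hungry"] or s["desires mate"]) and s["clear ahead"]:
            return "move forward"
        if (s["hungry"] or s["desires mate"]) and s["obstacle ahead"]:
            # Reached only when the obstacle is neither food the hungry louse
            # wants nor a mate the amorous louse is looking for.
            return "turn right"
        if s["tired"]:
            return "rest"                            # lowest priority
        return "do nothing"

    observations = {"threatened": False, "hungry": True, "desires mate": True,
                    "tired": True, "clear ahead": False, "obstacle ahead": True,
                    "food ahead": True, "mate ahead": False}
    print(act(observations))  # 'stop and eat'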
In general, a designer’s job
ends when she has constructed a declarative description of her object’s
input-output behaviour. How that behaviour is implemented inside the object is
not her concern.
In computer science, this
decoupling of an object’s design from its implementation is called encapsulation.
The implementation is encapsulated inside the object. Objects can interact with
other objects, taking only their input-output behaviour into account.
The notion of encapsulation partially
vindicates the behaviourist’s point of view. Not only is it impossible in many
cases to determine what goes on inside another object, but for many purposes it
is also unnecessary and even undesirable.
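In programming terms, encapsulation can be pictured with a small sketch in Python (the class and its methods are invented for illustration): other objects can use the louse only through its input-output interface, and cannot tell how that behaviour is produced inside.

    # A sketch of encapsulation: callers see only the object's input-output
    # behaviour; the way an action is chosen is hidden inside the object.
    class Louse:
        def step(self, observation):
            """The public interface: an observation in, an action out."""
            return self._choose(observation)

        def _choose(self, observation):
            # Hidden implementation detail: this could equally be hardwired
            # associations, a production system, or logical reasoning.
            return "move forward" if observation == "clear ahead" else "turn right"

    print(Louse().step("clear ahead"))  # 'move forward'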
Our louse is no exception. It would be
easy, given the input-output specification, to implement the louse’s behaviour
using a primitive production system without memory
and without conflict resolution. But does the louse need a mind at all - to
represent concepts such as hunger and food and to derive symbolic
representations of its actions? Does the louse really need to carry around all
this mental baggage, when the necessary, instinctive behaviour can be
hardwired, as a collection of input-output associations, directly into the
louse’s body instead[4]?
Similarly, a designer might
specify the design of a thermostat in symbolic terms:
Goals:
If current temperature is T degrees and target temperature is T’ degrees and T < T’ - 2°
then the thermostat turns on the heat.

If current temperature is T degrees and target temperature is T’ degrees and T > T’ + 2°
then the thermostat turns off the heat.
But it doesn’t follow that
the thermostat needs to manipulate symbolic expressions to generate its behaviour.
Most people would be perfectly happy if the design were implemented with a
simple mechanical or electronic device.
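Here, for instance, is a sketch in Python of such a direct implementation, using the two-degree margins from the rules above (the function and its arguments are invented for illustration): it satisfies the symbolic specification while manipulating nothing but numbers.

    # A sketch of the thermostat: it satisfies the symbolic specification above
    # without manipulating any symbolic expressions, only numbers.
    def thermostat(current, target, heat_is_on):
        if current < target - 2:       # more than 2 degrees too cold: heat on
            return True
        if current > target + 2:       # more than 2 degrees too warm: heat off
            return False
        return heat_is_on              # otherwise leave the heat as it is

    print(thermostat(17, 20, heat_is_on=False))  # True: turn on the heat
    print(thermostat(23, 20, heat_is_on=True))   # False: turn off the heat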
In the same way that a
thermostat’s behaviour can be viewed externally in logical, symbolic terms,
without implying that the thermostat itself manipulates symbolic expressions,
our louse’s behaviour can also be implemented as a collection of instinctive
input-output associations in a body without a mind.
These possibilities can be pictured like
this:
[Figure: the division of labour between the louse designer and the louse. The designer reasons forward and backward between the highest-level achievement and maintenance goals, intermediate-level sub-goals and their consequences, and candidate actions, reasoning forward from candidate actions to their consequences in order to decide among them. The louse itself merely turns observations into actions, by perceptual and motor processing over hardwired input-output associations, interacting directly with the world.]
Although much of our human behaviour is
instinctive and even mindless, we can often step back from what we are doing,
consciously reflect upon our goals, and to some extent control our behaviour to better achieve those goals. It is as though we
could be both a louse and a louse designer at the same time.
That, in effect, is the ultimate goal of this book – to investigate how
we can use logic to monitor our behaviour and how we
can accomplish our goals more effectively as a result. For this purpose, we
need to investigate the relationship between instinctive and logical thought.
We have seen an example of this relationship in this chapter. But in this
example the instinctive and logical levels were present in different
individuals. In the next chapter we will see how they can be present in the
same individual.
[1] It is interesting that both the temporal and logical interpretations of the ambiguous English word “then” are meaningful here.
[2] For simplicity, we assume that making a pre-emptive attack, resting and trying to make babies are all actions that the louse can execute directly without reducing them to lower-level sub-goals.
[3] The necessary simplification is to replace sentences of the form “if A, then if B then C” by sentences of the form “if A and B then C”.
[4] This argument has been made, among others, by Rodney Brooks at MIT, who has implemented several generations of mindless, louse-like robots, which display impressively intelligent behaviour.