Chapter 4

How to design a louse

 

Now imagine that you have graduated from Mars explorer to wood louse designer. Not being a louse yourself, but being instead an external observer, you view the louse’s behaviour in purely declarative terms:

 

If it’s clear ahead, then the louse moves forward.

If there’s an obstacle ahead, then the louse turns right.

If the louse is tired, then it stops.

 

Unfortunately, it has come to your notice that something is going awfully wrong. Without food the wood louse is going to die, and without children its genes will become extinct. What is the point of your louse just wandering around if it doesn’t bother to eat and make babies?

 

Part of the problem is that the louse’s body isn’t giving it the right signals - not making it hungry when it is running out of energy, and not making it lust for a mate when it should be having children. Its body also needs to be able to sense food and to eat, and be able to sense potential mates and to propagate.

 

You have to start all over again with your design, this time from the top down. What are your louse’s top-level goals and how can they be reduced to sub-goals? To redesign your louse, you need something like this:

 

Top-level goals:

 

The louse stays alive for as long as possible and the louse has as many children as it can.

 

Beliefs:

 

The louse stays alive for as long as possible,

if whenever it is hungry then it looks for food and when there is food ahead it eats it, and

whenever it is tired then it rests, and

whenever it is threatened with attack then it defends itself.

 

The louse has as many children as it can,

if whenever it lusts after a mate then it looks for a mate and

when there is a mate ahead it tries to make babies.

 

The louse looks for an object,

if whenever it’s clear ahead then it moves forward, and

whenever there’s an obstacle ahead and it isn’t the object then it turns right.

 

The louse defends itself

if it attacks or it runs away.
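
If you like to see such a specification made concrete, here is a minimal sketch in Python of how the goals and beliefs might be written down as data. Everything in it – the names Clause, GOALS and BELIEFS, and the use of plain English strings for conditions and conclusions – is invented for illustration; it is not meant to be the louse’s own internal representation.

from dataclasses import dataclass, field

@dataclass
class Clause:
    head: str                                   # a conclusion, e.g. "the louse stays alive ..."
    body: list = field(default_factory=list)    # the conditions that establish the conclusion

# The two top-level goals, exactly as stated above.
GOALS = [
    "the louse stays alive for as long as possible",
    "the louse has as many children as it can",
]

# The beliefs, one clause per belief, with the maintenance rules kept as plain strings.
BELIEFS = [
    Clause("the louse stays alive for as long as possible",
           ["if the louse is hungry then it looks for food and if there is food ahead it eats it",
            "if the louse is tired then it rests",
            "if the louse is threatened with attack then it defends itself"]),
    Clause("the louse has as many children as it can",
           ["if the louse lusts after a mate then it looks for a mate "
            "and if there is a mate ahead it tries to make babies"]),
    Clause("the louse looks for an object",
           ["if it's clear ahead then it moves forward",
            "if there's an obstacle ahead and it isn't the object then it turns right"]),
    Clause("the louse defends itself",
           ["it attacks or it runs away"]),
]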

 

Now that you have identified both your louse’s higher-level goals and beliefs that can be used to reduce those goals to sub-goals, you have specified the redesign of your louse. If your louse were capable of reasoning for itself, then you could just hand those goals and beliefs over to the louse itself without further ado. But, assuming that such reasoning is beyond the louse’s capabilities, you can, instead, do much of the reasoning for it in advance.

 

This process of reasoning in advance is similar to the computer scientist’s notion of compiling a high-level program into a lower-level program. The high-level program is designer-oriented, making it easier to ensure that the lower-level program meets its requirements and making it easier to modify the design if the requirements change. The lower-level program is machine-oriented, making it easier to execute the program efficiently.

 

In computing, high-level programs are compiled into lower-level programs by means of a special computer program called a compiler. In this example, however, you are both the requirements engineer, using logic to represent the louse’s goals and your beliefs, and the compiler, using backward reasoning to generate a production system that can be executed directly by the louse itself.

 

To compile your high-level representation, you could begin by reasoning backwards from the louse’s top-level goals, using your beliefs, to generate the next level down of sub-goals[1]:

 

Sub-goals:

 

If the louse is hungry then it looks for food and if there is food ahead it eats it[2].

If the louse is tired then it rests.

If the louse is threatened with attack then it defends itself.

If the louse lusts after a mate then it looks for a mate and if there is a mate ahead it tries to make babies.
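
To make this first compilation step concrete, here is a minimal Python sketch, continuing the illustrative GOALS and BELIEFS representation above, of a single step of backward reasoning: each top-level goal is replaced by the body of a belief whose conclusion matches it. The function name unfold is invented for the sketch; applying it to the two top-level goals yields exactly the four sub-goals listed above.

def unfold(goal, beliefs):
    """One step of backward reasoning: reduce a goal to the body of a matching belief."""
    for clause in beliefs:
        if clause.head == goal:
            return clause.body        # the goal reduces to these sub-goals
    return [goal]                     # no matching belief: the goal stands as it is

sub_goals = []
for goal in GOALS:
    sub_goals.extend(unfold(goal, BELIEFS))

# sub_goals now holds the four maintenance rules listed above, e.g.
# "if the louse is hungry then it looks for food and if there is food ahead it eats it".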

 

These sub-goals are similar to condition-action rules, except that they are expressed in declarative form. Moreover, some of the conclusions of the rules include goals (like looking for food, defending itself, and looking for a mate) that need to be reduced to still lower-level sub-goals[3]. So if the louse can execute only condition-action rules, then these rules are not yet in the right form.

 

Fortunately, you can also do this additional reasoning for the louse. With a little further backward reasoning and some logical simplification, you can derive a declarative analogue of a relatively simple set of condition-action rules:

 

If the louse is hungry and it’s clear ahead then it moves forward.

If the louse is hungry and there’s an obstacle ahead and it isn’t food then it turns right.

If the louse is hungry and there’s an obstacle ahead and it is food then it eats it.

If the louse is tired then it rests.

If the louse is threatened with attack then it attacks or it runs away.

If the louse lusts after a mate and it’s clear ahead then it moves forward.

If the louse lusts after a mate and there’s an obstacle ahead and it isn’t a mate then it turns right.

If the louse lusts after a mate and there’s an obstacle ahead and it is a mate then it tries to make babies.

 

You can now simply translate these rules into imperative form and hand them over to the louse, which can then execute them itself as a conventional production system[4].
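
To give a flavour of what handing the rules over to a conventional production system might amount to, here is a minimal Python sketch. The eight rules are a direct transcription of the list above; the observation names, the dictionary of sensor readings and the first-match conflict-resolution strategy are all invented for the example, and the conflict resolution is deliberately the crudest possible one.

# Each rule is a pair: a condition on the louse's current observations, and an action.
RULES = [
    (lambda s: s["hungry"] and s["clear_ahead"],                             "move forward"),
    (lambda s: s["hungry"] and s["obstacle_ahead"] and not s["food_ahead"],  "turn right"),
    (lambda s: s["hungry"] and s["obstacle_ahead"] and s["food_ahead"],      "eat"),
    (lambda s: s["tired"],                                                   "rest"),
    (lambda s: s["threatened"],                                              "attack or run away"),
    (lambda s: s["lustful"] and s["clear_ahead"],                            "move forward"),
    (lambda s: s["lustful"] and s["obstacle_ahead"] and not s["mate_ahead"], "turn right"),
    (lambda s: s["lustful"] and s["obstacle_ahead"] and s["mate_ahead"],     "try to make babies"),
]

def production_cycle(observations):
    """One observe-act cycle: fire the first rule whose condition holds (naive conflict resolution)."""
    for condition, action in RULES:
        if condition(observations):
            return action
    return "do nothing"

# Example: a hungry louse with food directly ahead of it.
print(production_cycle({"hungry": True, "tired": False, "threatened": False, "lustful": False,
                        "clear_ahead": False, "obstacle_ahead": True,
                        "food_ahead": True, "mate_ahead": False}))   # -> eat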

The mind-body problem

We have assumed that your louse has a mind that can represent and execute condition-action rules. But is it really necessary for the louse to have a mind at all - to have concepts for such notions as hunger, lust, obstacle and food, to derive symbolic representations of candidate actions and to resolve conflicts when there is a choice of actions it can do? Why does the louse need to carry around all this mental baggage, when the necessary behaviour can be hardwired directly into the louse’s nervous system instead[5]?

 

If you were designing a thermostat, to regulate the temperature of a room by turning on the heat when it is too cold and turning it off when it is too hot, you wouldn’t assume that the thermostat needs to think. You might, none the less, describe your thermostat’s input-output behaviour in condition-action terms:

 

            If current temperature is T degrees and target temperature is T’ degrees and T < T’ - 2°

            then the thermostat turns on the heat.

 

            If current temperature is T degrees and target temperature is T’ degrees and T > T’ + 2°

            then the thermostat turns off the heat.
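
The same two rules can be put in code as a minimal Python sketch; the 2° band comes from the rules above, while the function name and the “do nothing” case for temperatures inside the band are invented for the example.

def thermostat(current, target, band=2.0):
    """Decide the control action from the current and target temperatures."""
    if current < target - band:
        return "turn on the heat"
    if current > target + band:
        return "turn off the heat"
    return "do nothing"          # within the band: leave the heating as it is

# Example: room at 17 degrees with a target of 21 degrees.
print(thermostat(17.0, 21.0))    # -> turn on the heat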

 

And, as with the design of your louse, you might use logical reasoning to derive that input-output description from a logical representation of the thermostat’s top-level goal, using your beliefs about the thermostat’s environment. In this case, the goal might be to maintain room temperature within 3° of the target temperature, and your beliefs might include that when you turn on the heat the temperature will rise after at most a short delay, and that when you turn off the heat the temperature will fall after a similarly short delay.

 

But, like most people, you would probably be perfectly happy if the input-output description you derive were implemented with a simple mechanical or electronic device, rather than with a condition-action rule production system.

 

In general, a designer’s job ends when she has constructed a declarative description of her artifact’s input-output behaviour. As long as the implementation lives up to that description – and the more efficiently it does so, the better – the designer has succeeded in her task.

 

In computer science, this decoupling of the design of an object’s input-output behaviour from the implementation of that behaviour is called encapsulation. The implementation is encapsulated inside the object, and its details are of no concern to other objects. Objects can interact with one another, taking into account only the input-output behaviour of other objects, without concerning themselves with how that behaviour is implemented.
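
The idea can be shown with a small, invented Python illustration: other objects call only the thermostat’s act method, and whether the inside of act is a table of condition-action rules or a bare arithmetic test is hidden from them.

class Thermostat:
    """Only the input-output behaviour is visible; the implementation is encapsulated."""

    def __init__(self, target, band=2.0):
        self._target = target          # internal detail, of no concern to other objects
        self._band = band

    def act(self, current_temperature):
        # How the decision is reached is hidden inside the object.
        if current_temperature < self._target - self._band:
            return "turn on the heat"
        if current_temperature > self._target + self._band:
            return "turn off the heat"
        return "do nothing"

# Another object interacts with the thermostat purely through its behaviour.
print(Thermostat(target=21.0).act(24.5))   # -> turn off the heat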

 

This computer science notion of encapsulation partially vindicates the behaviourist’s point of view. Not only is it impossible to determine what goes on inside another person’s head, but for some purposes it is also unnecessary and even undesirable.

What it’s like to be human

Looking at a thermostat, a louse or another person from the outside as an external observer, all you can be sure about is the input-output behaviour you observe – or, in some cases, the input-output behaviour you design.

 

But it’s different if you look at the same behaviour from the inside out, for example as an implementer of that behaviour. As an implementer, you have to decide what is the best way to obtain the behaviour you desire. The more complex and sophisticated the behaviour, the more complex and sophisticated the implementation.

 

A thermostat is relatively simple, and its behaviour can be implemented relatively simply, directly in hardware. A louse’s behaviour is much more varied and complex. But even its behaviour can probably be implemented directly in physical material, without the assistance of a mind.

 

Human behaviour, however, is much more complex than that of a thermostat or a louse, even though some of our instinctive behaviour might sometimes resemble that of a louse. Gluttony, lust and even the way we react to a perceived threat of attack are obvious examples. None the less, we humans can reason about our behaviour, even when it is partly instinctive, and at least to some extent can control our behaviour to better achieve our goals.

 

That, in fact, is the top-level purpose of this book – to investigate how we humans can use logic to accomplish our goals more effectively. For this purpose, we need to understand how logic is related to other ways of generating behaviour, and how logic can be reconciled and combined with those other ways. The most important of these is the production system model of human thinking.

 

We have already seen most of what we are looking for in the first part of this chapter: condition-action rules are goals. They can be sub-goals of higher-level goals – as in the case of a louse moving forward when it is clear ahead, for the higher-level goal of finding food or finding a mate. And they can be generalised, so that their conclusions also contain goals, and not just actions – as in the case of a louse deriving the goal of defending itself when it is attacked.

 

However, in this chapter, it was one agent, the louse designer, who used logic to reason about the condition-action rule behaviour of another agent, the louse. In the next chapter, we will see how the same agent can combine logical, goal-oriented reasoning and condition-action rule behaviour in the same reasoning mechanism.

 

 



[1] The words “whenever” and “when” are different ways of saying “if”, but with a temporal dimension. We will investigate temporal reasoning later.

[2] This English sentence is ambiguous. However, the lack of a comma before “and if there is food ahead it eats it” is meant to suggest that this clause cannot be read on its own, as a separate sentence. Instead, the clause inherits the extra condition “the louse is hungry” from the first part of the sentence. The same is true of the last sentence in this group of four sentences.

[3] To avoid going into detail, we assume that trying to make babies is an action sub-goal that the louse can execute directly without needing to reduce it to lower-level sub-goals.

 

[4] Although the rules are now in a form that the louse can execute directly, this doesn’t mean that all of the louse’s problems are over. What should the louse do if it is tired and hungry at the same time? Or lustful and hungry? Or hungry and threatened with attack? Or tired and hungry and lustful and threatened with attack? It will be torn between resting, looking for food, looking for a mate, attacking and running away. The louse needs to resolve the conflicts between competing actions, to decide which actions to perform. In fact, it is possible to build the necessary conflict-resolution strategies into the rules themselves. But this is a topic for another occasion.

[5] This argument has been made compellingly by, among others, Rodney Brooks at MIT, who has implemented several generations of mindless, louse-like robots that display remarkably intelligent behaviour.