Most Intelligent Agent technologies have an architecture based on the Belief-Desire-Intention (BDI) model. The BDI model comes from research done in the field of artificial intelligence over the past twenty years and has proven to be the most robust and flexible model for Intelligent Agent Systems.
Source: The Agent Oriented Software Group.
Agents that use this belief, desire and intention model will generally have:
• A set of beliefs that represent what the agent ‘believes’ or ‘knows’.
• A set of desires or goals that the agent is working towards achieving.
• A set of intentions to achieve the agent’s current goals.
• A set of abilities (effectoric capabilities) which allow the agent to further its intentions.
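The four components above can be sketched as a simple data structure. This is only an illustrative sketch, not an API from any particular BDI platform; all names (BDIAgent, open_door, and the example beliefs) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    """Minimal container for the four BDI components (illustrative only)."""
    beliefs: set = field(default_factory=set)       # what the agent 'believes' or 'knows'
    desires: set = field(default_factory=set)       # goals it would like to achieve
    intentions: list = field(default_factory=list)  # goals it has committed to pursuing
    abilities: dict = field(default_factory=dict)   # named actions it can perform

# Populate a toy agent: it believes the door is closed, wants to be
# outside, can open doors, and has committed to the 'be outside' goal.
agent = BDIAgent()
agent.beliefs.add("door is closed")
agent.desires.add("be outside")
agent.abilities["open_door"] = lambda: "door is open"
agent.intentions.append("be outside")
```

In a real BDI system each component would be far richer (beliefs as a logical knowledge base, intentions as partially executed plans), but the separation of the four sets is the essential idea.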
Agents must decide which goal to pursue, what action to take next in pursuit of that goal, and when to switch to a new goal. There are two main ways that an agent can reason: deductive reasoning and practical reasoning.
Deductive reasoning is where an agent takes some of its beliefs or known facts and produces new information 'deduced' from the original knowledge. One of the most famous examples comes from logic.
Known facts:
• All men are mortals.
• Socrates is a man.
Deduced fact:
• Therefore, Socrates is a mortal.
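A minimal sketch of how an agent might mechanise this deduction, assuming facts are encoded as (predicate, subject) pairs and the universal rule "all men are mortal" is hard-coded as a single forward-chaining step (both encodings are illustrative assumptions, not from the original):

```python
def deduce(facts):
    """One forward-chaining step applying the rule 'all men are mortal':
    for every ('man', X) in the belief set, add ('mortal', X)."""
    derived = set(facts)
    for predicate, subject in facts:
        if predicate == "man":
            derived.add(("mortal", subject))
    return derived

beliefs = {("man", "Socrates")}
facts = deduce(beliefs)
# ('mortal', 'Socrates') is now among the agent's deduced facts
```

A general deductive agent would loop such rule applications until no new facts appear, and would represent rules as data rather than code; the point here is only that new knowledge falls out of old knowledge mechanically.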
This may seem simple or obvious to humans, but implementing it in an agent is far more difficult.
It is often not possible or practical for an agent to
reason purely deductively. In certain situations it is
important to be able to figure out what action to take on
the spot.
Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes.
Bratman, Intentions in Communication, Page 17.
In short, practical reasoning is reasoning directed towards action. It can be broken down into two distinct parts: deliberation and means-ends reasoning. Deliberation is the process of establishing what state the agent is working towards; deciding how to get to that state is means-ends reasoning. For example, an agent playing chess must deliberate to choose what to do. Will it defend? Attack a certain piece? Go for checkmate? Once it has deliberated and picked a task to work towards, it then performs means-ends reasoning to decide how to achieve that task. The result may be as trivial as moving a particular piece to capture another, or it may be a much more complex plan that will take several moves.
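The two stages can be sketched as two separate functions, using a chess-flavoured toy example. Everything here is a hypothetical illustration: the desire encoding, the value numbers, and the plan library are invented, and real means-ends reasoning would construct plans rather than look them up.

```python
def deliberate(desires, beliefs):
    """Deliberation: commit to the most valuable desire whose
    precondition the agent currently believes to hold."""
    achievable = [d for d in desires if d["precondition"] in beliefs]
    return max(achievable, key=lambda d: d["value"], default=None)

def means_ends(goal, plan_library):
    """Means-ends reasoning (simplified to a lookup): return a
    sequence of actions believed to achieve the chosen goal."""
    return plan_library.get(goal["name"], [])

beliefs = {"queen exposed", "king safe"}
desires = [
    {"name": "defend", "value": 1, "precondition": "king attacked"},
    {"name": "capture queen", "value": 5, "precondition": "queen exposed"},
]
plan_library = {"capture queen": ["move knight to e5", "take queen on d7"]}

goal = deliberate(desires, beliefs)    # 'defend' is filtered out: no attack is believed
plan = means_ends(goal, plan_library)  # the two-move plan for capturing the queen
```

The split mirrors the text: `deliberate` fixes *what* state to work towards, and `means_ends` fixes *how* to reach it, whether that takes one move or several.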