Glossary

Artificial Intelligence
Artificial Intelligence is the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines (as defined by the American Association for Artificial Intelligence).

BDI
A BDI agent is a particular type of rational software agent endowed with certain mental attitudes: Beliefs, Desires, and Intentions (BDI). The BDI agent model derives from BDI logic and has philosophical roots in the belief-desire-intention theory of human practical reasoning.
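The attitudes above can be sketched in a few lines of code. This is a minimal illustration only: the class, its fields, and the deliberation rule are assumptions for the example, not part of any particular BDI framework.

```python
# Minimal sketch of a BDI-style agent; illustrative, not a standard API.

class BDIAgent:
    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)   # facts the agent currently holds true
        self.desires = set(desires)   # (goal, precondition) pairs it would like to achieve
        self.intentions = []          # desires it has committed to pursue

    def deliberate(self):
        # Commit to any desire whose precondition is believed to hold.
        for goal, precondition in self.desires:
            if precondition in self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)
        return self.intentions

agent = BDIAgent(
    beliefs={"door_unlocked"},
    desires={("enter_room", "door_unlocked"), ("fly", "has_wings")},
)
print(agent.deliberate())  # ['enter_room']
```

Only the desire whose precondition matches a current belief becomes an intention; the unattainable desire ("fly") remains a mere desire.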

Benevolent and Competitive Agents
One of the most important issues to consider when designing a multiagent system is whether the different agents will be benevolent or competitive. Even if they have different goals, the agents can be benevolent if they are willing to help each other achieve their respective goals. On the other hand, the agents may be selfish and only consider their own goals when acting. In the extreme, the agents may be involved in a zero-sum situation so that they must actively oppose other agents' goals in order to achieve their own. Such agents are competitive.

Commitments
When agents communicate, they may decide to cooperate on a given task or for a given amount of time. In so doing, they make commitments to each other. Committing to another agent involves agreeing to pursue a given goal, possibly in a given manner, regardless of how much it serves one's own interests. Commitments can make systems run much more smoothly by providing a way for agents to trust each other, yet it is not obvious how to get self-interested agents to commit to others in a reasonable way. There are three types of commitment in general: internal commitment--an agent binds itself to do something, social commitment--an agent commits to another agent, and collective commitment--an agent agrees to fill a certain role. Setting an alarm clock is an example of internal commitment to wake up at a certain time. Decommitment is the termination of a commitment.
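The three commitment types named above can be represented with a simple data structure. The class name, fields, and string labels below are invented for illustration, not a standard agent-platform API.

```python
# Illustrative sketch of the three commitment types; not a standard API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Commitment:
    goal: str
    kind: str                       # "internal", "social", or "collective"
    creditor: Optional[str] = None  # the other agent, for social commitments
    role: Optional[str] = None      # the role filled, for collective commitments
    active: bool = True

    def decommit(self):
        # Decommitment: terminating the commitment.
        self.active = False

# The alarm-clock example: an internal commitment to wake at a certain time.
alarm = Commitment(goal="wake_at_7", kind="internal")
# A social commitment made to another agent.
promise = Commitment(goal="deliver_part", kind="social", creditor="agent_B")

alarm.decommit()
print(alarm.active, promise.active)  # False True
```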

Compatible and Incompatible goals
Two goals are incompatible if achieving one means the other cannot be achieved; otherwise, they are compatible.

Deliberative Agents
Deliberative Agents behave more like they are thinking, by searching through a space of behaviors, maintaining internal state, and predicting the effects of actions. Although the line between reactive and deliberative agents can be somewhat blurry, an agent with no internal state is certainly reactive, and one which bases its actions on the predicted actions of other agents is deliberative.
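The contrast with reactive agents can be made concrete in a toy sketch. The percepts, actions, and grid world below are invented for illustration; the point is only that the reactive agent maps percepts directly to actions with no state, while the deliberative agent consults internal state before acting.

```python
# Toy contrast between reactive and deliberative agents; world details are made up.

def reactive_agent(percept):
    # No internal state: the same percept always yields the same action.
    return {"obstacle": "turn", "clear": "forward"}[percept]

class DeliberativeAgent:
    def __init__(self):
        self.visited = set()  # internal state maintained across steps

    def act(self, position, neighbors):
        self.visited.add(position)
        # Use the internal state to prefer a neighbor not seen before.
        unvisited = [n for n in neighbors if n not in self.visited]
        return unvisited[0] if unvisited else neighbors[0]

print(reactive_agent("obstacle"))           # turn
agent = DeliberativeAgent()
print(agent.act((0, 0), [(0, 1), (1, 0)]))  # (0, 1)
print(agent.act((0, 1), [(0, 0), (0, 2)]))  # (0, 2), avoiding the visited cell
```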

Distributed Artificial Intelligence
Distributed Artificial Intelligence (DAI) has existed as a subfield of AI for less than two decades. DAI is concerned with systems that consist of multiple independent entities that interact in a domain. Traditionally, DAI has been divided into two sub-disciplines: Distributed Problem Solving (DPS) focuses on the information management aspects of systems with several branches working together towards a common goal; Multiagent Systems (MAS) deals with behavior management in collections of several independent entities, or agents.

Heterogeneous Communicating Multiagent Systems
Heterogeneous multiagent systems can be very complex and powerful. However, the full power of MAS is realized only when agents are able to communicate with one another. In fact, adding communication introduces the possibility of a multiagent system turning into a system that is essentially equivalent to a single-agent system: by sending their sensor inputs to and receiving their commands from one agent, all the other agents can surrender control to that single agent, and control is no longer distributed. Thus communicating heterogeneous agents can span the full range of complexity in agent systems.

Heterogeneous Non-Communicating Multiagent Systems
Adding the possibility of heterogeneous agents in a multiagent domain adds a great deal of potential power at the price of added complexity. Agents might be heterogeneous in any of a number of ways, from having different goals to having different domain models and actions. An important subdimension of heterogeneous agent systems is whether agents are benevolent or competitive. Even if they have different goals, they may be friendly to each other's goals or they may actively try to inhibit each other.

Homogeneous Non-Communicating Multiagent Systems
The simplest multiagent scenario involves homogeneous non-communicating agents. In this scenario, all of the agents have the same internal structure including goals, domain knowledge, and possible actions. They also have the same procedure for selecting among their actions. The only differences among agents are their sensory inputs and the actual actions they take: they are situated differently in the world.

Middle Agents
Middle agents support information flow in a multi-agent system but do not directly contribute to the solution of a goal. An example from the airline industry: a middle agent may list all the airlines that fly a particular route without itself selling tickets for the trip. Its role is nonetheless essential for obtaining such a ticket elsewhere, since it supplies knowledge of the different airlines available.

Mobile Agents
A mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.
When the term mobile agent is used, it refers to a process that can transport its state from one environment to another, with its data intact, and still perform appropriately in the new environment. Mobile agents decide when and where to move next. Just as a user does not really visit a website but only receives a copy of it, a mobile agent accomplishes its move through data duplication: when it decides to move, it saves its own state, transports that saved state to the next host, and resumes execution from the saved state.
Mobile agents are a specific form of the mobile code and software agent paradigms. However, in contrast to the remote evaluation and code-on-demand paradigms, mobile agents are active in that they may choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.
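The save-state/resume idea can be sketched with ordinary serialization. This is a single-process illustration only: real mobile-agent platforms also handle network transport, code shipping, and security, all of which are omitted here, and the class and names are invented for the example.

```python
# Sketch of save-state/resume using pickle; transport and security omitted.
import pickle

class CounterAgent:
    def __init__(self):
        self.count = 0

    def step(self):
        self.count += 1

agent = CounterAgent()
agent.step()
agent.step()

# "Migration": serialize the agent's state, as if sending it to another host...
blob = pickle.dumps(agent)

# ...then reconstruct it at the destination and resume from the saved state.
migrated = pickle.loads(blob)
migrated.step()
print(migrated.count)  # 3
```

The migrated copy continues counting from where the original left off, which is the essential property of the paradigm.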

Multi-Agent Systems
Multiagent Systems (MAS) is the emerging subfield of AI that aims to provide both principles for construction of complex systems involving multiple agents and mechanisms for coordination of independent agents' behaviors.

Perspective
Another issue to consider when building a multiagent system is how much sensor information should be available to the agents. This perspective can be either local or global. Even if it is feasible within the domain to give the agents a global perspective of the world, it may be more effective to limit them to local views. Consider a case of multiple agents sharing a set of identical resources in which they have to adapt their resource usage policies. Since the agents are identical and do not communicate, if they all have a global view of the current resource usage, they will all move simultaneously to the most under-used resource. However, if they each see only a partial picture of the world, then different agents gravitate towards different resources. Although this effect is preferable here, in general neither perspective is usually preferred over the other.
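The resource-sharing scenario above can be demonstrated in a few lines. The resource names and usage counts are invented for illustration; the point is that identical agents applying the same rule to the same global view all collide, while partial views spread them out.

```python
# Toy demonstration of global vs. local perspective; numbers are illustrative.
usage = {"r1": 5, "r2": 1, "r3": 3}

def choose(view):
    # Every identical agent uses the same rule: pick the least-used visible resource.
    return min(view, key=view.get)

# Global perspective: all three agents see everything and collide on r2.
global_choices = [choose(usage) for _ in range(3)]
print(global_choices)  # ['r2', 'r2', 'r2']

# Local perspective: each agent sees only part of the world, so choices spread.
views = [
    {"r1": 5, "r2": 1},
    {"r1": 5, "r3": 3},
    {"r2": 1, "r3": 3},
]
local_choices = [choose(v) for v in views]
print(local_choices)  # ['r2', 'r3', 'r2']
```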

Reactive Agents
Reactive agents simply retrieve pre-set behaviors similar to reflexes without maintaining any internal state.

Robotic Soccer
Robotic soccer is a particularly good domain for studying MAS. Originated by Alan Mackworth, it has been gaining popularity in recent years, with several international competitions taking place. It can be used to evaluate different MAS techniques in a direct manner: teams implemented with different techniques can play against each other. Played either with real robots or in a simulator, robotic soccer is a much more complex and interesting general testbed for MAS than the pursuit domain, which, even with many predators and several prey, is not complex enough to simulate the real world. Although robotic soccer is a game, most real-world complexities are retained. A key aspect of soccer's complexity is the need for agents not only to control themselves but also to control the ball, which is a passive part of the environment.

Roles
When agents have similar goals, they can be organized into a team. Each agent then plays a separate role within the team. With such a benevolent team of agents, one must provide some method for assigning different agents to different roles. This assignment might be obvious if the agents are very specific and can each only do one thing. However in some domains, the agents are flexible enough to interchange roles.
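One simple way to assign flexible agents to roles is a greedy match of agents' suitability scores to roles. The function, the skill scores, and the soccer-flavored role names below are invented for illustration; greedy assignment is only one of many possible strategies.

```python
# Hedged sketch of role assignment in a benevolent team; greedy strategy
# and skill scores are illustrative, not a standard algorithm.
def assign_roles(skills, roles):
    # skills: {agent: {role: suitability score}}
    assignment = {}
    free = set(skills)
    for role in roles:
        # Give each role to the best-suited agent still unassigned.
        best = max(free, key=lambda a: skills[a].get(role, 0))
        assignment[role] = best
        free.remove(best)
    return assignment

skills = {
    "a1": {"goalie": 0.9, "striker": 0.2},
    "a2": {"goalie": 0.4, "striker": 0.8},
}
print(assign_roles(skills, ["goalie", "striker"]))
# {'goalie': 'a1', 'striker': 'a2'}
```

If the agents were specialists (each with a nonzero score for only one role), the assignment would be forced; flexibility is what makes the choice of strategy matter.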

Single-Agent Systems
Although it might seem that single-agent systems should be simpler than multiagent systems, when dealing with a fixed, complex task, the opposite is often the case. Distributing control among multiple agents allows each agent to be simpler. No one agent has to be able to complete a given task on its own. Thus centralized, single-agent systems belong at the end of the progression from simple to complex multiagent systems.
In general, the agent in a single-agent system models itself, the environment, and their interactions. Of course, the agent is itself part of the environment, though agents are sometimes considered to have extra-environmental components as well. They are independent entities with their own goals, actions, and knowledge.
In a single-agent system, no other such entities are recognized by the agent. Thus, even if there are indeed other agents in the world, they are not modeled as having goals, etc.: they are simply considered part of the environment. The point being emphasized is that although the agent is also part of the environment, it is explicitly modeled as having its own goals, actions, and domain knowledge.

Stable and Evolving Agents
Another important characteristic to consider when designing multiagent systems is whether the agents are stable or evolving. Of course, evolving agents can be useful in dynamic environments, but particularly when using competitive agents, allowing them to evolve can lead to complications. Systems that use competitive evolving agents are said to use a technique called competitive co-evolution; systems that evolve benevolent agents are said to use cooperative co-evolution.

Swarms
Swarm is a platform for agent-based models (ABMs) that includes:
  • A conceptual framework for designing, describing, and conducting experiments on ABMs
  • Software implementing that framework and providing many handy tools
  • A community of users and developers that share ideas, software, and experience.