SURPRISE 96
AUTOMATED NEGOTIATION

ARTICLE 2



Contents

Introduction
Negotiation among Knowledge-Based Scheduling Agents
Negotiation among computationally bounded self-interested agents
Some other examples
Conclusions
References


Introduction

In multiagent systems, the agents are provided with an interaction protocol, but each agent may choose its own strategy. This allows the agents to be constructed by separate designers and/or to represent different real-world parties. The agents cannot be assumed to use cooperative negotiation strategies; instead, each agent will try to use the strategy that yields the highest possible utility for itself, without concern for the other agents' utilities.


Negotiation among Knowledge-Based Scheduling Agents

This work presents a general approach for guiding the distributed search of multiple agents through negotiation to arrive at a satisficing solution to a possibly over-constrained problem. The domain is that of an airport resource manager whose role is to ensure that each flight receives the resources (gates, baggage handling, catering, fuel, cleaning, etc.) that it requires in time to meet its arrival and departure deadlines. The approach taken towards distributing the system is to view each resource manager as belonging to an autonomous organization possessing its own resources. Each resource manager is responsible for its own schedule, but can request resources from other managers based on cooperative agreements.

The ARM (Airport Resource Manager) testbed has been successfully distributed as a community of two or more agents, each with its own resources (i.e., gates, fuel trucks, baggage handlers) and each responsible for satisfying its own schedule of arriving and departing flights. The need for cooperation and negotiation arises because an individual agent may lack sufficient resources to satisfy its schedule and may have to borrow these resources from other agents. By coordinating the individual scheduling efforts so that each agent understands the probable requirements of other agents, the likelihood increases that remote agents will be able to lend the appropriate resource at the time it is required. In the event that no globally satisfactory solution can be found, agents must negotiate in order to determine which local constraints can be relaxed to enable such a solution to be developed.

The primary difficulty in distributed scheduling is that no scheduling agent possesses a global view of the problem space. Current schedulers rely on being able to develop models of the most tightly constrained resources and time slots. Constructing schedules without this information can lead to considerable backtracking cost and/or low-quality schedules. Agents therefore exchange abstractions of their orders and resource demands during the scheduling process. This information allows agents to determine more accurately whether to solve subproblems locally (through backtracking and constraint relaxation) or whether to apply to other agents for resources. Because other agents may only be able to provide resources by backtracking or relaxing constraints themselves, this results in a form of negotiation that takes place during the actual scheduling process.
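
As a rough illustration of that decision, the sketch below (a simplification written for this survey, not the ARM implementation; all names and numbers are hypothetical) compares the cost of relaxing a local constraint against the expected cost of borrowing the resource, using the exchanged demand abstractions:

    # Minimal decision sketch (not the ARM code); all names and numbers are hypothetical.
    def resolve_shortfall(local_relax_cost, remote_demand, request_overhead):
        """remote_demand: {agent: expected load (0..1) on the resource in the needed slot}."""
        best_agent = min(remote_demand, key=remote_demand.get)
        p_unavailable = remote_demand[best_agent]
        # If the remote agent turns out to be loaded, fall back to relaxing locally.
        expected_request_cost = request_overhead + p_unavailable * local_relax_cost
        if expected_request_cost < local_relax_cost:
            return ("request", best_agent)
        return ("relax_locally", None)

    print(resolve_shortfall(local_relax_cost=40.0,
                            remote_demand={"agent_B": 0.3, "agent_C": 0.8},
                            request_overhead=10.0))   # -> ('request', 'agent_B')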


Negotiation among computationally bounded self-interested agents

In many domains, self-interested real-world parties need to solve combinatorial optimization problems to operate efficiently. Often they can save costs by coordinating their activities with other parties. Such settings occur, for example, in distributed manufacturing among multiple companies and in distributed vehicle routing among dispatch centers. When the planning activities are automated, it is useful to automate the coordination activities as well, via a negotiating software agent representing each party. In such automated negotiations among self-interested agents, questions of coordination arise: which coalitions should the agents form, are they stable, and how should costs be divided within each coalition?

Vehicle routing - An experiment

Let us look at a game with just two agents (A1 and A2), two delivery tasks (T1 and T2) and two identical vehicles, one for each agent. Say that the pickup site and the drop-off site of T1 are close to A2's depot, and that the two depots are far from each other. The sum of the route lengths is then lower when A2 handles T1 and A1 handles T2 than when either agent individually manages both tasks.
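
The toy computation below makes this concrete. The coordinates, including placing T2 near A1's depot, are assumptions chosen purely for illustration:

    from itertools import permutations
    from math import dist

    # Hypothetical coordinates: A2's depot is near T1, A1's depot is near T2,
    # and the two depots are far apart.
    depots = {"A1": (0.0, 0.0), "A2": (100.0, 0.0)}
    tasks = {  # task -> (pickup, drop-off)
        "T1": ((95.0, 5.0), (105.0, 10.0)),
        "T2": ((5.0, 5.0), (-5.0, 10.0)),
    }

    def route_length(depot, assigned):
        """Shortest depot -> stops -> depot tour with each pickup before its drop-off."""
        if not assigned:
            return 0.0
        stops = [(t, "p") for t in assigned] + [(t, "d") for t in assigned]
        best = float("inf")
        for order in permutations(stops):
            if any(order.index((t, "p")) > order.index((t, "d")) for t in assigned):
                continue
            pts = [depot] + [tasks[t][0 if k == "p" else 1] for t, k in order] + [depot]
            best = min(best, sum(dist(a, b) for a, b in zip(pts, pts[1:])))
        return best

    def total(assignment):  # assignment: agent -> list of tasks
        return sum(route_length(depots[a], ts) for a, ts in assignment.items())

    print("split  :", total({"A1": ["T2"], "A2": ["T1"]}))   # shortest of the three
    print("A1 both:", total({"A1": ["T1", "T2"], "A2": []}))
    print("A2 both:", total({"A2": ["T1", "T2"], "A1": []}))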

To analyse a game, the same algorithm was applied to the vehicle routing problem of each subgroup of agents separately. The algorithm first generates an initial solution by giving each vehicle one long delivery and then, in order, giving each vehicle the delivery that can be added to its route with the least cost without violating the constraints. The second phase of the algorithm is based on iterative refinement. At each step, a delivery (chosen from a randomly ordered circular list) is removed from the routing solution and inserted back into the solution at the least expensive place that does not violate the constraints; the drop-off location of the delivery has to be inserted after the pickup location in the same vehicle's route. The refinement algorithm was run until no remove-insert operation improved the solution, i.e. until a local optimum was reached.
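
The sketch below gives the flavour of the two phases. It is deliberately simplified and is not the code used in the experiment: each delivery is collapsed to a single stop (no separate pickup and drop-off), there is a single depot, and the only constraint checked is a maximum route length.

    import random
    from math import dist

    def tour_len(depot, route):
        pts = [depot] + route + [depot]
        return sum(dist(a, b) for a, b in zip(pts, pts[1:]))

    def cheapest_insertion(depot, route, stop, max_len):
        """Least-cost feasible position for `stop`: (extra_cost, index), or None."""
        base, best = tour_len(depot, route), None
        for i in range(len(route) + 1):
            cost = tour_len(depot, route[:i] + [stop] + route[i:])
            if cost <= max_len and (best is None or cost - base < best[0]):
                best = (cost - base, i)
        return best

    def construct(depot, deliveries, n_vehicles, max_len):
        # Phase 1: seed each vehicle with one long (far-away) delivery, then give
        # each remaining delivery to the route where it is cheapest to insert.
        pending = sorted(deliveries, key=lambda p: dist(depot, p), reverse=True)
        routes = [[pending.pop(0)] for _ in range(n_vehicles)]
        for stop in pending:
            feasible = [(r, b) for r in routes
                        if (b := cheapest_insertion(depot, r, stop, max_len))]
            route, (_, i) = min(feasible, key=lambda rb: rb[1][0])  # assumes one exists
            route.insert(i, stop)
        return routes

    def refine(depot, routes, max_len):
        # Phase 2: remove one delivery at a time (in random order) and reinsert it
        # at the cheapest feasible place; stop when no move shortens the solution.
        improved = True
        while improved:
            improved = False
            stops = [s for r in routes for s in r]
            random.shuffle(stops)
            for stop in stops:
                before = sum(tour_len(depot, r) for r in routes)
                next(r for r in routes if stop in r).remove(stop)
                feasible = [(r, b) for r in routes
                            if (b := cheapest_insertion(depot, r, stop, max_len))]
                route, (_, i) = min(feasible, key=lambda rb: rb[1][0])
                route.insert(i, stop)
                if sum(tour_len(depot, r) for r in routes) < before - 1e-9:
                    improved = True
        return routes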

The game was analysed by choosing 3 of the 5 dispatch centers. There are 7 subgroups of the 3 agents: {1}, {2}, {3}, {1,2}, {2,3}, {3,1}, {1,2,3}, and 5 coalition structures: {{1},{2},{3}}, {{1},{2,3}}, {{2},{1,3}}, {{3},{1,2}}, {{1,2,3}}.
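
Both sets are small enough to enumerate directly; the snippet below lists them (a generic enumeration, not tied to the experiment's data):

    from itertools import combinations

    agents = [1, 2, 3]

    # All non-empty subgroups (coalitions) of the 3 agents: 2**3 - 1 = 7.
    subgroups = [set(c) for r in range(1, 4) for c in combinations(agents, r)]

    # All coalition structures, i.e. partitions of the agent set: 5 for 3 agents.
    def coalition_structures(items):
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for structure in coalition_structures(rest):
            yield [[first]] + structure                        # `first` stays alone
            for i in range(len(structure)):                    # or joins an existing coalition
                yield structure[:i] + [[first] + structure[i]] + structure[i + 1:]

    print(len(subgroups), "subgroups:", subgroups)
    for cs in coalition_structures(agents):
        print(cs)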

Centers 2, 3 and 5 were located near each other, while 1 and 4 were far from each other and from the other centers. Centers 1, 3, 4 and 5 transported heavy, low-volume items, while 2 transported light, voluminous items. Centers 1..5 had 65, 200, 82, 124 and 300 deliveries, and 10, 13, 21, 18 and 15 vehicles, respectively. The deliveries of 2 and 5 had considerable areal overlap because the centers were adjacent, and the light, voluminous items and the heavy, low-volume items could be profitably combined in the weight- and volume-constrained vehicles. Centers 2 and 3 did not collude as much as 2 and 5, because 3's vehicles had tighter volume constraints than 5's, hindering the transport of 2's goods. No other two centers besides 2 and 5 were always best off in a 2-agent coalition independent of the third agent of the game. Relaxing the route length constraint increased collusion between the distant 2 and 4 while decreasing collusion between the adjacent 2 and 3.


Some other examples

Airport landing rule

Consider a policy where airlines with less fuel are allowed to land first. Such a rule gives airlines an incentive to understate their fuel, so we have to prevent deception. People may not deceive, but programs can be programmed to deceive. One approach would be to reduce the incentive for airlines to program agents that lie: the rule could still grant the right to land first, but at the same time levy a tax on going first. This is an example of setting up the environment for automated decision making in a way that discourages deception.
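
A toy calculation shows how such a tax can work. The particular rule below is an assumption made for illustration (it is not proposed in the article): each plane reports a per-minute waiting cost that grows as its fuel runs low, the higher report lands first, and the tax for going first equals the other plane's reported cost.

    # Hypothetical two-plane model: exaggerating urgency cannot pay, because the
    # tax for going first equals the cost reported by the displaced plane.
    def utility(true_cost, my_report, other_report):
        if my_report > other_report:        # land first, pay the tax
            return -other_report
        return -true_cost                   # land second, bear own waiting cost

    true_cost, other_report = 5.0, 8.0
    for report in (2.0, 5.0, 20.0):         # understate, tell the truth, exaggerate
        print(report, utility(true_cost, report, other_report))
    # Exaggerating (20.0) wins the first slot but costs 8.0 > 5.0: lying does not pay.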

Phone company competition

This is an example of dynamic competition: the carrier is chosen for each call at the time the call is made. If the decision is based on sealed bids, we get one bid from each company and the smallest bid wins. Companies may spend a lot of energy trying to outguess the other companies. The simplest strategy for the agents would be to tell the truth. A protocol that encourages this is a bidding procedure known as the Vickrey mechanism: it selects the lowest bidder but pays him the amount bid by the second-lowest bidder. This protocol's incentives are towards truthful bidding.
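
A small sketch of this rule follows; the carrier names and per-minute costs are hypothetical.

    def vickrey_reverse(bids):
        """Sealed bids {company: price}; the lowest bid wins, paid the second-lowest bid."""
        ranked = sorted(bids, key=bids.get)
        return ranked[0], bids[ranked[1]]

    true_costs = {"carrier_a": 0.18, "carrier_b": 0.20, "carrier_c": 0.23}

    # If every company bids its true cost, carrier_a wins the call and is paid 0.20.
    print(vickrey_reverse(true_costs))

    # carrier_a gains nothing by deviating from its true cost: the payment is set by
    # the runner-up's bid, so shading cannot raise the price paid, and padding only
    # risks losing a profitable call.
    for bid in (0.15, 0.18, 0.22):          # shade, truth, pad
        bids = dict(true_costs, **{"carrier_a": bid})
        winner, payment = vickrey_reverse(bids)
        profit = payment - true_costs["carrier_a"] if winner == "carrier_a" else 0.0
        print(bid, profit)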


Conclusions

In order to design agents with sophisticated interaction capabilities, we must first develop sufficiently powerful models for analysing and modelling these capabilities. In the case of negotiation, game theory offers a starting point for the development of such a formal model.

To maintain the efficiency of dynamic multiagent systems over time, the rules must also be stable. It is not enough to devise a strategy that has good properties, such as efficiency; the agent designers must also want to stick with this strategy, i.e. they should have no incentive to move to another strategy. Stability is an important part of multiagent systems.


References