Sunday, 22 February 2009

50, Argumentation and Game Theory

Good paper (by Iyad Rahwan and Kate Larson) to come back to later when considering self-interested agents (i.e. those only interested in furthering individual goals) that argue strategically.

An agent's type determines which subset of all possible arguments in the argumentation framework it is capable of putting forward. The notion of defeat (i.e. the defeat relation) is assumed common to all agents.

The kind of manipulation (lying) considered is that wherein agents hide some of their arguments. ("By refusing to reveal certain arguments, an agent might be able to break defeat chains in the argument framework, thus changing the final set of acceptable arguments.") An external verifier is assumed so that agents cannot create new arguments that they do not have in their argument set.

Reiterating, the key assumptions are:
  1. There is a common language for describing/understanding arguments.
  2. The defeat relation is common knowledge.
  3. The set of all possible arguments that might be presented is common knowledge.
  4. Agents do not know who has what arguments.
  5. Not all arguments may end up being presented by their respective agents.
Even with the above assumptions, the authors show that agents may still have incentive to manipulate the outcome by hiding arguments.
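The defeat-chain manipulation quoted above can be made concrete with a small sketch. This is my own illustration, not code from the paper: a minimal Dung-style argumentation framework with a grounded-extension computation, where the argument names and the defeat relation are invented for the example.

```python
# Sketch (not from the paper): hiding an argument can break a defeat
# chain and change the set of acceptable arguments.

def grounded_extension(args, defeats):
    """Iteratively accept every argument all of whose defeaters are
    themselves defeated by an already-accepted argument (the least
    fixpoint, i.e. the grounded extension for a finite framework)."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted:
                continue
            defended = all(
                any((d, b) in defeats for d in accepted)
                for b in args if (b, a) in defeats
            )
            if defended:
                accepted.add(a)
                changed = True
    return accepted

# Defeat chain c -> b -> a: with everything revealed, c defeats b,
# so a is defended and both a and c are acceptable.
print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))
# -> {'a', 'c'}

# If the agent holding c hides it, the chain is broken:
# b now defeats a unchallenged, and only b is acceptable.
print(grounded_extension({"a", "b"}, {("b", "a")}))
# -> {'b'}
```

This matches the quoted observation: by withholding c, an agent flips a from acceptable to unacceptable without fabricating anything new.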

Tuesday, 10 February 2009

Distributed Coordination Procedures

Interesting paragraph found in the 'Related Research' section of 'Collective Iterative Allocation: Enabling Fast and Optimal Group Decision Making' (2008) by Christian Guttman, Michael Georgeff and Iyad Rahwan:

"Distributed coordination procedures are often investigated using the Multi-Agent Systems (MAS) paradigm, because it makes realistic assumptions of the autonomous and distributed nature of the components in system networks [...]. Many MAS approaches do not adequately address the 'Collective Iterative Allocation' problem as they use each agent's models separately to improve coordination as opposed to all agents using their models together. That is, each agent uses its own models to decide on allocating a team to a task even if other, more knowledgeable agents would suggest better allocations..."

Saturday, 7 February 2009

New Argument-Based Negotiation Policy

- An agent is initially allocated a set of resources, possibly none.
- Resources are not divisible. An agent either has a particular resource or it does not.
- Resources are not shareable. No two agents have the same resource.
- An agent has at most one goal, possibly none.
- Goals are fulfilled by single resources.
- A certain goal may be fulfilled by a choice of different resources.
- A certain resource may fulfil a choice of different goals.

Allowed dialogues:
- Request dialogue between two agents (an initiator and a responder), each agent involved gives away at most one resource.
- Proposal dialogue between three or more agents (an initiator and a set of responders), each agent involved gives away at most one resource.
- A reason (Conclusion, Support) is provided with refusal/rejection only.

Pros of the policy:
- Computes the right solution (i.e. maximum number of agents fulfil their goal) when agents share all resource-goal 'fulfils' plans (from the outset).
- It is an 'any-time algorithm', i.e. resources are reallocated in such a way that the 'social welfare' does not decrease at any point. Also, the resource allocation can be modified as agents enter the system (without decreasing the 'social welfare' at any point).
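The any-time property can be sketched as a welfare-guarded transfer step. This is my own reading of the property, not the policy's actual procedure: I assume social welfare is the number of agents whose goal is fulfilled, and that a transfer is committed only if welfare does not drop. All names and data are illustrative.

```python
# Sketch of the 'any-time' guarantee: a resource transfer is only
# committed when social welfare (here, the count of goal-fulfilled
# agents) does not decrease, so welfare is monotonically non-decreasing.

def welfare(allocation, goals, fulfils):
    """Number of agents holding a resource that fulfils their goal."""
    return sum(
        1 for agent, held in allocation.items()
        if goals.get(agent) and held & fulfils[goals[agent]]
    )

def try_transfer(allocation, goals, fulfils, giver, taker, resource):
    """Commit the transfer only if social welfare does not drop."""
    before = welfare(allocation, goals, fulfils)
    trial = {a: set(r) for a, r in allocation.items()}
    trial[giver].discard(resource)
    trial[taker].add(resource)
    if welfare(trial, goals, fulfils) >= before:
        return trial   # accepted: welfare never decreases
    return allocation  # refused: keep the old allocation

goals = {"a1": "g1", "a2": "g2"}
fulfils = {"g1": {"r1"}, "g2": {"r1", "r2"}}
alloc = {"a1": set(), "a2": {"r1", "r2"}}
alloc = try_transfer(alloc, goals, fulfils, "a2", "a1", "r1")
print(welfare(alloc, goals, fulfils))  # 2 (both goals now fulfilled)
```

New agents entering the system simply add entries to the allocation; since every step is welfare-guarded, the any-time claim follows.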

Cons of the policy:
- Wasteful in the number of requests/proposals. Instead, perhaps, agents should ask at the outset: "do you have any resource that can fulfil my goal, given that I know these resources (Rs) fulfil my goal?"

Friday, 6 February 2009


Note to self: Test any (multi-agent resource reallocation) (argument-based) negotiation policies I propose/develop against alternatives proposed/developed by others, whether argument-based, interest-based or otherwise, even if it means mapping/translating between frameworks. Also, test the outcomes (and perhaps efficiency, complexity and similar measures) of any negotiation policies I propose/develop against the outcomes reached by a centralized (perhaps all-knowing) procedure.

49, An Empirical Study of Interest-Based Negotiation

Some notes taken whilst reading 'An Empirical Study of Interest-Based Negotiation' (2007) by Philippe Pasquier, Liz Sonenberg, Iyad Rahwan et al.

Assumptions of the paper (some of which differ from those in my work in progress):
  • The resources are not shared and all the resources are owned. Agents also have a finite amount of "money", which is part of the resources and it is the only divisible one.
  • Uses numerical utility values ("costs", "benefits", "payments" etc. are based on these).
  • Negotiation restricted to 2 agents.
  • All agents have shared, common and accurate knowledge.
  • No overlap between agents' goals, plans, needed resources etc, which avoids the problems of positive and negative interaction between goals and conflicts for resources.
  • Both (i.e. all) agents use the same strategy. (Manipulable, given that agents are out to maximise individual gains? Perhaps, but agents are assumed to be truthful.)

Additionally: "Agents do not have any knowledge about the partner's utility function (not even a probability distribution) and have erroneous estimations of the value of the resources not owned." It seems the primary benefit of IBN in this paper is to explore how agents can correct such erroneous information. (Agents trust each other not to lie about resource valuations.) A comparison is made between agents capable of bargaining only and agents capable of bargaining and reframing.

Content of the paper:

  • Introduction and Motivations
  • Agents with hierarchical goals (/plans)
  • The Negotiation Framework (Bargaining and Reframing Protocols/Strategies)
  • Simulation and Example
  • Experimental Results (Frequency and Quality of the deals; Negotiation complexity)
  • Conclusion and Future Work