Monday 19 February 2007

Thoughts on 'A Generative Inquiry Dialogue System'

Thoughts following on from 13 February’s supervisor meeting.

A Generative Inquiry Dialogue System (Elizabeth Black and Anthony Hunter, 2007)

Given agents x1 and x2 with knowledge bases (KB) as follows:
KB(x1) = {(a), (~d), (b -> c)}
KB(x2) = {(e), (a ^ f -> b), (~d ^ e -> b)}

According to the Black & Hunter system, an inquiry initiated by x2 to establish ‘c’ would generate a sequence of moves as follows:

(time, [agent, move type, move content], current question store (cQS))
1, [x2, open, (c)], cQS = {c}
2, [x1, open, (b -> c)], cQS = {b}
3, [x2, open, (a ^ f -> b)], cQS = {a, f}
4, [x1, assert, ({a}, a)], cQS = {f}
5, [x2, close, (a ^ f -> b)], cQS = {f}
6, [x1, close, (a ^ f -> b)], cQS = {b}
7, [x2, open, (~d ^ e -> b)], cQS = {~d, e}
8, [x1, assert, ({~d}, ~d)], cQS = {e}
9, [x2, assert, ({e}, e)], cQS = {}
10, [x1, close, (~d ^ e -> b)], cQS = {}
11, [x2, close, (~d ^ e -> b)], cQS = {b}
12, [x1, close, (b -> c)], cQS = {b}
13, [x2, assert, ({~d, e, ~d ^ e -> b}, b)], cQS = {}
14, [x1, close, (b -> c)], cQS = {}
15, [x2, close, (b -> c)], cQS = {c}
16, [x1, assert, ({b -> c, ~d ^ e -> b, ~d, e}, c)], cQS = {}
17, [x2, close, (c)], cQS = {}
18, [x1, close, (c)], cQS = {}
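
To make the question-store bookkeeping in the trace concrete, here is a rough Python sketch that replays it. The representation (a stack of question stores) and the three update functions are my own simplification for illustration only, not Black & Hunter’s formal definitions of the moves.

    # Rough sketch of the question-store bookkeeping implied by the trace above.
    qs_stack = [{"c"}]  # one question store per nested argument inquiry dialogue

    def open_rule(antecedents):
        # Opening a sub-dialogue for a rule pushes its antecedents as a new QS.
        qs_stack.append(set(antecedents))

    def assert_literal(literal):
        # Asserting an argument for a literal removes it from the current QS.
        qs_stack[-1].discard(literal)

    def matched_close():
        # A matched pair of close moves terminates the current sub-dialogue.
        qs_stack.pop()

    open_rule({"b"})        # t=2:     open (b -> c)        -> cQS = {b}
    open_rule({"a", "f"})   # t=3:     open (a ^ f -> b)    -> cQS = {a, f}
    assert_literal("a")     # t=4:     assert ({a}, a)      -> cQS = {f}
    matched_close()         # t=5-6:   close (a ^ f -> b)   -> cQS = {b}
    open_rule({"~d", "e"})  # t=7:     open (~d ^ e -> b)   -> cQS = {~d, e}
    assert_literal("~d")    # t=8:     assert ({~d}, ~d)    -> cQS = {e}
    assert_literal("e")     # t=9:     assert ({e}, e)      -> cQS = {}
    matched_close()         # t=10-11: close (~d ^ e -> b)  -> cQS = {b}
    assert_literal("b")     # t=13:    assert (..., b)      -> cQS = {}
    matched_close()         # t=14-15: close (b -> c)       -> cQS = {c}
    assert_literal("c")     # t=16:    assert (..., c)      -> cQS = {}

    print(qs_stack)         # [set()] -- the top-level question store is empty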

At the end of the inquiry, an argument for ‘c’ can be constructed and the commitment stores (CS) are as follows:
CS(x1) = {a, ~d, e, b -> c, ~d ^ e -> b}
CS(x2) = {~d, e, ~d ^ e -> b}
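
As an illustration of that outcome, the following sketch pools the asserted facts and rules and backward-chains over them to read off a support set for ‘c’. The representation and the naive argue function are hypothetical simplifications of my own (no cycle or consistency checking), not the paper’s outcome function, but on this example they reproduce the support asserted at t=16.

    # Facts are literals; each rule maps a display name to (antecedents, consequent).
    facts = {"a", "~d", "e"}
    rules = {
        "b -> c": ({"b"}, "c"),
        "a ^ f -> b": ({"a", "f"}, "b"),
        "~d ^ e -> b": ({"~d", "e"}, "b"),
    }

    def argue(claim):
        # Return a support set (facts and rules used) for `claim`, or None.
        if claim in facts:
            return {claim}
        for name, (antecedents, consequent) in rules.items():
            if consequent != claim:
                continue
            subarguments = [argue(lit) for lit in antecedents]
            if all(sub is not None for sub in subarguments):
                support = {name}
                for sub in subarguments:
                    support |= sub
                return support
        return None

    print(argue("c"))  # e.g. {'b -> c', '~d ^ e -> b', '~d', 'e'}

Note that the rule a ^ f -> b contributes nothing here, since no argument for ‘f’ exists; this mirrors the dead-end sub-dialogue at t=3-6.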

Using our system (yet to be defined) we wish to generate dialogues similar to the above. However, the above inquiry raises a number of issues:
- The final commitment stores may contain assertions that are irrelevant to the final argument, as with the assertion ‘a’ at t=4 in the above example.
- Agents can open sub-dialogues to unnecessarily establish beliefs that they already hold, for example ‘e’ at t=7, which the opening agent x2 already believes to be true. This is undesirable, since agents will want to minimise the amount of information given out (particularly in the medical domain) and to minimise inter-agent communication (which may be costly). It is a consequence of sub-dialogues only being allowed to be opened for rules.
- As a result of the previous point, the initiating agent may have to unnecessarily assert, and thus publicly commit to, beliefs that it held to be true from the outset.
- Agents still have a lot of post-inquiry work to do to filter out assertions made during the dialogue that turned out to be irrelevant to the final top-level argument (e.g. ‘a’ at t=4), so that the final top-level argument they build is both minimal and non-contradictory.

Thus, in our (ideal) system, we wish to:
- Minimise information given out.
- Allow for irrelevant/incorrect assertions/commitments to be retracted.

Further, we would hope to relax the assumptions of Black & Hunter’s work in order to:
- Fully support interactions such as multi-disciplinary medical meetings by allowing more than two agents to take part in an argument inquiry dialogue.
- Consider the implications of allowing agents’ belief bases to change during a dialogue. An agent is likely to be carrying out several tasks at once, and may even be involved in several different dialogues at once, so it may be regularly updating its beliefs… If an agent’s belief base kept growing during a dialogue, would it be possible to generate infinite dialogues? What should an agent do if it has cause to remove a belief from its belief base that it asserted earlier in the dialogue?...
- Further explore the benchmark which we compare our dialogue outcomes to…
- Further investigate the utility of argument inquiry dialogues when embedded in dialogues of different types…

Work to do
As stated previously, adapt the work of Black & Hunter to use assumption-based argumentation (instead of defeasible logic programming) and dialogue constraints (instead of a strategy function), keeping the same protocol and outcome functions whilst relaxing some of the above-mentioned assumptions.
