In the left-hand plot of figure 4.12, the utility gain is plotted for the fixed risky strategy. This graph was obtained by evaluating the pruned and unpruned game trees and taking the difference in utility. Notice that a gain is obtained in the central region, where the planner chooses the unambiguous strategy: this strategy is more effective when there is risk in determining the agent's intention. Away from the centre there is no gain (and no loss), since both the pruned and unpruned trees yield the ask-ambiguous strategy. On the right, the utility gain is plotted against the fixed non-risky strategy; here a gain is obtained by correctly taking a risk when the intention is clearer. In the centre of the graph, both trees take the ask-unambiguous strategy. These results show that, over a significant region of the belief space, the probabilistic planner obtains a utility gain of as much as 5 units over both fixed-strategy planners. This compares well with the maximum dialogue length of 20 units. There is also some difference in gain between levels of sample error, with a high degree of error having a slightly negative effect on the performance of the probabilistic system. If the value of p(intend-car-spanner) varies over the lifetime of the system, the planner obtains a considerable gain over a fixed-strategy system.
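The shape of these gain curves can be illustrated with a minimal sketch of the comparison. The payoff values below are assumptions chosen only to reproduce the qualitative picture (a success payoff equal to the maximum dialogue length of 20 units and an assumed clarification cost of 5 units), not the actual payoffs of the game trees in the thesis:

```python
# Illustrative sketch: utility gain of a probabilistic planner over a
# fixed-strategy planner, as a function of p = p(intend-car-spanner).
# Payoff values are hypothetical, not the thesis's game-tree payoffs.

MAX_UTILITY = 20   # maximum dialogue length, in utility units
CLARIFY_COST = 5   # assumed cost of asking an unambiguous question first

def expected_utility(strategy: str, p: float) -> float:
    """Expected utility of a fixed strategy given p = p(intend-car-spanner)."""
    if strategy == "risky":
        # Act immediately on the most likely intention: succeed with
        # probability max(p, 1-p), receive nothing (assumed) on failure.
        return MAX_UTILITY * max(p, 1 - p)
    # Non-risky: always pay the clarification cost up front.
    return MAX_UTILITY - CLARIFY_COST

def planner_utility(p: float) -> float:
    """A probabilistic planner evaluates both subtrees and takes the best."""
    return max(expected_utility("risky", p), expected_utility("non-risky", p))

def utility_gain(fixed: str, p: float) -> float:
    """Gain of the probabilistic planner over a fixed-strategy planner."""
    return planner_utility(p) - expected_utility(fixed, p)
```

Under these assumed payoffs the sketch reproduces the pattern described above: against the fixed risky strategy the gain peaks at 5 units in the central region (p near 0.5) and vanishes at the extremes, while against the fixed non-risky strategy the gain appears at the extremes and vanishes in the centre.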
Performance-gain plots are now shown for the complete game tree of figure 4.9, which includes the clarification subtree.
In the left-hand plot of figure 4.13, the utility gain is plotted for the risky strategy. Notice that little if any utility gain is obtained by the planner. This is because in the middle region, where the risky strategy is expected to fail, the responding agent picks up the initiative on the next move and chooses a clarification subdialogue. The scattering of points just above the x-axis for n=2 is explained by the high sample error, which pushes the utility of the risky strategy slightly below that of the non-risky strategy. The conclusion drawn from these results is that as long as one agent uses probabilistic reasoning, the other need not. In particular, a fixed-strategy dialogue system will perform just as well with a human partner, as long as the partner can be relied upon to pick up the initiative. If the partner cannot be relied upon to do this, the gains are as in figure 4.12, and probabilistic reasoning makes a considerable difference to the planner's performance.
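Why the gain vanishes once the partner can clarify can also be shown with a small sketch. As before, the payoffs are illustrative assumptions (success payoff of 20 units, assumed repair cost of 5 units), not the thesis's actual values: if a failed risky move is repaired by the partner's clarification subdialogue, the risky strategy is never much worse than clarifying up front, so a fixed-risky planner loses essentially nothing to the probabilistic one.

```python
# Illustrative sketch (assumed payoffs): with a partner who repairs a
# failed risky move by initiating a clarification subdialogue, the fixed
# risky strategy matches the probabilistic planner everywhere.

MAX_UTILITY = 20    # maximum dialogue length, in utility units
CLARIFY_COST = 5    # assumed cost of a clarification subdialogue

def risky_with_repair(p: float) -> float:
    """Risky strategy when the partner repairs failures by clarifying."""
    p_success = max(p, 1 - p)
    # On failure we still finish the dialogue, minus the repair cost.
    return p_success * MAX_UTILITY + (1 - p_success) * (MAX_UTILITY - CLARIFY_COST)

def non_risky(p: float) -> float:
    """Always clarify first, paying the fixed cost up front."""
    return MAX_UTILITY - CLARIFY_COST

def gain_over_fixed_risky(p: float) -> float:
    """Probabilistic planner's gain over fixed-risky, with a repairing partner."""
    best = max(risky_with_repair(p), non_risky(p))
    return best - risky_with_repair(p)
```

Since the success probability max(p, 1-p) is at least 0.5, the repaired risky strategy is at least as good as clarifying up front for every p under these assumptions, so the gain is zero across the whole belief space, matching the flat plot in figure 4.13.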