Game theory will be used in this thesis to provide a quantitative basis for the dialogue planner's choices. Traditional dialogue plan rules generate the alternatives available to the agent; these alternatives then form a game in which the agent chooses the alternative with maximum expected utility.
The area of Bayesian games is particularly relevant. Bayesian games generalise standard games to games of incomplete information. In a standard game, every agent knows everything about the alternatives available to the agents and about their utility functions. In a Bayesian game, this information is known to the agents only probabilistically. Harsanyi  showed that uncertainty about the applicability of actions can be folded into the utility function by assigning very negative utilities to inapplicable actions, so Bayesian games need only be concerned with uncertainty about the utility function. Each agent has a type, which determines its utility function, and each agent holds a set of nested beliefs about the types of the other agents, taking the form of probability distributions over types. To calculate the utility of an alternative, an agent evaluates the expected utility over these types. This calculation is much the same as that used by the dialogue planner described in this thesis, except that instead of beliefs about types, the planner directly models the alternatives available to the agent through STRIPS preconditions, whose satisfaction is determined using beliefs about the domain state.
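The expected-utility calculation over types can be sketched as follows. This is a minimal illustration only: the type names, belief distribution, and utility values are invented for the example and do not come from the thesis.

```python
# Sketch: expected utility of each alternative in a one-shot Bayesian game,
# marginalising over the other agent's possible types.

# Belief: a probability distribution over the other agent's type.
belief = {"cooperative": 0.7, "adversarial": 0.3}

# Utility of each of our alternatives, conditional on the other agent's type.
utility = {
    "ask":    {"cooperative": 5.0, "adversarial": -2.0},
    "inform": {"cooperative": 3.0, "adversarial": 1.0},
}

def expected_utility(alternative):
    """Sum the utility over types, weighted by the belief in each type."""
    return sum(p * utility[alternative][t] for t, p in belief.items())

# The agent chooses the alternative with maximum expected utility.
best = max(utility, key=expected_utility)
print(best, expected_utility(best))
```

Here "ask" is risky but pays off against a cooperative agent, so with a belief of 0.7 in cooperation it has the higher expected utility.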
Gmytrasiewicz and Durfee  have developed a method of computing the utility of games using a probabilistic model of belief, which has its foundation in Bayesian games. Although the planner presented here was developed initially without knowledge of this work, the two have close similarities. Their "Recursive Modelling Method" represents the game in canonical form, that is, as a game matrix. At the root of a tree is a complete matrix specifying all of the alternatives available in the game. At each node, a belief is consulted, and a pair of edges is annotated with the probability of each value of that belief. For each edge, a child matrix is constructed in which every alternative whose preconditions are disabled by the belief has its row removed. The expected utility of each alternative is then found by a weighted sum over the leaf matrices of the tree. While the Recursive Modelling Method is equivalent to the game trees that will be used in this thesis, it does not provide any way of constructing an RMM tree from plan rules, nor is it clearly explained how the child matrices may be obtained from their parents by consulting the nested belief model. The emphasis of the RMM is on applications in military strategy, using gathered intelligence to inform the nested belief model. The authors have also performed experiments with human subjects, finding good agreement between the strategies chosen by human players and those chosen by the RMM.
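The weighted sum over leaf matrices can be illustrated with a tree of depth one. The sketch below is an assumption-laden toy, not the authors' implementation: a single belief node branches on whether a precondition holds, and, following the Harsanyi trick described above, an alternative disabled in a branch is given a very negative utility there rather than a removed row. All alternative names, probabilities, and utilities are invented for the example.

```python
# Sketch: probability-weighted sum over the leaf matrices of an RMM-style tree.

BIG_NEG = -1e9  # stands in for "action not applicable" (Harsanyi's trick)

# Leaf matrices: utility of each of our alternatives (rows) against each of
# the other agent's two alternatives (columns).
leaf_holds = {"ask": [4.0, 1.0],         "inform": [2.0, 2.0]}  # precondition holds
leaf_fails = {"ask": [BIG_NEG, BIG_NEG], "inform": [2.0, 2.0]}  # "ask" disabled

p_holds = 0.8        # belief that the precondition holds
opponent = [0.5, 0.5]  # assumed mixed strategy over the columns

def leaf_value(matrix, alt):
    """Expected utility of `alt` in one leaf, against the column mix."""
    return sum(p * u for p, u in zip(opponent, matrix[alt]))

def expected_utility(alt):
    """Weighted sum of the leaf values along the branches of the tree."""
    return (p_holds * leaf_value(leaf_holds, alt)
            + (1 - p_holds) * leaf_value(leaf_fails, alt))

for alt in ("ask", "inform"):
    print(alt, expected_utility(alt))
```

Even a small (20%) belief that the precondition fails makes "ask" catastrophically bad in expectation, so the agent falls back on "inform"; this mirrors how disabled rows drop out of the comparison in the RMM.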