To perform long-range planning, the planner would need to generate the agents' most probable future courses of action, many steps into the future, so that the value of the current dialogue act could be estimated. Probability mass pruning for chance nodes and heuristic search for choice nodes (see Section 3.4.9) allow only the most probable and viable outcomes to be explored, so the planner can generate a game tree that has a very low branching factor but is instead very deep, properly covering distant outcomes. In a cooking domain, for example, a deep game tree of 1000 nodes but a breadth of only 10 would cover most of a cook's important activities over the next few days, so that a cookery lesson could be planned and then evaluated in the context of that domain plan. Suitable examples have yet to be devised to demonstrate how efficient long-range dialogues can be constructed.
The long-range planner could also be applied to a problem in user modelling: explicit user model acquisition. Here, explicit questions that are not part of the user's immediate plan are asked of the user so that the system can perform an initial classification of the user, such as whether they are an expert or a novice in the domain. This contrasts with implicit user modelling, where the system passively observes the user. The planner could generate a long-range plan for the user at the start of a session; by attaching this game tree to the leaves of the different acquisition-question subtrees, the value of information of the acquisition questions can be found. Physical acts for learning about an environment could be planned in an analogous fashion. For example, one might envisage a walking robot that must explore an environment by trying different routes, gathering information about obstacles, so that future route-planning problems can be more readily solved.
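The value-of-information calculation for an acquisition question can be sketched as below. The long-range plan tree attached to each answer leaf is summarised here by a table of plan values per user class; the expert/novice classes, the prior, and the assumption that the question perfectly discriminates between classes are all illustrative simplifications.

```python
def plan_value(belief, values):
    """Expected value of the best plan under a belief over user classes.

    belief: dict mapping class -> probability.
    values: dict mapping plan -> {class: value}, standing in for the
    evaluated long-range game tree attached below each leaf.
    """
    return max(
        sum(belief[c] * v[c] for c in belief) for v in values.values()
    )

def value_of_information(prior, values, question_cost=0.0):
    """VOI of a perfectly discriminating acquisition question: the
    expected value of planning with the user's class known, minus the
    value of planning under the prior, minus the cost of asking."""
    informed = sum(
        prior[c] * max(v[c] for v in values.values()) for c in prior
    )
    return informed - plan_value(prior, values) - question_cost
```

For instance, with a uniform prior over expert/novice and plan values of {10, 0} for an advanced lesson and {4, 6} for a basic one, the best uninformed plan is worth 5 while asking first yields 8 in expectation, so the question is worth asking whenever it costs less than 3.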