Planning algorithms generate sequences of actions that achieve a goal, but they can also be used in reverse: to infer the goals that led to an observed sequence of actions. Traditional plan-based goal recognition assumes agents are rational and the environment is fully observable. Recent narrative planning models instead represent agents as believable rather than perfectly rational: their actions must be justified by their goals, but they may act suboptimally and may hold incorrect beliefs about the environment. In this work we propose a technique for inferring the goals and beliefs of agents in this context, where rationality and omniscience are not assumed. We present two evaluations that investigate the effectiveness of this approach. The first uses partial observation sequences and measures how partial observability impacts the algorithm's accuracy. The second uses human data and compares the algorithm's inferences to those made by humans.
Rachelyn Farrell, Stephen G. Ware. Narrative planning for belief and intention recognition. In Proceedings of the 16th AAAI International Conference on Artificial Intelligence and Interactive Digital Entertainment, pp. 52-58, 2020.