Towards Theory-of-Mind agents using Automated Planning and Dynamic Epistemic Logic

Mikkel Birkegaard Andersen

Research output: Book/Report › Ph.D. thesis › Research



This thesis is part of a growing body of work in what we call epistemic planning. Epistemic planning is situated at the intersection of automated planning and what can broadly be called dynamic logics. Both are part of the much larger field of Artificial Intelligence.

Automated Planning has been around since at least the 1970s. It is a diverse collection of methods, models, algorithms and specification languages for giving autonomous agents the ability to come up with plans for proactively achieving goals. Autonomous agents can be understood as independent actors, given a purpose by their designer. Whether they are in a software system, connected to the real world with sensors and actuators, or used as a tool for modelling people, for instance in economics, they need to be able to imagine (or predict) outcomes of actions in order to form plans.

The feature that most distinguishes planning from other decision-making methods is that the planner does not know the full system from the beginning. Most of the time the system would simply be too big to store in memory! Instead of being given the entire “game”, the planner uses a specification of actions and the initial state to generate only a fraction of the full search space. This means that what an agent can plan for depends crucially on what domains we can describe. This is where logic comes into the picture.
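To make the idea concrete, here is a minimal sketch of forward search in the classical (STRIPS-style) setting described above. The domain, action names, and representation of states as sets of facts are illustrative assumptions, not taken from the thesis; the point is only that the search expands states reachable from the initial state rather than enumerating the whole state space.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search: expands only states reachable
    from the initial state, never the full state space."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # all goal facts hold
            return steps
        for name, pre, add, dele in actions:
            if pre <= state:              # action is applicable
                nxt = (state - dele) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Hypothetical toy domain: fetch a key, then open a door.
actions = [
    ("get_key",   frozenset({"at_table"}), frozenset({"has_key"}),   frozenset()),
    ("open_door", frozenset({"has_key"}),  frozenset({"door_open"}), frozenset()),
]
print(plan({"at_table"}, frozenset({"door_open"}), actions))
# → ['get_key', 'open_door']
```

Only three states are ever generated here, however many facts the domain vocabulary could in principle express — which is exactly why the expressiveness of the specification language matters so much.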

For most of its more than 2500-year history, logic has been chiefly concerned with the study of valid reasoning. More recently (in the scheme of things), attention has also turned to when reasoning fails in humans. Just as differential equations let us analyse and simulate both when a bridge holds and when it collapses, logic lets us analyse and simulate reasoning both when it is sound and when it isn’t.

The subbranch of logic applied in this work is Dynamic Epistemic Logic. The epistemic part concerns the formalisation of knowledge and belief, mainly in multi-agent settings. We can describe situations in which many agents are present and have different knowledge and beliefs about the world and about each other’s knowledge and beliefs. Adding the dynamic part of Dynamic Epistemic Logic to our arsenal, we can describe how situations change when, broadly speaking, things happen. In the application to Automated Planning, we let these things be actions of the agents in the system. In doing so we derive new planning formalisms that allow agents to plan while taking into account how their actions change both the world and knowledge and belief about the world.
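A minimal sketch may help fix intuitions. Below, an epistemic model is a set of worlds with per-agent indistinguishability relations, and the dynamic part is illustrated with the simplest kind of update, a truthful public announcement that deletes worlds. The coin scenario and agent names are illustrative assumptions; full DEL uses general event models and product update, which this sketch does not cover.

```python
class KripkeModel:
    """Epistemic model: worlds carry valuations; each agent has an
    indistinguishability relation over worlds."""

    def __init__(self, worlds, relations):
        self.worlds = worlds          # {world: set of true atoms}
        self.relations = relations    # {agent: {world: set of indistinguishable worlds}}

    def knows(self, agent, world, atom):
        # An agent knows an atom iff it holds in every world the
        # agent cannot distinguish from the actual one.
        return all(atom in self.worlds[w] for w in self.relations[agent][world])

    def announce(self, atom):
        # Public announcement: keep only worlds where the atom is true,
        # restricting every relation accordingly.
        keep = {w for w, val in self.worlds.items() if atom in val}
        worlds = {w: self.worlds[w] for w in keep}
        rels = {a: {w: r[w] & keep for w in keep} for a, r in self.relations.items()}
        return KripkeModel(worlds, rels)

# Two worlds: the coin shows heads (w1) or tails (w2).
# Agent a has seen the coin; agent b has not.
M = KripkeModel(
    {"w1": {"heads"}, "w2": set()},
    {"a": {"w1": {"w1"}, "w2": {"w2"}},
     "b": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}},
)
print(M.knows("b", "w1", "heads"))                     # False: b considers tails possible
print(M.announce("heads").knows("b", "w1", "heads"))   # True after the announcement
```

The update is what makes the logic *dynamic*: announcing “heads” changes not the coin but what b knows about it, and the planning formalisms in the thesis treat agents’ actions as exactly this kind of model-changing event.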

In this thesis we give new planning formalisms for single-agent planning and new results for the model theory of multi-agent models. The first of the two fully developed planning formalisms is conditional (single-agent) epistemic planning, allowing an agent to plan with what it knows now and what it knows it will come to know. Though this is nothing new in Automated Planning, it sets the stage for later work.
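A conditional plan of the kind just described branches on observations the agent knows it will make at run time. The following tiny interpreter is an illustrative sketch of that idea only, not the thesis’s formalism: plan nodes either perform an action or branch on what is observed, and the key/room scenario is a hypothetical example.

```python
def execute(plan, observe, act):
    """Walk a conditional plan. 'do' nodes perform an action;
    'if' nodes branch on an observation made at execution time."""
    node = plan
    while node:
        kind, payload, rest = node
        if kind == "do":
            act(payload)
            node = rest
        else:  # kind == "if": pick the branch matching the observation
            node = rest[observe(payload)]

# The agent knows it will come to know which room holds the key,
# so it can plan for both outcomes in advance.
plan = ("if", "key_in_left",
        {True:  ("do", "go_left",  ("do", "grab", None)),
         False: ("do", "go_right", ("do", "grab", None))})

log = []
execute(plan, observe=lambda q: False, act=log.append)
print(log)  # → ['go_right', 'grab']
```

The plan is complete before execution begins, even though the agent does not yet know which branch it will take — planning with “what it knows it will come to know”.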

The second planning formalism extends conditional epistemic planning with beliefs, letting the agent have expectations, without probabilities, of how things will turn out. Our radically different notions of bisimulation for the multi-agent versions of these models are particularly interesting for logicians, as are surprising expressivity results for well known logics on such models.

The final part of the thesis describes ideas on extending the second formalism to a multi-agent setting. With a view towards the practical implementation of agents, we shall also see how an agent can discard the parts of its model that it does not believe to be the case. While this is not necessary for analysing reasoning agents, it does seem a requirement for practical implementations. There are simply too many possibilities for a resource-bounded agent to keep track of. If the agent does discard unlikely possibilities, it must be able to do belief revision if it later turns out to be wrong. Such a procedure is also described.
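The pruning-and-revision idea can be sketched as follows, assuming a simple total plausibility ordering over worlds (tiers, most plausible first). This is an illustrative stand-in inspired by standard rank-based belief revision, not the procedure developed in the thesis: the agent keeps only its most plausible worlds, and falls back to less plausible ones when an observation contradicts all of them.

```python
class Beliefs:
    """Keep only the most plausible worlds; revise from less
    plausible tiers when an observation contradicts them all."""

    def __init__(self, tiers):
        self.tiers = tiers                  # list of lists of worlds, most plausible first
        self.current = list(tiers[0])       # resource-bounded: only these are tracked

    def observe(self, fact):
        surviving = [w for w in self.current if fact in w]
        if surviving:                       # observation fits current beliefs
            self.current = surviving
            return
        for tier in self.tiers[1:]:         # contradiction: revise from backup tiers
            surviving = [w for w in tier if fact in w]
            if surviving:
                self.current = surviving
                return
        self.current = []                   # no stored world explains the observation

# The agent believes the door is locked but keeps "door_open" as a
# less plausible fallback (worlds are just sets of facts here).
b = Beliefs([[{"door_locked"}], [{"door_open"}]])
b.observe("door_open")                      # contradicts the believed tier
print(b.current)  # → [{'door_open'}]
```

The trade-off is exactly the one the thesis notes: tracking only believed worlds keeps the model small, at the price of needing a revision step when the world turns out otherwise.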

The long-term potential of multi-agent-aware planning algorithms is agents that can predict and understand others in order to plan cooperation, communication, and/or competition. This slow edging towards a general framework for multi-agent planning is the underlying motivation of this thesis, and supplies some of its main results. While regrettably we haven’t gotten there yet, we’re considerably closer than when we started.
Original language: English
Place of publication: Kgs. Lyngby
Publisher: Technical University of Denmark
Number of pages: 177
Publication status: Published - 2015
Series: DTU Compute PHD-2014
