The development of autonomous agents is a central goal of artificial intelligence. A salient feature of autonomous agents is their ability to exhibit goal-directed behaviour, i.e., to commit to goals and search for plans to attain them. In order to plan, an agent must think about the possible outcomes of her actions and choose the best course of action. But to think before acting, an agent needs an internal representation of her world: a mental simulation of it, whose manipulation serves as a substitute for action. This thesis is concerned with (i) learning such internal representations from experience and (ii) planning with them.

Regarding learning, we consider an agent exposed to a partially observable domain with which she has never interacted before, and about which she wishes to learn both what she can observe and how her actions can affect it. We assume that the agent can learn about this domain from experience gathered by taking actions in the domain and observing their results. We present learning algorithms capable of learning as much as possible (in a well-defined sense) about both what is directly observable and what actions do in the domain, given the learner's observational constraints. We distinguish the levels of domain knowledge attained by each algorithm, and characterize the type of observations required to reach such knowledge. The algorithms use dynamic epistemic logic (DEL) to represent the learned domain information symbolically. The presented work extends that of Bolander and Gierasimczuk, who developed DEL-based learning algorithms for fully observable domains.

Regarding planning, we consider an agent that already has a representation of her environment. The agent is assumed to inhabit a social, multi-agent world. In order to plan in such a world, the agent needs to take into account not only her own capabilities and knowledge, but also those of other agents.
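As an illustrative toy (hypothetical names; not the thesis's DEL-based algorithms, which handle partial observability), the idea of learning what actions do from observed transitions can be sketched in a fully observable propositional setting:

```python
# Toy sketch: learning deterministic STRIPS-style action effects from
# observed (state, action, next_state) transitions. States are
# frozensets of true propositions. This is a simplification for
# illustration only; the thesis uses DEL-based symbolic learning.

def learn_effects(observations):
    """Return {action: (added, deleted)} for every action whose observed
    transitions are consistent with a single unconditional effect."""
    effects = {}
    for s, a, s_next in observations:
        added, deleted = s_next - s, s - s_next
        if a not in effects:
            effects[a] = (added, deleted)
        elif effects[a] != (added, deleted):
            effects[a] = None  # inconsistent observations: give up on a
    return {a: e for a, e in effects.items() if e is not None}

obs = [
    (frozenset({"door_closed"}), "open", frozenset({"door_open"})),
    (frozenset({"door_closed", "light_on"}), "open",
     frozenset({"door_open", "light_on"})),
]
print(learn_effects(obs))
# {'open': (frozenset({'door_open'}), frozenset({'door_closed'}))}
```

Both observations of `open` agree on the same add/delete pair, so the toy learner commits to that effect; a conflicting observation would make it discard the action as unlearnable in this simple hypothesis class.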
This type of planning requires theory-of-mind (ToM) reasoning: the ability to reason about how the world is perceived by others, what they believe and what they intend to do. We introduce a new model for multi-agent planning, supporting ToM reasoning, which we call first-order epistemic planning. Epistemic planning using DEL was proposed by Bolander and Andersen, and first-order epistemic planning is an extension of this framework. First-order epistemic planning is specified in our own version of first-order dynamic epistemic logic, called FODEL, which differs from the earlier first-order dynamic epistemic logic of Kooi by, among other things, the inclusion of postconditions. FODEL is a variant of DEL that allows for more compact representations than propositional DEL. FODEL also adds to DEL the ability to represent abstract knowledge in a natural way. We show that FODEL satisfies a number of desirable technical properties not previously established for first-order dynamic epistemic logic (soundness, completeness and decidability over models with finitely many agents). We then study first-order epistemic planning problems and show that, in some important cases, we can decide whether such a planning problem is solvable. In particular, we show that the FODEL plan existence problems for both (i) single-agent planning and (ii) multi-agent planning with non-modal preconditions are decidable. These results generalise existing decidability results for epistemic planning with propositional DEL [10, 48] to FODEL planning. Finally, we also study FODEL as a formalism for describing epistemic social network dynamics. We show how several network dynamics considered in the literature are modelled naturally in this framework, and examine the expressivity of FODEL relative to other network dynamics formalisms.
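The kind of ToM reasoning involved can be illustrated with the standard Kripke semantics of epistemic logic (a minimal sketch with hypothetical names, not the FODEL formalism itself): an agent knows a formula at a world iff the formula holds in every world the agent cannot distinguish from it, and knowledge operators nest to express higher-order reasoning about other agents' knowledge.

```python
# Toy Kripke model: two worlds differing on proposition p.
# Agent a cannot distinguish w1 from w2; agent b can.
worlds = {"w1": {"p": True}, "w2": {"p": False}}
indist = {
    "a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
    "b": {("w1", "w1"), ("w2", "w2")},
}

# Formulas as functions from world names to truth values.
def holds(prop):
    return lambda w: worlds[w][prop]

def Not(phi):
    return lambda w: not phi(w)

def K(agent, phi):
    """K_agent(phi): phi holds in all worlds indistinguishable
    from the current one for `agent`."""
    return lambda w: all(phi(v) for u, v in indist[agent] if u == w)

p = holds("p")
print(K("b", p)("w1"))               # True: b knows p at w1
print(K("a", p)("w1"))               # False: a considers w2 possible
print(K("b", Not(K("a", p)))("w1"))  # True: b knows that a does not know p
```

The last query is genuinely higher-order: `b` reasons about `a`'s knowledge, which is the core of ToM reasoning that epistemic planning formalisms such as DEL (and, in the thesis, FODEL) make available to a planner.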
Publisher: Technical University of Denmark
Number of pages: 126
Publication status: Published - 2020