Abstract
We study the learning power of iterated belief revision methods. Successful learning is understood as convergence to correct, i.e., true, beliefs. We focus on the issue of universality: whether or not a particular belief revision method is able to learn everything that is in principle learnable. We provide a general framework for interpreting belief revision policies as learning methods, and we focus on three popular cases: conditioning, lexicographic revision, and minimal revision. Our main result is that conditioning and lexicographic revision can drive a universal learning mechanism, provided that the observations include all and only true data, and provided that a non-standard (i.e., non-well-founded) prior plausibility relation is allowed. We show that a standard (i.e., well-founded) belief revision setting is in general too narrow to guarantee universality of any learning method based on belief revision. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Under a fairness condition requiring that only finitely many errors occur and that every error is eventually corrected, we show that lexicographic revision is still universal, while the other two methods are not.
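To make the three policies concrete, here is a minimal sketch (not code from the paper), assuming finite, totally ordered plausibility states and set-valued observations. The world names, the prior order, and the data stream are hypothetical illustrations; the paper's results concern the general setting, including non-well-founded priors, which finite lists cannot capture.

```python
def conditioning(order, obs):
    """Conditioning (update): discard every world refuted by the
    observation; the surviving worlds keep their relative order."""
    return [w for w in order if w in obs]

def lexicographic(order, obs):
    """Lexicographic revision: promote all worlds satisfying the
    observation above all others, preserving order within each group."""
    return [w for w in order if w in obs] + [w for w in order if w not in obs]

def minimal(order, obs):
    """Minimal (conservative) revision: promote only the most plausible
    observation-world to the top; leave the rest of the order untouched.
    (Ties are absent here because the toy order is total.)"""
    best = next((w for w in order if w in obs), None)
    if best is None:
        return order  # observation inconsistent with every remaining world
    return [best] + [w for w in order if w != best]

def learn(revise, order, stream):
    """Iterate a revision policy over a stream of observations and return
    the sequence of beliefs (the most plausible world after each step)."""
    beliefs = []
    for obs in stream:
        order = revise(order, obs)
        beliefs.append(order[0] if order else None)
    return beliefs

# Hypothetical example: the actual world is w2, and every observation is a
# true proposition about w2 (a set of worlds containing w2).
prior = ["w1", "w2", "w3"]                     # w1 initially most plausible
stream = [{"w1", "w2"}, {"w2", "w3"}, {"w2"}]
for revise in (conditioning, lexicographic, minimal):
    print(revise.__name__, learn(revise, prior, stream))
```

On this toy stream all three policies converge to the true world; the abstract's point is that over the full class of solvable problems only conditioning and lexicographic revision (given non-well-founded priors) always do, and only lexicographic revision remains universal when fairly corrected errors are allowed.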
| Original language | English |
| --- | --- |
| Journal | Studia Logica |
| Volume | 107 |
| Issue number | 5 |
| Pages (from-to) | 917-947 |
| Number of pages | 31 |
| ISSN | 0039-3215 |
| Publication status | Published - 2019 |
Keywords
- Belief revision
- Dynamic Epistemic Logic
- Formal learning theory
- Truth-tracking