J-P: MDP. FP. PP: Characterizing Total Expected Rewards in Markov Decision Processes as Least Fixed Points with an Application to Operational Semantics of Probabilistic Programs

Kevin Batz, Benjamin Lucien Kaminski, Christoph Matheja, Tobias Winkler

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Abstract

Markov decision processes (MDPs) with rewards are a widespread and well-studied model for systems that make both probabilistic and nondeterministic choices. A fundamental result about MDPs is that their minimal and maximal expected rewards satisfy Bellman’s optimality equations. For various classes of MDPs – notably finite-state MDPs, positive bounded models, and negative models – expected rewards are known to be the least solution of those equations. However, these classes of MDPs are too restrictive for probabilistic program verification. In particular, they assume that all rewards are finite. This is already not the case for the expected runtime of a simple probabilistic program modeling a 1-dimensional random walk. In this paper, we develop a generalized least fixed point characterization of expected rewards in MDPs without those restrictions. Furthermore, we demonstrate how said characterization can be leveraged to prove weakest-preexpectation-style calculi sound with respect to an operational MDP model.
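To illustrate the least-fixed-point view the abstract describes, the following is a minimal sketch (not taken from the paper) of Kleene iteration of a Bellman operator on a hypothetical three-state MDP: starting from the zero vector, repeatedly applying the operator converges from below to the minimal expected total reward. The MDP, its action names, and the reward values are illustrative assumptions.

```python
# Hypothetical toy MDP for illustration: state -> action -> list of
# (probability, successor state, reward). State 2 is an absorbing target.
mdp = {
    0: {"a": [(1.0, 1, 1.0)]},
    1: {"b": [(0.5, 2, 1.0), (0.5, 0, 1.0)],
        "c": [(1.0, 2, 3.0)]},
    2: {},  # absorbing: no actions, no further reward
}

def bellman_min(values):
    """One application of the Bellman operator for MINIMAL expected reward:
    for each state, take the cheapest action's expected (reward + future value)."""
    new = {}
    for state, actions in mdp.items():
        if not actions:
            new[state] = 0.0  # absorbing state contributes nothing further
        else:
            new[state] = min(
                sum(p * (r + values[succ]) for p, succ, r in branches)
                for branches in actions.values()
            )
    return new

# Kleene iteration from the zero vector approximates the least fixed point.
v = {s: 0.0 for s in mdp}
for _ in range(1000):
    v = bellman_min(v)

print(v[0], v[1])  # minimal expected total rewards from states 0 and 1
```

For this toy model the least fixed point is v(1) = 3 and v(0) = 4; the iterates increase monotonically toward it, which is exactly the behavior the generalized characterization must also guarantee when some expected rewards are infinite (e.g., the 1-dimensional random walk mentioned above, where the iterates diverge to ∞).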
Original language: English
Title of host publication: Principles of Verification: Cycling the Probabilistic Landscape
Volume: 15260
Publisher: Springer
Publication date: 2025
Pages: 255-302
ISBN (Print): 978-3-031-75782-2
ISBN (Electronic): 978-3-031-75783-9
Publication status: Published - 2025
Series: Lecture Notes in Computer Science
ISSN: 0302-9743
