Markov Decision Processes (MDPs) are extensively used to encode sequences of decisions with probabilistic effects. Markov Decision Processes with Imprecise Probabilities (MDPIPs) encode sequences of decisions whose effects are modeled using sets of probability distributions. In this paper we examine the computation of Γ-maximin policies for MDPIPs using multilinear and integer programming. We discuss the application of our algorithms to "factored" models and to a recent proposal, Markov Decision Processes with Set-valued Transitions (MDPSTs), that unifies the fields of probabilistic and "nondeterministic" planning in artificial intelligence research. Copyright © 2007 by SIPTA.
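To make the Γ-maximin criterion mentioned above concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): value iteration on a tiny two-state MDPIP where each credal set is given by a finite list of extreme-point distributions, so the worst-case inner minimization reduces to a minimum over that list. All names, numbers, and the extreme-point representation are illustrative assumptions.

```python
# Gamma-maximin value iteration on a toy 2-state MDPIP (illustrative only).
# Assumption: each credal set K(s, a) is represented by its extreme-point
# distributions; since the worst-case expectation over a convex set is
# attained at an extreme point, the inner min is a min over this finite list.

GAMMA = 0.9  # discount factor (illustrative choice)

# transitions[s][a] = list of extreme-point distributions over next states.
transitions = {
    0: {"stay": [(0.9, 0.1), (0.7, 0.3)],
        "go":   [(0.5, 0.5), (0.2, 0.8)]},
    1: {"stay": [(0.1, 0.9), (0.3, 0.7)],
        "go":   [(0.6, 0.4), (0.8, 0.2)]},
}
rewards = {0: 0.0, 1: 1.0}  # immediate reward for occupying each state

def gamma_maximin_values(n_iter=200):
    """Iterate V(s) = max_a min_{P in K(s,a)} E_P[r + GAMMA * V]."""
    V = {s: 0.0 for s in transitions}
    for _ in range(n_iter):
        V = {
            s: max(                                   # best action ...
                min(                                  # ... under the worst
                    sum(p * (rewards[t] + GAMMA * V[t])   # distribution in
                        for t, p in enumerate(dist))      # the credal set
                    for dist in dists)
                for dists in transitions[s].values()
            )
            for s in transitions
        }
    return V

print(gamma_maximin_values())
```

The max–min structure is what makes exact policy computation hard in general and motivates the multilinear and integer programming formulations the paper studies.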
| Original language | American English |
| Number of pages | 10 |
| State | Published - 1 Dec 2007 |
| Event | ISIPTA 2007 - Proceedings of the 5th International Symposium on Imprecise Probability: Theories and Applications |
| Duration | 1 Dec 2007 → … |