POPL 2017
Sun 15 - Sat 21 January 2017

Markov processes with discrete time and arbitrary state spaces are important models in probability theory. They model the infinite steps of non-terminating programs with (not necessarily discrete) probabilistic choice and form the basis for further probabilistic models. Their transition behavior is described by Markov kernels, i.e. measurable functions from a state to a distribution over states. Markov kernels can be composed in a monadic way from distributions (normal, exponential, Bernoulli, etc.), other Markov kernels, and even other Markov processes.
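For intuition, the monadic composition of kernels can be sketched in Python for the discrete case; the names `dirac`, `bind`, and `compose` are illustrative and not taken from the paper, and a distribution is simplified to a finite dict of probabilities:

```python
# A discrete distribution is a dict mapping outcomes to probabilities;
# a Markov kernel is a function from a state to such a distribution.

def dirac(x):
    # 'return' of the distribution monad: point mass at x
    return {x: 1.0}

def bind(dist, kernel):
    # monadic bind: push each outcome of dist through the kernel
    out = {}
    for x, p in dist.items():
        for y, q in kernel(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

def compose(k1, k2):
    # Kleisli composition of kernels: run k1, then k2 from each outcome
    return lambda x: bind(k1(x), k2)

# Example: a two-state chain that stays with probability 0.7
def step(s):
    return {s: 0.7, 1 - s: 0.3}

two_steps = compose(step, step)
print(two_steps(0))  # probabilities after two steps, approximately {0: 0.58, 1: 0.42}
```

In the general (measure-theoretic) setting `bind` is integration against the kernel, but the Kleisli-composition structure is the same.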

In this paper we construct discrete-time Markov processes with arbitrary state spaces, given the transition probabilities as a Markov kernel. We show that these Markov processes again form Markov kernels. This allows us to prove a bisimulation argument between two Markov processes and to derive the strong Markov property. We build on the existing probability theory in Isabelle/HOL and extend it to work with Markov kernels.

As an application we construct continuous-time Markov chains (CTMCs). These are constructed as jump-and-hold processes, i.e. discrete-time Markov processes whose state space is a product of continuous holding times and discrete states. We prove the Markov property of CTMCs using the bisimulation argument for discrete-time Markov processes, and we show that the transition probability is the solution of a differential equation.
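A minimal simulation sketch of the jump-and-hold idea: each discrete step of the chain samples a pair of an exponentially distributed holding time and a next discrete state. The rate and jump tables below are illustrative assumptions, not values from the paper:

```python
import random

# Illustrative two-state CTMC: holding rates and jump distributions
RATES = {0: 2.0, 1: 1.0}           # exponential holding rate per state
JUMP = {0: {1: 1.0}, 1: {0: 1.0}}  # discrete jump distribution per state

def jump_hold_step(state, rng):
    # one step of the discrete-time chain on (holding time, state) pairs:
    # hold for an Exp(rate)-distributed time, then sample the next state
    hold = rng.expovariate(RATES[state])
    r, acc = rng.random(), 0.0
    for nxt, p in JUMP[state].items():
        acc += p
        if r < acc:
            return hold, nxt
    return hold, state

def trajectory(state, n, seed=0):
    # iterate the jump-and-hold kernel n times from the given state
    rng = random.Random(seed)
    path = []
    for _ in range(n):
        hold, state = jump_hold_step(state, rng)
        path.append((hold, state))
    return path

path = trajectory(0, 3)
```

The product structure of the state space (holding time times discrete state) is exactly what each `(hold, state)` pair records.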

Slides (slides.pdf) 263 KiB

Mon 16 Jan
