Markov processes with discrete time and arbitrary state spaces are important models in probability theory. They model the infinite steps of non-terminating programs with probabilistic choice (not just discrete choice) and form the basis for further probabilistic models. Their transition behavior is described by Markov kernels, i.e., measurable functions from states to distributions over states. Markov kernels can be composed in a monadic way from distributions (normal, exponential, Bernoulli, etc.), from other Markov kernels, and even from other Markov processes.
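As a rough intuition for the monadic composition mentioned above, here is a sampling-based sketch (not the paper's measure-theoretic construction): a distribution is modeled as a zero-argument sampler, a Markov kernel as a function from a state to such a sampler, and `bind` composes them in the Kleisli style. All names (`dirac`, `bind`, `step`) are illustrative assumptions, not definitions from the paper.

```python
import random

def dirac(x):
    """Monadic unit: the point distribution at x."""
    return lambda: x

def bind(dist, kernel):
    """Monadic bind: sample a state from dist, then run the kernel on it."""
    return lambda: kernel(dist())()

def bernoulli(p):
    """A primitive distribution over {0, 1}."""
    return lambda: 1 if random.random() < p else 0

def step(state):
    """A Markov kernel: each state is mapped to a distribution of successors."""
    return bernoulli(0.9) if state == 0 else dirac(state + 1)

# Kleisli composition of the kernel with itself: two transition steps.
two_steps = lambda s: bind(step(s), step)
```

The point of the kernel view is exactly this compositionality: `two_steps` is again a Markov kernel, built from `step` without ever mentioning the underlying sigma-algebras.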
In this paper we construct discrete-time Markov processes with arbitrary state spaces, given the transition probabilities as a Markov kernel. We show that the resulting Markov processes are themselves again Markov kernels. This allows us to establish a bisimulation argument between two Markov processes and to derive the strong Markov property. We build on the existing probability theory in Isabelle/HOL and extend it with support for Markov kernels.
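The claim that the constructed process is again a kernel can be pictured, in the same sampling-based sketch as above, as a map from an initial state to a distribution over trajectories. The finite-prefix truncation and the name `trace_kernel` are illustrative assumptions; the paper's construction yields the full distribution over infinite traces.

```python
def trace_kernel(step, n):
    """Lift a one-step kernel (state -> sampler of next state) to a kernel
    from a start state to a sampler of an n-step trace prefix."""
    def dist(state):
        def sample():
            xs, s = [state], state
            for _ in range(n):
                s = step(s)()        # sample one transition
                xs.append(s)
            return xs
        return sample
    return dist
```

Because `trace_kernel(step, n)` has the same shape as any other kernel, it can itself be fed back into monadic composition, which is what enables the bisimulation argument between two such processes.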
As an application we construct continuous-time Markov chains (CTMCs). We construct them as jump & hold processes, i.e., discrete-time Markov processes whose state space is a product of continuous holding times and discrete states. Using the bisimulation argument for discrete-time Markov processes, we prove the Markov property of CTMCs, and we show that their transition probabilities solve a differential equation.
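The jump & hold idea can be sketched as a simulator: in each discrete step the process draws an exponentially distributed holding time for the current state and then jumps according to a discrete kernel. The function name, the `rates`/`jump` parameters, and the time horizon cutoff are illustrative assumptions for this sketch.

```python
import random

def ctmc_path(rates, jump, state, horizon, rng=random.Random(0)):
    """Simulate one jump & hold path of a CTMC up to time `horizon`.
    rates[s]      -- exit rate of discrete state s
    jump(s, rng)  -- samples the next discrete state after s
    Returns the path as a list of (jump time, state) pairs."""
    t, path = 0.0, [(0.0, state)]
    while True:
        hold = rng.expovariate(rates[state])  # exponential holding time
        t += hold
        if t >= horizon:
            return path
        state = jump(state, rng)
        path.append((t, state))
```

Each iteration of the loop is one step of the underlying discrete-time process over the product state space (holding time, discrete state), which is exactly why the discrete-time bisimulation argument applies to CTMCs.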
Mon 16 Jan
Johannes Hölzl (Technische Universität München)