Uppsala University Department of Information Technology
June 2006
Abstract: We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with probability 1 from every other state, and the eagerness condition requires that the probability of avoiding the attractor in n or more steps after leaving it is exponentially bounded in n. Examples of such Markov chains are those induced by probabilistic lossy channel systems and similar systems. We show that the expected residence time (a generalization of the steady state distribution) exists for Markov chains with eager attractors and that it can be effectively approximated to arbitrary precision. Furthermore, arbitrarily close approximations of the limiting average expected reward, with respect to state-based bounded reward functions, are also computable.
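As a reading aid, the two central notions from the abstract can be stated along the following lines; the notation (A, c, beta, res, X_k) is assumed here for illustration and is not quoted from the report itself, whose formal definitions may differ in detail.

% Eagerness: for a finite attractor A, the time to re-enter A after
% leaving it has an exponentially bounded tail (a sketch, assuming this
% reading of the abstract's condition):
\[
  \exists\, c > 0,\ \beta < 1 :\quad
  \Pr\bigl(\text{the chain avoids } A \text{ for } n \text{ or more steps after leaving } A\bigr) \;\le\; c\,\beta^{\,n}.
\]

% Expected residence time of a state s, read as the Cesàro-limit
% generalization of the steady-state probability of s:
\[
  \mathrm{res}(s) \;=\; \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \Pr\bigl(X_k = s\bigr).
\]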
Available as compressed Postscript (233 kB, no cover) and PDF (218 kB, no cover)