Limiting probabilities

Revision as of 18:11, 3 July 2011 by imported>Gaidheal1 (removed Category:Mathematics using HotCat)

The probability that a continuous-time Markov chain will be in state j at time t often converges to a limiting value that is independent of the initial state. We call this value <math>P_j</math>, where <math>P_j</math> is equal to:

<math>
P_j = \frac{\lambda_0 \lambda_1 \lambda_2 \cdots \lambda_{j-1}}{\mu_1 \mu_2 \cdots \mu_j \left( 1 + \sum_{n=1}^{\infty} \dfrac{\lambda_0 \lambda_1 \lambda_2 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} \right)}, \qquad j \ge 1.
</math>


For the limiting probabilities to exist, it is necessary that


<math>
\sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \lambda_2 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} < \infty.
</math>

This condition can also be shown to be sufficient.
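As an illustration of the formula above, the following sketch (not from the original page) approximates <math>P_j</math> by truncating the normalizing series, and checks it against the M/M/1 queue, where every birth rate is a constant <math>\lambda</math>, every death rate is a constant <math>\mu</math>, and the known closed form is <math>P_j = (1-\rho)\rho^j</math> with <math>\rho = \lambda/\mu</math>. The function names and the truncation length are choices made for this example.

```python
from math import isclose

def limiting_probability(j, birth, death, terms=200):
    """Approximate P_j for a birth and death process by truncating the
    normalizing series at `terms` summands.  `birth(n)` is the birth rate
    lambda_n out of state n; `death(n)` is the death rate mu_n."""
    def product(n):
        # lambda_0 * ... * lambda_{n-1} / (mu_1 * ... * mu_n)
        p = 1.0
        for k in range(n):
            p *= birth(k) / death(k + 1)
        return p

    normalizer = 1.0 + sum(product(n) for n in range(1, terms))
    return product(j) / normalizer

# M/M/1 queue: lambda_n = 1, mu_n = 2, so rho = 1/2 and P_j = (1 - rho) * rho**j.
rho = 0.5
for j in range(5):
    approx = limiting_probability(j, birth=lambda n: 1.0, death=lambda n: 2.0)
    assert isclose(approx, (1 - rho) * rho**j, rel_tol=1e-9)
```

Because <math>\rho < 1</math> here, the series converges geometrically, so a modest truncation already agrees with the closed form to high precision; if <math>\rho \ge 1</math>, the sum diverges and no limiting distribution exists.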

We can determine the limiting probabilities for a birth and death process using these equations, by equating, for each state, the rate at which the process leaves that state with the rate at which it enters that state.
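The rate-equality argument just described yields the standard balance equations for a birth and death process (a sketch of the usual derivation, not taken from the original page):

<math>
\begin{aligned}
\text{state } 0:\quad & \lambda_0 P_0 = \mu_1 P_1,\\
\text{state } n \ge 1:\quad & (\lambda_n + \mu_n) P_n = \lambda_{n-1} P_{n-1} + \mu_{n+1} P_{n+1}.
\end{aligned}
</math>

Adding the state-0 equation to the equations for states <math>1, \dots, n</math> in turn gives <math>\lambda_n P_n = \mu_{n+1} P_{n+1}</math>, hence <math>P_n = \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} P_0</math>; imposing <math>\sum_{n=0}^{\infty} P_n = 1</math> then recovers the expression for <math>P_j</math> given above.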