Markov Processes

Continuous-Time Markov Processes

Let us still deal with discrete outcomes. If $X(t)$ is homogeneous, then

\begin{displaymath}P(X(s+t) = j\vert X(s)=i) = P(X(t) =j \vert X(0) = i).
\end{displaymath}

Let $p_{ij}(t) = P(X(t) = j\vert X(0) = i)$, and form a matrix $P(t) = [p_{ij}(t)]$, with $P(0) = I$.
\begin{example}
Suppose $X(t)$ is a Poisson counting process with rate $\lambda$. Then $p_{ij}(t)$ is the probability of $j-i$ arrivals in $t$ seconds, and
\begin{displaymath}P(t) = \begin{bmatrix}
e^{-\lambda t} & \lambda t e^{-\lambda t} & (\lambda t)^2e^{-\lambda t}/2 & \cdots \\
0 & e^{-\lambda t} & \lambda t e^{-\lambda t} & \cdots \\
0 & 0 & e^{-\lambda t} & \cdots \\
\vdots & & & \ddots
\end{bmatrix}\end{displaymath}\end{example}
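As a quick numerical sanity check, the following sketch (not from the original notes) builds a truncated version of this $P(t)$ from the Poisson pmf; the rate $\lambda$, time $t$, and truncation size $N$ are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np
from scipy.stats import poisson

lam, t, N = 2.0, 0.5, 50           # rate, elapsed time, truncation level

# p_ij(t) = P(j - i arrivals in time t); zero for j < i.
P = np.zeros((N, N))
for i in range(N):
    for j in range(i, N):
        P[i, j] = poisson.pmf(j - i, lam * t)

print(P[0, :4])                    # e^{-lt}, lt e^{-lt}, (lt)^2 e^{-lt}/2, ...
print(P[0, :].sum())               # ~1, up to truncation error
\end{verbatim}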

Let us now consider how long the r.p. remains in a state. Let $T_i$ be the time spent in state $i$. The probability of spending more than $t$ seconds in the state is

\begin{displaymath}P(T_i > t).
\end{displaymath}

Suppose that the process has been in state $i$ already for $s$ seconds. What is the probability that it remains for $t$ more seconds? This is

\begin{displaymath}P(T_i > t+s \vert T_i > s) = P[T_i > t+s\vert X(a)=i, 0 \leq a \leq s].
\end{displaymath}

But recall that $X(t)$ is Markov:

\begin{displaymath}P(T_i > t+s \vert T_i > s) = P[T_i > t+s\vert X(a)=i, 0 \leq a \leq s] =
P(T_i > t+s \vert X(s) = i) = P(T_i > t).
\end{displaymath}

Such a process is said to be memoryless.

Let us look at these computations again:

\begin{displaymath}P(T_i > t+s\vert T_i > s) = \frac{P(T_i > t+s, T_i > s)}{P(T_i>s)} =
\frac{P(T_i > t+s)}{P(T_i>s)}.
\end{displaymath}

We have seen that this probability must be $P(T_i > t)$:

\begin{displaymath}
\frac{P(T_i > t+s)}{P(T_i>s)} = P(T_i>t).
\end{displaymath}

There is thus a sort of cancellation that takes place: writing $g(t) = P(T_i > t)$, the memoryless property requires $g(t+s) = g(t)g(s)$ for all $s,t \geq 0$. The only continuous distribution with this property is the exponential,

\begin{displaymath}P(T_i > t) = e^{-\lambda_i t}.
\end{displaymath}

Using this we have

\begin{displaymath}\frac{e^{-\lambda_i(t+s)}}{e^{-\lambda_i s}} = e^{-\lambda_i t}.
\end{displaymath}

So the waiting time for a Poisson r.p. is exponential. (We have derived this another way in the homework.)

This result has the following rather curious interpretation: The amount of additional time you have to wait does not depend on the amount of time you have already waited.
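This memoryless property is easy to check by simulation. The sketch below estimates $P(T > t+s \mid T > s)$ and $P(T > t)$ from exponential samples; the rate and the choices of $t$ and $s$ are arbitrary.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, t, s, n = 1.5, 0.7, 1.2, 1_000_000

T = rng.exponential(1.0 / lam, size=n)   # numpy parameterizes by mean 1/lam

lhs = (T > t + s).sum() / (T > s).sum()  # estimate of P(T > t+s | T > s)
rhs = (T > t).mean()                     # estimate of P(T > t)
print(lhs, rhs, np.exp(-lam * t))        # all three should agree
\end{verbatim}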

We can describe the operation of a continuous-time Markov chain as follows (a simulation sketch follows the list):

  1. Enter a state $i$.
  2. Wait a random amount of time $T_i$. (This random variable is continuous.)
  3. Select a new state according to a discrete-time Markov chain with transition probabilities we will call $\qtilde_{ij}$.
  4. Repeat.
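The four steps translate directly into a simulation loop. Here is a minimal sketch, assuming a made-up three-state chain; the holding rates $\nu_i$ and the embedded-chain matrix below are illustrative values, not from the notes.

\begin{verbatim}
import numpy as np

nu = np.array([1.0, 2.0, 0.5])              # holding rates nu_i
Q = np.array([[0.0, 0.7, 0.3],              # embedded-chain probabilities
              [0.5, 0.0, 0.5],              # (zero diagonal, rows sum to 1)
              [0.9, 0.1, 0.0]])

rng = np.random.default_rng(1)
state, clock, path = 0, 0.0, [(0.0, 0)]
while clock < 10.0:
    clock += rng.exponential(1.0 / nu[state])    # step 2: wait T_i
    state = rng.choice(3, p=Q[state])            # step 3: pick next state
    path.append((clock, state))                  # step 4: repeat
print(path[:5])
\end{verbatim}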
In discrete time we have the probability update $\pbf(k+1) = \pbf(k) P$. We will develop an analogous result for continuous time. Instead of a set of coupled difference equations, we will get a set of coupled differential equations.

Let $\delta$ be a small time increment, and write $\nu_i$ for the rate of the exponential holding time in state $i$. Then

\begin{displaymath}P(T_i > \delta) = e^{-\nu_i \delta} = 1 - \nu_i \delta +
o(\delta).
\end{displaymath}

The probability that we remain in the same state at time $\delta$ is

\begin{displaymath}p_{ii}(\delta) = P(T_i > \delta) = 1-\nu_i \delta + o(\delta)
\end{displaymath}

or $1-p_{ii}(\delta) = \nu_i \delta + o(\delta).$

Now consider the transition. When leaving state $i$, we move to state $j$ with probability $\qtilde_{ij}$:

\begin{displaymath}p_{ij}(\delta) = \underbrace{(1-p_{ii}(\delta))}_{\text{leave
state}} \qtilde_{ij} = \nu_i \delta \qtilde_{ij} + o(\delta).
\end{displaymath}

Let $\gamma_{ij} = \nu_i \qtilde_{ij}$:

\begin{displaymath}p_{ij}(\delta) = \gamma_{ij} \delta + o(\delta).
\end{displaymath}

We say that $\gamma_{ij}$ is the rate at which $X(t)$ enters state $j$ from state $i$. Define $\gamma_{ii} = -\nu_i$, so that

\begin{displaymath}1-p_{ii}(\delta) = -\gamma_{ii} \delta + o(\delta)
\end{displaymath}

or

\begin{displaymath}p_{ii}(\delta) - 1 = \gamma_{ii} \delta + o(\delta).
\end{displaymath}

Summarizing what we have so far:

\begin{displaymath}\begin{aligned}
p_{ii}(\delta) - 1 &= \gamma_{ii}\delta + o(\delta) \\
p_{ij}(\delta) &= \gamma_{ij} \delta + o(\delta).
\end{aligned}\end{displaymath}

Divide by $\delta$ and take the limit:

\begin{displaymath}\lim_{\delta \rightarrow 0} \frac{p_{ii}(\delta)-1}{\delta} =
\gamma_{ii}
\end{displaymath}


\begin{displaymath}\lim_{\delta \rightarrow 0} \frac{p_{ij}(\delta)}{\delta} =
\gamma_{ij}
\end{displaymath}
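For the Poisson counting process, where $\nu_i = \lambda$ and $\gamma_{i,i+1} = \lambda$, these limits are easy to check numerically; in the sketch below, $\lambda$ and $\delta$ are arbitrary choices.

\begin{verbatim}
import numpy as np

lam, delta = 2.0, 1e-6
p_ii = np.exp(-lam * delta)                  # P(no arrival in delta)
p_ij = lam * delta * np.exp(-lam * delta)    # P(one arrival in delta)

print((p_ii - 1) / delta)   # ~ -lam = gamma_ii
print(p_ij / delta)         # ~  lam = gamma_{i,i+1}
\end{verbatim}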

Now define $p_j(t) = P(X(t)=j)$. Then we have

\begin{displaymath}p_j(t+\delta) = P(X(t+\delta) = j) = \sum_i P(X(t+\delta)=j\vert X(t)=i)
P(X(t) = i) = \sum_i p_{ij}(\delta) p_i(t).
\end{displaymath}

and

\begin{displaymath}\begin{aligned}
p_j(t+\delta) - p_j(t) &= \sum_i p_{ij}(\delta) p_i(t) - p_j(t) \\
&= \sum_{i \neq j} p_{ij}(\delta) p_i(t) + (p_{jj}(\delta) - 1)p_j(t).
\end{aligned}\end{displaymath}

Divide both sides by $\delta$ and take the limit:

\begin{displaymath}p_j'(t) = \sum_{i \neq j} \gamma_{ij} p_i(t) + \gamma_{jj} p_j(t) =
\sum_i \gamma_{ij} p_i(t)
\end{displaymath}
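In matrix form this is $\pbf'(t) = \pbf(t)\Gamma$ with $\Gamma = [\gamma_{ij}]$, a set of coupled linear ODEs that can be integrated numerically. A minimal forward-Euler sketch, reusing the illustrative three-state chain from the simulation above:

\begin{verbatim}
import numpy as np

nu = np.array([1.0, 2.0, 0.5])
Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.9, 0.1, 0.0]])
G = nu[:, None] * Q              # gamma_ij = nu_i * q~_ij for i != j
np.fill_diagonal(G, -nu)         # gamma_ii = -nu_i, so rows sum to 0

p, dt = np.array([1.0, 0.0, 0.0]), 1e-3
for _ in range(int(50 / dt)):    # integrate long enough to settle down
    p = p + dt * (p @ G)         # row-vector form of p_j' = sum_i gamma_ij p_i
print(p)                         # approximate steady-state distribution
\end{verbatim}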


\begin{example}
Let us model a two-state system, having an idle state $0$ and a busy state $1$. Let $\gamma_{01} = \alpha$ be the rate from idle to busy and $\gamma_{10} = \beta$ the rate from busy to idle. The coupled differential equations are
\begin{displaymath}\begin{aligned}
p_0'(t) &= -\alpha p_0(t) + \beta p_1(t) \\
p_1'(t) &= \alpha p_0(t) - \beta p_1(t).
\end{aligned}\end{displaymath}
Solving these (with $p_0(t) + p_1(t) = 1$), as $t \rightarrow \infty$,
\begin{displaymath}p_0(t) \rightarrow \frac{\beta}{\alpha+\beta} \quad\text{and}\quad p_1(t)
\rightarrow \frac{\alpha}{\alpha+\beta}.
\end{displaymath}\end{example}

What are the steady-state conditions in general? Setting $p_j'(t) = 0$ gives

\begin{displaymath}0 = \sum_{i} \gamma_{ij} p_i.
\end{displaymath}

Since $\gamma_{jj} = -\nu_j$ we can write

\begin{displaymath}\nu_j p_j = \sum_{i \neq j} \gamma_{ij} p_i
\end{displaymath}

and since

\begin{displaymath}\nu_j = \sum_{i \neq j} \gamma_{ji}
\end{displaymath}

we can write

\begin{displaymath}p_j \sum_{i \neq j} \gamma_{ji} = \sum_{i\neq j} \gamma_{ij} p_i.
\end{displaymath}

In steady state, the rate of probability flow out of state $j$ thus equals the rate of flow in; these are the global balance equations.
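Together with the normalization $\sum_j p_j = 1$, the balance equations form a small linear system. A sketch for the two-state example above, with arbitrary $\alpha$ and $\beta$:

\begin{verbatim}
import numpy as np

alpha, beta = 0.4, 1.1
G = np.array([[-alpha, alpha],       # gamma_ij for the idle/busy chain
              [beta, -beta]])

# Steady state: 0 = sum_i gamma_ij p_i for each j, plus sum_j p_j = 1.
A = np.vstack([G.T, np.ones(2)])     # stack normalization onto balance eqns
b = np.array([0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)                             # [beta/(alpha+beta), alpha/(alpha+beta)]
\end{verbatim}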

Copyright 2008, by the Contributing Authors. admin. (2006, June 08). Markov Processes. USU OpenCourseWare: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lecture10_3.htm. This work is licensed under a Creative Commons License.