Markov Processes


Discrete-time Markov Chains


\begin{definition}
An integer-valued Markov random process is called a Markov chain.
\end{definition}
Let the time-index set be the set of nonnegative integers. Let

\begin{displaymath}p_j(0) = P[X_0 = j]
\end{displaymath}

be the initial probabilities. (Note that $\sum_j p_j(0) = 1$.) By the Markov property, the joint probability factors as

\begin{displaymath}P[X_n = i_n, \ldots, X_0 = i_0] = P[X_n = i_n\vert X_{n-1} = i_{n-1}]
\cdots P[X_1 = i_1\vert X_0 = i_0]P[X_0 = i_0].
\end{displaymath}

If the probability $P[X_{n+1} = j\vert X_n=i]$ does not change with $n$, then the r.p. $X_n$ is said to have homogeneous transition probabilities. We will assume that this is the case, and write

\begin{displaymath}p_{ij} = P[X_{n+1} = j\vert X_n = i].
\end{displaymath}

Note: $\sum_{j} P[X_{n+1} = j\vert X_n =i ] = 1$. That is, $\sum_j p_{ij} = 1$.

We can represent these transition probabilities in matrix form:

\begin{displaymath}P = \begin{bmatrix}
p_{00} & p_{01} & p_{02} & \cdots \\
p_{10} & p_{11} & p_{12} & \cdots \\
\vdots
\end{bmatrix}\end{displaymath}

The rows of $P$ sum to $1$; such a matrix is called a stochastic matrix.
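As a quick numerical sketch (the two-state matrix below is made up purely for illustration), we can check the stochastic property and use the factorization above to compute the probability of a particular path:

\begin{verbatim}
import numpy as np

# Hypothetical two-state chain; the entries are made up for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # P[i, j] = p_ij = P[X_{n+1}=j | X_n=i]
p0 = np.array([0.5, 0.5])    # initial probabilities p_j(0)

# Each row of a stochastic matrix sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# P[X_2=1, X_1=0, X_0=0] = p_01 * p_00 * p_0(0)
path = [0, 0, 1]
prob = p0[path[0]]
for i, j in zip(path, path[1:]):
    prob *= P[i, j]
print(prob)   # 0.5 * 0.9 * 0.1 = 0.045
\end{verbatim}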

Frequently discrete-time Markov chains are modeled with state diagrams.
\begin{example}
Two light bulbs are held in reserve. After a day, the probability that the bulb in use fails is $p$; when it fails, it is replaced by one of the reserve bulbs. Let $X_n$ be the number of bulbs held in reserve after day $n$. With the states ordered $0, 1, 2$, the transition probability matrix is

\begin{displaymath}P = \begin{bmatrix}
1 & 0 & 0 \\
p & 1-p & 0 \\
0 & p & 1-p
\end{bmatrix}\end{displaymath}

Draw the state diagram.
\end{example}
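A short simulation sketch of this chain, assuming (for illustration only) $p = 0.3$, shows typical sample paths running down into the absorbing state $0$:

\begin{verbatim}
import numpy as np

p = 0.3   # assumed failure probability, for illustration
# Rows and columns are indexed by the number of reserve bulbs: 0, 1, 2.
P = np.array([[1.0,   0.0,   0.0],
              [p,   1 - p,   0.0],
              [0.0,     p, 1 - p]])

rng = np.random.default_rng(0)
state = 2                  # start with both reserve bulbs available
path = [state]
for _ in range(20):        # simulate 20 days
    state = rng.choice(3, p=P[state])
    path.append(state)
print(path)   # e.g. [2, 2, 1, ..., 0, 0]; state 0 is absorbing
\end{verbatim}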

Now let us look further ahead. Let

\begin{displaymath}p_{ij}(n) = P[X_{n+k} = j \vert X_k = i]
\end{displaymath}

If the r.p. is homogeneous, this does not depend on $k$, so $p_{ij}(n) = P[X_n = j\vert X_0 = i]$.

Let us develop a formula for the case that $n=2$ .

\begin{displaymath}\begin{aligned}
P[X_2=j,X_1=l\vert X_0=i] &= \frac{P[X_2=j,X_1=l,X_0=i]}{P[X_0=i]} \\
&= \frac{P[X_2=j\vert X_1=l]P[X_1=l\vert X_0=i]P[X_0=i]}{P[X_0=i]} \\
&= p_{il}(1) p_{lj}(1) = p_{il} p_{lj}.
\end{aligned}\end{displaymath}

Now marginalize:

\begin{displaymath}P[X_2=j\vert X_0=i] = \sum_{l}P[X_2=j,X_1=l\vert X_0=i] = \sum_l
p_{il}p_{lj}.
\end{displaymath}

Let $P(2)$ be the matrix of two-step transition probabilities. Since $P(1) = P$, we have

\begin{displaymath}P(2) = P(1)P(1) = P^2.
\end{displaymath}

In general (by induction) we have

\begin{displaymath}P(n) = P^n.
\end{displaymath}
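A sketch verifying this numerically with the same illustrative two-state matrix as above: the explicit marginalization over the intermediate state matches the matrix square, and a matrix power gives $P(n)$ for any $n$:

\begin{verbatim}
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two-step probabilities by explicit marginalization over the
# intermediate state l, compared against the matrix square P^2.
P2 = np.array([[sum(P[i, l] * P[l, j] for l in range(2))
                for j in range(2)] for i in range(2)])
assert np.allclose(P2, P @ P)

# n-step transition probabilities P(n) = P^n.
print(np.linalg.matrix_power(P, 5))
\end{verbatim}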

Let

\begin{displaymath}\pbf(n) = \begin{bmatrix}
P(X_n=0) & P(X_n=1) & \cdots
\end{bmatrix}\end{displaymath}

(or whatever the outcomes are) be the row vector of state probabilities at time $n$. Then

\begin{displaymath}p_j(n) = P(X_n = j) = \sum_{i} P(X_n=j\vert X_{n-1}=i) P(X_{n-1}=i) =
\sum_i p_{ij} p_i(n-1).
\end{displaymath}

Stacking these up, we obtain the equation

\begin{displaymath}\pbf(n) = \pbf(n-1) P.
\end{displaymath}

or, iterating back to the initial distribution,

\begin{displaymath}\pbf(n) = \pbf(0) P^n.
\end{displaymath}
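A sketch of this propagation (same illustrative matrix): starting from $\pbf(0)$, repeated right-multiplication by $P$ agrees with a single multiplication by $P^n$:

\begin{verbatim}
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p = np.array([1.0, 0.0])   # pbf(0): start in state 0 with probability 1

for n in range(1, 6):
    p = p @ P              # pbf(n) = pbf(n-1) P
    print(n, p)

# Equivalently, pbf(n) = pbf(0) P^n:
print(np.array([1.0, 0.0]) @ np.linalg.matrix_power(P, 5))
\end{verbatim}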

If we run the Markov r.p. for a long time, what happens to the probabilities? That is, what is $p_{j}(n)$ as $n \rightarrow \infty$? Let us denote

\begin{displaymath}\pi_j = \lim_{n \rightarrow \infty} p_j(n).
\end{displaymath}

If there is a limit, the probability vector $\pibf$ should satisfy

\begin{displaymath}\pibf = \pibf P.
\end{displaymath}

or, transposing,

\begin{displaymath}P^T \pibf^T = \pibf^T.
\end{displaymath}

This is an eigenvalue problem! The limiting vector $\pibf^T$ is an eigenvector of $P^T$ corresponding to the eigenvalue $1$, normalized so that $\sum_j \pi_j = 1$.
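A sketch of solving this eigenvalue problem numerically (same illustrative matrix as above):

\begin{verbatim}
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Eigenvector of P^T for the eigenvalue closest to 1.
w, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(w - 1.0))
pi = np.real(V[:, k])
pi = pi / pi.sum()         # normalize to a probability vector
print(pi)                  # [2/3, 1/3] for this matrix

# Check the stationarity condition pi = pi P.
assert np.allclose(pi @ P, pi)
\end{verbatim}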

Copyright 2008, by the Contributing Authors. Markov Processes, USU OpenCourseWare: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lecture10_2.htm. This work is licensed under a Creative Commons License.