
# Markov Processes


## Discrete-time Markov Chains

Let the time-index set be the nonnegative integers, and let

$$\pi_j(0) = P(X_0 = j)$$

be the initial probabilities. (Note that $\sum_j \pi_j(0) = 1$.) By the Markov property, the joint probability factors as

$$P(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = P(X_0 = i_0)\, P(X_1 = i_1 \mid X_0 = i_0) \cdots P(X_n = i_n \mid X_{n-1} = i_{n-1}).$$

If the probability $P(X_{n+1} = j \mid X_n = i)$ does not change with $n$, then the r.p. is said to have *homogeneous transition probabilities*. We will assume that this is the case, and write

$$p_{ij} = P(X_{n+1} = j \mid X_n = i).$$

Note: $p_{ij} \ge 0$ and $\sum_j p_{ij} = 1$. That is, from state $i$ the chain must move to *some* state (possibly $i$ itself).

We can represent these transition probabilities in matrix form:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots \\ p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}.$$

The rows of $P$ sum to $1$. Such a matrix is called a *stochastic matrix*.
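As a concrete check, here is a minimal sketch in Python (the three-state chain and its numbers are hypothetical, chosen for illustration) verifying that each row of a stochastic matrix is a probability distribution:

```python
# Hypothetical 3-state chain; each row lists the transition
# probabilities out of one state (illustrative numbers).
P = [
    [0.7, 0.2, 0.1],  # transitions out of state 0
    [0.3, 0.4, 0.3],  # transitions out of state 1
    [0.2, 0.5, 0.3],  # transitions out of state 2
]

# Each row of a stochastic matrix must be a probability distribution:
# nonnegative entries that sum to 1.
for row in P:
    assert all(p >= 0 for p in row)
    assert abs(sum(row) - 1.0) < 1e-12
```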

Frequently discrete-time Markov chains are modeled with state diagrams.
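A state diagram is read as "from the current state, follow an outgoing arrow with its labeled probability," and the same information drives a simulation. A small sketch, again assuming a hypothetical three-state chain with illustrative numbers:

```python
import random

# Hypothetical 3-state chain (illustrative numbers); row i gives the
# transition probabilities out of state i.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.5, 0.3]]

def simulate(P, start, n, rng=random.Random(0)):
    """Sample a length-n trajectory: each step draws the next state
    from the row of P belonging to the current state."""
    state, path = start, [start]
    for _ in range(n):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

path = simulate(P, start=0, n=10)
assert len(path) == 11 and all(s in (0, 1, 2) for s in path)
```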

Now let us look further ahead. Let

$$p_{ij}(m, m+n) = P(X_{m+n} = j \mid X_m = i).$$

If the r.p. is homogeneous, then this probability depends only on the number of steps $n$, and we write it as $p_{ij}^{(n)}$.

Let us develop a formula for the case that $n = 2$:

$$p_{ij}^{(2)} = P(X_2 = j \mid X_0 = i).$$

Now marginalize over the intermediate state, using the Markov property for the second equality:

$$p_{ij}^{(2)} = \sum_k P(X_2 = j, X_1 = k \mid X_0 = i) = \sum_k P(X_2 = j \mid X_1 = k)\, P(X_1 = k \mid X_0 = i) = \sum_k p_{ik}\, p_{kj}.$$

Let $P^{(2)}$ be the matrix of two-step transition probabilities. Then we have

$$P^{(2)} = P P = P^2.$$

In general (by induction) we have

$$P^{(n)} = P^n.$$
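The two-step marginalization is exactly matrix multiplication, and the induction gives matrix powers. A quick numerical check with NumPy, using an illustrative three-state matrix:

```python
import numpy as np

# Illustrative 3-state transition matrix (made-up numbers).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.5, 0.3]])

# Two-step probabilities by marginalizing over the intermediate state k:
# p_ij^(2) = sum_k p_ik p_kj, which is the (i, j) entry of P @ P.
P2_marginal = np.array([[sum(P[i, k] * P[k, j] for k in range(3))
                         for j in range(3)]
                        for i in range(3)])
assert np.allclose(P2_marginal, P @ P)

# In general P^(n) = P^n, the n-th matrix power.
assert np.allclose(np.linalg.matrix_power(P, 4), P @ P @ P @ P)
```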

Let

$$\pi_j(n) = P(X_n = j), \qquad j = 1, 2, \ldots$$

(or whatever the outcomes are). Then

$$\pi_j(n) = \sum_i \pi_i(0)\, p_{ij}^{(n)}.$$

Stacking these up into the row vector $\pi(n) = [\pi_1(n), \pi_2(n), \ldots]$, we obtain the equation

$$\pi(n) = \pi(0)\, P^{(n)},$$

or

$$\pi(n) = \pi(0)\, P^n.$$
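The propagation equation $\pi(n) = \pi(0) P^n$ can be checked numerically. A sketch with NumPy, again with an illustrative matrix and an initial distribution concentrated on one state:

```python
import numpy as np

# Illustrative 3-state transition matrix (made-up numbers).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.5, 0.3]])
pi0 = np.array([1.0, 0.0, 0.0])  # start in state 0 with certainty

# Propagate the row vector of state probabilities one step at a time:
# pi(n+1) = pi(n) P.
pi = pi0.copy()
for _ in range(5):
    pi = pi @ P

# Same answer as the closed form pi(5) = pi(0) P^5.
assert np.allclose(pi, pi0 @ np.linalg.matrix_power(P, 5))
assert abs(pi.sum() - 1.0) < 1e-12  # still a probability vector
```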
If we run the Markov r.p. for a long time, what happens to the probabilities? That is, what is $\pi(n)$ as $n \to \infty$? Let us denote

$$\pi = \lim_{n \to \infty} \pi(n).$$

If there is a limit, the probability vector should satisfy

$$\pi = \pi P,$$

or

$$\pi (I - P) = 0.$$
This is an eigenvalue problem! The limiting vector $\pi$ is a left eigenvector of $P$ with eigenvalue $1$, normalized so that $\sum_j \pi_j = 1$.
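One way to compute the limiting vector is to solve the left-eigenvector problem numerically. A sketch with NumPy (the matrix is illustrative):

```python
import numpy as np

# Illustrative 3-state transition matrix (made-up numbers).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.5, 0.3]])

# pi P = pi means pi^T is an eigenvector of P^T with eigenvalue 1.
w, v = np.linalg.eig(P.T)
k = np.argmin(np.abs(w - 1.0))  # index of the eigenvalue closest to 1
pi = np.real(v[:, k])
pi = pi / pi.sum()              # normalize so the probabilities sum to 1

assert np.allclose(pi @ P, pi)  # stationary: pi P = pi
# Running the chain for a long time reaches the same limit:
assert np.allclose(np.linalg.matrix_power(P, 100)[0], pi, atol=1e-8)
```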

Copyright 2008, by the Contributing Authors. admin. (2006, June 08). Markov Processes. Retrieved January 7, 2011, from Free Online Course Materials, USU OpenCourseWare Web site: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lecture10_2.htm. This work is licensed under a Creative Commons License.