# Markov Processes


## Discrete-time Markov Chains

Let the time-index set be the nonnegative integers. Let

$$p_j(0) = P(X_0 = j)$$

be the initial probabilities. (Note that $\sum_j p_j(0) = 1$.) By the Markov property, we can write the factorization as

$$P(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = p_{i_0}(0)\, P(X_1 = i_1 \mid X_0 = i_0) \cdots P(X_n = i_n \mid X_{n-1} = i_{n-1}).$$

If the probability $P(X_{n+1} = j \mid X_n = i)$ does not change with $n$, then the r.p. is said to have

**homogeneous transition probabilities**. We will assume that this is the case, and write

$$p_{ij} = P(X_{n+1} = j \mid X_n = i).$$

Note: $\sum_j p_{ij} = 1$. That is, from each state $i$ the chain moves to *some* state $j$ with probability one.

We can represent these transition probabilities in matrix form:

$$P = [p_{ij}].$$

The rows of $P$ sum to $1$. A matrix with this property is called a

**stochastic matrix**.
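As a quick sketch (the two-state matrix below is a made-up example, not from the text), we can encode a transition matrix with NumPy and verify the stochastic-matrix property:

```python
import numpy as np

# Hypothetical two-state chain. Entry P[i, j] = p_ij is the
# probability of moving from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A stochastic matrix has nonnegative entries and rows summing to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)
```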

Discrete-time Markov chains are frequently depicted with state diagrams, in which each node is a state and each directed edge from $i$ to $j$ is labeled with the transition probability $p_{ij}$.

Now let us look further ahead. Let

$$p_{ij}(m, n) = P(X_n = j \mid X_m = i), \qquad m \le n.$$

If the r.p. is homogeneous, then $p_{ij}(m, n)$ depends only on the difference $n - m$, so we may write it as $p_{ij}(n - m)$. Let us develop a formula for the case that $n = m + 2$.

Now marginalize over the intermediate state $k$:

$$p_{ij}(2) = \sum_k P(X_{m+2} = j \mid X_{m+1} = k)\, P(X_{m+1} = k \mid X_m = i) = \sum_k p_{ik}\, p_{kj}.$$

Let $P(2)$ be the matrix of two-step transition probabilities. Then we have

$$P(2) = P^2.$$

In general (by induction) we have

$$P(n) = P^n.$$
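As a sketch of the $n$-step formula (again with a made-up two-state matrix), $P(n) = P^n$ can be computed directly as a matrix power:

```python
import numpy as np

# Made-up two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# n-step transition probabilities: P(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)    # two-step probabilities
P10 = np.linalg.matrix_power(P, 10)  # ten-step probabilities

print(P2)
print(P10)  # rows of P^n still sum to 1 (P^n is also stochastic)
```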

Let

$$p_j(n) = P(X_n = j), \qquad j = 0, 1, 2, \ldots$$

(or whatever the outcomes are). Then

$$p_j(n) = \sum_i p_i(0)\, p_{ij}(n).$$

Stacking these up into a row vector $\mathbf{p}(n) = [p_0(n), p_1(n), \ldots]$, we obtain the equation

$$\mathbf{p}(n) = \mathbf{p}(0)\, P(n),$$

or

$$\mathbf{p}(n) = \mathbf{p}(0)\, P^n.$$
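A sketch of how the state distribution evolves, using a made-up two-state matrix and an initial distribution concentrated on state 0:

```python
import numpy as np

# Made-up two-state chain and initial distribution (both illustrative).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])  # start in state 0 with probability 1

# p(n) = p(0) P^n: the distribution over states after n steps.
p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p3)  # still a probability vector: nonnegative, sums to 1
```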

If we run the Markov r.p. for a long time, what happens to the probabilities? That is, what is $\mathbf{p}(n)$ as $n \to \infty$? Let us denote

$$\pi_j = \lim_{n \to \infty} p_j(n), \qquad \boldsymbol{\pi} = [\pi_0, \pi_1, \ldots].$$

If there is a limit, the probability vector should satisfy

$$\boldsymbol{\pi} = \boldsymbol{\pi} P,$$

or

$$\boldsymbol{\pi}(I - P) = \mathbf{0}.$$

This is an eigenvalue problem! The limiting vector $\boldsymbol{\pi}$ is a left eigenvector of $P$ with eigenvalue $1$.
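The eigenvalue problem can be solved numerically. A sketch, assuming the same kind of made-up two-state matrix as above: a left eigenvector of $P$ for eigenvalue $1$ is a right eigenvector of $P^T$, which we normalize into a probability vector.

```python
import numpy as np

# Made-up two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi = pi P means pi is a left eigenvector of P with eigenvalue 1,
# i.e., a right eigenvector of P.T with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))  # locate the eigenvalue 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize so the entries sum to 1

print(pi)
assert np.allclose(pi, pi @ P)  # pi is indeed stationary
```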