
# Markov Processes


## Continuous-Time Markov Processes

Let us still deal with discrete outcomes. If $X(t)$ is homogeneous, then

$$P[X(t+s) = j \mid X(s) = i] = p_{ij}(t) \qquad \text{(independent of } s\text{)}.$$

Let $p_{ij}(t) = P[X(t+s) = j \mid X(s) = i]$, and form a matrix $P(t) = [p_{ij}(t)]$, with $P(0) = I$.

Let us now consider the question of how long the r.p. remains in a state. Let $T_i$ be the time spent in state $i$. The probability of spending more than $t$ seconds in state $i$ is $P[T_i > t]$.

Suppose that the process has been in state $i$ already for $s$ seconds. What is the probability that it remains for $t$ more seconds:

$$P[T_i > s + t \mid T_i > s].$$

But recall that $X(t)$ is Markov: the time already spent in the state cannot affect the future, so

$$P[T_i > s + t \mid T_i > s] = P[T_i > t].$$

Such a process is said to be memoryless.

Let us look at these computations again:

$$P[T_i > s + t \mid T_i > s] = \frac{P[T_i > s + t,\; T_i > s]}{P[T_i > s]} = \frac{P[T_i > s + t]}{P[T_i > s]}.$$

We have seen that this probability must be $P[T_i > t]$:

$$\frac{P[T_i > s + t]}{P[T_i > s]} = P[T_i > t].$$

There is thus a sort of cancellation that takes place. The only distribution which has this property is the exponential,

$$P[T_i > t] = e^{-v_i t}, \qquad t \ge 0.$$

Using this we have

$$\frac{P[T_i > s + t]}{P[T_i > s]} = \frac{e^{-v_i (s+t)}}{e^{-v_i s}} = e^{-v_i t} = P[T_i > t].$$

So the waiting time for a Poisson r.p. is exponential. (We have derived this another way in the homework.)

This result has the following rather curious interpretation: The amount of additional time you have to wait does not depend on the amount of time you have already waited.
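This memoryless property is easy to check numerically. Below is a small sketch; the rate `v` and the times `s` and `t` are arbitrary illustrative choices, not values from the notes:

```python
import math
import random

# Estimate P[T > s+t | T > s] and P[T > t] from exponential samples
# and compare them to the exact survival probability e^{-v t}.
random.seed(1)
v, s, t = 2.0, 0.5, 0.3
samples = [random.expovariate(v) for _ in range(200_000)]

survived_s = [x for x in samples if x > s]
cond = sum(1 for x in survived_s if x > s + t) / len(survived_s)
uncond = sum(1 for x in samples if x > t) / len(samples)

print(f"P[T > s+t | T > s] ~ {cond:.4f}")
print(f"P[T > t]           ~ {uncond:.4f}")
print(f"exact e^(-v t)     = {math.exp(-v * t):.4f}")
```

The two estimates agree with each other and with $e^{-v t}$, as the cancellation above predicts.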

We can describe the operation of a continuous-time Markov chain as follows:

1. Enter a state $i$.
2. Wait a random amount of time $T_i$. (This random variable is continuous, exponentially distributed with rate $v_i$.)
3. Select a new state $j$ according to a discrete-time Markov chain with transition probabilities we will call $q_{ij}$ (with $q_{ii} = 0$).
4. Repeat.
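The four steps above can be sketched directly in code. The three-state chain below is hypothetical: the rates `v` and the embedded jump matrix `q` are illustrative values, not from the notes.

```python
import random

# Hypothetical continuous-time Markov chain:
# v[i] is the exponential rate of leaving state i;
# q[i][j] are the jump probabilities of the embedded chain (q[i][i] = 0).
v = [1.0, 2.0, 0.5]
q = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.9, 0.1, 0.0]]

def simulate(start, horizon, rng=random.Random(0)):
    """Return a list of (state, holding_time) pairs up to the time horizon."""
    path, state, clock = [], start, 0.0
    while clock < horizon:
        hold = rng.expovariate(v[state])   # step 2: wait T_i ~ Exp(v_i)
        path.append((state, hold))
        clock += hold
        # step 3: pick the next state from the embedded chain's row q[state]
        state = rng.choices(range(len(v)), weights=q[state])[0]
    return path

for state, hold in simulate(start=0, horizon=5.0)[:5]:
    print(f"state {state}, held {hold:.3f} s")
```

Because $q_{ii} = 0$, the chain never "jumps" to the state it is already in; all the time spent in a state is captured by the single exponential holding time.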

In discrete time we have the probability update $p_j(n+1) = \sum_i p_i(n)\, P_{ij}$. We will develop an analogous result for continuous time. Instead of a set of coupled difference equations, we will get a set of coupled differential equations.

Let $\delta$ be a small time increment.

The probability that we remain in the same state at time $t + \delta$ is

$$P[T_i > \delta] = e^{-v_i \delta} = 1 - v_i \delta + o(\delta),$$

or

$$p_{ii}(\delta) = 1 - v_i \delta + o(\delta).$$

Now consider the transition. When leaving state $i$, we move to state $j$ with probability $q_{ij}$:

$$p_{ij}(\delta) = \left(1 - e^{-v_i \delta}\right) q_{ij} = v_i q_{ij}\, \delta + o(\delta).$$

Let $\gamma_{ij} = v_i q_{ij}$:

$$p_{ij}(\delta) = \gamma_{ij}\, \delta + o(\delta), \qquad j \neq i.$$

We say that $\gamma_{ij}$ is the rate at which $X(t)$ enters state $j$ from state $i$. Define $\gamma_{ii} = -v_i$, so that

$$p_{ii}(\delta) = 1 + \gamma_{ii}\, \delta + o(\delta),$$

or

$$\sum_j \gamma_{ij} = \gamma_{ii} + \sum_{j \neq i} v_i q_{ij} = -v_i + v_i = 0.$$

Summarizing what we have so far:

$$p_{ij}(\delta) = \gamma_{ij}\, \delta + o(\delta) \quad (j \neq i), \qquad p_{ii}(\delta) = 1 + \gamma_{ii}\, \delta + o(\delta).$$

Divide by $\delta$ and take the limit:

$$\lim_{\delta \to 0} \frac{p_{ij}(\delta)}{\delta} = \gamma_{ij} \quad (j \neq i), \qquad \lim_{\delta \to 0} \frac{p_{ii}(\delta) - 1}{\delta} = \gamma_{ii}.$$

Now define $p_j(t) = P[X(t) = j]$. Then we have

$$p_j(t + \delta) = \sum_i p_i(t)\, p_{ij}(\delta)$$

and

$$p_j(t + \delta) - p_j(t) = \sum_i p_i(t)\, \gamma_{ij}\, \delta + o(\delta).$$

Divide both sides by $\delta$ and take the limit:

$$p_j'(t) = \sum_i \gamma_{ij}\, p_i(t).$$

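The resulting coupled differential equations $p_j'(t) = \sum_i \gamma_{ij} p_i(t)$ can be integrated numerically. Here is a minimal forward-Euler sketch for a hypothetical two-state chain; the rates and jump probabilities are illustrative, not from the notes.

```python
# Euler integration of p_j'(t) = sum_i gamma_ij * p_i(t) for a
# hypothetical two-state chain with illustrative rates.
v = [1.0, 3.0]
q = [[0.0, 1.0],
     [1.0, 0.0]]
# gamma_ij = v_i * q_ij for j != i, and gamma_ii = -v_i
gamma = [[(-v[i] if i == j else v[i] * q[i][j]) for j in range(2)]
         for i in range(2)]

p = [1.0, 0.0]       # start in state 0: p_0(0) = 1
dt = 1e-3
for _ in range(20_000):                  # integrate out to t = 20
    dp = [sum(p[i] * gamma[i][j] for i in range(2)) for j in range(2)]
    p = [p[j] + dt * dp[j] for j in range(2)]

print([round(x, 4) for x in p])   # approaches the steady state [0.75, 0.25]
```

Note that $\sum_j p_j'(t) = \sum_i p_i(t) \sum_j \gamma_{ij} = 0$, so total probability is conserved by the update.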
What are the steady-state conditions in general? Setting $p_j'(t) = 0$ and writing $p_j$ for the steady-state probability of state $j$:

$$0 = \sum_i \gamma_{ij}\, p_i.$$

Since $\gamma_{jj} = -v_j$ we can write

$$v_j\, p_j = \sum_{i \neq j} \gamma_{ij}\, p_i,$$

and since

$$v_j = \sum_{i \neq j} \gamma_{ji},$$

we can write

$$\sum_{i \neq j} \gamma_{ji}\, p_j = \sum_{i \neq j} \gamma_{ij}\, p_i.$$

These are the global balance equations: in steady state, the rate of probability flow out of state $j$ equals the rate of flow into state $j$.

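As a sanity check, the balance equations can be solved directly as a linear system, replacing one equation with the normalization $\sum_j p_j = 1$. A plain-Python sketch for a hypothetical three-state chain (the rates `v` and jump probabilities `q` are illustrative):

```python
# Solve 0 = sum_i gamma_ij * p_i together with sum_j p_j = 1
# for a hypothetical three-state chain with illustrative rates.
v = [1.0, 2.0, 0.5]
q = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.9, 0.1, 0.0]]
n = len(v)
gamma = [[(-v[i] if i == j else v[i] * q[i][j]) for j in range(n)]
         for i in range(n)]

# Row j of A is the balance equation sum_i gamma_ij * p_i = 0;
# the last row is replaced by the normalization sum_j p_j = 1.
A = [[gamma[i][j] for i in range(n)] for j in range(n)]
A[-1] = [1.0] * n
b = [0.0] * (n - 1) + [1.0]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (plain-Python sketch)."""
    m = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][m] / M[r][r] for r in range(m)]

p = solve(A, b)
# Verify balance: flow out of state j equals flow into state j.
for j in range(n):
    out = v[j] * p[j]
    into = sum(gamma[i][j] * p[i] for i in range(n) if i != j)
    print(f"state {j}: out {out:.4f}, in {into:.4f}, p = {p[j]:.4f}")
```

The printed "out" and "in" rates match for every state, which is exactly the global balance condition derived above.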
Copyright 2008, by the Contributing Authors. admin. (2006, June 08). Markov Processes. Retrieved January 07, 2011, from Free Online Course Materials, USU OpenCourseWare Web site: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lecture10_3.htm. This work is licensed under a Creative Commons License.