Markov Processes


Basic concepts

A Markov process $\{X_t\}$ is one such that

\begin{displaymath}P(X_{t_{k+1}} = x_{k+1}\vert X_{t_k} = x_k, X_{t_{k-1}} = x_{k-1},
\ldots, X_{t_1} = x_1) = P(X_{t_{k+1}} = x_{k+1}\vert X_{t_k} = x_k)
\end{displaymath}

(for a discrete random process) or

\begin{displaymath}f(x_{t_{k+1}}\vert X_{t_k} = x_k, \ldots, X_{t_1} = x_1) =
f(x_{t_{k+1}}\vert X_{t_k} = x_k)
\end{displaymath}

(for a continuous random process). The most recent observation determines the state of the process, and prior observations have no bearing on the outcome if the state is known.
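This memorylessness can be checked numerically. The sketch below (my own, not from the lecture) simulates a two-state chain with assumed transition probabilities and estimates the next-step statistics with and without conditioning on an earlier state; for a Markov process the two estimates should agree.

```python
import random

# Hypothetical two-state chain: P[x][y] = P(next = y | current = x).
P = {0: {0: 0.9, 1: 0.1},
     1: {0: 0.4, 1: 0.6}}

def step(x):
    """Draw the next state given the current state x."""
    return 0 if random.random() < P[x][0] else 1

random.seed(0)
chain = [0]
for _ in range(200_000):
    chain.append(step(chain[-1]))

# Estimate P(X_{k+1}=1 | X_k=1) versus P(X_{k+1}=1 | X_k=1, X_{k-1}=0).
# If the process is Markov, the extra conditioning should not matter.
num1 = den1 = num2 = den2 = 0
for prev, cur, nxt in zip(chain, chain[1:], chain[2:]):
    if cur == 1:
        den1 += 1
        num1 += nxt
        if prev == 0:
            den2 += 1
            num2 += nxt

print(num1 / den1, num2 / den2)  # both should be close to 0.60
```

Both ratios estimate the same transition probability $P(X_{k+1}=1 \vert X_k = 1) = 0.6$, illustrating that the earlier state carries no additional information.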
\begin{example}
Let $X_i$ be i.i.d. and let $S_n = X_1 + \cdots + X_n = S_{n-1} + X_n$. Then
\begin{displaymath}P[S_{n+1} = s_{n+1} \vert S_n = s_n, S_{n-1} = s_{n-1}, \ldots,
S_1 = s_1] = P[S_{n+1} = s_{n+1} \vert S_n = s_n].
\end{displaymath}\end{example}

\begin{example}
Let $N(t)$ be a Poisson process. Then
\begin{displaymath}P(N(t_{k+1}) = n_{k+1}\vert N(t_k) = n_k, \ldots,
N(t_1) = n_1) = P(N(t_{k+1}) = n_{k+1}\vert
N(t_k) = n_k)
\end{displaymath}\end{example}

Let $\{X_t\}$ be a Markov random process. The joint probability has the following factorization:

\begin{displaymath}P(X_{t_{3}} = x_3,X_{t_{2}} = x_2, X_{t_{1}} = x_1) =
P(X_{t_3} = x_3\vert X_{t_2} = x_2)P(X_{t_2}=x_2\vert X_{t_1} = x_1)
P(X_{t_1} = x_1)
\end{displaymath}

(Why?)
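As a hint, the chain rule of conditional probability always gives

\begin{displaymath}P(X_{t_3} = x_3, X_{t_2} = x_2, X_{t_1} = x_1) =
P(X_{t_3} = x_3\vert X_{t_2} = x_2, X_{t_1} = x_1)
P(X_{t_2} = x_2\vert X_{t_1} = x_1) P(X_{t_1} = x_1),
\end{displaymath}

and the Markov property collapses the first factor to $P(X_{t_3} = x_3\vert X_{t_2} = x_2)$.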

Copyright 2008, Todd Moon. (2006, June 08). Markov Processes. Retrieved January 7, 2011, from USU OpenCourseWare: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/lecture10_1.htm. This work is licensed under a Creative Commons License.