Random Vectors


Estimation in a Markov model

Suppose that $P(X_n\vert X_{n-1},X_{n-2}, \ldots, X_2,X_1) =
P(X_n\vert X_{n-1})$. That is, given $X_{n-1}$, $X_n$ is independent of $X_1, X_2, \ldots, X_{n-2}$. This situation is actually quite common: it does not matter how you got to where you are, only where you are. Such a model is called a Markov model.
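As a concrete illustration (the particular model is an assumption, not part of the text), a Gaussian AR(1) process is a simple Markov model: each new value is generated from the previous value alone, so the transition function never needs to look further back than one step.

```python
import random

random.seed(0)

# An AR(1) chain X_n = a * X_{n-1} + W_n with W_n ~ N(0, 1) i.i.d.
# is Markov: the distribution of X_n depends on the past only
# through X_{n-1}.  (The coefficient a = 0.8 is an arbitrary choice.)
a = 0.8

def step(x_prev):
    """One Markov transition: X_n is drawn given X_{n-1} alone."""
    return a * x_prev + random.gauss(0.0, 1.0)

x = 0.0
path = [x]
for _ in range(5):
    x = step(x)
    path.append(x)
print(path)
```

Note that `step` takes only the previous state as input; that signature is exactly the Markov assumption.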

Under the assumption of a Markov model,

\begin{displaymath}E[X_n\vert X_1, \ldots, X_{n-1}] = E[X_n\vert X_{n-1}],
\end{displaymath}

so the best linear estimate of $X_n$ given all of the past observations depends only on the most recent one:

\begin{displaymath}\xhat_n = \mu_n + \frac{\cov(X_n,X_{n-1})}{\var(X_{n-1})}(x_{n-1} -
\mu_{n-1}).
\end{displaymath}
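As an empirical check of the estimator gain $\cov(X_n,X_{n-1})/\var(X_{n-1})$, the sketch below simulates an assumed zero-mean AR(1) chain (for which that ratio equals the AR coefficient $a$) and recovers it from sample statistics; the model and sample size are illustrative choices, not part of the original notes.

```python
import random

random.seed(1)

# Assumed model: X_n = a * X_{n-1} + W_n, W_n ~ N(0, 1), mu_n = 0.
# For this chain, cov(X_n, X_{n-1}) / var(X_{n-1}) = a.
a = 0.7
n_samples = 200_000

pairs = []  # (x_{n-1}, x_n) observations
x = 0.0
for _ in range(n_samples):
    x_next = a * x + random.gauss(0.0, 1.0)
    pairs.append((x, x_next))
    x = x_next

# Sample means, covariance, and variance.
mx = sum(p for p, _ in pairs) / n_samples
my = sum(q for _, q in pairs) / n_samples
cov = sum((p - mx) * (q - my) for p, q in pairs) / n_samples
var = sum((p - mx) ** 2 for p, _ in pairs) / n_samples

coeff = cov / var  # the gain cov(X_n, X_{n-1}) / var(X_{n-1})
print(coeff)       # should be close to a = 0.7
```

The estimate $\xhat_n = \mu_n + \texttt{coeff}\,(x_{n-1} - \mu_{n-1})$ then follows directly from the recovered gain.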

This can be written as

\begin{displaymath}\xhat_n = \mu_n + \rho(X_n,X_{n-1})
\frac{\sigma_{X_n}}{\sigma_{X_{n-1}}}(x_{n-1} - \mu_{n-1}),
\end{displaymath}

where $\rho(X_n,X_{n-1}) = \cov(X_n,X_{n-1})/(\sigma_{X_n}\sigma_{X_{n-1}})$ is the correlation coefficient.


\begin{displaymath}\sigma^2 = \var(X_n)(1- \rho^2(X_n,X_{n-1})).
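The error-variance formula can be verified numerically: when the gain is computed from sample moments, the sample mean-square error of the linear estimate equals $\var(X_n)(1-\rho^2)$ exactly, by the usual least-squares identity. The sketch below uses an assumed AR(1) chain purely as a data source.

```python
import random

random.seed(2)

# Assumed data source (not part of the original text): an AR(1) chain.
a = 0.6
n = 100_000
xs, ys = [], []  # xs holds x_{n-1}, ys holds x_n
x = 0.0
for _ in range(n):
    y = a * x + random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(y)
    x = y

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((u - mx) ** 2 for u in xs) / n
var_y = sum((v - my) ** 2 for v in ys) / n
cov = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / n
rho = cov / (var_x ** 0.5 * var_y ** 0.5)

# Mean-square error of xhat = my + (cov/var_x) * (x_{n-1} - mx).
mse = sum((v - (my + cov / var_x * (u - mx))) ** 2
          for u, v in zip(xs, ys)) / n
print(mse, var_y * (1 - rho ** 2))  # the two agree
```

Algebraically, $\text{mse} = \var(X_n) - \cov^2/\var(X_{n-1}) = \var(X_n)(1-\rho^2)$, so the agreement is exact up to floating-point error.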

Copyright 2008, by the Contributing Authors. admin. (2006, May 31). Random Vectors. Free Online Course Materials, USU OpenCourseWare. This work is licensed under a Creative Commons License.