Random Vectors


Characteristic functions


\begin{definition}
The characteristic function of an $n$-dimensional random vector $\Xbf$ is
\begin{displaymath}
\phi_{\Xbf}(\ubf) = E[\exp(i \ubf^T \Xbf)]
\end{displaymath}
where $\ubf \in \Rbb^n$.
\end{definition}
As before, this is just an $n$-dimensional Fourier transform.
\begin{definition}
$\Xbf$ is a Gaussian random vector with parameters $\mubf$ and $\Sigma$ if
\begin{displaymath}
\phi_{\Xbf}(\ubf) = \exp\left[i \ubf^T \mubf - \frac{1}{2} \ubf^T \Sigma \ubf\right]
\end{displaymath}
We write $\Xbf \sim \Nc(\mubf,\Sigma)$.
\end{definition}
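A quick specialization (not in the original notes, but immediate from the definition): for $n = 1$ with mean $\mu$ and variance $\sigma^2$, this reduces to the familiar scalar Gaussian characteristic function,
\begin{displaymath}
\phi_X(u) = \exp\left(i u \mu - \frac{1}{2} \sigma^2 u^2\right).
\end{displaymath}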
Properties of Gaussian random vectors:

  1. $E[\Xbf] = \mubf$ and $E[(\Xbf - \mubf)(\Xbf - \mubf)^T] = \Sigma$; the parameters are the mean and the covariance matrix.
  2. $X_1, X_2, \ldots, X_n$ are independent if and only if $\Sigma$ is a diagonal matrix.
  3. If $\Ybf = A\Xbf + \bbf$, then $\Ybf$ is also Gaussian: $\Ybf \sim
\Nc(A \mubf + \bbf, A \Sigma A^T)$.

    Linear functions of Gaussians are Gaussian.

    Said another way: the family of Gaussians is closed under affine transformations.

    Suppose $\Sigma$ is positive definite. Then it can be factored as

    \begin{displaymath}\Sigma = CC^T
\end{displaymath}

    where $C$ is an $\matsize{n}{n}$ invertible, lower-triangular matrix. This factorization is called the Cholesky factorization; it is essentially a ``matrix square root.''

    Suppose $\Xbf \sim \Nc(\mubf,\Sigma)$, with $\Sigma$ positive definite. Let $\Ybf =
C^{-1}(\Xbf - \mubf)$. Then, by property 3, $\Ybf$ is Gaussian with mean $\zerobf$ and covariance $C^{-1} \Sigma C^{-T} = I$.

    This process of diagonalizing the covariance matrix is called whitening. We say that a vector with uncorrelated, identically distributed components is white. (A numerical sketch of sampling and whitening appears after this list.)

  4. If $\Sigma > 0$ (i.e., positive definite) then $\Xbf$ is a continuous random vector with

    \begin{displaymath}\boxed{f_\Xbf(\xbf) = \frac{1}{(2\pi)^{n/2}\vert\Sigma\vert^{1/2}}
\exp\left[-\frac{1}{2}(\xbf - \mubf)^T \Sigma^{-1} (\xbf -
\mubf)\right].}
\end{displaymath}

    where $\vert\Sigma\vert = \det(\Sigma)$ is the product of the eigenvalues of $\Sigma$. (A numerical check of this formula appears after this list.)
  5. Important: Suppose $\Xbf \sim \Nc(\mubf,\Sigma)$ with $\Sigma > 0$. Partition $\Xbf$,

    \begin{displaymath}\Xbf = \begin{bmatrix}\Xbf^{(1)} \\ \Xbf^{(2)}
\end{bmatrix}\end{displaymath}

    where $\Xbf^{(1)}$ has $k$ elements. It turns out that $\Xbf^{(1)}$ is also Gaussian. (How could we easily show this? Apply property 3 with $A = [\,I_k \;\; 0\,]$ and $\bbf = \zerobf$.) Let us partition

    \begin{displaymath}\mubf = \begin{bmatrix}\mubf^{(1)} \\ \mubf^{(2)} \end{bmatrix}
\qquad
\Sigma = \begin{bmatrix}\Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22}
\end{bmatrix}\end{displaymath}

    Then

    \begin{displaymath}\Xbf^{(1)} \sim \Nc(\mubf^{(1)},\Sigma_{11}) \qquad \qquad
\Xbf^{(2)} \sim \Nc(\mubf^{(2)},\Sigma_{22})
\end{displaymath}

    Consider $\Xbf^{(2)}$ conditioned on $\Xbf^{(1)} = \xbf^{(1)}$:

    \begin{displaymath}f_{\Xbf^{(2)}\vert\Xbf^{(1)}}(\xbf^{(2)}\vert\xbf^{(1)}) = \frac{f_\Xbf(\xbf)}
{f_{\Xbf^{(1)}}(\xbf^{(1)})}
\end{displaymath}

    Then it can be shown that

    \begin{displaymath}\Xbf^{(2)}\vert(\Xbf^{(1)}=\xbf^{(1)}) \sim \Nc(\mubf', \Sigma')
\end{displaymath}

    where

    \begin{displaymath}\mubf' = \mubf^{(2)} + \Sigma_{21} \Sigma_{11}^{-1}(\xbf^{(1)} -
\mubf^{(1)})
\end{displaymath}


    \begin{displaymath}\Sigma' = \Sigma_{22} - \Sigma_{21} \Sigma_{11}^{-1} \Sigma_{12}
\end{displaymath}

    This is ``smaller'' than $\Sigma_{22}$: the difference $\Sigma_{22} - \Sigma' = \Sigma_{21} \Sigma_{11}^{-1} \Sigma_{12}$ is positive semidefinite, so conditioning on $\Xbf^{(1)}$ never increases the uncertainty about $\Xbf^{(2)}$. (See the numerical check after this list.)

    Discuss implications. Draw pictures.

    Note: For a Gaussian vector, the conditional density is Gaussian.
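The following is a minimal numerical sketch of the sampling and whitening ideas in property 3; it is not part of the original notes, and all variable names and values are illustrative. It draws $\Zbf \sim \Nc(\zerobf, I)$, forms $\Xbf = \mubf + C\Zbf$ (so $\Xbf \sim \Nc(\mubf, \Sigma)$ by property 3), then whitens with $\Ybf = C^{-1}(\Xbf - \mubf)$.

\begin{verbatim}
# Sketch of property 3 and whitening (illustrative values, not from the notes).
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])          # positive definite

C = np.linalg.cholesky(Sigma)           # Sigma = C C^T, C lower triangular

# X = mu + C Z with Z ~ N(0, I); by property 3, X ~ N(mu, Sigma).
Z = rng.standard_normal((100000, 2))
X = mu + Z @ C.T

# Whitening: Y = C^{-1}(X - mu) should be approximately N(0, I).
Y = np.linalg.solve(C, (X - mu).T).T

print(np.cov(X.T))                      # approximately Sigma
print(np.cov(Y.T))                      # approximately the identity
\end{verbatim}

The sample covariances converge to $\Sigma$ and $I$ as the number of draws grows.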
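Similarly, a small check of the boxed density in property 4 (again a sketch, with illustrative values): at $\xbf = \mubf$ the exponent vanishes, so the formula should return exactly $(2\pi)^{-n/2} \vert\Sigma\vert^{-1/2}$.

\begin{verbatim}
# Sketch of the boxed density formula in property 4 (illustrative values).
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Evaluate the N(mu, Sigma) density at x directly from the formula."""
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(Sigma, d)   # (x - mu)^T Sigma^{-1} (x - mu)
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

# Peak value: here n = 2, so it equals 1 / (2 pi |Sigma|^{1/2}).
print(gaussian_pdf(mu, mu, Sigma))
print(1.0 / (2 * np.pi * np.sqrt(np.linalg.det(Sigma))))
\end{verbatim}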
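Finally, a sketch of the conditioning formulas in property 5 (illustrative values; the partition follows the notation above). It computes $\mubf'$ and $\Sigma'$ and confirms numerically that $\Sigma_{22} - \Sigma'$ is positive semidefinite.

\begin{verbatim}
# Sketch of the conditional mean and covariance in property 5.
import numpy as np

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.2],
                  [0.5, 1.2, 2.0]])     # positive definite

k = 1                                    # X^(1) is the first k components
S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
S21, S22 = Sigma[k:, :k], Sigma[k:, k:]

x1 = np.array([2.0])                     # an observed value of X^(1)

mu_cond = mu[k:] + S21 @ np.linalg.solve(S11, x1 - mu[:k])
Sigma_cond = S22 - S21 @ np.linalg.solve(S11, S12)

print(mu_cond)                           # mu'
print(Sigma_cond)                        # Sigma'
# Sigma_22 - Sigma' = Sigma_21 Sigma_11^{-1} Sigma_12 is positive
# semidefinite, so its eigenvalues should all be nonnegative:
print(np.linalg.eigvalsh(S22 - Sigma_cond))
\end{verbatim}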
