
The Gaussian Channel




Suppose we send information over a channel that is subject to additive white Gaussian noise. Then the output is

\begin{displaymath}
Y_i = X_i + Z_i
\end{displaymath}

where $Y_i$ is the channel output, $X_i$ is the channel input, and $Z_i$ is zero-mean Gaussian with variance $N$: $Z_i \sim \Nc(0,N)$. This differs from the channel models we saw before in that the output can take on a continuum of values. It is also a good model for a variety of practical communication channels.

We will assume that there is a constraint on the input power. If we have an input codeword $(x_1,x_2,\ldots,x_n)$ , we will assume that the average power is constrained so that


 \begin{displaymath}
\frac{1}{n} \sum_{i=1}^n x_i^2 \leq P
\end{displaymath}
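For concreteness, here is a minimal Python sketch of this check (the function name and codeword values are illustrative, not from the notes):

\begin{verbatim}
def satisfies_power_constraint(codeword, P):
    # Average-power check: (1/n) * sum_i x_i^2 <= P
    n = len(codeword)
    return sum(x * x for x in codeword) / n <= P

# Illustrative length-4 codeword checked against P = 1:
# (0.25 + 1.44 + 0.81 + 0.09) / 4 = 0.6475 <= 1
print(satisfies_power_constraint([0.5, -1.2, 0.9, -0.3], P=1.0))  # True
\end{verbatim}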

Let us consider the probability of error for binary transmission. Suppose that we can send either $+\sqrt{P}$ or $-\sqrt{P}$ over the channel. The receiver looks at the received signal amplitude and determines the transmitted signal using a threshold test. Then

\begin{displaymath}\begin{aligned}
P_e &= \frac{1}{2}P(Y < 0 \vert X= +\sqrt{P}) + \frac{1}{2}P(Y > 0 \vert X= -\sqrt{P}) \\
&= P(Z > \sqrt{P}) = \frac{1}{\sqrt{2\pi N}}\int_{\sqrt{P}}^\infty e^{-x^2/2N} dx \\
&= Q(\sqrt{P/N}) = 1-\Phi(\sqrt{P/N})
\end{aligned}\end{displaymath}

where


 \begin{displaymath}
Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^\infty e^{-t^2/2} dt
\end{displaymath}

or


 \begin{displaymath}
\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-t^2/2} dt
\end{displaymath}
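To see these numbers in action, here is a minimal Python sketch (the values $P=1$, $N=0.25$ are illustrative) comparing the analytic error probability $Q(\sqrt{P/N})$ with a Monte Carlo simulation of the binary scheme, using the identity $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$:

\begin{verbatim}
import math
import random

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

P, N = 1.0, 0.25            # illustrative signal power and noise variance
analytic = Q(math.sqrt(P / N))

# Monte Carlo: send +sqrt(P) or -sqrt(P) equiprobably, add N(0, N) noise,
# and decide by the sign of the received value.
trials, errors = 200000, 0
for _ in range(trials):
    x = math.sqrt(P) if random.random() < 0.5 else -math.sqrt(P)
    y = x + random.gauss(0.0, math.sqrt(N))
    if (y < 0) != (x < 0):
        errors += 1

print("Q(sqrt(P/N)) =", round(analytic, 5), "; simulated:", errors / trials)
\end{verbatim}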


\begin{definition}
The {\bf information capacity} of the Gaussian channel with power constraint $P$ is
\begin{displaymath}
C = \max_{p(x):EX^2 \leq P} I(X;Y).
\end{displaymath}
\end{definition}
We can compute this as follows:

\begin{displaymath}\begin{aligned}
I(X;Y) &= h(Y) - h(Y\vert X) \\
&= h(Y) - h(X+Z \vert X) \\
&= h(Y) - h(Z\vert X) = h(Y) - h(Z) \\
&\leq \frac{1}{2} \log 2\pi e (P+N) - \frac{1}{2} \log 2\pi e N \\
&= \frac{1}{2} \log (1+P/N)
\end{aligned}\end{displaymath}

since $EY^2 = P+N$ and the Gaussian is the maximum-entropy distribution for a given variance. So


 \begin{displaymath}
C = \frac{1}{2}\log(1+P/N),
\end{displaymath}

bits per channel use. The maximum is obtained when $X$ is Gaussian distributed. (How do we make the input distribution look Gaussian?)
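As a quick numerical illustration, a Python sketch (the SNR values are chosen arbitrarily) evaluating $C = \frac{1}{2}\log_2(1+P/N)$:

\begin{verbatim}
import math

def gaussian_capacity(P, N):
    # C = (1/2) log2(1 + P/N) bits per channel use
    return 0.5 * math.log2(1.0 + P / N)

# Illustrative signal-to-noise ratios
for snr_db in (0, 10, 20):
    snr = 10.0 ** (snr_db / 10.0)     # P/N as a linear ratio
    print(snr_db, "dB ->", round(gaussian_capacity(snr, 1.0), 3), "bits/use")
\end{verbatim}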


\begin{definition}
An $(M,n)$ code for the Gaussian channel with power constraint $P$ consists of:
\begin{enumerate}
\item An index set $\{1,2,\ldots,M\}$.
\item An encoding function $x:\{1,2,\ldots,M\} \rightarrow \Xc^n$, yielding codewords $x^n(1),\ldots,x^n(M)$, each satisfying the power constraint
\begin{displaymath}
\frac{1}{n}\sum_{i=1}^n x_i^2(w) \leq P, \qquad w = 1,2,\ldots,M.
\end{displaymath}
\item A decoding function $g:\Yc^n \rightarrow \{ 1,2,\ldots, M\}$.
\end{enumerate}
\end{definition}

\begin{definition}
A rate $R$ is said to be {\em achievable} for a Gaussian channel with power constraint $P$ if there exists a sequence of $(2^{nR},n)$ codes with codewords satisfying the power constraint such that the maximal probability of error tends to zero. The {\em capacity} of the channel is the supremum of the achievable rates.
\end{definition}

\begin{theorem}
The capacity of a Gaussian channel with power constraint $P$ and noise variance $N$ is
\begin{displaymath}
C = \frac{1}{2}\log\left(1+\frac{P}{N}\right) \text{ bits per transmission}.
\end{displaymath}
\end{theorem}

{\bf Geometric plausibility.} For a codeword of length $n$, the received vector (in $n$-space) is normally distributed with mean equal to the true codeword. With high probability, the received vector is contained in a sphere about the mean of radius $\sqrt{n(N+\epsilon)}$. Why? Because with high probability the noise falls within about one standard deviation of the mean in each coordinate, and the squared distances add:


 \begin{displaymath}
E[z_1^2 + z_2^2 + \cdots + z_n^2] = n N.
\end{displaymath}

This is the square of the distance within which we expect the received vector to fall. If we assign everything within this sphere to the given codeword, we misdetect only if the received vector falls outside this sphere.
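A short Monte Carlo sketch in Python (with illustrative $n$ and $N$) shows this concentration: the normalized squared norm $\Vert Z\Vert^2/n$ clusters tightly around $N$, so the received vector lies near the sphere of radius $\sqrt{nN}$:

\begin{verbatim}
import math
import random

n, N = 1000, 1.0
trials = 500

# Average of ||Z||^2 / n over many noise vectors; it concentrates near N.
total = 0.0
for _ in range(trials):
    total += sum(random.gauss(0.0, math.sqrt(N)) ** 2 for _ in range(n)) / n
print("average ||Z||^2 / n =", round(total / trials, 4), "(N =", N, ")")
\end{verbatim}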

Other codewords will have other spheres, each with radius approximately $\sqrt{n(N+\epsilon)}$. The transmitted codewords are limited in power by $P$, so the received vectors have energy at most about $n(P+N)$ and must all lie in a sphere of radius $\sqrt{n(P+N)}$. The number of (approximately) nonintersecting decoding spheres is therefore


 \begin{displaymath}
\text{number of spheres} \approx \frac{\text{volume of sphere in
    $n$-space with radius $r=\sqrt{n(P+N)}$}}
{\text{volume of sphere in $n$-space with radius $r=\sqrt{n(N+\epsilon)}$}}
\end{displaymath}

The volume of a sphere of radius $r$ in $n$-space is proportional to $r^n$. Substituting in this fact, we get


 \begin{displaymath}
\text{number of spheres} \approx \frac{ (n(P+N))^{n/2}}{
  (n(N+\epsilon))^{n/2} } \approx 2^{\frac{n}{2}\log\left(1+\frac{P}{N}\right)}
\end{displaymath}
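Since $r^n$ overflows for large $n$, a numerical check is best done in the log domain. The following Python sketch (illustrative $P$, $N$, $n$, with $\epsilon$ taken as 0) confirms that the number of spheres is about $2^{nC}$:

\begin{verbatim}
import math

P, N, n = 1.0, 1.0, 100   # illustrative values; epsilon taken as 0
# Sphere volume in n-space scales as r^n, so work with log2 of the ratio:
# log2(ratio) = (n/2) * log2((P+N)/N) = n * C.
log2_count = (n / 2) * math.log2((P + N) / N)
C = 0.5 * math.log2(1.0 + P / N)
print("log2(number of spheres) =", log2_count, "; n*C =", n * C)
\end{verbatim}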


\begin{proof}
We will follow essentially the same steps as before: generate a codebook at random, with each entry drawn i.i.d. $\sim \Nc(0,P-\epsilon)$; decode by joint typicality; and bound the average probability of error, which now also includes the event that a codeword violates the power constraint. Since the average probability of error over codebooks goes to zero, there exists a good codebook, and by throwing away the worst half of its codewords we conclude that the maximum probability of error also must go to zero.
\end{proof}
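A minimal Python sketch of the random-codebook step of this argument (parameters illustrative; only codebook generation and the power-constraint check are shown, not the decoding):

\begin{verbatim}
import math
import random

n, R, P = 200, 0.05, 1.0
M = int(2 ** (n * R))      # number of codewords, 2^{nR} = 1024 here

# Draw each codeword entry i.i.d. from N(0, P - eps) for a small eps,
# so the average power of a codeword is below P with high probability.
eps = 0.2
codebook = [[random.gauss(0.0, math.sqrt(P - eps)) for _ in range(n)]
            for _ in range(M)]

# Fraction of codewords violating the power constraint; this fraction
# vanishes as n grows (by the law of large numbers).
bad = sum(1 for c in codebook if sum(x * x for x in c) / n > P)
print(bad, "of", M, "codewords violate the power constraint")
\end{verbatim}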

The converse is that rates $R > C$ are not achievable, or, equivalently, that if $P_e^{(n)} \rightarrow 0$ then it must be that $R \leq C$.


\begin{proof}
The proof starts with Fano's inequality:
\begin{displaymath}
H(W\vert \hat{W}) \leq 1 + nR P_e^{(n)} = n\epsilon_n,
\end{displaymath}
where $\epsilon_n \rightarrow 0$ as $P_e^{(n)} \rightarrow 0$. Then
\begin{displaymath}\begin{aligned}
nR = H(W) &= I(W;\hat{W}) + H(W\vert \hat{W}) \\
&\leq I(X^n;Y^n) + n\epsilon_n \\
&\leq \sum_{i=1}^n I(X_i;Y_i) + n\epsilon_n \leq \sum_{i=1}^n \frac{1}{2}\log(1+P_i/N) + n\epsilon_n,
\end{aligned}\end{displaymath}
where $P_i$ is the average power of the $i$th component of the codewords. By the concavity of the logarithm and the constraint $\frac{1}{n}\sum_{i=1}^n P_i \leq P$,
\begin{displaymath}
R \leq \frac{1}{2}\log(1+P/N) + \epsilon_n.
\end{displaymath}
\end{proof}

Copyright 2008, Todd Moon. The Gaussian Channel. Free Online Course Materials — USU OpenCourseWare: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture11.htm. This work is licensed under a Creative Commons License.