Channel Capacity

Symmetric channels

We now consider a specific class of channels for which the capacity is fairly easy to compute: the symmetric channels.

A channel can be characterized by a transmission matrix such as

\begin{displaymath}p(y\vert x) = \begin{bmatrix}.3 & .2 & .5 \\ .5 & .3 & .2 \\ .2 & .5 &
.3 \end{bmatrix} = P
\end{displaymath}

The indexing is $x$ for rows and $y$ for columns: $P_{x,y} = p(y\vert x)$.
\begin{definition}
A channel is said to be {\bf symmetric} if the rows of the channel transition matrix are permutations of each other and the columns are permutations of each other. A channel is said to be {\bf weakly symmetric} if every row of the transition matrix is a permutation of every other row, and all the column sums $\sum_x p(y\vert x)$ are equal.
\end{definition}
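The two conditions of weak symmetry are easy to verify numerically. The following is a sketch using NumPy; the matrix is the example above, and the helper name \texttt{is\_weakly\_symmetric} is ours:

```python
import numpy as np

# Example transition matrix from above: rows indexed by x, columns by y.
P = np.array([[0.3, 0.2, 0.5],
              [0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3]])

def is_weakly_symmetric(P, tol=1e-9):
    """Check that every row is a permutation of the first row
    and that all column sums are equal."""
    first = np.sort(P[0])
    rows_ok = all(np.allclose(np.sort(row), first, atol=tol) for row in P)
    col_sums = P.sum(axis=0)
    cols_ok = np.allclose(col_sums, col_sums[0], atol=tol)
    return rows_ok and cols_ok
```

For the matrix above every row is a permutation of $(0.2, 0.3, 0.5)$ and every column sums to 1, so both conditions hold.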

\begin{theorem}
For a weakly symmetric channel,
\begin{displaymath}C = \log\vert\Yc\vert - H(\rbf),
\end{displaymath}
where $\rbf$ is any row of the transition matrix, and the capacity is achieved by a (discrete) uniform distribution over the input alphabet.
\end{theorem}
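As a worked example, the formula can be applied to the transition matrix above. Each row is a permutation of $(0.2, 0.3, 0.5)$, so with logarithms taken base 2,

\begin{displaymath}
C = \log 3 - H(0.2,\, 0.3,\, 0.5) \approx 1.5850 - 1.4855 = 0.0995 \mbox{ bits per channel use.}
\end{displaymath}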

To see this, let $\rbf$ denote a row of the transition matrix. Then

\begin{displaymath}I(X;Y) = H(Y) - H(Y\vert X) = H(Y) - H(\rbf) \leq \log \vert\Yc\vert - H(\rbf).
\end{displaymath}

Equality holds if the output distribution is uniform. A uniform input distribution achieves this: since the column sums $\sum_x p(y\vert x)$ are all equal, $p(y) = \frac{1}{\vert\Xc\vert}\sum_x p(y\vert x)$ is the same for every $y$, so $H(Y) = \log\vert\Yc\vert$ and the bound is met with equality.
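The theorem can also be checked numerically for the example matrix: compute $C = \log\vert\Yc\vert - H(\rbf)$ directly and compare it with $I(X;Y)$ under a uniform input. This is a sketch in NumPy; the variable names are ours:

```python
import numpy as np

P = np.array([[0.3, 0.2, 0.5],
              [0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3]])  # rows indexed by x, columns by y

def H(p):
    """Entropy in bits of a probability vector."""
    p = np.asarray(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Capacity formula for a weakly symmetric channel: C = log|Y| - H(row).
C = np.log2(P.shape[1]) - H(P[0])

# Mutual information I(X;Y) = H(Y) - H(Y|X) under a uniform input.
px = np.full(P.shape[0], 1.0 / P.shape[0])
py = px @ P                                  # output distribution p(y)
HYgX = np.sum(px * np.array([H(row) for row in P]))
I = H(py) - HYgX
```

Both quantities come out to about 0.0995 bits per channel use, and they agree to machine precision, as the theorem predicts.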

Copyright 2008, by the Contributing Authors. admin. (2006, May 17). Channel Capacity. Retrieved January 07, 2011, from Free Online Course Materials, USU OpenCourseWare Web site: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture8_2.htm. This work is licensed under a Creative Commons License.