Channel Capacity


A closer look at capacity

The rationale for the coding theorem: "For large block lengths, every channel looks like the noisy typewriter: the channel has a subset of inputs that produce essentially disjoint sequences at the output."

For each typical input $n$-sequence, there are approximately $2^{nH(Y\vert X)}$ possible $Y$ sequences, each of them more or less equally likely by the AEP. In order to decode reliably, we want to ensure that no two $X$ sequences produce the same $Y$ sequence. The total number of possible (typical) $Y$ sequences is $\approx 2^{nH(Y)}$. This has to be divided into sets of size $2^{nH(Y\vert X)}$, corresponding to the different $X$ sequences. The total number of disjoint sets is therefore approximately $2^{n(H(Y)-H(Y\vert X))} = 2^{nI(X;Y)}$. Hence we can send at most $\approx 2^{nI(X;Y)}$ distinguishable sequences of length $n$.
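
As a rough numerical check of this counting argument, the following sketch evaluates these quantities for a binary symmetric channel with uniform input (the block length $n$ and crossover probability $p$ below are illustrative choices, not values from the text):

\begin{verbatim}
import math

def binary_entropy(p):
    """H_b(p) in bits; equals H(Y|X) per use of a BSC with crossover p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 100, 0.1          # illustrative block length and crossover probability

H_Y_given_X = binary_entropy(p)   # each input fans out to ~2^{n H(Y|X)} outputs
H_Y = 1.0                         # uniform input => uniform output, H(Y) = 1 bit

fan_out  = n * H_Y_given_X        # log2 of typical outputs per input sequence
total    = n * H_Y                # log2 of all typical output sequences
disjoint = total - fan_out        # log2 of disjoint sets = n * I(X;Y)

print("outputs per input      ~ 2^%.1f" % fan_out)
print("typical outputs        ~ 2^%.1f" % total)
print("distinguishable inputs ~ 2^%.1f (about %.3f bits per use)"
      % (disjoint, disjoint / n))
\end{verbatim}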


\begin{definition}
A {\bf discrete channel}, denoted by $(\Xc,p(y\vert x),\Yc)$, consists of an input alphabet $\Xc$, an output alphabet $\Yc$, and a collection of probability mass functions $p(y\vert x)$, one for each $x \in \Xc$. The channel is said to be memoryless if the distribution of the output at each use depends only on the current input ({\em memoryless} channel).
\end{definition}
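
Concretely, a discrete memoryless channel can be stored as a transition matrix $p(y\vert x)$ with one row per input symbol, and a memoryless use of the channel samples each output independently given the corresponding input. A minimal sketch (the binary symmetric channel below is an illustrative choice, not part of the definition):

\begin{verbatim}
import random

# Transition probabilities p(y|x) of an illustrative binary symmetric channel:
# keys are inputs x, entries give the probability of each output y.
P = {0: {0: 0.9, 1: 0.1},
     1: {0: 0.1, 1: 0.9}}

def channel_use(x):
    """One use of the channel: the output depends only on the current input x."""
    r, total = random.random(), 0.0
    for y, prob in P[x].items():
        total += prob
        if r < total:
            return y
    return y  # guard against floating-point round-off

def transmit(xs):
    """Memoryless operation: each symbol is corrupted independently of the others."""
    return [channel_use(x) for x in xs]

print(transmit([0, 0, 1, 1, 0]))
\end{verbatim}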

When we talk about data transmission through a channel, the issue of coding arises. We define a code as follows:

\begin{definition}
An $(M,n)$ code for the channel $(\Xc,p(y\vert x),\Yc)$ consists of the following:
\begin{enumerate}
\item An index set $\{1,2,\ldots,M\}$.
\item An encoding function $X^n:\{1,2,\ldots,M\}\rightarrow \Xc^n$, yielding codewords $X^n(1),X^n(2),\ldots,X^n(M)$.
\item A decoding function $g:\Yc^n\rightarrow \{1,2,\ldots,M\}$.
\end{enumerate}\end{definition}
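
As a toy instance of this definition, a length-3 repetition code is a $(2,3)$ code: the index set is $\{1,2\}$, the encoder maps each index to a 3-bit codeword, and the decoder maps a received 3-sequence back to an index. A sketch, with majority-vote decoding as an illustrative choice of $g$:

\begin{verbatim}
# An (M, n) = (2, 3) code over the binary alphabet (repetition code).
M, n = 2, 3

# Encoding function X^n : {1, ..., M} -> X^n.
codewords = {1: (0, 0, 0),
             2: (1, 1, 1)}

def encode(w):
    return codewords[w]

# Decoding function g : Y^n -> {1, ..., M} (majority vote).
def decode(yn):
    return 2 if sum(yn) >= 2 else 1

assert decode(encode(1)) == 1
assert decode((1, 0, 1)) == 2   # one flipped bit is still decoded correctly
\end{verbatim}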
In other words, the code takes a message index $W \in \{1,2,\ldots,M\}$ and encodes it to produce a sequence of $n$ symbols in $\Xc$.

The conditional probability of error, given that index $i$ was sent, is defined as

 

\begin{displaymath}\lambda_i = \text{Pr}(g(Y^n) \neq i \vert X^n = X^n(i))
\end{displaymath}

 

In other words, if the message index is $i$ but the decoder output $g(Y^n)$ is not $i$, then we have an error. This can be written using the indicator function $I(\cdot)$ as

 

\begin{displaymath}\lambda_i = \text{Pr}(g(Y^n) \neq i \vert X^n = X^n(i)) = \sum_{y^n}
p(y^n\vert x^n(i)) I(g(y^n) \neq i)
\end{displaymath}

 

In our development, it is convenient to deal with the maximal probability of error. If it can be shown that the maximal probability of error goes to zero, then clearly the individual and average probabilities of error go to zero as well. The maximal probability of error is

 

\begin{displaymath}\lambda^{(n)} = \max_{i} \lambda_i
\end{displaymath}

 

The average probability of error is

 

\begin{displaymath}P_e^{(n)} = \frac{1}{M}\sum_{i=1}^M \lambda_i
\end{displaymath}
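
These error probabilities can be computed exactly for a small code by carrying out the sum over all $y^n$ in the indicator formula above. The sketch below does this for the hypothetical $(2,3)$ repetition code over a binary symmetric channel with crossover probability $0.1$ (both illustrative assumptions):

\begin{verbatim}
from itertools import product

p = 0.1                                      # illustrative BSC crossover probability
codewords = {1: (0, 0, 0), 2: (1, 1, 1)}     # hypothetical (2, 3) repetition code
M, n = len(codewords), 3

def p_yn_given_xn(yn, xn):
    """p(y^n | x^n) for a memoryless BSC: a product over the n uses."""
    prob = 1.0
    for y, x in zip(yn, xn):
        prob *= (1 - p) if y == x else p
    return prob

def g(yn):
    """Majority-vote decoder."""
    return 2 if sum(yn) >= 2 else 1

# lambda_i = sum over y^n of p(y^n | x^n(i)) * 1{ g(y^n) != i }
lam = {i: sum(p_yn_given_xn(yn, xn)
              for yn in product((0, 1), repeat=n) if g(yn) != i)
       for i, xn in codewords.items()}

max_error = max(lam.values())                # lambda^(n)
avg_error = sum(lam.values()) / M            # P_e^(n)
print(lam, max_error, avg_error)             # each lambda_i = 3p^2(1-p) + p^3 = 0.028
\end{verbatim}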

 


\begin{definition}
The {\bf rate} of an $(M,n)$\ code is
\begin{displaymath}R = \frac{\log M}{n} \text{ bits per transmission}.
\end{displaymath}\end{definition}

\begin{example}
Suppose $M=4$, and codewords are $n=4$ bits long. Then every index can be represented by $\log 4 = 2$ bits and takes $4$ symbol transmissions to send. The rate is $R = 2/4 = 1/2$.
\end{example}

\begin{definition}
A rate $R$ is said to be {\em achievable} if there exists a sequence of $(\lceil 2^{nR}\rceil, n)$ codes such that the maximal probability of error $\lambda^{(n)}$ tends to 0 as $n\rightarrow \infty$.
\end{definition}

\begin{definition}
The {\bf capacity} of a DMC is the supremum of all achievable rates.
\end{definition}

This is a different definition from the "information" channel capacity of a DMC presented above. What we will show (this is Shannon's channel coding theorem) is that the two definitions are equivalent.

The implication of this definition of capacity is that, for an achievable rate, the probability of error tends to zero as the block length gets large. Since the capacity is the supremum of the achievable rates, for any rate less than the capacity the probability of error goes to zero as the block length grows.
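
For comparison, the information channel capacity referred to above can be evaluated numerically for a simple channel by maximizing $I(X;Y)$ over input distributions. The sketch below does this for a binary symmetric channel (an illustrative choice); the result should agree with the closed form $C = 1 - H(p)$:

\begin{verbatim}
import math

def H(probs):
    """Entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

def mutual_information(pi, p):
    """I(X;Y) for a BSC with crossover p and input distribution (pi, 1 - pi)."""
    py0 = pi * (1 - p) + (1 - pi) * p         # output distribution Pr(Y = 0)
    return H([py0, 1 - py0]) - H([p, 1 - p])  # H(Y) - H(Y|X)

p = 0.1                                       # illustrative crossover probability
# Brute-force maximization over a fine grid of input distributions.
C = max(mutual_information(pi / 1000.0, p) for pi in range(1001))
print(C, 1 - H([p, 1 - p]))                   # both ~ 0.531 bits per transmission
\end{verbatim}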
