More On Channel Capacity


The Converse To The Coding Theorem

We will begin with a special case, a zero-error channel, and show that zero-error transmission requires the rate to be less than the capacity. A zero-error channel is one in which the output sequence $Y^n$ determines the input message $W$ without any error, so $H(W\vert Y^n) = 0$. We can obtain a bound by assuming that $W$ is uniformly distributed over the $2^{nR}$ possible messages, so $H(W) = nR$. Now

\begin{align*}
nR &= H(W) = H(W\vert Y^n) + I(W;Y^n) \\
&= I(W;Y^n) && \text{(zero error: $H(W\vert Y^n) = 0$)} \\
&\leq I(X^n;Y^n) && \text{(data-processing inequality)} \\
&\leq nC && \text{(definition of information channel capacity)}
\end{align*}

from which we conclude that for a zero-error channel, $R \leq C$.
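As a concrete numeric illustration (a minimal sketch, not part of the original notes; the function names are my own), the capacity of a binary symmetric channel with crossover probability $p$ is $C = 1 - H(p)$, so any zero-error rate over such a channel must satisfy $R \leq C$:

```python
from math import log2

def h2(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity C = 1 - H(p) of a binary symmetric channel
    with crossover probability p, in bits per channel use."""
    return 1.0 - h2(p)

# A noiseless channel (p = 0) carries a full bit per use, so a
# zero-error code can have rate at most R <= C = 1.
print(bsc_capacity(0.0))
# A noisy channel (illustrative p = 0.11) has strictly smaller capacity.
print(bsc_capacity(0.11))
```

A completely random channel ($p = 1/2$) gives $C = 0$: the output says nothing about the input, and no positive rate is achievable.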

Recall that Fano's inequality relates the probability of error to conditional entropy. In the notation of the current context, it can be written as

H(X^n|Y^n) \leq 1 + P_e^{(n)} nR

We also observe (and could prove) that

I(X^n;Y^n) \leq nC

(Using the channel $n$ times does not increase the capacity per transmission.)
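This can be checked numerically for a memoryless channel: with i.i.d. capacity-achieving inputs, $I(X^n;Y^n)$ grows exactly linearly, never faster than $nC$. A brute-force sketch for the binary symmetric channel (the crossover probability and function names are illustrative, not from the notes):

```python
from math import log2
from itertools import product

p = 0.1  # crossover probability of the BSC (illustrative value)

def p_y_given_x(y, x):
    """P(y^n | x^n) for n independent BSC uses (memoryless channel)."""
    out = 1.0
    for yi, xi in zip(y, x):
        out *= (1 - p) if yi == xi else p
    return out

def mutual_information(n):
    """I(X^n;Y^n) in bits for n BSC uses with i.i.d. uniform inputs
    (uniform is capacity-achieving for the BSC)."""
    seqs = list(product((0, 1), repeat=n))
    px = 1.0 / len(seqs)
    py = {y: sum(px * p_y_given_x(y, x) for x in seqs) for y in seqs}
    info = 0.0
    for x in seqs:
        for y in seqs:
            pxy = px * p_y_given_x(y, x)
            if pxy > 0:
                info += pxy * log2(p_y_given_x(y, x) / py[y])
    return info

# I(X^2;Y^2) equals 2 * I(X;Y) = 2C: no gain per transmission.
print(mutual_information(1), mutual_information(2))
```

With any other (non-capacity-achieving or correlated) input distribution, $I(X^n;Y^n)$ would come out strictly below $nC$.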

We now have the tools necessary to prove the converse to the coding theorem: any sequence of $(2^{nR}, n)$ codes with $\lambda^{(n)} \rightarrow 0$ must have $R \leq C$.
Observe that $\lambda^{(n)} \rightarrow 0$ implies $P_e^{(n)} \rightarrow 0$. Combining Fano's inequality with $I(X^n;Y^n) \leq nC$,

nR = H(W) \leq 1 + P_e^{(n)} nR + nC

so that

P_e^{(n)} \geq 1 - \frac{C}{R} - \frac{1}{nR}

Hence $P_e^{(n)} \rightarrow 0$ requires $R \leq C$. Conversely, if $R > C$, then for $n$ sufficiently large, $P_e^{(n)}$ is bounded away
from 0.
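Rearranging Fano's inequality together with $I(X^n;Y^n) \leq nC$ gives the lower bound $P_e^{(n)} \geq 1 - C/R - 1/(nR)$. A small sketch of how this bound behaves for a rate above capacity (the numeric values of $R$ and $C$ are illustrative):

```python
def pe_lower_bound(R, C, n):
    """Converse lower bound on block error probability,
    P_e >= 1 - C/R - 1/(nR), meaningful when R > C."""
    return 1.0 - C / R - 1.0 / (n * R)

# Illustrative numbers: capacity C = 0.5 bit/use, attempted rate R = 0.6.
# As n grows, the bound approaches 1 - C/R > 0: the error probability
# is bounded away from 0 for any rate above capacity.
for n in (10, 100, 1000):
    print(n, pe_lower_bound(0.6, 0.5, n))
```

For small $n$ the bound can be zero or negative (vacuous); the converse is an asymptotic statement.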

Hamming codes?

Feedback channels: the same capacity!

Copyright 2008, Todd Moon. (2006, May 15). More On Channel Capacity. Retrieved January 07, 2011, from Free Online Course Materials, USU OpenCourseWare. This work is licensed under a Creative Commons License.