# More On Channel Capacity


## Joint source/channel coding theorem

We have seen that for a source with entropy *H*(*X*), the data rate cannot be less than the entropy (*R* > *H*). We have also seen that we can transmit reliably at rates less than capacity (*R* < *C*). How do these two major theorems tie together?
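To make the two bounds concrete, here is a minimal sketch (the function names and parameter values are illustrative, not from the original) computing the entropy of a Bernoulli(*p*) source and the capacity of a binary symmetric channel with crossover probability *q*, then checking whether *H* < *C* holds:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) source; H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(q):
    """Capacity in bits per use of a binary symmetric channel
    with crossover probability q: C = 1 - H(q)."""
    return 1.0 - binary_entropy(q)

# Illustrative values: a Bernoulli(0.1) source over a BSC(0.05).
H = binary_entropy(0.1)   # source entropy, bits/symbol
C = bsc_capacity(0.05)    # channel capacity, bits/use
print(f"H = {H:.3f} bits/symbol, C = {C:.3f} bits/use")
print("Reliable transmission possible (H < C):", H < C)
```

For these values *H* ≈ 0.47 < *C* ≈ 0.71, so the source can be carried reliably at one channel use per source symbol.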

That is, is it better to remove the redundancy (source coding), then put some back in (channel coding)? Or is there some kind of joint coding method that would work better?

The joint source/channel coding theorem says (in essence) that provided a source has entropy *H* < *C*, then there is a code whose error probability *P*<sub>e</sub> → 0, and that (conversely) if *P*<sub>e</sub> → 0, then *H* ≤ *C*. Note that the theorem is *asymptotic*. The proof of the forward part relies on the AEP: we code only the typical sequences, and don't worry about the rest. For the converse, we use (again) Fano's inequality.
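The AEP argument can be illustrated numerically: for a Bernoulli source, almost every long sequence has per-symbol log-probability close to *H*, so coding only those typical sequences suffices. A minimal simulation (the parameter values and function names are illustrative assumptions):

```python
import math
import random

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) source, 0 < p < 1."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def is_typical(seq, p, eps):
    """Weak typicality: is -(1/n) log2 P(seq) within eps of H(p)?"""
    n = len(seq)
    logp = sum(math.log2(p) if bit else math.log2(1 - p) for bit in seq)
    return abs(-logp / n - binary_entropy(p)) < eps

random.seed(0)
p, n, eps, trials = 0.1, 1000, 0.1, 2000
typical = sum(is_typical([random.random() < p for _ in range(n)], p, eps)
              for _ in range(trials))
print(f"fraction of sampled sequences that are typical: {typical / trials:.3f}")
```

As *n* grows the fraction approaches 1, while the typical set contains only about 2<sup>*nH*</sup> of the 2<sup>*n*</sup> possible sequences, which is why a rate of *H* bits/symbol is enough in the forward direction.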

Note again that the theorem is asymptotic: in practice, we have to deal with codes of finite length and take extra precautions.