
Definitions and Basic Facts



Binary Entropy Function

We saw last time that the entropy of a discrete random variable X is

\begin{displaymath}H(X) = -\sum_x p(x)\log p(x)
\end{displaymath}
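To make the definition concrete, here is a minimal sketch in Python; the function name entropy and the choice of log base 2 are illustrative assumptions, not part of the lecture.

import math

def entropy(pmf, base=2):
    # Shannon entropy H(X) = -sum_x p(x) log p(x).
    # By the usual convention 0 log 0 = 0, so zero-probability terms are skipped.
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

# A uniform distribution over four outcomes gives log2(4) = 2 bits.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

With base 2 the entropy is measured in bits; using the natural logarithm instead would give nats.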

Suppose X is a binary random variable,

\begin{displaymath}X = \begin{cases}1 & \text{with probability } p \\
0 & \text{with probability } 1-p
\end{cases}\end{displaymath}

Then the entropy of X is

\begin{displaymath}H(X) = -p\log p -(1-p)\log(1-p)
\end{displaymath}

Since this depends only on p, it is also sometimes written H(p). A plot of H(p) versus p shows that it is a concave function of p: every chord between two points on the curve lies on or below the curve. Note that H(0) = 0 and H(1) = 0, since in either case the outcome of X is certain and carries no uncertainty. The maximum occurs at p = 1/2, where H(1/2) = 1 bit. More generally, the entropy of a binary discrete random variable with probability p is written as either H(X) or H(p).
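As a quick numerical check of these observations, here is a small sketch of the binary entropy function in Python; the name binary_entropy is an illustrative choice.

import math

def binary_entropy(p):
    # H(p) = -p log2(p) - (1 - p) log2(1 - p), with the convention H(0) = H(1) = 0.
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# The endpoints give zero entropy; the maximum of 1 bit occurs at p = 1/2.
print(binary_entropy(0.0), binary_entropy(0.5), binary_entropy(1.0))   # 0.0 1.0 0.0

Evaluating H(p) on a grid of p values and plotting the results reproduces the concave curve described above.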

Copyright 2008, Todd Moon. Definitions and Basic Facts. Free Online Course Materials, USU OpenCourseWare: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture2_1.htm. This work is licensed under a Creative Commons License.