
Differential Entropy



Joint, Conditional, and Relative Differential Entropy

More basic definitions:

h(X_1,\ldots,X_n) = -\int f(x_1,\ldots,x_n)\log f(x_1,\ldots,x_n)
\, dx_1 \cdots dx_n

is the joint differential entropy.

h(X|Y) = -\int f(x,y) \log f(x|y) \, dx \, dy

is the conditional differential entropy.
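As a numerical sanity check (a sketch, not part of the original notes), the chain-rule identity $h(X|Y) = h(X,Y) - h(Y)$ can be verified for a bivariate Gaussian, where the conditional density $X|Y=y$ is normal with variance $\sigma_x^2(1-\rho^2)$; the parameter values below are arbitrary illustrations:

```python
import numpy as np

# For a bivariate Gaussian, X|Y=y is normal with variance sx^2 * (1 - rho^2),
# so h(X|Y) = 0.5 * log(2*pi*e * sx^2 * (1 - rho^2)) in nats.
# Check that it matches h(X,Y) - h(Y).
sx, sy, rho = 1.5, 1.0, 0.6  # arbitrary example parameters
K = np.array([[sx**2,       rho*sx*sy],
              [rho*sx*sy,   sy**2    ]])

h_joint = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(K))  # h(X,Y)
h_y = 0.5 * np.log(2 * np.pi * np.e * sy**2)                        # h(Y)
h_cond = 0.5 * np.log(2 * np.pi * np.e * sx**2 * (1 - rho**2))      # h(X|Y)

print(h_joint - h_y, h_cond)
```

The two printed values agree because $\det K = \sigma_x^2 \sigma_y^2 (1-\rho^2)$, so the $\sigma_y^2$ factor cancels in $h(X,Y) - h(Y)$.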

An important special case is the following:
Let $X_1,\ldots,X_n$ have a multivariate normal distribution with mean $\mu$ and covariance matrix $K$. Then

h(X_1,\ldots,X_n) = \frac{1}{2}\log(2\pi e)^n \vert K\vert \text{ bits}.
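This closed form can be checked numerically (a sketch, not part of the original notes): estimate $h = -E[\log f(X)]$ by Monte Carlo and compare it with $\frac{1}{2}\log(2\pi e)^n \vert K\vert$ for an example covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example covariance (arbitrary positive-definite choice for illustration).
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
n = K.shape[0]

# Closed form: h = 0.5 * log2((2*pi*e)^n * |K|) bits.
h_closed = 0.5 * np.log2((2 * np.pi * np.e) ** n * np.linalg.det(K))

# Monte Carlo: h = -E[log2 f(X)], estimated from samples of N(0, K).
x = rng.multivariate_normal(np.zeros(n), K, size=200_000)
Kinv = np.linalg.inv(K)
log_f = (-0.5 * np.einsum('ij,jk,ik->i', x, Kinv, x)      # -x^T K^{-1} x / 2
         - 0.5 * np.log((2 * np.pi) ** n * np.linalg.det(K)))
h_mc = -np.mean(log_f) / np.log(2)   # convert nats to bits

print(h_closed, h_mc)
```

The two estimates agree to a few thousandths of a bit with this sample size.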

The relative entropy is

D(f\|g) = \int f \log \frac{f}{g}.
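As an illustration (a sketch, not part of the original notes), the relative entropy between two univariate Gaussians has the known closed form $D(f\|g) = \log(\sigma_2/\sigma_1) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$ in nats, which can be compared against direct numerical integration of $\int f \log(f/g)$; the parameter values are arbitrary:

```python
import numpy as np

# f = N(m1, s1^2), g = N(m2, s2^2); arbitrary example parameters.
m1, s1 = 0.0, 1.0
m2, s2 = 1.0, 2.0

# Closed form for D(f||g) between two Gaussians, in nats.
d_closed = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Numerical integration of \int f log(f/g) on a wide grid.
x = np.linspace(-12.0, 12.0, 200_001)
dx = x[1] - x[0]
f = np.exp(-(x - m1)**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
g = np.exp(-(x - m2)**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))
d_num = np.sum(f * np.log(f / g)) * dx

print(d_closed, d_num)
```

Note that $D(f\|g) \ge 0$ here, as it must be, and that it is not symmetric in $f$ and $g$.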

The mutual information is

I(X;Y) = \int f(x,y) \log \frac{f(x,y)}{f(x)f(y)} \, dx dy =
D(f(x,y) \| f(x)f(y))
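For a concrete case (a sketch, not part of the original notes): a bivariate Gaussian with correlation $\rho$ has $I(X;Y) = -\frac{1}{2}\log(1-\rho^2)$, which follows from $I(X;Y) = h(X) + h(Y) - h(X,Y)$ and the Gaussian entropy formula; the value of $\rho$ below is arbitrary:

```python
import numpy as np

def h_bits(cov):
    """Differential entropy (bits) of a zero-mean Gaussian with covariance cov."""
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    n = cov.shape[0]
    return 0.5 * np.log2((2 * np.pi * np.e) ** n * np.linalg.det(cov))

rho = 0.8  # arbitrary example correlation
K = np.array([[1.0, rho],
              [rho, 1.0]])

# I(X;Y) = h(X) + h(Y) - h(X,Y), versus the closed form -0.5*log2(1-rho^2).
I_entropies = h_bits([[1.0]]) + h_bits([[1.0]]) - h_bits(K)
I_closed = -0.5 * np.log2(1 - rho**2)

print(I_entropies, I_closed)
```

The agreement is exact here because $\det K = 1-\rho^2$, so the $2\pi e$ factors cancel; as $\rho \to \pm 1$ the mutual information diverges.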

Some properties:

  • Conditioning reduces entropy: $h(X|Y) \le h(X)$, with equality iff $X$ and $Y$ are independent.
  • Chain rule: $h(X_1,\ldots,X_n) = \sum_{i=1}^n h(X_i | X_1,\ldots,X_{i-1})$.
  • $D(f\|g) \ge 0$, with equality iff $f = g$ almost everywhere.
  • $I(X;Y) \ge 0$, with equality iff $X$ and $Y$ are independent.

An important result is the following:

Let the random vector $\Xbf \in \mathbb{R}^n$ have zero mean and covariance matrix $K$. Then

h(\Xbf) \le \frac{1}{2}\log(2\pi e)^n \vert K\vert,

with equality iff $\Xbf \sim \mathcal{N}(0,K)$. That is, for a given covariance, the normal (Gaussian) distribution is the one that maximizes the differential entropy.

To see this, let $g(\Xbf)$ be any density with covariance $K$, and let $\phi(\Xbf)$ be the $\mathcal{N}(0,K)$ density. Then

0 \le D(g \| \phi) = \int g \log\frac{g}{\phi} = -h(g) - \int g \log\phi
= -h(g) - \int \phi \log\phi = -h(g) + h(\phi),

where $\int g \log\phi = \int \phi \log\phi$ because $\log\phi$ is a quadratic form in $\Xbf$ and both $g$ and $\phi$ have the same second moments. Hence $h(g) \le h(\phi)$.
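The maximum-entropy property can also be seen numerically (an illustration, not part of the original notes): among distributions with the same variance, the Gaussian's closed-form entropy exceeds those of, say, the Laplace and uniform distributions:

```python
import numpy as np

# Closed-form differential entropies (nats) of three unit-variance densities:
#   Gaussian N(0, var):      0.5 * log(2*pi*e*var)
#   Uniform(-a, a), var a^2/3:   log(2a)
#   Laplace(scale b), var 2b^2:  1 + log(2b)
var = 1.0

h_gauss = 0.5 * np.log(2 * np.pi * np.e * var)
a = np.sqrt(3 * var)
h_unif = np.log(2 * a)
b = np.sqrt(var / 2)
h_lap = 1 + np.log(2 * b)

print(h_gauss, h_lap, h_unif)
```

With variance matched, the Gaussian entropy is the largest of the three, consistent with the result above.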

Copyright 2008, by the Contributing Authors. admin. (2006, May 19). Differential Entropy. Retrieved January 07, 2011, from Free Online Course Materials — USU OpenCourseWare. This work is licensed under a Creative Commons License.