Information Theory is a field which addresses two questions:
Source Coding: How many bits do I need to losslessly represent an observation?
Channel Coding: How reliably and quickly can I communicate a message over a noisy channel?
Quantifying Information
Intuitively, for the PMF of a discrete random variable $X$, the surprise associated with a particular realization $x$ is $-\log p_X(x)$, since less probable realizations are more surprising. With this intuition, we can try to quantify the “expected surprise” of a distribution.
Definition 40

For a Discrete Random Variable $X \sim p_X$, the Entropy of $X$ is given by

$$H(X) = \mathbb{E}\left[-\log_2 p_X(X)\right] = -\sum_{x \in \mathcal{X}} p_X(x) \log_2 p_X(x).$$
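To make this concrete, here is a minimal Python sketch that computes the entropy of a finite PMF given as a list of probabilities; the function name and the example coin distributions are arbitrary illustrative choices.

```python
import math

def entropy(pmf):
    """Entropy in bits of a discrete PMF given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# A fair coin is maximally surprising on average: 1 bit.
print(entropy([0.5, 0.5]))    # 1.0

# A heavily biased coin is far less surprising on average.
print(entropy([0.99, 0.01]))  # ~0.081
```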
Alternative interpretations of entropy are the average uncertainty in $X$ and how random $X$ is. Just like probabilities, we can define both joint and conditional entropies.
Definition 41

For Discrete Random Variables $X$ and $Y$, the joint entropy is given by

$$H(X, Y) = \mathbb{E}\left[-\log_2 p_{XY}(X, Y)\right] = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{XY}(x, y) \log_2 p_{XY}(x, y).$$
Definition 42

For Discrete Random Variables $X$ and $Y$, the conditional entropy is given by

$$H(Y \mid X) = \mathbb{E}\left[-\log_2 p_{Y \mid X}(Y \mid X)\right] = \sum_{x \in \mathcal{X}} p_X(x) H(Y \mid X = x).$$
Conditional entropy has a natural interpretation: it tells us how surprised we are, on average, to see $Y = y$ given that we know $X = x$. If $X$ and $Y$ are independent, then $H(Y) = H(Y \mid X)$ because realizing $X$ gives no additional information about $Y$.
Theorem 23 (Chain Rule of Entropy)

$$H(X, Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y).$$
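As a quick numerical check of the chain rule, the sketch below computes $H(X, Y)$, $H(X)$, and $H(Y \mid X)$ for a small, arbitrarily chosen joint PMF and confirms that the two sides agree.

```python
import math

def H(pmf):
    """Entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# Arbitrary 2x2 joint PMF p_XY(x, y), used purely for illustration.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}

H_xy = H(p_xy.values())
H_x = H(p_x.values())
# H(Y|X) = sum_x p_X(x) H(Y | X = x)
H_y_given_x = sum(p_x[x] * H([p_xy[(x, y)] / p_x[x] for y in (0, 1)]) for x in (0, 1))

print(H_xy, H_x + H_y_given_x)  # the two values agree (chain rule)
```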
In addition to knowing how much our surprise about one random variable changes when we observe another, we can also quantify how much additional information observing one random variable gives us about the other.

Definition 43

For random variables $X$ and $Y$, the mutual information is given by

$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X).$$
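The sketch below reuses the same illustrative joint PMF and computes the mutual information both as $H(X) - H(X \mid Y)$ and as $H(Y) - H(Y \mid X)$, showing that the two expressions agree.

```python
import math

def H(pmf):
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# Same arbitrary 2x2 joint PMF as in the chain-rule example.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_x = {x: p_xy[(x, 0)] + p_xy[(x, 1)] for x in (0, 1)}
p_y = {y: p_xy[(0, y)] + p_xy[(1, y)] for y in (0, 1)}

def cond_entropy(joint, marginal, given_x):
    """H(Y|X) if given_x else H(X|Y), via sum_z p(z) H(other | z)."""
    total = 0.0
    for z, pz in marginal.items():
        conditional = [(joint[(z, w)] if given_x else joint[(w, z)]) / pz for w in (0, 1)]
        total += pz * H(conditional)
    return total

I_1 = H(p_x.values()) - cond_entropy(p_xy, p_y, given_x=False)  # H(X) - H(X|Y)
I_2 = H(p_y.values()) - cond_entropy(p_xy, p_x, given_x=True)   # H(Y) - H(Y|X)
print(I_1, I_2)  # both equal I(X; Y)
```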
Source Coding

Source coding deals with finding the minimal number of bits required to represent data. This is essentially the idea of lossless compression. In this case, our message is the sequence of realizations of independently and identically distributed random variables $(X_i)_{i=1}^n \sim p_X$. The probability of observing a particular sequence is then

$$P(x_1, x_2, \dots, x_n) = \prod_{i=1}^{n} p_X(x_i).$$
Theorem 24 (Asymptotic Equipartition Property)

If we have a sequence of independently and identically distributed random variables $(X_i)_{i=1}^n \sim p_X$, then $-\frac{1}{n} \log_2 P(X_1, X_2, \dots, X_n)$ converges to $H(X)$ in probability.
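A small simulation makes the AEP tangible: for an arbitrary three-symbol source (the distribution and sample sizes below are illustrative choices), the per-symbol surprise of an i.i.d. sequence concentrates around $H(X)$ as $n$ grows.

```python
import math
import random

random.seed(0)

# Illustrative source: three symbols with probabilities 0.5, 0.25, 0.25.
symbols, probs = ['a', 'b', 'c'], [0.5, 0.25, 0.25]
H = -sum(p * math.log2(p) for p in probs)  # 1.5 bits

for n in [10, 100, 10_000]:
    seq = random.choices(symbols, weights=probs, k=n)
    # Per-symbol surprise: -(1/n) log2 P(x_1, ..., x_n)
    surprise = -sum(math.log2(probs[symbols.index(x)]) for x in seq) / n
    print(n, round(surprise, 3), "vs H(X) =", H)
```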
Theorem 24 tells us that with overwhelming probability, we will observe a sequence that is assigned probability roughly $2^{-nH(X)}$. Using this idea, we can define a subset of possible observed sequences that, in the limit, our observed sequence must belong to with overwhelming probability.
Definition 44

For a fixed $\epsilon > 0$ and each $n \ge 1$, the typical set is given by

$$A_{\epsilon}^{(n)} = \left\{ (x_1, \dots, x_n) : \left| -\frac{1}{n} \log_2 P(x_1, \dots, x_n) - H(X) \right| \le \epsilon \right\}.$$

Two important properties of the typical set are that $P\left((X_1, \dots, X_n) \in A_{\epsilon}^{(n)}\right) \to 1$ as $n \to \infty$ and that $\left|A_{\epsilon}^{(n)}\right| \le 2^{n(H(X) + \epsilon)}$.
The typical set gives us an easy way to do source coding. If I have $N$ total objects, then I only need $\log_2 N$ bits to represent each object, so I can define a simple protocol:

If $(x_i)_{i=1}^n \in A_{\epsilon/2}^{(n)}$, then describe them using $\log_2 \left|A_{\epsilon/2}^{(n)}\right| \le n\left(H(X) + \frac{\epsilon}{2}\right)$ bits (using the second property above with $\frac{\epsilon}{2}$ in place of $\epsilon$).

If $(x_i)_{i=1}^n \notin A_{\epsilon/2}^{(n)}$, then describe them naively with $n \log_2 |\mathcal{X}|$ bits.
This makes the average number of bits required to describe a message

$$\mathbb{E}[\#\text{ of bits}] \le n\left(H(X) + \tfrac{\epsilon}{2}\right) P\left((X_i)_{i=1}^n \in A_{\epsilon/2}^{(n)}\right) + n \log_2 |\mathcal{X}| \, P\left((X_i)_{i=1}^n \notin A_{\epsilon/2}^{(n)}\right) \le n\left(H(X) + \tfrac{\epsilon}{2}\right) + n\tfrac{\epsilon}{2} \le n\left(H(X) + \epsilon\right),$$

where the second inequality holds for $n$ sufficiently large since $P\left((X_i)_{i=1}^n \notin A_{\epsilon/2}^{(n)}\right) \to 0$ by the first property of the typical set.
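The sketch below illustrates this protocol for a toy binary source with arbitrarily chosen $\epsilon$ and a very small $n$ (small enough to enumerate the typical set directly); like the analysis above, it ignores the extra flag bit needed to tell the two cases apart.

```python
import math
from itertools import product

# Toy binary source; the distribution, epsilon, and n are illustrative choices.
probs = {0: 0.8, 1: 0.2}
H = -sum(p * math.log2(p) for p in probs.values())
eps, n = 0.4, 12

def surprise_rate(seq):
    """-(1/n) log2 P(x_1, ..., x_n) for an i.i.d. sequence."""
    return -sum(math.log2(probs[x]) for x in seq) / n

# Enumerate A_{eps/2}^{(n)} explicitly (only feasible because n is tiny).
typical = [s for s in product(probs, repeat=n) if abs(surprise_rate(s) - H) <= eps / 2]

bits_typical = math.ceil(math.log2(len(typical)))  # index into the typical set
bits_naive = n * math.ceil(math.log2(len(probs)))  # naive description
p_typical = sum(math.prod(probs[x] for x in s) for s in typical)

# Average description length; for such a small n the typical set still misses a
# lot of probability, but the bound n(H(X) + eps) already holds.
avg_bits = p_typical * bits_typical + (1 - p_typical) * bits_naive
print(f"n(H(X) + eps) = {n * (H + eps):.2f}")
print(f"P(typical) = {p_typical:.3f}, average bits = {avg_bits:.2f}")
```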
This is the first half of a central result of source coding.

Theorem 25 (Source Coding Theorem)

If $(X_i)_{i=1}^n \sim p_X$ is a sequence of independently and identically distributed random variables, then for any $\epsilon > 0$ and $n$ sufficiently large, we can represent $(X_i)_{i=1}^n$ using fewer than $n(H(X) + \epsilon)$ bits. Conversely, we cannot losslessly represent $(X_i)_{i=1}^n$ using fewer than $nH(X)$ bits.
This lends a new interpretation of the entropy H(X): it is the average number of bits required to represent X.
Channel Coding

Whereas source coding deals with encoding information, channel coding deals with transmitting it over a noisy channel. In general, we have a message $M$, an encoder, a channel, and a decoder as in Figure 1.

Each channel can be described by a conditional probability distribution $p_{Y \mid X}(y \mid x)$ for each time the channel is used.
Definition 45

For a channel described by $p_{Y \mid X}$, the capacity is given by

$$C = \max_{p_X} I(X; Y).$$

In words, the capacity describes the maximum mutual information between the channel input and output, maximized over all choices of the input distribution $p_X$.
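As an illustration, the sketch below numerically maximizes $I(X; Y)$ over the input distribution for a binary symmetric channel with an assumed crossover probability of $0.1$, and compares the result with the known closed form $1 - H_b(0.1)$.

```python
import math

def Hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

crossover = 0.1  # assumed flip probability of the binary symmetric channel

def mutual_information(q):
    """I(X; Y) for the BSC when P(X = 1) = q, via H(Y) - H(Y|X)."""
    p_y1 = q * (1 - crossover) + (1 - q) * crossover
    return Hb(p_y1) - Hb(crossover)

# Crude grid search over input distributions p_X.
best_q = max((i / 1000 for i in range(1001)), key=mutual_information)
print(best_q, mutual_information(best_q))  # ~0.5, ~0.531
print(1 - Hb(crossover))                   # closed form: 1 - H_b(0.1)
```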
Suppose we use the channel $n$ times to send a message $M$ that takes $H(M)$ bits to encode on average; then the rate of the communication scheme is

$$R = \frac{H(M)}{n} \text{ bits per channel use.}$$
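For a concrete (purely illustrative) example, if the message is drawn uniformly from $2^{10}$ possibilities, then $H(M) = 10$ bits, and sending it over $n = 20$ uses of the channel gives a rate of $R = 10/20 = 0.5$ bits per channel use.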
Theorem 26 (Channel Coding Theorem)

For a channel described by $p_{Y \mid X}$, any $\epsilon > 0$, and any rate $R < C$, for all $n$ sufficiently large, there exists a rate $R$ communication scheme that achieves a probability of error less than $\epsilon$. If $R > C$, then the probability of error converges to 1 for any communication scheme.