In real life, for the most part, we can’t compute probabilities in closed form. Instead, we either bound them, or we want to show that P(A)≈0 or P(A)≈1.
Theorem 17 (Markov’s Inequality)
For a non-negative random variable X,
Pr{X ≥ t} ≤ E[X]/t,  t > 0.
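As a quick numerical sanity check, the sketch below compares a Monte Carlo estimate of Pr{X ≥ t} with the Markov bound E[X]/t; the Exponential(1) distribution, the thresholds, and the sample size are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) is non-negative with E[X] = 1, so Markov's inequality applies.
# The distribution and sample size here are illustrative choices.
samples = rng.exponential(scale=1.0, size=1_000_000)
mean = samples.mean()

for t in [1.0, 2.0, 4.0, 8.0]:
    empirical = np.mean(samples >= t)  # Monte Carlo estimate of Pr{X >= t}
    markov = mean / t                  # Markov bound E[X]/t
    print(f"t={t}: Pr{{X >= t}} ~ {empirical:.4f} <= Markov bound {markov:.4f}")
```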
Theorem 18 (Chebyshev’s Inequality)
If X is a random variable with finite variance, then
Pr{|X - E[X]| ≥ t} ≤ Var(X)/t^2,  t > 0.
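A similar sketch, again with purely illustrative choices (a standard normal X and a few thresholds), shows that the Chebyshev bound Var(X)/t^2 holds but can be fairly loose.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard normal: E[X] = 0 and Var(X) = 1 (an illustrative choice).
samples = rng.standard_normal(1_000_000)
mu, var = samples.mean(), samples.var()

for t in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(samples - mu) >= t)  # Pr{|X - E[X]| >= t}
    chebyshev = var / t**2                          # Chebyshev bound Var(X)/t^2
    print(f"t={t}: deviation prob ~ {empirical:.4f} <= Chebyshev bound {chebyshev:.4f}")
```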
Intuitively, Theorem 18 gives a “better” bound than Theorem 17 because it incorporates the variance of the random variable. Using this idea, we can define an even better bound that incorporates information from all moments of the random variable.
Definition 36 (Chernoff Bound)
For a random variable X and a ∈ ℝ,
Pr{X ≥ a} ≤ E[e^{tX}]/e^{ta} = e^{-ta} M_X(t),  t > 0.
After computing the Chernoff bound for a general t > 0, we can then optimize over t to obtain the tightest possible bound.
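To make the optimization step concrete, the sketch below evaluates the Chernoff bound e^{-ta} M_X(t) on a grid of t > 0 and keeps the smallest value, taking X to be standard normal so that M_X(t) = e^{t^2/2} is available in closed form; the grid and the distribution are illustrative choices only.

```python
import math

import numpy as np

def chernoff_bound(a, ts):
    # For X ~ N(0, 1) the MGF is M_X(t) = exp(t^2 / 2), so the Chernoff bound is
    # exp(-t * a) * exp(t^2 / 2); we minimize it over the grid of t > 0.
    values = np.exp(-ts * a + ts**2 / 2)
    return values.min(), ts[values.argmin()]

ts = np.linspace(0.01, 10.0, 2000)  # grid of candidate t values (illustrative)
for a in [1.0, 2.0, 3.0]:
    bound, t_star = chernoff_bound(a, ts)
    exact = 0.5 * math.erfc(a / math.sqrt(2))  # exact Pr{X >= a} for a standard normal
    print(f"a={a}: best t ~ {t_star:.2f}, Chernoff bound {bound:.4f}, exact tail {exact:.4f}")
```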
Convergence
The idea of convergence brings the mathematical language of limits into probability. The fundamental question we want to answer is: given random variables X_1, X_2, ⋯, what does it mean to compute
lim_{n→∞} X_n?
This question is not as straightforward as it seems because random variables are functions, and there are many ways to define the convergence of functions.
Definition 37 (Almost Sure Convergence)
A sequence of random variables X_1, X_2, ⋯ converges almost surely to X if
P(lim_{n→∞} X_n = X) = 1.
One result of almost sure convergence deals with deviations around the mean of many samples.
Theorem 19 (Strong Law of Large Numbers)
If X_1, X_2, ⋯, X_n are independently and identically distributed according to X with E[|X|] < ∞, then (1/n)∑_i X_i converges almost surely to E[X].
The strong law tells us that for almost every observed realization and any ϵ > 0, there is a point after which the sample mean deviates from E[X] by less than ϵ.
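A simulation can make this concrete: the sketch below draws one long realization of iid Exponential(1) samples (an arbitrary choice with E[X] = 1) and prints the running mean (1/n)∑_i X_i at a few checkpoints along that single path.

```python
import numpy as np

rng = np.random.default_rng(2)

# One realization of X_1, X_2, ... with X ~ Exponential(1), so E[X] = 1.
samples = rng.exponential(scale=1.0, size=1_000_000)
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)

# Along this single sample path, the running mean should settle near E[X] = 1.
for n in [10, 100, 10_000, 1_000_000]:
    print(f"n={n}: running mean = {running_mean[n - 1]:.4f}")
```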
Definition 38 (Convergence in Probability)
A sequence of random variables X_1, X_2, ⋯ converges in probability to X if
∀ϵ > 0,  lim_{n→∞} P(|X_n - X| > ϵ) = 0.
Convergence in probability helps us formalize the intuition that probability is the frequency with which an event happens over many trials.
Theorem 20 (Weak Law of Large Numbers)
Let X_1, X_2, ⋯, X_n be independently and identically distributed according to X, and let M_n = (1/n)∑_i X_i. Then for any ϵ > 0,
lim_{n→∞} Pr{|M_n - E[X]| > ϵ} = 0.
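To see the weak law as a statement about probabilities rather than about individual realizations, the sketch below estimates Pr{|M_n - E[X]| > ϵ} by simulating many independent copies of M_n for increasing n; the Uniform(0, 1) distribution, ϵ = 0.05, and the trial count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.05
trials = 2000

# X ~ Uniform(0, 1), so E[X] = 0.5; M_n is the mean of n iid samples.
for n in [10, 100, 1000, 10_000]:
    means = np.array([rng.uniform(0.0, 1.0, n).mean() for _ in range(trials)])
    deviation_prob = np.mean(np.abs(means - 0.5) > eps)  # estimate of Pr{|M_n - E[X]| > eps}
    print(f"n={n}: estimated Pr{{|M_n - 0.5| > {eps}}} = {deviation_prob:.4f}")
```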
The weak law tells us that the probability of a deviation of more than ϵ from the true mean goes to 0 in the limit, although we can still observe such deviations along the way. Nevertheless, it helps us formalize our intuition about probability. If X_1, X_2, ⋯, X_n are independently and identically distributed according to X, then we can define the empirical frequency of an event {X ∈ B} as
F_n = (1/n)∑_i 1{X_i ∈ B}, so that E[F_n] = P(X ∈ B).
By Theorem 20,
lim_{n→∞} Pr{|F_n - P(X ∈ B)| > ϵ} = 0,
meaning that over many trials, the empirical frequency approaches the probability of the event, matching our intuition.
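As an illustration, taking X to be standard normal and B = [-1, 1] (both arbitrary choices), the empirical frequency F_n can be compared directly with P(X ∈ B).

```python
import math

import numpy as np

rng = np.random.default_rng(4)

# For X ~ N(0, 1) and B = [-1, 1], P(X in B) = Phi(1) - Phi(-1) = erf(1 / sqrt(2)).
p_true = math.erf(1.0 / math.sqrt(2.0))

for n in [100, 10_000, 1_000_000]:
    x = rng.standard_normal(n)
    f_n = np.mean((x >= -1.0) & (x <= 1.0))  # empirical frequency F_n = (1/n) sum 1{X_i in B}
    print(f"n={n}: F_n = {f_n:.4f}, P(X in B) = {p_true:.4f}")
```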
Definition 39 (Convergence in Distribution)
A sequence of random variables X_1, X_2, ⋯ converges in distribution to X if, at every point x where F_X is continuous,
lim_{n→∞} F_{X_n}(x) = F_X(x).
An example of convergence in distribution is the central limit theorem.
Theorem 21 (Central Limit Theorem)
If X_1, X_2, ⋯ are independently and identically distributed according to X with Var(X) = σ^2 and E[X] = μ, then
lim_{n→∞} P( (∑_{i=1}^n X_i - nμ)/(σ√n) ≤ x ) = Φ(x),
where Φ is the CDF of the standard normal distribution.
In other words, the standardized sums converge in distribution to a standard normal random variable; equivalently, for large n, the sample mean is approximately normal with mean μ and variance σ^2/n.
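The sketch below checks this numerically by simulating many standardized sums for an Exponential(1) distribution, where μ = σ = 1 (a convenient illustrative choice), and comparing the empirical CDF at a few points with Φ(x).

```python
import math

import numpy as np

rng = np.random.default_rng(5)
n, trials = 500, 10_000  # number of summands and number of independent sums (illustrative)
mu, sigma = 1.0, 1.0     # mean and standard deviation of Exponential(1)

# Standardized sums (sum_i X_i - n*mu) / (sigma * sqrt(n)) over many independent trials.
sums = rng.exponential(scale=1.0, size=(trials, n)).sum(axis=1)
z = (sums - n * mu) / (sigma * math.sqrt(n))

for x in [-1.0, 0.0, 1.0, 2.0]:
    empirical = np.mean(z <= x)                   # fraction of standardized sums <= x
    phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF Phi(x)
    print(f"x={x}: empirical {empirical:.4f} vs Phi(x) {phi:.4f}")
```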
These notions of convergence are not equivalent. Almost sure convergence implies convergence in probability, and convergence in probability implies convergence in distribution, but the reverse implications do not hold in general.
Once we know how a random variable converges, we can also determine how functions of that random variable converge.
If f is a continuous function and X_n converges to X, then f(X_n) converges to f(X); the convergence can be almost sure, in probability, or in distribution. (This is the continuous mapping theorem.)
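As a small illustration with f(x) = x^2 and X ~ Uniform(0, 1) (both arbitrary choices): the sample mean M_n converges to E[X] = 0.5, so f(M_n) should settle near f(0.5) = 0.25.

```python
import numpy as np

rng = np.random.default_rng(6)

# M_n -> E[X] = 0.5 for X ~ Uniform(0, 1); applying the continuous map f(x) = x^2,
# f(M_n) should therefore converge to f(0.5) = 0.25.
for n in [10, 1000, 100_000]:
    m_n = rng.uniform(0.0, 1.0, n).mean()
    print(f"n={n}: f(M_n) = M_n^2 = {m_n**2:.4f} (target 0.25)")
```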