Concentration
In real life, for the most part, we can’t compute probabilities in closed form. Instead, we either bound them, or we show how they behave in the limit.
Intuitively, Theorem 18 gives a “better” bound than Theorem 17 because it incorporates the variance of the random variable. Using this idea, we can define an even better bound that incorporates information from all moments of the random variable.
After computing the Chernoff bound for a general $t > 0$, we can then optimize over $t$ to compute the best bound possible.
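To make the optimization step concrete, here is a minimal Python sketch (not from the original notes) that evaluates the Chernoff bound $\Pr(X \geq a) \leq e^{-ta}\,\mathbb{E}[e^{tX}] = e^{t^2/2 - ta}$ for a standard normal and optimizes it over $t$ numerically; the choice of a standard normal, the threshold $a = 3$, and the use of `scipy.optimize` are assumptions made for the illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_bound(t, a):
    """Chernoff bound exp(t^2/2 - t*a) for a standard normal at threshold a (illustrative)."""
    return np.exp(t**2 / 2 - t * a)

a = 3.0
# Optimize over t > 0 to get the tightest bound; analytically the optimum is t = a.
result = minimize_scalar(lambda t: chernoff_bound(t, a), bounds=(1e-6, 10), method="bounded")
print(f"optimal t ~= {result.x:.3f}, best bound ~= {result.fun:.4e}")
# For a = 3 this recovers the Gaussian tail bound exp(-a^2 / 2) ~= 1.11e-2.
```

In this example the optimum is attained at $t = a$, recovering the familiar Gaussian tail bound $e^{-a^2/2}$.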
This question is not as straightforward as it seems because random variables are functions, and there are many ways to define the convergence of functions.
One result concerning almost sure convergence deals with how the average of many samples deviates from the true mean.
The strong law tells us that for almost every observed realization and any fixed tolerance, there is a point after which the sample mean never deviates from the true mean by more than that tolerance.
Convergence in probability helps us formalize the intuition that probability is the frequency with which an event occurs over many trials.
By Theorem 20, the empirical frequency converges in probability to the probability of the event, meaning that over many trials the observed frequency approaches the true probability, matching our intuition.
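As an illustration (my own example, not part of the notes), the following sketch simulates repeated flips of a biased coin with $\Pr(\text{heads}) = 0.3$ and tracks the empirical frequency of heads, which settles near the true probability as the number of trials grows.

```python
import numpy as np

# Illustrative simulation: empirical frequency of the event "heads" for a biased coin.
rng = np.random.default_rng(0)
p = 0.3
flips = rng.random(100_000) < p                          # indicator of the event on each trial
freqs = np.cumsum(flips) / np.arange(1, len(flips) + 1)  # running empirical frequency

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: empirical frequency = {freqs[n - 1]:.4f}")
# The printed frequencies settle around p = 0.3 as n grows.
```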
An example of convergence in distribution is the central limit theorem.
These notions of convergence are not identical, and they do not necessarily imply each other. It is true that almost sure convergence implies convergence in probability, and convergence in probability implies convergence in distribution, but the implication is only one way.
Once we know how a random variable converges, we can then also find how functions of that random variable converge.
The idea of convergence brings the mathematical language of limits into probability. The fundamental question we want to answer is: given random variables $X_1, X_2, \ldots$, what does it mean to compute $\lim_{n \to \infty} X_n$?
A sequence of random variables $X_1, X_2, \ldots$ converges almost surely to $X$ if
$$\Pr\left(\lim_{n \to \infty} X_n = X\right) = 1.$$
If $X_1, X_2, \ldots$ are independently and identically distributed according to $X$, where $\mathbb{E}[|X|] < \infty$, then $\frac{1}{n}\sum_{i=1}^{n} X_i$ converges almost surely to $\mathbb{E}[X]$.
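A quick way to see this behavior is to look at a single realization of running sample means. The sketch below (my own example) uses i.i.d. Exponential(1) samples, whose true mean is 1; the distribution and seed are arbitrary choices.

```python
import numpy as np

# Illustrative sketch: one realization of running sample means of i.i.d. Exponential(1)
# variables, whose true mean is 1.
rng = np.random.default_rng(1)
samples = rng.exponential(scale=1.0, size=1_000_000)
running_means = np.cumsum(samples) / np.arange(1, samples.size + 1)

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: sample mean = {running_means[n - 1]:.4f}")
# Along this one trajectory the running mean settles at the true mean 1,
# as the strong law predicts for almost every realization.
```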
Let $X_1, X_2, \ldots$ be independently and identically distributed according to $X$, and let $S_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then for any $\epsilon > 0$,
$$\lim_{n \to \infty} \Pr\left(|S_n - \mathbb{E}[X]| \geq \epsilon\right) = 0.$$
It tells us that the probability of a deviation of $\epsilon$ from the true mean will go to 0 in the limit, but we can still observe these deviations. Nevertheless, the weak law helps us formalize our intuition about probability. If $X_1, X_2, \ldots$ are independently and identically distributed according to $X$, then we can define the empirical frequency of an event $A$ as $\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \in A\}$.
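To see the weak law numerically, the following sketch (my own example) estimates the deviation probability $\Pr(|S_n - \mathbb{E}[X]| \geq \epsilon)$ for Uniform(0, 1) samples by repeating the experiment many times for each $n$; the tolerance $\epsilon = 0.01$ and the number of repetitions are arbitrary choices.

```python
import numpy as np

# Illustrative sketch: estimate P(|S_n - E[X]| >= eps) for Uniform(0, 1) samples
# (true mean 0.5) by repeating the experiment `trials` times for each n.
rng = np.random.default_rng(2)
eps, trials = 0.01, 1_000

for n in (100, 1_000, 10_000):
    means = rng.random((trials, n)).mean(axis=1)          # one S_n per repetition
    deviation_prob = np.mean(np.abs(means - 0.5) >= eps)  # fraction of large deviations
    print(f"n = {n:>6}: estimated P(|S_n - 0.5| >= {eps}) = {deviation_prob:.3f}")
# The estimates shrink toward 0 as n grows, as the weak law predicts.
```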
If $X_1, X_2, \ldots$ are independently and identically distributed according to $X$ with $\mathbb{E}[X] = \mu$ and $\mathrm{Var}(X) = \sigma^2 < \infty$, then
$$\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right) \xrightarrow{d} \mathcal{N}(0, \sigma^2).$$
In other words, the sequence of random variables $\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right)$ converges in distribution to a normal distribution with variance $\sigma^2$ and mean $0$.
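As a numerical check (my own example, not from the notes), the sketch below standardizes sample means of i.i.d. Exponential(1) variables, for which $\mu = 1$ and $\sigma = 1$, and compares a few empirical quantiles against the standard normal.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: standardized means sqrt(n) * (mean - mu) of i.i.d.
# Exponential(1) samples (mu = 1, sigma = 1), compared to a standard normal.
rng = np.random.default_rng(3)
n, trials = 1_000, 10_000
samples = rng.exponential(scale=1.0, size=(trials, n))
standardized = np.sqrt(n) * (samples.mean(axis=1) - 1.0)

# A few empirical quantiles against the corresponding normal quantiles.
for q in (0.05, 0.5, 0.95):
    print(f"quantile {q}: empirical = {np.quantile(standardized, q):+.3f}, "
          f"normal = {stats.norm.ppf(q):+.3f}")
```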
If $g$ is a continuous function and $X_n$ converges to $X$, then $g(X_n)$ converges to $g(X)$. The convergence can be almost surely, in probability, or in distribution.
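For instance (an illustration of my own), running means of Uniform(0, 1) samples converge almost surely to 0.5, so the continuous function $g(x) = e^x$ applied to them converges almost surely to $e^{0.5}$; the sketch below checks this numerically.

```python
import numpy as np

# Illustrative sketch: continuous mapping applied to running means of Uniform(0, 1)
# samples, which converge almost surely to 0.5, so exp(mean) converges to exp(0.5).
rng = np.random.default_rng(4)
samples = rng.random(1_000_000)
running_means = np.cumsum(samples) / np.arange(1, samples.size + 1)

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: exp(mean) = {np.exp(running_means[n - 1]):.4f}")
print(f"target: exp(0.5) = {np.exp(0.5):.4f}")
```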