Concentration

In real life, for the most part, we can’t compute probabilities in closed form. Instead, we either bound them or show that $P(A) \approx 0$ or $P(A) \approx 1$.

Concentration Inequalities

Theorem 17 (Markov's Inequality)

For a non-negative random variable $X$,

$$\text{Pr}\left\{X \geq t\right\} \leq \frac{\mathbb{E}\left[X\right]}{t}, \quad t > 0.$$

Theorem 18 (Chebyshev's Inequality)

If $X$ is a random variable with finite variance, then for any $t > 0$,

$$\text{Pr}\left\{|X - \mathbb{E}\left[X\right]| \geq t\right\} \leq \frac{\text{Var}\left(X\right)}{t^2}.$$

Intuitively, Theorem 18 gives a “better” bound than Theorem 17 because it incorporates the variance of the random variable. Using this idea, we can define an even better bound that incorporates information from all moments of the random variable.
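As a rough numerical illustration (a hypothetical setup, not part of the original notes: take $X \sim \text{Exponential}(1)$, so $\mathbb{E}[X] = \text{Var}(X) = 1$), the sketch below compares the exact tail probability with the Markov and Chebyshev bounds; for large deviations Chebyshev is noticeably tighter.

```python
import numpy as np

# Hypothetical example: X ~ Exponential(1), so E[X] = 1 and Var(X) = 1.
# Compare the exact tail probability with the Markov and Chebyshev bounds.
mean, var = 1.0, 1.0
for t in [2.0, 4.0, 8.0]:
    exact = np.exp(-t)                     # P(X >= t) for Exponential(1)
    markov = mean / t                      # Markov: E[X] / t
    chebyshev = var / (t - mean) ** 2      # Chebyshev via P(X >= t) <= P(|X - E[X]| >= t - E[X])
    print(f"t={t}: exact={exact:.4f}, Markov={markov:.4f}, Chebyshev={chebyshev:.4f}")
```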

Definition 36 (Chernoff Bound)

For a random variable $X$, $a \in \mathbb{R}$, and any $t > 0$,

$$\text{Pr}\left\{X \geq a\right\} \leq \frac{\mathbb{E}\left[e^{tX}\right]}{e^{ta}} = e^{-ta}M_X(t).$$

After computing the Chernoff bound for a general $t > 0$, we can then optimize over $t$ to obtain the tightest possible bound.
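As a standard worked example (assuming $X \sim \mathcal{N}(0, \sigma^2)$; this particular case is not from the notes above), we have $M_X(t) = e^{\sigma^2 t^2 / 2}$, so the Chernoff bound reads

$$\text{Pr}\left\{X \geq a\right\} \leq e^{-ta + \sigma^2 t^2 / 2}.$$

Minimizing the exponent $-ta + \sigma^2 t^2/2$ over $t > 0$ gives $t^* = a/\sigma^2$ (for $a > 0$), and substituting back yields

$$\text{Pr}\left\{X \geq a\right\} \leq e^{-a^2/(2\sigma^2)},$$

which decays far faster than the polynomial rates given by Markov or Chebyshev.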

Convergence

The idea of convergence brings the mathematical language of limits into probability. The fundamental question we want to answer is: given random variables $X_1, X_2, \cdots$, what does it mean to compute

$$\lim_{n\to\infty}X_n.$$

This question is not as straightforward as it seems because random variables are functions, and there are many ways to define the convergence of functions.

Definition 37 (Almost Sure Convergence)

A sequence of random variables $X_n$ converges almost surely to $X$ if

$$P\left(\lim_{n\to \infty}X_n = X\right) = 1.$$

One result of almost sure convergence deals with deviations around the mean of many samples.

Theorem 19 (Strong Law of Large Numbers)

If $X_1, X_2, \cdots, X_n$ are independently and identically distributed according to $X$ where $\mathbb{E}\left[|X|\right] < \infty$, then $\frac{1}{n}\sum_i X_i$ converges almost surely to $\mathbb{E}\left[X\right]$.

The strong law tells us that for almost every observed realization and any $\epsilon > 0$, there is a point after which the sample mean never deviates from the true mean by more than $\epsilon$.
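A minimal simulation sketch of this behavior (assuming NumPy is available, and using i.i.d. Bernoulli(0.3) samples as a hypothetical choice of $X$, so $\mathbb{E}[X] = 0.3$):

```python
import numpy as np

# Hypothetical illustration of the strong law: the running mean of i.i.d.
# Bernoulli(0.3) samples settles near E[X] = 0.3 as n grows.
rng = np.random.default_rng(0)
samples = rng.binomial(1, 0.3, size=100_000)
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)
for n in [100, 1_000, 10_000, 100_000]:
    print(f"n={n}: running mean = {running_mean[n - 1]:.4f}")
```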

Definition 38 (Convergence in Probability)

A sequence of random variables $X_n$ converges in probability to $X$ if

$$\forall \epsilon > 0, \quad \lim_{n\to\infty}P(|X_n - X| > \epsilon) = 0.$$

Convergence in probability helps us formalize the intuition that probability is the frequency with which an event happens over many trials.

Theorem 20 (Weak Law of Large Numbers)

Let $X_1, X_2, \cdots, X_n$ be independently and identically distributed according to $X$, and let $M_n = \frac{1}{n}\sum X_i$. Then for $\epsilon > 0$,

$$\lim_{n\to\infty} \text{Pr}\left\{|M_n - \mathbb{E}\left[X\right]| > \epsilon\right\} = 0.$$

The weak law tells us that the probability of a deviation of $\epsilon$ from the true mean goes to 0 in the limit, but we can still observe such deviations. Nevertheless, it helps us formalize our intuition about probability. If $X_1, X_2, \cdots, X_n$ are independently and identically distributed according to $X$, then for an event $\{X \in B\}$ we can define the empirical frequency

$$F_n = \frac{\sum\mathbb{1}_{X_i\in B}}{n} \implies \mathbb{E}\left[F_n\right] = P(X \in B).$$

By Theorem 20,

$$\lim_{n\to\infty}\text{Pr}\left\{|F_n - P(X\in B)| > \epsilon\right\} = 0,$$

meaning that over many trials, the empirical frequency approaches the probability of the event, matching our intuition.
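A quick sketch of this intuition (a hypothetical choice, not from the notes: $X$ standard normal and $B = (1, \infty)$, so $P(X \in B) = 1 - \Phi(1) \approx 0.159$):

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical illustration: the empirical frequency of the event B = {X > 1}
# for standard normal samples approaches P(X > 1) = 1 - Phi(1) ≈ 0.1587.
rng = np.random.default_rng(1)
true_prob = 0.5 * erfc(1 / sqrt(2))       # P(X > 1) for a standard normal
for n in [100, 10_000, 1_000_000]:
    x = rng.standard_normal(n)
    f_n = np.mean(x > 1.0)                # empirical frequency F_n
    print(f"n={n}: F_n = {f_n:.4f}, |F_n - P(X in B)| = {abs(f_n - true_prob):.4f}")
```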

Definition 39 (Convergence in Distribution)

A sequence of random variables $X_n$ converges in distribution to $X$ if, at every point $x$ where $F_X$ is continuous,

$$\lim_{n\to\infty}F_{X_n}(x) = F_X(x).$$

An example of convergence in distribution is the central limit theorem.

Theorem 21 (Central Limit Theorem)

If $X_1, X_2, \cdots$ are independently and identically distributed according to $X$ with $\text{Var}\left(X\right) = \sigma^2$ and $\mathbb{E}\left[X\right] = \mu$, then

$$\lim_{n\to\infty}P\left(\frac{\sum_{i=1}^nX_i - n\mu}{\sigma\sqrt{n}} \leq x\right) = \Phi(x).$$

In other words, the standardized sum converges in distribution to a standard normal random variable; informally, for large $n$, the sum $\sum_{i=1}^n X_i$ behaves approximately like a normal random variable with mean $n\mu$ and variance $n\sigma^2$.
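A small simulation sketch (assuming NumPy, and using i.i.d. Uniform(0, 1) samples as a hypothetical $X$, so $\mu = 1/2$ and $\sigma^2 = 1/12$) comparing the empirical CDF of the standardized sum with $\Phi(x)$ at a few points:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical CLT illustration: standardized sums of Uniform(0, 1) samples
# (mu = 1/2, sigma^2 = 1/12) compared against the standard normal CDF Phi(x).
rng = np.random.default_rng(2)
n, trials = 500, 20_000
mu, sigma = 0.5, sqrt(1 / 12)
sums = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)
z = (sums - n * mu) / (sigma * sqrt(n))   # standardized sums
for x in [-1.0, 0.0, 1.0]:
    phi = 0.5 * (1 + erf(x / sqrt(2)))    # Phi(x)
    print(f"x={x}: empirical={np.mean(z <= x):.4f}, Phi(x)={phi:.4f}")
```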

These notions of convergence are not identical, and they do not necessarily imply each other. It is true that almost sure convergence implies convergence in probability, and convergence in probability implies convergence in distribution, but the implication is only one way.

Once we know how a random variable converges, we can then also find how functions of that random variable converge.

Theorem 22 (Continuous Mapping Theorem)

If $f$ is a continuous function and $X_n$ converges to $X$, then $f(X_n)$ converges to $f(X)$. The convergence can be almost sure, in probability, or in distribution.
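For instance (a hypothetical example, taking $f(x) = x^2$ and $X_n$ to be the sample mean of i.i.d. Bernoulli(0.3) samples, which converges in probability to $0.3$), the sketch below checks that $f(X_n)$ approaches $f(0.3) = 0.09$:

```python
import numpy as np

# Hypothetical continuous-mapping illustration: if the sample mean M_n -> 0.3
# in probability, then f(M_n) = M_n^2 -> f(0.3) = 0.09 for f(x) = x^2.
rng = np.random.default_rng(3)
for n in [100, 10_000, 1_000_000]:
    m_n = rng.binomial(1, 0.3, size=n).mean()
    print(f"n={n}: M_n^2 = {m_n ** 2:.5f} (target 0.09)")
```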
