A random variable is a function X:Ω→R with the property that ∀α∈R, {ω∈Ω:X(ω)≤α}∈F.
The condition in Definition 5 is necessary to compute P(X≤α), ∀α∈R. This requirement also lets us compute P(X∈B) for most sets B by leveraging the fact that F is closed under complements, unions, and intersections. For example, we can compute P(X>α) and P(α<X≤β). In this sense, the property binds the probability space to the random variable.
Definition 5 also implies that random variables satisfy particular algebraic properties. For example, if X and Y are random variables, then so are $X+Y$, $XY$, $X^p$, $\lim_{n\to\infty} X_n$, etc.
Definition 6
A discrete random variable is a random variable whose codomain is countable.
Definition 7
A continuous random variable is a random variable whose codomain is the real numbers.
Although random variables are defined based on a probability space, it is often most natural to model problems without explicitly specifying the probability space. This works so long as we specify the random variables and their distribution in a “consistent” way. This is formalized by the so-called Kolmogorov Extension Theorem but can largely be ignored.
Distributions
Roughly speaking, the distribution of a random variable gives an idea of the likelihood that a random variable takes a particular value or set of values.
Definition 8
The probability mass function (or distribution) of a discrete random variable X is the frequency with which X takes on different values.
$p_X: \mathcal{X} \to [0,1]$ where $\mathcal{X} = \operatorname{range}(X)$ and $p_X(x) = \Pr\{X = x\}$.
Note that $\sum_{x \in \mathcal{X}} p_X(x) = 1$ since the events $\{\omega : X(\omega) = x\}$ are disjoint and $\bigcup_{x \in \mathcal{X}} \{\omega : X(\omega) = x\} = \Omega$.
Continuous random variables are largely similar to discrete random variables. One key difference is that instead of being described by a probability “mass”, they are instead described by a probability “density”.
Definition 9
The probability density function (distribution) of a continuous random variable describes the density with which the random variable takes particular values.
$$f_X: \mathbb{R} \to [0, \infty) \quad \text{where} \quad \int_{-\infty}^{\infty} f_X(x)\,dx = 1 \quad \text{and} \quad \Pr\{X \in B\} = \int_B f_X(x)\,dx$$
Observe that if a random variable X is continuous, then the probability that it takes on a particular value is zero.
Definition 10
The cumulative distribution function (CDF) gives us the probability of a random variable X being less than or equal to a particular value.
$$F_X: \mathbb{R} \to [0,1], \qquad F_X(x) = \Pr\{X \le x\}$$
Note that by the Kolmogorov axioms, FX must satisfy three properties:
FX is non-decreasing.
$\lim_{x \to -\infty} F_X(x) = 0$ and $\lim_{x \to \infty} F_X(x) = 1$.
FX is right continuous.
It turns out that if we have any function FX that satisfies these three properties, then it is the CDF of some random variable on some probability space. Note that FX(x) gives us an alternative way to define continuous random variables. If FX(x) is absolutely continuous, then it can be expressed as
$$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du$$
for some non-negative function fX(x), and this is the PDF of a continuous random variable.
Often, when modeling problems, there are multiple random variables that we want to keep track of.
Definition 11
If X and Y are random variables on a common probability space (Ω,F,P), then the joint distribution (denoted $p_{XY}(x,y)$ or $f_{XY}(x,y)$) describes the frequencies of joint outcomes.
Note that it is possible for X to be continuous and Y to be discrete (or vice versa).
Definition 12
The marginal distribution of a joint distribution is the distribution of a single random variable.
$$p_X(x) = \sum_{y} p_{XY}(x, y), \qquad f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy$$
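As a concrete illustration, here is a minimal sketch (the joint PMF values are hypothetical, chosen only for the example) of marginalizing a joint PMF stored as a table:

```python
import numpy as np

# Hypothetical joint PMF p_XY(x, y): rows index x in {0, 1}, columns index y in {0, 1, 2}.
p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.25, 0.15, 0.20]])
assert np.isclose(p_XY.sum(), 1.0)  # a valid joint PMF sums to 1

# Marginalize: p_X(x) = sum_y p_XY(x, y) and p_Y(y) = sum_x p_XY(x, y).
p_X = p_XY.sum(axis=1)  # -> [0.40, 0.60]
p_Y = p_XY.sum(axis=0)  # -> [0.35, 0.35, 0.30]
print(p_X, p_Y)
```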
Definition 13
Two random variables X and Y are independent if their joint distribution is the product of the marginal distributions.
Just as with independence, we can extend the notion of conditional probability to random variables.
Definition 14
The conditional distribution of X given Y captures the frequencies of X given that we know the value of Y: $p_{X|Y}(x \mid y) = \frac{p_{XY}(x,y)}{p_Y(y)}$, and analogously $f_{X|Y}(x \mid y) = \frac{f_{XY}(x,y)}{f_Y(y)}$ for densities.
Often, we need to combine or transform several random variables. A derived distribution is the distribution obtained by doing arithmetic on several random variables or by applying a function to one or more random variables. Since the CDF essentially characterizes a random variable, it is often easiest to work backwards from the CDF to the PDF or PMF. In the special case where Y = g(X) for a function g, we can compute $F_Y(y) = \Pr\{g(X) \le y\}$ and then differentiate to recover the PDF (or difference to recover the PMF).
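For instance, if X ∼ Uniform(0,1) and Y = X², the CDF method gives $F_Y(y) = \Pr\{X^2 \le y\} = \Pr\{X \le \sqrt{y}\} = \sqrt{y}$ for $y \in [0,1]$. The snippet below is a Monte Carlo sanity check of this derived CDF (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)
y = x ** 2  # derived random variable Y = g(X) with g(x) = x^2

# Compare the empirical CDF of Y to the analytic F_Y(y) = sqrt(y) at a few points.
for t in [0.1, 0.25, 0.5, 0.9]:
    print(t, np.mean(y <= t), np.sqrt(t))
```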
The expectation of a random variable describes the center of a distribution,
$$E[X] = \sum_{x \in \mathcal{X}} x\, p_X(x), \qquad E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx,$$
provided the sum or integral converges.
Expectation has several useful properties. If we want to compute the expectation of a function of a random variable, then we can use the law of the unconscious statistician.
Theorem 6 (Law of the Unconscious Statistician)
$$E[g(X)] = \sum_{x \in \mathcal{X}} g(x)\, p_X(x), \qquad E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$$
Another useful property is its linearity.
E[aX+bY]=aE[X]+bE[Y],∀a,b∈R.
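A quick numerical check of both facts, using a small hypothetical PMF (values chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PMF for X on {0, 1, 2}.
vals = np.array([0, 1, 2])
pmf = np.array([0.2, 0.5, 0.3])

# LOTUS: E[g(X)] = sum_x g(x) p_X(x), here with g(x) = x^2.
lotus = np.sum(vals ** 2 * pmf)  # 0*0.2 + 1*0.5 + 4*0.3 = 1.7
samples = rng.choice(vals, size=500_000, p=pmf)
print(lotus, np.mean(samples ** 2))  # the Monte Carlo estimate agrees

# Linearity: E[aX + bY] = aE[X] + bE[Y], even when X and Y are dependent (here Y = X).
a, b = 2.0, -3.0
print(np.mean(a * samples + b * samples), (a + b) * np.sum(vals * pmf))
```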
Sometimes it can be difficult to compute expectations directly. For discrete distributions, we can use the tail-sum formula.
Theorem 7 (Tail Sum)
For a non-negative integer random variable,
$$E[X] = \sum_{k=1}^{\infty} \Pr\{X \ge k\}.$$
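For example, if X ∼ Geom(p) then $\Pr\{X \ge k\} = (1-p)^{k-1}$, and the tail sum is the geometric series $\sum_{k \ge 1}(1-p)^{k-1} = 1/p$, which matches E[X]. A small numeric check (truncating the infinite sum) is sketched below:

```python
import numpy as np

p = 0.3
k = np.arange(1, 200)            # truncate the infinite sum; the remaining tail is negligible
tail_probs = (1 - p) ** (k - 1)  # Pr{X >= k} for X ~ Geom(p)
print(tail_probs.sum(), 1 / p)   # both approximately 3.333
```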
When two random variables are independent, expectation has some additional properties.
Theorem 8
If X and Y are independent, then
E[XY]=E[X]E[Y].
Earlier, we saw that we find a derived distribution by transforming and combining random variables. Sometimes, we don’t need to actually compute the distribution, but only some of its properties.
Definition 16
The nth moment of a random variable is $E[X^n]$.
It turns out that we can encode the moments of a distribution into the coefficients of a special power series.
Definition 17
The moment generating function of a random variable X is given by $M_X(t) = E[e^{tX}]$.
Notice that if we apply the power series expansion of $e^{tX}$, we see that
$$M_X(t) = \sum_{n=0}^{\infty} \frac{t^n}{n!} E[X^n].$$
Thus the nth moment is encoded in the coefficients of the power series, and we can retrieve it by differentiating and evaluating at $t = 0$:
$$E[X^n] = \frac{d^n}{dt^n} M_X(t) \Big|_{t=0}.$$
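As an example (not worked in these notes): for X ∼ Exp(λ), the MGF is $M_X(t) = \frac{\lambda}{\lambda - t}$ for $t < \lambda$, and differentiating at $t = 0$ recovers $E[X^n] = n!/\lambda^n$. A symbolic sketch:

```python
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
M = lam / (lam - t)  # MGF of X ~ Exp(lam), valid for t < lam

for n in range(1, 4):
    moment = sp.simplify(sp.diff(M, t, n).subs(t, 0))
    print(n, moment)  # 1/lam, 2/lam**2, 6/lam**3, i.e. n!/lam**n
```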
Another interesting point to notice is that for a continuous random variable
$$M_X(t) = \int_{-\infty}^{\infty} f_X(x) e^{tx}\,dx$$
is the (two-sided) Laplace transform of the distribution over the real line, evaluated at $-t$; and for a discrete random variable,
$$M_X(t) = \sum_{x=-\infty}^{\infty} p_X(x) e^{tx}$$
is the Z-transform of the distribution evaluated at $z = e^{-t}$.
Theorem 9
If the MGF of a random variable exists, then it uniquely determines the distribution.
This provides another way to compute the distribution of a sum of independent random variables, since the MGF of the sum is the product of the individual MGFs.
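As a worked example, the MGF of a Poisson(λ) random variable is $M_X(t) = e^{\lambda(e^t - 1)}$ (a standard fact, not derived here). For independent X ∼ Poisson(λ) and Y ∼ Poisson(μ),

$$M_{X+Y}(t) = E[e^{t(X+Y)}] = E[e^{tX}]E[e^{tY}] = e^{\lambda(e^t - 1)} e^{\mu(e^t - 1)} = e^{(\lambda + \mu)(e^t - 1)},$$

which is the MGF of a Poisson(λ + μ) random variable, consistent with the fact about Poisson sums stated later.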
Variance
Definition 18
The variance of a random variable X describes its spread around the expectation and is given by
$$\operatorname{Var}(X) = E[(X - E[X])^2] = E[X^2] - E[X]^2.$$
Theorem 10
When two random variables X and Y are independent, then
Var(X+Y)=Var(X)+Var(Y).
Definition 19
The covariance of two random variables describes how much they depend on each other and is given by
Cov(X,Y)=E[(X−E[X])(Y−E[Y])]=E[XY]−E[X]E[Y].
If Cov(X,Y)=0 then X and Y are uncorrelated.
Definition 20
The correlation coefficient gives a single number which describes how random variables are correlated.
$$\rho(X, Y) = \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}.$$
Note that −1≤ρ≤1.
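A minimal sketch estimating covariance and the correlation coefficient from samples (the linear-plus-noise model below is arbitrary and chosen so that the true values are Cov(X,Y) = ρ = 0.8):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = 0.8 * x + 0.6 * rng.normal(size=100_000)  # Var(Y) = 0.64 + 0.36 = 1, Cov(X,Y) = 0.8

cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)  # E[XY] - E[X]E[Y]
rho = cov_xy / np.sqrt(np.var(x) * np.var(y))      # Cov(X,Y) / sqrt(Var(X) Var(Y))
print(cov_xy, rho)  # both approximately 0.8
```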
Common Discrete Distributions
Definition 21
X is uniformly distributed when each value of X has equal probability.
Definition 22
X is a Bernoulli random variable if it takes the value 0 or 1, with $p_X(1) = p$.
$$X \sim \text{Bernoulli}(p) \implies p_X(x) = \begin{cases} 1 - p & x = 0, \\ p & x = 1, \\ 0 & \text{else.} \end{cases}$$
$$E[X] = p, \qquad \operatorname{Var}(X) = p(1 - p)$$
Bernoulli random variables are good for modeling things like a coin flip where there is a probability of success. They are frequently used as indicator random variables $\mathbb{1}_A$ where
$$\mathbb{1}_A = \begin{cases} 1 & \text{if } A \text{ occurs}, \\ 0 & \text{else.} \end{cases}$$
When paired with the linearity of expectation, this can be a powerful method of computing the expectation of something.
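A classic illustration (not from these notes): the expected number of fixed points of a uniformly random permutation of n items is $\sum_{i=1}^{n} E[\mathbb{1}\{\pi(i) = i\}] = n \cdot \frac{1}{n} = 1$ by linearity, even though the indicators are dependent. A quick simulation agrees:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 10, 100_000

fixed_point_counts = np.empty(trials)
for t in range(trials):
    perm = rng.permutation(n)
    fixed_point_counts[t] = np.sum(perm == np.arange(n))  # sum of indicators 1{perm[i] == i}

print(fixed_point_counts.mean())  # close to 1, matching the indicator argument
```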
Definition 23
A binomial random variable can be thought of as the number of successes in n independent trials. In other words,
$$X \sim \text{Binomial}(n, p) \implies X = \sum_{i=1}^{n} X_i, \quad \text{where the } X_i \text{ are i.i.d. } \text{Bernoulli}(p).$$
By construction, if X∼Binomial(n,p) and Y∼Binomial(m,p) are independent, then X+Y∼Binomial(m+n,p).
Definition 24
A Geometric random variable is distributed as
$$X \sim \text{Geom}(p) \implies p_X(x) = \begin{cases} p(1-p)^{x-1} & x = 1, 2, \cdots \\ 0 & \text{else.} \end{cases}$$
$$E[X] = \frac{1}{p}, \qquad \operatorname{Var}(X) = \frac{1-p}{p^2}$$
Geometric random variables are useful for modeling the number of trials needed until the first success. In other words,
$$X \sim \text{Geom}(p) \implies X = \min\{k \ge 1 : X_k = 1\}, \quad \text{where the } X_k \text{ are i.i.d. } \text{Bernoulli}(p).$$
A useful property of geometric random variables is that they are memoryless:
$$\Pr\{X = k + m \mid X > k\} = \Pr\{X = m\}.$$
Definition 25
A Poisson random variable is distributed as
$$X \sim \text{Poisson}(\lambda) \implies p_X(x) = \begin{cases} \frac{\lambda^x e^{-\lambda}}{x!} & x = 0, 1, \cdots \\ 0 & \text{else.} \end{cases}$$
$$E[X] = \lambda, \qquad \operatorname{Var}(X) = \lambda$$
Poisson random variables are good for modeling the number of arrivals in a given interval. Suppose you take a given time interval and divide it into n chunks, where the indicator of an arrival in chunk i is $X_i \sim \text{Bernoulli}(p_n)$. Then the total number of arrivals $X_n = \sum_{i=1}^{n} X_i$ is distributed as a binomial random variable with expectation $n p_n = \lambda$. As we increase n to infinity while keeping λ fixed, we arrive at the Poisson distribution.
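A numeric sketch of this limit (here $p_n = \lambda/n$ is assumed), comparing a single binomial probability to the corresponding Poisson probability as n grows:

```python
from math import comb, exp, factorial

lam, k = 4.0, 3  # compare Pr{X = 3} for lambda = 4
poisson_prob = exp(-lam) * lam ** k / factorial(k)

for n in [10, 100, 1_000, 10_000]:
    p_n = lam / n
    binom_prob = comb(n, k) * p_n ** k * (1 - p_n) ** (n - k)
    print(n, binom_prob, poisson_prob)  # the binomial probability approaches the Poisson one
```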
A useful fact about Poisson random variables is that if X∼Poisson(λ) and Y∼Poisson(μ) are independent, then X+Y∼Poisson(λ+μ).
Common Continuous Distributions
Definition 26
A continuous random variable is uniformly distributed when the pdf of X is constant over a range.
$$X \sim \text{Uniform}(a, b) \implies f_X(x) = \begin{cases} \frac{1}{b-a} & a \le x \le b, \\ 0 & \text{else.} \end{cases}$$
The CDF of a uniform distribution is given by
$$F_X(x) = \begin{cases} 0 & x < a, \\ \frac{x - a}{b - a} & x \in [a, b), \\ 1 & x \ge b. \end{cases}$$
Definition 27
A continuous random variable is exponentially distributed when its pdf is given by
$$X \sim \text{Exp}(\lambda) \implies f_X(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0, \\ 0 & \text{else.} \end{cases}$$
Exponential random variables are the only continuous random variables that have the memoryless property:
Pr{X>t+s∣X>s}=Pr{X>t},t≥0.
The CDF of the exponential distribution is given by
$$F_X(x) = \lambda \int_0^x e^{-\lambda u}\,du = 1 - e^{-\lambda x}, \qquad x \ge 0.$$
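A quick simulation (a sanity check, with λ, s, and t chosen arbitrarily) of both the CDF above and the memoryless property:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, s, t = 2.0, 0.5, 1.0
x = rng.exponential(scale=1 / lam, size=1_000_000)  # samples of X ~ Exp(lam)

# CDF check: Pr{X <= 1} vs 1 - e^{-lam}.
print(np.mean(x <= 1.0), 1 - np.exp(-lam))

# Memoryless check: Pr{X > t + s | X > s} vs Pr{X > t} = e^{-lam * t}.
cond = np.mean(x > t + s) / np.mean(x > s)
print(cond, np.mean(x > t), np.exp(-lam * t))
```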
Definition 28
X is a Gaussian Random Variable with mean μ and variance σ2 (denoted X∼N(μ,σ2)) if it has the PDF
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
The standard normal is X∼N(0,1), and it has the CDF
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{u^2}{2}}\,du$$
There is no closed form for Φ(x). It turns out that every normal random variable can be transformed into the standard normal (i.e., $\frac{X - \mu}{\sigma} \sim N(0, 1)$). Some facts about Gaussian random variables are
If X∼N(μx,σx2),Y∼N(μy,σy2) are independent, then X+Y∼N(μx+μy,σx2+σy2).
If X,Y are independent and (X+Y),(X−Y) are independent, then both X and Y are Gaussian with the same variance.
Jointly Gaussian Random Variables
Jointly Gaussian Random Variables, also known as Gaussian Vectors, can be defined in a variety of ways.
Definition 29
A Gaussian random vector $X = [X_1 \cdots X_n]^T$ with a density on $\mathbb{R}^n$, $\operatorname{Cov}(X) = \Sigma$, and $E[X] = \mu$ is defined by the pdf
$$f_X(x) = \frac{1}{\sqrt{(2\pi)^n \det(\Sigma)}} e^{-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)}$$
Definition 30
A jointly Gaussian random vector is an affine transformation of independent and identically distributed standard normals.
X=μ+AW
where $A = \Sigma^{1/2}$ is a full-rank matrix and $W$ is a vector of i.i.d. standard normals.
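A minimal sketch of this construction, using a Cholesky factor as the square root of Σ (any matrix A with $AA^T = \Sigma$ works; the particular μ and Σ below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

A = np.linalg.cholesky(Sigma)                # A @ A.T == Sigma
W = rng.standard_normal(size=(2, 500_000))   # i.i.d. standard normals
X = mu[:, None] + A @ W                      # X = mu + A W is N(mu, Sigma)

print(X.mean(axis=1))  # approximately mu
print(np.cov(X))       # approximately Sigma
```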
Definition 31
A random vector X is jointly Gaussian if all one-dimensional projections of X are Gaussian:
$$a^T X \sim N(a^T \mu,\ a^T \Sigma a)$$
In addition to their many definitions, jointly gaussian random variables also have interesting properties.
Theorem 11
If X and Y are jointly Gaussian random vectors, then
$$X = \mu_X + \Sigma_{XY} \Sigma_Y^{-1} (Y - \mu_Y) + V, \quad \text{where } V \sim N(0,\ \Sigma_X - \Sigma_{XY} \Sigma_Y^{-1} \Sigma_{YX}) \text{ is independent of } Y.$$
Theorem 11 tells us that each entry in a Gaussian vector can be thought of as a “noisy” version of the others.
Hilbert Spaces of Random Variables
One way to understand random variables is through linear algebra by thinking of them as vectors in a vector space.
Definition 32
A real inner product space is a vector space V over the real scalar field equipped with an inner product ⟨⋅,⋅⟩ that satisfies, ∀u,v,w∈V and a,b∈R,
⟨u,v⟩=⟨v,u⟩
⟨au+bv,w⟩=a⟨u,w⟩+b⟨v,w⟩
⟨u,u⟩≥0 and ⟨u,u⟩=0⇔u=0
Inner product spaces are equipped with the norm $\|v\| = \sqrt{\langle v, v \rangle}$.
Definition 33
A Hilbert Space is a real inner product space that is complete with respect to its norm.
Loosely, completeness means that we can take limits of Cauchy sequences without exiting the space. It turns out that random variables satisfy the definition of a Hilbert Space.
Theorem 12
Let (Ω,F,P) be a probability space. The collection of random variables X with $E[X^2] < \infty$ on this probability space forms a Hilbert Space with respect to the inner product ⟨X,Y⟩=E[XY].
Hilbert spaces are important because they provide a notion of geometry that is compatible with our intuition as well as the geometry of Rn (which is a Hilbert Space). One geometric idea is that of orthogonality. Two vectors are orthogonal if ⟨X,Y⟩=0. Two random variables will be orthogonal if they are zero-mean and uncorrelated. Using orthogonality, we can also define projections.
Theorem 13 (Hilbert Projection Theorem)
Let H be a Hilbert Space and U⊆H be a closed subspace. For each vector v∈H, $\operatorname{argmin}_{u \in U} \|u - v\|$ has a unique solution (there is a unique closest point u∈U to v). If $u^\star$ is the closest point to v, then $\forall u' \in U$, $\langle u^\star - v,\ u' \rangle = 0$.
Theorem 13 is what gives rise to important properties like the Pythagorean Theorem for any Hilbert Space.
$$\|u\|^2 + \|u - v\|^2 = \|v\|^2 \quad \text{where } u = \operatorname{argmin}_{u' \in U} \|u' - v\|.$$
Suppose we have two random variables X and Y. What happens if we try to project one onto the other?
Definition 34
The conditional expectation of X given Y is the function of Y such that X−E[X∣Y] is orthogonal to all bounded continuous functions ϕ(Y).
∀ϕ,E[(X−E[X∣Y])ϕ(Y)]=0.
Thus, the conditional expectation is the function of Y that is closest to X. Its interpretation is that the expectation of X can change after observing some other random variable Y. To find E[X∣Y], we can use the conditional distribution of X given Y.
Theorem 14
The conditional expectation can be computed from the conditional distribution:
$$E[X \mid Y = y] = \sum_{x} x\, p_{X|Y}(x \mid y), \qquad E[X \mid Y = y] = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid y)\,dx.$$
Notice that E[X∣Y] is a function of the random variable Y, meaning we can apply Theorem 6.
Theorem 15 (Tower Property)
For all functions f,
E[f(Y)X]=E[f(Y)E[X∣Y]]
Alternatively, we could apply linearity of expectation to Definition 34 to arrive at the same result. If we apply Theorem 15 to the function f(Y)=1, then we can see that E[E[X∣Y]]=E[X].
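A small numeric sketch (the joint PMF values are arbitrary) that computes E[X∣Y] from the conditional PMF and checks that E[E[X∣Y]] = E[X]:

```python
import numpy as np

# Hypothetical joint PMF p_XY(x, y): rows index x in {0, 1, 2}, columns index y in {0, 1}.
p_XY = np.array([[0.10, 0.15],
                 [0.20, 0.05],
                 [0.30, 0.20]])
x_vals = np.array([0.0, 1.0, 2.0])

p_Y = p_XY.sum(axis=0)           # marginal PMF of Y
p_X_given_Y = p_XY / p_Y         # p_{X|Y}(x|y) = p_XY(x, y) / p_Y(y), column by column
E_X_given_Y = (x_vals[:, None] * p_X_given_Y).sum(axis=0)  # E[X | Y = y] for each y

# Tower property: E[E[X|Y]] equals E[X].
print(E_X_given_Y @ p_Y, (x_vals * p_XY.sum(axis=1)).sum())  # both 1.25
```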
Just as expectation can change when we know additional information, so can variance.
Definition 35
Conditional Variance is the variance of X given the value of Y:
$$\operatorname{Var}(X \mid Y) = E\left[(X - E[X \mid Y])^2 \mid Y\right].$$