A probability space is a triple (Ω,F,P), where Ω is a set of objects called the sample space, F is a family of subsets of Ω called events, and P:F→[0,1] is the probability measure.
One key assumption we make is that F is a σ-algebra containing Ω, meaning that F is closed under complements and under countable unions and intersections of events. The probability measure P must obey Kolmogorov's Axioms:
∀A∈F,P(A)≥0
P(Ω)=1
If A1,A2,⋯∈F and Ai∩Aj=∅ for all i≠j, then P(⋃i≥1 Ai) = ∑i≥1 P(Ai)
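To make this concrete, here is a small Python sketch (the fair-die example and the helper name P are illustrative choices, not part of the definition above): take Ω to be the six outcomes of a fair die roll, F to be the power set of Ω, and P the uniform counting measure, and check the three axioms directly.

```python
from fractions import Fraction

# Toy probability space: one roll of a fair six-sided die.
# Omega is the sample space, F is the power set of Omega (which is a
# sigma-algebra), and P is the uniform measure P(A) = |A| / |Omega|.
Omega = frozenset(range(1, 7))

def P(A):
    return Fraction(len(A), len(Omega))

even, odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})

assert P(even) >= 0                        # Axiom 1: non-negativity
assert P(Omega) == 1                       # Axiom 2: P(Omega) = 1
assert even & odd == frozenset()           # even and odd are disjoint,
assert P(even | odd) == P(even) + P(odd)   # so Axiom 3 gives additivity
```

Exact Fractions are used instead of floats so that the equality checks are exact.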
We choose Ω and F to model problems in a way that makes our calculations easy.
If A1,A2,⋯ partition Ω (i.e., the Ai are disjoint and ⋃i Ai = Ω), then for any event B,
P(B)=∑iP(B∩Ai)
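Here is a quick numerical check of this identity, sometimes called the law of total probability, on the same fair-die space; the particular partition and the event B below are arbitrary illustrative choices.

```python
from fractions import Fraction

Omega = frozenset(range(1, 7))                     # fair-die space again
P = lambda A: Fraction(len(A), len(Omega))

# A partition of Omega: disjoint events whose union is all of Omega.
partition = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
B = frozenset({2, 3, 5})                           # an arbitrary event

assert frozenset().union(*partition) == Omega
assert P(B) == sum(P(B & A) for A in partition)    # P(B) = sum_i P(B ∩ A_i)
```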
Conditional Probability

Definition 2. If B is an event with P(B)>0, then the conditional probability of A given B is

P(A∣B) = P(A∩B)/P(B)
Intuitively, the conditional probability P(A∣B) is the probability of event A given that event B has occurred. In terms of probability spaces, it is as if we have replaced (Ω,F,P) with a new probability space (Ω,F,P(⋅∣B)) whose measure is P(⋅∣B).
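Both points can be checked on the fair-die space; cond_P is an illustrative helper implementing the definition above, and the final assertions confirm that P(⋅∣B) behaves like a probability measure on (Ω,F).

```python
from fractions import Fraction

Omega = frozenset(range(1, 7))                 # fair-die space again
P = lambda A: Fraction(len(A), len(Omega))

def cond_P(A, B):
    """P(A | B) = P(A ∩ B) / P(B), defined only when P(B) > 0."""
    assert P(B) > 0
    return P(A & B) / P(B)

A = frozenset({2, 4, 6})                       # "the roll is even"
B = frozenset({4, 5, 6})                       # "the roll is at least 4"

print(cond_P(A, B))                            # 2/3: within B, only {4, 6} are even

# P(. | B) is itself a probability measure on (Omega, F):
assert cond_P(Omega, B) == 1
assert cond_P(A, B) + cond_P(Omega - A, B) == 1
```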
Theorem 4 (Bayes' Theorem). If P(A)>0 and P(B)>0, then

P(A∣B) = P(B∣A)P(A)/P(B)
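A quick numerical check of the theorem on the same toy space (the events are again arbitrary illustrative choices):

```python
from fractions import Fraction

Omega = frozenset(range(1, 7))
P = lambda A: Fraction(len(A), len(Omega))
cond_P = lambda A, B: P(A & B) / P(B)

A = frozenset({2, 4, 6})                       # "the roll is even"
B = frozenset({4, 5, 6})                       # "the roll is at least 4"

# Bayes' Theorem: P(A | B) = P(B | A) P(A) / P(B)
assert cond_P(A, B) == cond_P(B, A) * P(A) / P(B)
```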
Independence

Definition 3. Events A and B are independent if P(A∩B) = P(A)P(B).
If P(B)>0, then A and B are independent if and only if P(A∣B)=P(A). In other words, knowing that B occurred gives no extra information about A.
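Both characterizations can be checked on two rolls of a fair die; the events below are illustrative, and the last two lines show a pair of events that fails the test.

```python
from fractions import Fraction
from itertools import product

# Two rolls of a fair die: Omega is all ordered pairs (first, second).
Omega = frozenset(product(range(1, 7), repeat=2))
P = lambda A: Fraction(len(A), len(Omega))
cond_P = lambda A, B: P(A & B) / P(B)

A = frozenset(w for w in Omega if w[0] % 2 == 0)    # first roll is even
B = frozenset(w for w in Omega if w[1] >= 5)        # second roll is 5 or 6

assert P(A & B) == P(A) * P(B)                      # independent
assert cond_P(A, B) == P(A)                         # equivalently, P(A | B) = P(A)

# Not every pair passes the test: "first roll even" and "sum is at least 10".
C = frozenset(w for w in Omega if w[0] + w[1] >= 10)
assert P(A & C) != P(A) * P(C)
```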
Definition 4. If A, B, and C with P(C)>0 satisfy P(A∩B∣C) = P(A∣C)P(B∣C), then A and B are conditionally independent given C.
Conditional independence is ordinary independence in the conditioned probability space: A and B need not be independent under the original measure P, but they are independent under the measure P(⋅∣C) of the space (Ω,F,P(⋅∣C)).
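The distinction can be seen in a small sketch with an illustrative mixture of coins (the 50/50 coin choice and the heads probabilities 1/2 and 1/4 are made-up numbers): the two flips are conditionally independent given which coin was picked, but they are not independent under the mixture measure P.

```python
from fractions import Fraction
from itertools import product

# Pick the "fair" coin (P(H)=1/2) or the "biased" coin (P(H)=1/4) with
# equal probability, then flip the chosen coin twice.  Each outcome is a
# triple (coin, first flip, second flip) carrying its own probability.
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(1, 4)}
weight = {}
for coin, x, y in product(p_heads, "HT", "HT"):
    px = p_heads[coin] if x == "H" else 1 - p_heads[coin]
    py = p_heads[coin] if y == "H" else 1 - p_heads[coin]
    weight[(coin, x, y)] = Fraction(1, 2) * px * py

Omega = set(weight)
P = lambda A: sum(weight[w] for w in A)
cond_P = lambda A, B: P(A & B) / P(B)

A = {w for w in Omega if w[1] == "H"}          # first flip is heads
B = {w for w in Omega if w[2] == "H"}          # second flip is heads
C = {w for w in Omega if w[0] == "fair"}       # the fair coin was picked

assert cond_P(A & B, C) == cond_P(A, C) * cond_P(B, C)   # independent given C
assert P(A & B) != P(A) * P(B)                            # but not under P itself
```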