2.3 Independence
Definition (Independent events). Two events A and B are independent if
P(A ∩ B) = P(A)P(B).
Otherwise, they are said to be dependent.
Intuitively, two events are independent if they are not related to each other. For example, if we roll two dice separately, an event concerning only the first die is independent of an event concerning only the second.
Proposition. If A and B are independent, then A and B^C are independent.
Proof.
P(A ∩ B^C) = P(A) - P(A ∩ B)
           = P(A) - P(A)P(B)
           = P(A)(1 - P(B))
           = P(A)P(B^C)
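For instance, if A and B are independent with P(A) = 1/2 and P(B) = 1/6 (say, [first die is odd] and [second die shows a six], an illustrative choice), then
P(A ∩ B^C) = P(A)(1 - P(B)) = 1/2 · 5/6 = 5/12 = P(A)P(B^C).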
This definition applies to two events. What does it mean to say that three
or more events are independent?
Example. Roll two fair dice. Let A_1 and A_2 be the events that the first and the second die are odd respectively, and let A_3 = [sum is odd]. The event probabilities are as follows:
Event            Probability
A_1              1/2
A_2              1/2
A_3              1/2
A_1 ∩ A_2        1/4
A_1 ∩ A_3        1/4
A_2 ∩ A_3        1/4
A_1 ∩ A_2 ∩ A_3  0
We see that A_1 and A_2 are independent, A_1 and A_3 are independent, and A_2 and A_3 are independent. However, the collection of all three is not independent, since if A_1 and A_2 are true, then A_3 cannot possibly be true.
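As an aside, these probabilities are easy to check empirically. The following is a minimal Monte Carlo sketch (assuming Python and its standard random module; the function name estimate is just illustrative):

import random

def estimate(trials=100_000):
    # Empirical frequencies of the events in the example above.
    c = {"A1": 0, "A2": 0, "A3": 0, "A1A2": 0, "A1A3": 0, "A2A3": 0, "A1A2A3": 0}
    for _ in range(trials):
        d1, d2 = random.randint(1, 6), random.randint(1, 6)
        a1, a2, a3 = d1 % 2 == 1, d2 % 2 == 1, (d1 + d2) % 2 == 1
        c["A1"] += a1
        c["A2"] += a2
        c["A3"] += a3
        c["A1A2"] += a1 and a2
        c["A1A3"] += a1 and a3
        c["A2A3"] += a2 and a3
        c["A1A2A3"] += a1 and a2 and a3
    return {k: v / trials for k, v in c.items()}

print(estimate())
# Each A_i comes out near 1/2, each pairwise intersection near 1/4,
# and the triple intersection is exactly 0.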
From the example above, we see that pairwise independence of a collection of events does not imply that they are independent all together. We define:
Definition (Independence of multiple events). Events A_1, A_2, ··· are said to be mutually independent if
P(A_{i_1} ∩ A_{i_2} ∩ ··· ∩ A_{i_r}) = P(A_{i_1})P(A_{i_2}) ··· P(A_{i_r})
for any distinct i_1, i_2, ··· , i_r and r ≥ 2.
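Concretely, for three events A, B, C this asks for four conditions:
P(A ∩ B) = P(A)P(B), P(A ∩ C) = P(A)P(C), P(B ∩ C) = P(B)P(C), and P(A ∩ B ∩ C) = P(A)P(B)P(C).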
Example. Let A_{ij} be the event that the ith and jth dice roll the same. We roll 4 dice. Then
P(A_{12} ∩ A_{13}) = 1/6 · 1/6 = 1/36 = P(A_{12})P(A_{13}).
But
P(A_{12} ∩ A_{13} ∩ A_{23}) = 1/36 ≠ 1/216 = P(A_{12})P(A_{13})P(A_{23}),
since A_{12} ∩ A_{13} ⊆ A_{23} (if dice 1 and 2 agree and dice 1 and 3 agree, then dice 2 and 3 agree). So they are not mutually independent.
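This can also be seen by direct enumeration over the equally likely outcomes (a small Python sketch, not part of the notes; only dice 1, 2 and 3 matter for these three events):

from itertools import product

# Enumerate all equally likely outcomes of dice 1-3; the fourth die is irrelevant here.
outcomes = list(product(range(1, 7), repeat=3))
pair = sum(1 for d1, d2, d3 in outcomes if d1 == d2 and d1 == d3)
triple = sum(1 for d1, d2, d3 in outcomes if d1 == d2 and d1 == d3 and d2 == d3)
print(pair / len(outcomes), triple / len(outcomes))
# Both print 1/36 ≈ 0.0278, whereas mutual independence would require
# the triple intersection to have probability 1/216 ≈ 0.0046.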
We can also apply this concept to experiments. Suppose we model two independent experiments with Ω_1 = {α_1, α_2, ···} and Ω_2 = {β_1, β_2, ···}, with probabilities P(α_i) = p_i and P(β_i) = q_i. Further suppose that these two experiments are independent, i.e.
P((α_i, β_j)) = p_i q_j
for all i, j. Then we can have a new sample space Ω = Ω_1 × Ω_2.
Now suppose A ⊆ Ω_1 and B ⊆ Ω_2 are results (i.e. events) of the two experiments. We can view them as subsets of Ω by rewriting them as A × Ω_2 and Ω_1 × B. Then the probability is
P(A ∩ B) = Σ_{α_i ∈ A, β_j ∈ B} p_i q_j = (Σ_{α_i ∈ A} p_i)(Σ_{β_j ∈ B} q_j) = P(A)P(B).
So we say the two experiments are “independent” even though the term usually refers to different events in the same experiment. We can generalize this to n independent experiments, or even countably infinitely many experiments.
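To illustrate the product construction, here is a small Python sketch with two made-up finite experiments (the particular values p_i, q_j and the event choices are assumptions for the example only):

from itertools import product

# Two illustrative experiments with P(alpha_i) = p_i and P(beta_j) = q_j.
omega1 = {"alpha1": 0.2, "alpha2": 0.5, "alpha3": 0.3}
omega2 = {"beta1": 0.6, "beta2": 0.4}

# Product sample space Omega = Omega_1 x Omega_2 with P((alpha_i, beta_j)) = p_i * q_j.
omega = {(a, b): p * q
         for (a, p), (b, q) in product(omega1.items(), omega2.items())}

A = {"alpha1", "alpha3"}   # an event of the first experiment
B = {"beta2"}              # an event of the second experiment

# Treat A and B as the subsets A x Omega_2 and Omega_1 x B of the product space.
p_joint = sum(pr for (a, b), pr in omega.items() if a in A and b in B)
print(p_joint)                                                  # ≈ 0.2
print(sum(omega1[a] for a in A) * sum(omega2[b] for b in B))    # also ≈ 0.2 = P(A)P(B)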