1 Measures

II Probability and Measure

1.2 Probability measures

Since the course is called “probability and measure”, we’d better start talking about probability! It turns out the notions we care about in probability theory are very naturally just special cases of the concepts we have previously considered.

Definition (Probability measure and probability space). Let (E, E, µ) be a measure space with the property that µ(E) = 1. Then we often call µ a probability measure, and (E, E, µ) a probability space.

Probability spaces are usually written as (Ω, F, P) instead.
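As a concrete (hypothetical, not from the notes) instance, a fair six-sided die gives a finite probability space: Ω is the six outcomes, F is the power set, and P gives each outcome mass 1/6. A minimal sketch:

```python
from fractions import Fraction

# A hypothetical finite probability space: a fair six-sided die.
# Omega is the sample space; F is implicitly the power set of Omega;
# P gives each outcome mass 1/6.
Omega = frozenset({1, 2, 3, 4, 5, 6})
mass = {omega: Fraction(1, 6) for omega in Omega}

def P(event):
    """P[A] for an event A ⊆ Omega: sum the masses of its outcomes."""
    return sum(mass[omega] for omega in event)

# The defining property of a probability measure: P(Omega) = 1.
assert P(Omega) == 1
assert P({6}) == Fraction(1, 6)
```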

Definition (Sample space). In a probability space (Ω, F, P), we often call Ω the sample space.

Definition (Events). In a probability space (Ω, F, P), we often call the elements of F the events.

Definition (Probability). In a probability space (Ω, F, P), if A ∈ F, we often call P[A] the probability of the event A.

These are exactly the same things as measures, but with different names! However, thinking of them as probabilities could make us ask different questions about these measure spaces. For example, in probability, one is often interested in independence.

Definition (Independence of events). A sequence of events (A_n) is said to be independent if

    P[⋂_{n∈J} A_n] = ∏_{n∈J} P[A_n]

for all finite subsets J ⊆ N.
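The product formula can be checked by brute force on a small example (a hypothetical one, not from the notes): two fair dice with the uniform measure on the 36 outcomes.

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: Omega = all 36 outcomes of two fair dice, uniform P.
Omega = list(product(range(1, 7), repeat=2))

def P(event):
    """P[A] under the uniform measure: favourable outcomes / 36."""
    return Fraction(sum(1 for w in Omega if w in event), len(Omega))

A1 = {w for w in Omega if w[0] % 2 == 0}   # first die is even
A2 = {w for w in Omega if w[1] == 6}       # second die shows a 6

# Independence of the pair: P[A1 ∩ A2] = P[A1] P[A2].
assert P(A1 & A2) == P(A1) * P(A2) == Fraction(1, 12)
```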

However, it turns out that talking about independence of events is usually too restrictive. Instead, we want to talk about the independence of σ-algebras:

Definition (Independence of σ-algebras). A sequence of σ-algebras (𝒜_n) with 𝒜_n ⊆ F for all n is said to be independent if the following is true: if (A_n) is a sequence of events with A_n ∈ 𝒜_n for all n, then (A_n) is independent.

Proposition. Events (A_n) are independent iff the σ-algebras σ(A_n) are independent.

While proving this directly would be rather tedious (but not too hard), it is an immediate consequence of the following theorem:

Theorem. Suppose 𝒜_1 and 𝒜_2 are π-systems in F. If

    P[A_1 ∩ A_2] = P[A_1] P[A_2]

for all A_1 ∈ 𝒜_1 and A_2 ∈ 𝒜_2, then σ(𝒜_1) and σ(𝒜_2) are independent.

Proof. This will follow from two applications of the fact that a finite measure is determined by its values on a π-system which generates the entire σ-algebra.

We first fix A_1 ∈ 𝒜_1. We define the measures

    µ(A) = P[A ∩ A_1]   and   ν(A) = P[A] P[A_1]

for all A ∈ F. By assumption, we know µ and ν agree on 𝒜_2, and we have µ(Ω) = P[A_1] = ν(Ω) ≤ 1 < ∞. So µ and ν agree on σ(𝒜_2). So we have

    P[A_1 ∩ A_2] = µ(A_2) = ν(A_2) = P[A_1] P[A_2]

for all A_2 ∈ σ(𝒜_2).

So we have now shown that if 𝒜_1 and 𝒜_2 are independent, then 𝒜_1 and σ(𝒜_2) are independent. By symmetry, the same argument shows that σ(𝒜_1) and σ(𝒜_2) are independent.

Say we are rolling a dice. Instead of asking for the probability of getting a 6, we might be interested in the probability of getting a 6 infinitely often. Intuitively, the answer is “it happens with probability 1”, because in each dice roll, we have a probability of 1/6 of getting a 6, and they are all independent.

We would like to make this precise and actually prove it. It turns out that the notions of “occurs infinitely often” and “occurs eventually” correspond to the more analytic notions of lim sup and lim inf.

Definition (limsup and liminf). Let (A_n) be a sequence of events. We define

    lim sup A_n = ⋂_n ⋃_{m≥n} A_m,
    lim inf A_n = ⋃_n ⋂_{m≥n} A_m.

To parse these definitions more easily, we can read ∩ as “for all”, and ∪ as “there exists”. For example, we can write

    lim sup A_n = {∀n, ∃m ≥ n such that A_m occurs}
                = {x : ∀n, ∃m ≥ n, x ∈ A_m}
                = {A_m occurs infinitely often}
                = {A_m i.o.}

Similarly, we have

    lim inf A_n = {∃n, ∀m ≥ n such that A_m occurs}
                = {x : ∃n, ∀m ≥ n, x ∈ A_m}
                = {A_m occurs eventually}
                = {A_m e.v.}
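The set algebra above can be prototyped on a finite truncation of a sequence (a sketch with hypothetical events; the true definitions quantify over all n, so we only let the outer index range over the first half to damp boundary artifacts):

```python
# Sketch: limsup/liminf of a finite truncation A[0..N-1] of a sequence of
# events.  The outer quantifier only runs over the first half so that the
# inner tails are long enough to behave like the infinite ones.
def lim_sup(A):
    """Approximates ⋂_n ⋃_{m≥n} A_m: "occurs infinitely often"."""
    return set.intersection(*(set().union(*A[n:]) for n in range(len(A) // 2)))

def lim_inf(A):
    """Approximates ⋃_n ⋂_{m≥n} A_m: "occurs eventually"."""
    return set().union(*(set.intersection(*map(set, A[n:])) for n in range(len(A) // 2)))

# Alternating events: "heads" on even steps, "tails" on odd steps.
alternating = [{"heads"} if n % 2 == 0 else {"tails"} for n in range(12)]
assert lim_sup(alternating) == {"heads", "tails"}   # both recur forever
assert lim_inf(alternating) == set()                # neither occurs eventually

# Eventually constant events: the sequence settles on "b".
settles = [{"a"}] * 3 + [{"b"}] * 9
assert lim_sup(settles) == {"b"}
assert lim_inf(settles) == {"b"}
```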

We are now going to prove two “obvious” results, known as the Borel–Cantelli lemmas. These give us necessary conditions for an event to happen infinitely often, and in the case where the events are independent, the condition is also sufficient.

Lemma (Borel–Cantelli lemma). If

    ∑_n P[A_n] < ∞,

then P[A_n i.o.] = 0.

Proof. For each k, we have

    P[A_n i.o.] = P[⋂_n ⋃_{m≥n} A_m] ≤ P[⋃_{m≥k} A_m] ≤ ∑_{m=k}^∞ P[A_m] → 0

as k → ∞. So we have P[A_n i.o.] = 0.

Note that we did not need to use the fact that we are working with a probability measure. So in fact this holds for any measure space.
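To see the bound in the proof at work numerically (a hypothetical choice of probabilities, not from the notes): take P[A_n] = 1/n², a convergent series, and watch the tail sums that dominate P[A_n i.o.] shrink.

```python
# Hypothetical illustration: P[A_n] = 1/n^2, so sum_n P[A_n] < infinity.
# The proof bounds P[A_n i.o.] by the tail sum  sum_{m >= k} P[A_m]  for
# EVERY k, so it suffices to watch those tails tend to 0.
def tail_sum(k, terms=200_000):
    """Numerically approximates sum_{m=k}^{k+terms-1} 1/m^2."""
    return sum(1.0 / (m * m) for m in range(k, k + terms))

bounds = [tail_sum(k) for k in (1, 10, 100, 1000)]
assert all(earlier > later for earlier, later in zip(bounds, bounds[1:]))
assert bounds[-1] < 0.002   # the bound, hence P[A_n i.o.], is forced to 0
```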

Lemma (Borel–Cantelli lemma II). Let (A_n) be independent events. If

    ∑_n P[A_n] = ∞,

then P[A_n i.o.] = 1.

Note that independence is crucial. If we flip a fair coin, and we set all the A_n to be equal to “getting a heads”, then ∑_n P[A_n] = ∑_n 1/2 = ∞, but we certainly do not have P[A_n i.o.] = 1. Instead it is just 1/2.
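This counterexample is easy to simulate (a sketch with an arbitrarily chosen seed): each experiment is a single fair coin flip, and every A_n is that same event, so {A_n i.o.} is just {heads}.

```python
import random

# Sketch of the counterexample: in each experiment a single fair coin is
# flipped, and EVERY A_n is the same event "the coin came up heads".
# Then {A_n i.o.} = {heads}, so its probability is 1/2 -- even though
# sum_n P[A_n] = sum_n 1/2 diverges.  (Seed chosen arbitrarily.)
random.seed(1)
trials = 100_000
heads_io = sum(1 for _ in range(trials) if random.random() < 0.5)
assert abs(heads_io / trials - 0.5) < 0.01
```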

Proof. By example sheet, if (A_n) is independent, then so is (A_n^C). Then, using the inequality 1 − x ≤ exp(−x), we have

    P[⋂_{m=n}^N A_m^C] = ∏_{m=n}^N P[A_m^C]
                       = ∏_{m=n}^N (1 − P[A_m])
                       ≤ ∏_{m=n}^N exp(−P[A_m])
                       = exp(−∑_{m=n}^N P[A_m])
                       → 0

as N → ∞, as we assumed that ∑_n P[A_n] = ∞. So we have

    P[⋂_{m=n}^∞ A_m^C] = 0.

By countable subadditivity, we have

    P[⋃_n ⋂_{m=n}^∞ A_m^C] = 0.

This in turn implies that

    P[⋂_n ⋃_{m=n}^∞ A_m] = 1 − P[⋃_n ⋂_{m=n}^∞ A_m^C] = 1.

So we are done.
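No finite simulation can certify “infinitely often”, but a long run (a sketch with an arbitrarily chosen seed) shows what the second lemma predicts for the dice example: the sixes never dry up.

```python
import random

# Sketch: roll a fair die 60,000 times.  Borel-Cantelli II applies (the
# rolls are independent and sum_n P[six on roll n] = sum_n 1/6 diverges),
# so sixes should occur infinitely often; in a finite run we just check
# they keep arriving all the way to the end.  (Seed chosen arbitrarily.)
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(60_000)]
six_times = [i for i, r in enumerate(rolls) if r == 6]

assert abs(len(six_times) / len(rolls) - 1 / 6) < 0.01   # about 1 roll in 6
assert six_times[-1] > 59_000   # a six still appears near the very end
```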