6.2 Hecke operators on modular forms
We are now done with group theory. For the rest of the chapter, we take $G = \mathrm{GL}_2(\mathbb{Q})^+$ and $\Gamma(1) = \mathrm{SL}_2(\mathbb{Z})$. We are going to compute the Hecke algebra in this case.
The first thing to do is to identify what the single and double cosets are.
Let's first look at the case where the representative is an integral matrix, i.e. lies in $\mathrm{Mat}_2(\mathbb{Z}) \cap G$. We let $\gamma \in \mathrm{Mat}_2(\mathbb{Z})$ with $\det \gamma = n > 0$.
The rows of $\gamma$ generate a subgroup $\Lambda \subseteq \mathbb{Z}^2$. If the rows of $\gamma'$ also generate the same subgroup $\Lambda$, then there exists $\delta \in \mathrm{GL}_2(\mathbb{Z})$ with $\det \delta = \pm 1$ such that $\gamma' = \delta\gamma$. So we have $\det \gamma' = \pm n$, and if $\det \gamma' = +n$, then $\delta \in \mathrm{SL}_2(\mathbb{Z}) = \Gamma$. This gives a bijection
\[
  \{\text{cosets } \Gamma\gamma : \gamma \in \mathrm{Mat}_2(\mathbb{Z}),\ \det \gamma = n\} \longleftrightarrow \{\text{subgroups } \Lambda \subseteq \mathbb{Z}^2 \text{ of index } n\}.
\]
What we next want to do is to pick representatives of these subgroups, hence of the cosets. Consider an arbitrary subgroup $\Lambda \subseteq \mathbb{Z}^2 = \mathbb{Z} e_1 \oplus \mathbb{Z} e_2$. We let
\[
  \Lambda \cap \mathbb{Z} e_2 = \mathbb{Z} \cdot d e_2
\]
for some $d \geq 1$. Then we have
\[
  \Lambda = \langle a e_1 + b e_2,\ d e_2\rangle
\]
for some $a \geq 1$, $b \in \mathbb{Z}$ such that $0 \leq b < d$. Under these restrictions, $a$ and $b$ are unique. Moreover, $ad = n$. So we can define
\[
  \Pi_n = \left\{ \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \in \mathrm{Mat}_2(\mathbb{Z}) : a, d \geq 1,\ ad = n,\ 0 \leq b < d\right\}.
\]
Then
\[
  \left\{\gamma \in \mathrm{Mat}_2(\mathbb{Z}) : \det \gamma = n\right\} = \coprod_{\gamma \in \Pi_n} \Gamma\gamma.
\]
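For example, if $n = p$ is prime, then
\[
  \Pi_p = \left\{\begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix}\right\} \cup \left\{\begin{pmatrix} 1 & b \\ 0 & p \end{pmatrix} : 0 \leq b < p\right\},
\]
so there are $p + 1$ cosets, matching the $p + 1$ subgroups of $\mathbb{Z}^2$ of index $p$. In general the number of cosets is $\sum_{d \mid n} d = \sigma_1(n)$.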
These are the single cosets. How about the double cosets? The left-hand side above is invariant under $\Gamma$ on the left and on the right, and so is a union of double cosets.
Proposition.
(i) Let $\gamma \in \mathrm{Mat}_2(\mathbb{Z})$ with $\det \gamma = n \geq 1$. Then
\[
  \Gamma\gamma\Gamma = \Gamma \begin{pmatrix} n_1 & 0 \\ 0 & n_2 \end{pmatrix} \Gamma
\]
for unique $n_1, n_2 \geq 1$ with $n_2 \mid n_1$ and $n_1 n_2 = n$.
(ii) We have
\[
  \left\{\gamma \in \mathrm{Mat}_2(\mathbb{Z}) : \det \gamma = n\right\} = \coprod \Gamma \begin{pmatrix} n_1 & 0 \\ 0 & n_2 \end{pmatrix} \Gamma,
\]
where we take the union over all $1 \leq n_2 \mid n_1$ such that $n = n_1 n_2$.
(iii) Let $\gamma, n_1, n_2$ be as above. If $d \geq 1$, then
\[
  \Gamma(d^{-1}\gamma)\Gamma = \Gamma \begin{pmatrix} n_1/d & 0 \\ 0 & n_2/d \end{pmatrix} \Gamma.
\]
Proof.
This is the Smith normal form theorem, or, alternatively, the fact that
we can row and column reduce.
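For instance, $\gamma = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$ has determinant $4$, but the greatest common divisor of its entries is $1$, so its elementary divisors are $1$ and $4$. Hence
\[
  \Gamma \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} \Gamma = \Gamma \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} \Gamma,
\]
i.e. $n_1 = 4$ and $n_2 = 1$, even though $\gamma$ is not itself of this diagonal shape.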
Corollary. The set
\[
  \left\{\Gamma \begin{pmatrix} r_1 & 0 \\ 0 & r_2 \end{pmatrix} \Gamma : r_1, r_2 \in \mathbb{Q}_{>0},\ \frac{r_1}{r_2} \in \mathbb{Z}\right\}
\]
is a basis for $\mathcal{H}(G, \Gamma)$ over $\mathbb{Z}$.
So we have found a basis. The next goal is to find a generating set. To do so, we define the following elements of $\mathcal{H}(G, \Gamma)$:
– For $1 \leq n_2 \mid n_1$, we define
\[
  T(n_1, n_2) = \Gamma \begin{pmatrix} n_1 & 0 \\ 0 & n_2 \end{pmatrix} \Gamma.
\]
– For $n \geq 1$, we write
\[
  R(n) = \Gamma \begin{pmatrix} n & 0 \\ 0 & n \end{pmatrix} \Gamma = \Gamma \begin{pmatrix} n & 0 \\ 0 & n \end{pmatrix} = T(n, n).
\]
– Finally, we define
\[
  T(n) = \sum_{\substack{1 \leq n_2 \mid n_1 \\ n_1 n_2 = n}} T(n_1, n_2).
\]
In particular, we have
\[
  T(1, 1) = R(1) = 1 = T(1),
\]
and if $n$ is square-free, then $T(n) = T(n, 1)$.
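For example, for $n = p^2$ the admissible pairs are $(n_1, n_2) = (p^2, 1)$ and $(p, p)$, so
\[
  T(p^2) = T(p^2, 1) + T(p, p) = T(p^2, 1) + R(p).
\]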
Theorem.
(i) $R(mn) = R(m)R(n)$ and $R(m)T(n) = T(n)R(m)$ for all $m, n \geq 1$.
(ii) $T(m)T(n) = T(mn)$ whenever $(m, n) = 1$.
(iii) $T(p)T(p^r) = T(p^{r+1}) + pR(p)T(p^{r-1})$ if $r \geq 1$.
Before we prove this theorem, we see how it helps us find a nice generating
set for the Hecke algebra.
Corollary. $\mathcal{H}(G, \Gamma)$ is commutative, and is generated by
\[
  \{T(p), R(p), R(p)^{-1} : p \text{ prime}\}.
\]
This is rather surprising, because the group we started with was very non-commutative.
Proof. We know that the $T(n_1, n_2)$, $R(p)$ and $R(p)^{-1}$ generate $\mathcal{H}(G, \Gamma)$, because
\[
  \left(\Gamma \begin{pmatrix} p & 0 \\ 0 & p \end{pmatrix} \Gamma\right)\left(\Gamma \begin{pmatrix} n_1 & 0 \\ 0 & n_2 \end{pmatrix} \Gamma\right) = \Gamma \begin{pmatrix} pn_1 & 0 \\ 0 & pn_2 \end{pmatrix} \Gamma.
\]
In particular, when $n_2 \mid n_1$, we can write
\[
  T(n_1, n_2) = R(n_2)\, T\!\left(\frac{n_1}{n_2}, 1\right).
\]
So it suffices to show that we can produce any $T(n, 1)$ from the $T(m)$ and $R(m)$. We proceed inductively on $n$. The result is immediate when $n$ is square-free, because $T(n, 1) = T(n)$. Otherwise,
\[
  T(n) = \sum_{\substack{1 \leq n_2 \mid n_1 \\ n_1 n_2 = n}} T(n_1, n_2)
       = \sum_{\substack{1 \leq n_2 \mid n_1 \\ n_1 n_2 = n}} R(n_2)\, T\!\left(\frac{n_1}{n_2}, 1\right)
       = T(n, 1) + \sum_{\substack{1 < n_2 \mid n_1 \\ n_1 n_2 = n}} R(n_2)\, T\!\left(\frac{n_1}{n_2}, 1\right).
\]
In the last sum each $n_1/n_2 < n$, so by induction each $T(n_1/n_2, 1)$ is already expressible in terms of the $T(m)$ and $R(m)$, and hence so is $T(n, 1)$. So $\{T(p), R(p), R(p)^{-1}\}$ does generate $\mathcal{H}(G, \Gamma)$, and by the theorem, we know these generators commute. So $\mathcal{H}(G, \Gamma)$ is commutative.
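For instance, unwinding the last display for $n = p^2$ gives
\[
  T(p^2, 1) = T(p^2) - R(p)\, T(1, 1) = T(p^2) - R(p),
\]
and by the theorem $T(p^2) = T(p)^2 - pR(p)$, so $T(p^2, 1)$ is indeed a polynomial in $T(p)$ and $R(p)$.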
We now prove the theorem.
Proof of theorem.
(i) We have
\[
  \left[\Gamma \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} \Gamma\right] [\Gamma\gamma\Gamma] = \left[\Gamma \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} \gamma\,\Gamma\right] = [\Gamma\gamma\Gamma]\left[\Gamma \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} \Gamma\right]
\]
by the formula for the product.
(ii) Recall we had the isomorphism $\Theta : \mathcal{H}(G, \Gamma) \to \mathbb{Z}[\Gamma \backslash G]^\Gamma$, and
\[
  \Theta(T(n)) = \sum_{\gamma \in \Pi_n} [\Gamma\gamma],
\]
with $\Pi_n$ as above. Moreover, $\{\gamma\mathbb{Z}^2 \mid \gamma \in \Pi_n\}$ is exactly the set of subgroups of $\mathbb{Z}^2$ of index $n$.
On the other hand,
\[
  \Theta(T(m)T(n)) = \sum_{\delta \in \Pi_m,\ \gamma \in \Pi_n} [\Gamma\delta\gamma],
\]
and for fixed $\gamma \in \Pi_n$,
\[
  \{\delta\gamma\mathbb{Z}^2 \mid \delta \in \Pi_m\} = \{\text{subgroups of } \gamma\mathbb{Z}^2 \text{ of index } m\}.
\]
Since $n$ and $m$ are coprime, every subgroup $\Lambda \subseteq \mathbb{Z}^2$ of index $mn$ is contained in a unique subgroup of index $n$. So the above sum gives exactly $\Theta(T(mn))$.
(iii) We have
\[
  \Theta(T(p^r)T(p)) = \sum_{\delta \in \Pi_{p^r},\ \gamma \in \Pi_p} [\Gamma\delta\gamma],
\]
and for fixed $\gamma \in \Pi_p$, we know $\{\delta\gamma\mathbb{Z}^2 : \delta \in \Pi_{p^r}\}$ are the index $p^r$ subgroups of $\gamma\mathbb{Z}^2$.
On the other hand, we have
\[
  \Theta(T(p^{r+1})) = \sum_{\varepsilon \in \Pi_{p^{r+1}}} [\Gamma\varepsilon],
\]
where $\{\varepsilon\mathbb{Z}^2\}$ are the subgroups of $\mathbb{Z}^2$ of index $p^{r+1}$.
Every $\Lambda = \varepsilon\mathbb{Z}^2$ of index $p^{r+1}$ is an index $p^r$ subgroup of some index $p$ subgroup $\Lambda' \subseteq \mathbb{Z}^2$. If $\Lambda \not\subseteq p\mathbb{Z}^2$, then $\Lambda'$ is unique, and $\Lambda' = \Lambda + p\mathbb{Z}^2$. On the other hand, if $\Lambda \subseteq p\mathbb{Z}^2$, i.e.
\[
  \varepsilon = \begin{pmatrix} p & 0 \\ 0 & p \end{pmatrix} \varepsilon'
\]
for some $\varepsilon'$ of determinant $p^{r-1}$, then there are $p + 1$ such $\Lambda'$, corresponding to the $p + 1$ order $p$ subgroups of $\mathbb{Z}^2/p\mathbb{Z}^2$.
So we have
\begin{align*}
  \Theta(T(p^r)T(p)) &= \sum_{\varepsilon \in \Pi_{p^{r+1}} \setminus (pI\,\Pi_{p^{r-1}})} [\Gamma\varepsilon] + (p+1) \sum_{\varepsilon' \in \Pi_{p^{r-1}}} [\Gamma\, pI\varepsilon'] \\
                     &= \sum_{\varepsilon \in \Pi_{p^{r+1}}} [\Gamma\varepsilon] + p \sum_{\varepsilon' \in \Pi_{p^{r-1}}} [\Gamma\, pI\varepsilon'] \\
                     &= T(p^{r+1}) + pR(p)T(p^{r-1}).
\end{align*}
What’s underlying this is really just the structure theorem of finitely generated
abelian groups. We can replace $\mathrm{GL}_2$ with $\mathrm{GL}_N$, and we can prove some analogous formulae, only much uglier. We can also do this with $\mathbb{Z}$ replaced by any principal ideal domain.
Given all this discussion of the Hecke algebra, we let it act on modular forms! We write
\[
  V_k = \{\text{all functions } f : \mathcal{H} \to \mathbb{C}\}.
\]
This has a right action of $G = \mathrm{GL}_2(\mathbb{Q})^+$ by
\[
  g : f \mapsto f|_k g.
\]
Then we have $M_k \subseteq V_k^\Gamma$. For $f \in V_k^\Gamma$ and $g \in G$, we again write
\[
  \Gamma g \Gamma = \coprod_i \Gamma g_i,
\]
and then we have
\[
  f|_k [\Gamma g \Gamma] = \sum_i f|_k g_i \in V_k^\Gamma.
\]
Recall that when we defined the slash operator, we included a determinant factor. This gives us
\[
  f|_k R(n) = f
\]
for all $n \geq 1$, so the $R(n)$ act trivially. We also define $T_n = T_n^k : V_k^\Gamma \to V_k^\Gamma$ by
\[
  T_n f = n^{k/2 - 1}\, f|_k T(n).
\]
Since $\mathcal{H}(G, \Gamma)$ is commutative, there is no confusion in writing $T_n$ on the left instead of the right.
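For instance, for a prime $p$, using the coset representatives in $\Pi_p$ and the determinant normalization $(f|_k \gamma)(z) = (\det\gamma)^{k/2}(cz + d)^{-k} f(\gamma z)$ (the one that makes the $R(n)$ act trivially), this unwinds to the classical formula
\[
  (T_p f)(z) = p^{k-1} f(pz) + \frac{1}{p} \sum_{b = 0}^{p-1} f\!\left(\frac{z + b}{p}\right).
\]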
Proposition.
(i) $T^k_{mn} = T^k_m T^k_n$ if $(m, n) = 1$, and
\[
  T^k_{p^{r+1}} = T^k_{p^r} T^k_p - p^{k-1} T^k_{p^{r-1}}.
\]
(ii) If $f \in M_k$, then $T_n f \in M_k$. Similarly, if $f \in S_k$, then $T_n f \in S_k$.
(iii) We have
\[
  a_n(T_m f) = \sum_{1 \leq d \mid (m, n)} d^{k-1} a_{mn/d^2}(f).
\]
In particular,
\[
  a_0(T_m f) = \sigma_{k-1}(m)\, a_0(f).
\]
Proof.
(i) This follows from the analogous relations for $T(n)$, plus $f|_k R(n) = f$.
(ii) This follows from (iii), since $T_n$ clearly maps holomorphic $f$ to holomorphic $f$, while (iii) shows that the $q$-expansion of $T_n f$ has no terms of negative degree, and that its constant term vanishes if $a_0(f) = 0$.
(iii) If $r \in \mathbb{Z}_{\geq 0}$, then
\[
  q^r|_k T(m) = m^{k/2} \sum_{e \mid m,\ 0 \leq b < e} e^{-k} \exp\left(2\pi i \frac{mzr}{e^2} + 2\pi i \frac{br}{e}\right),
\]
where we use the fact that the elements of $\Pi_m$ are those of the form
\[
  \Pi_m = \left\{\begin{pmatrix} a & b \\ 0 & e \end{pmatrix} : ae = m,\ 0 \leq b < e \right\}.
\]
Now for each fixed $e$, the sum over $b$ vanishes when $\frac{r}{e} \not\in \mathbb{Z}$, and is $e$ otherwise. So we find
\[
  q^r|_k T(m) = m^{k/2} \sum_{e \mid (m, r)} e^{1-k} q^{mr/e^2}.
\]
So we have
\[
  T_m(f) = \sum_{r \geq 0} a_r(f) \sum_{e \mid (m, r)} \left(\frac{m}{e}\right)^{k-1} q^{mr/e^2}
         = \sum_{1 \leq d \mid m} d^{k-1} \sum_{s \geq 0} a_{ms/d}(f)\, q^{ds}
         = \sum_{n \geq 0} \left(\sum_{d \mid (m, n)} d^{k-1} a_{mn/d^2}(f)\right) q^n,
\]
where in the second step we put $d = m/e$ and $r = es$, and in the last step we put $n = ds$.
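For example, taking $m = p$ prime, the only divisors of $(p, n)$ are $1$ and possibly $p$, so the formula reads
\[
  a_n(T_p f) = a_{np}(f) + p^{k-1} a_{n/p}(f),
\]
where the second term is present only when $p \mid n$.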
So we actually have a rather concrete formula for what the action looks like.
We can use this to derive some immediate corollaries.
Corollary. Let $f \in M_k$ be such that
\[
  T_m(f) = \lambda f
\]
for some $m > 1$ and $\lambda \in \mathbb{C}$. Then
(i) For every $n$ with $(n, m) = 1$, we have
\[
  a_{mn}(f) = \lambda a_n(f).
\]
(ii) If $a_0(f) \neq 0$, then $\lambda = \sigma_{k-1}(m)$.
Proof. This just follows from above, since
\[
  a_n(T_m f) = \lambda a_n(f),
\]
and then we just plug in the formula.
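In more detail: if $(n, m) = 1$, the only $d$ dividing $(m, n)$ is $d = 1$, so $\lambda a_n(f) = a_n(T_m f) = a_{mn}(f)$, giving (i); taking $n = 0$ instead gives $\lambda a_0(f) = a_0(T_m f) = \sigma_{k-1}(m) a_0(f)$, and dividing by $a_0(f) \neq 0$ gives (ii).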
This gives a close relationship between the eigenvalues of $T_m$ and the Fourier coefficients. In particular, if we have an $f$ that is an eigenvector for all the $T_m$, then we have the following corollary:
Corollary. Let $0 \neq f \in M_k$, with $k \geq 4$ and $T_m f = \lambda_m f$ for all $m \geq 1$. Then
(i) If $f \in S_k$, then $a_1(f) \neq 0$ and
\[
  f = a_1(f) \sum_{n \geq 1} \lambda_n q^n.
\]
(ii) If $f \not\in S_k$, then
\[
  f = a_0(f) E_k.
\]
Proof.
(i) We apply the previous corollary with $n = 1$: this gives $a_m(f) = \lambda_m a_1(f)$ for all $m \geq 1$. If we had $a_1(f) = 0$, then all the $a_m(f)$ with $m \geq 1$ would vanish, and $a_0(f) = 0$ since $f$ is a cusp form, so $f = 0$, a contradiction.
(ii) Since $a_0(f) \neq 0$, we know $a_m(f) = \sigma_{k-1}(m)\, a_1(f)$ by (both parts of) the previous corollary. So we have
\[
  f = a_0(f) + a_1(f) \sum_{n \geq 1} \sigma_{k-1}(n) q^n = A + B E_k.
\]
But since $f$ and $E_k$ are modular forms of weight $k \neq 0$ and $A$ is a constant, we know $A = 0$. Comparing constant terms then gives $f = a_0(f) E_k$.
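Conversely, it is a standard fact (easily checked from the coefficient formula, using the identity $\sigma_{k-1}(m)\sigma_{k-1}(n) = \sum_{d \mid (m, n)} d^{k-1} \sigma_{k-1}(mn/d^2)$) that $E_k$ really is an eigenvector of every $T_m$, with eigenvalue $\sigma_{k-1}(m)$.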
Definition (Hecke eigenform). Let $f \in S_k \setminus \{0\}$. Then $f$ is a Hecke eigenform if for all $n \geq 1$, we have
\[
  T_n f = \lambda_n f
\]
for some $\lambda_n \in \mathbb{C}$. It is normalized if $a_1(f) = 1$.
We now state a theorem, which we cannot prove yet, because there is still
one missing ingredient. Instead, we will give a partial proof of it.
Theorem. There exists a basis for $S_k$ consisting of normalized Hecke eigenforms.
So this is actually a typical phenomenon!
Partial proof. We know that the $\{T_n\}$ are commuting operators on $S_k$.
Fact. There exists an inner product on $S_k$ for which the $\{T_n\}$ are self-adjoint.
Then by linear algebra, the $\{T_n\}$ can be simultaneously diagonalized.
Example. We take $k = 12$, where $\dim S_{12} = 1$. So everything in here is an eigenvector. In particular,
\[
  \Delta(z) = \sum_{n \geq 1} \tau(n) q^n
\]
is a normalized Hecke eigenform. So $\tau(n) = \lambda_n$. Thus, from the properties of the $T_n$, we know that
\[
  \tau(mn) = \tau(m)\tau(n) \quad\text{whenever } (m, n) = 1,
\]
\[
  \tau(p^{r+1}) = \tau(p)\tau(p^r) - p^{11}\tau(p^{r-1}) \quad\text{for } r \geq 1.
\]
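As a quick numerical check, using the standard values $\tau(1) = 1$, $\tau(2) = -24$, $\tau(3) = 252$, $\tau(4) = -1472$, $\tau(6) = -6048$ computed from $\Delta = q\prod_{n \geq 1}(1 - q^n)^{24}$, we have
\[
  \tau(6) = \tau(2)\tau(3) = (-24)(252) = -6048, \qquad \tau(4) = \tau(2)^2 - 2^{11}\tau(1) = 576 - 2048 = -1472.
\]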
We can do this similarly for $k = 16, 18, 20, 22, 26$, because $\dim S_k = 1$, with Hecke eigenform $f = E_{k-12}\Delta$.
Unfortunately, when $\dim S_k(\Gamma(1)) > 1$, there do not appear to be any “natural” eigenforms. It seems like we just have to take the space and diagonalize it by hand. For example, $S_{24}$ has dimension $2$, and the eigenvalues of the $T_n$ live in the strange field $\mathbb{Q}(\sqrt{144169})$ (note that $144169$ is a prime), and not in $\mathbb{Q}$. There does not seem to be a good reason why this is true; it appears that the nice things that happen for small values of $k$ happen only because there is no choice.