Part III — Algebras
Based on lectures by C. J. B. Brookes
Notes taken by Dexter Chua
Lent 2017
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
The aim of the course is to give an introduction to algebras. The emphasis will be
on non-commutative examples that arise in representation theory (of groups and Lie
algebras) and the theory of algebraic D-modules, though you will learn something about commutative algebras in passing.
Topics we discuss include:
– Artinian algebras. Examples, group algebras of finite groups, crossed products. Structure theory. Artin–Wedderburn theorem. Projective modules. Blocks. K_0.
– Noetherian algebras. Examples, quantum plane and quantum torus, differential operator algebras, enveloping algebras of finite dimensional Lie algebras. Structure theory. Injective hulls, uniform dimension and Goldie's theorem.
– Hochschild chain and cochain complexes. Hochschild homology and cohomology. Gerstenhaber algebras.
– Deformation of algebras.
– Coalgebras, bialgebras and Hopf algebras.
Pre-requisites
It will be assumed that you have attended a first course on ring theory, e.g. IB Groups,
Rings and Modules. Experience of other algebraic courses such as II Representation
Theory, Galois Theory or Number Fields, or III Lie algebras will be helpful but not
necessary.
Contents
0 Introduction
1 Artinian algebras
1.1 Artinian algebras
1.2 Artin–Wedderburn theorem
1.3 Crossed products
1.4 Projectives and blocks
1.5 K_0
2 Noetherian algebras
2.1 Noetherian algebras
2.2 More on A_n(k) and U(g)
2.3 Injective modules and Goldie’s theorem
3 Hochschild homology and cohomology
3.1 Introduction
3.2 Cohomology
3.3 Star products
3.4 Gerstenhaber algebra
3.5 Hochschild homology
4 Coalgebras, bialgebras and Hopf algebras
0 Introduction
We start with the definition of an algebra. Throughout the course, k will be a field.

Definition (k-algebra). A (unital) associative k-algebra is a k-vector space A together with a linear map m : A ⊗ A → A, called the product map, and a linear map u : k → A, called the unit map, such that

– The product induced by m is associative.
– u(1) is the identity of the multiplication.
In particular, we don’t require the product to be commutative. We usually
write m(x ⊗ y) as xy.
Example. Let K/k be a finite field extension. Then K is a (commutative) k-algebra.
Example. The n × n matrices M_n(k) over k form a non-commutative k-algebra.
Example. The quaternions H form an R-algebra, with an R-basis 1, i, j, k, and multiplication given by

  i^2 = j^2 = k^2 = −1,    ij = k,    ji = −k.

This is in fact a division algebra (or skew field), i.e. the non-zero elements have multiplicative inverses.
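These relations can be verified concretely via the standard representation of H by 2 × 2 complex matrices (an illustration; the particular matrices below are one common choice, not taken from the notes):

```python
import numpy as np

# A faithful representation of H by 2x2 complex matrices (one common choice).
one = np.eye(2)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]])
k = np.array([[0, 1j], [1j, 0]])

# Check the defining relations i^2 = j^2 = k^2 = -1 and ij = k = -ji.
assert np.allclose(i @ i, -one)
assert np.allclose(j @ j, -one)
assert np.allclose(k @ k, -one)
assert np.allclose(i @ j, k)
assert np.allclose(j @ i, -k)

# A non-zero quaternion a + bi + cj + dk is invertible: its matrix has
# determinant a^2 + b^2 + c^2 + d^2 > 0.
q = 1 * one + 2 * i + 3 * j + 4 * k
assert abs(np.linalg.det(q) - 30) < 1e-9  # 1 + 4 + 9 + 16 = 30
```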
Example. Let G be a finite group. Then the group algebra

  kG = { ∑ λ_g g : g ∈ G, λ_g ∈ k },

with the obvious multiplication induced by the group operation, is a k-algebra.
These are the associative algebras underlying the representation theory of
finite groups.
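The bilinear extension of the group multiplication is easy to sketch in code (an illustration not from the notes; the dict encoding of ∑ λ_g g is my own):

```python
from collections import defaultdict

def group_algebra_mult(x, y, op):
    """Multiply two elements of kG, represented as dicts {g: coefficient},
    where op(g, h) is the group operation.  This extends the group
    multiplication bilinearly, exactly as in the definition above."""
    result = defaultdict(int)
    for g, a in x.items():
        for h, b in y.items():
            result[op(g, h)] += a * b
    return dict(result)

# Example: the cyclic group Z/3 (written additively), with k = Q.
op = lambda g, h: (g + h) % 3
x = {0: 1, 1: 2}   # e + 2g
y = {1: 1, 2: 1}   # g + g^2
# (e + 2g)(g + g^2) = g + 3g^2 + 2e
print(group_algebra_mult(x, y, op))  # {1: 1, 2: 3, 0: 2}
```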
Most of the time, we will just care about algebras that are finite-dimensional
as
k
-vector spaces. However, often what we need for the proofs to work is not
that the algebra is finite-dimensional, but just that it is Artinian. These algebras
are defined by some finiteness condition on the ideals.
Definition (Ideal). A left ideal I of A is a k-subspace of A such that if x ∈ A and y ∈ I, then xy ∈ I. A right ideal is one where we require yx ∈ I instead.
An ideal is something that is both a left ideal and a right ideal.
Since the multiplication is not necessarily commutative, we have to make the
distinction between left and right things. Most of the time, we just talk about
the left case, as the other case is entirely analogous.
The definition we want is the following:
Definition (Artinian algebra). An algebra A is left Artinian if it satisfies the descending chain condition (DCC) on left ideals, i.e. if we have a descending chain of left ideals

  I_1 ≥ I_2 ≥ I_3 ≥ ···,

then there is some N such that I_{N+m} = I_N for all m ≥ 0.
We say an algebra is Artinian if it is both left and right Artinian.
Example. Any finite-dimensional algebra is Artinian.
The main classification theorem for Artinian algebras we will prove is the
following result:
Theorem (Artin–Wedderburn theorem). Let A be a left Artinian algebra such that the intersection of the maximal left ideals is zero. Then A is the direct sum of finitely many matrix algebras over division algebras.
When we actually get to the theorem, we will rewrite this in a way that
seems a bit more natural.
One familiar application of this theorem is in representation theory. The group
algebra of a finite group is finite-dimensional, and in particular Artinian. We
will later see that Maschke’s theorem is equivalent to saying that the hypothesis
of the theorem holds. So this puts a very strong constraint on what the group algebra looks like.
After studying Artinian rings, we’ll talk about Noetherian algebras.
Definition (Noetherian algebra). An algebra is left Noetherian if it satisfies the ascending chain condition (ACC) on left ideals, i.e. if

  I_1 ≤ I_2 ≤ I_3 ≤ ···

is an ascending chain of left ideals, then there is some N such that I_{N+m} = I_N for all m ≥ 0.
Similarly, we can define right Noetherian algebras, and say an algebra is
Noetherian if it is both left and right Noetherian.
We can again look at some examples.
Example. Again all finite-dimensional algebras are Noetherian.
Example. In the commutative case, Hilbert's basis theorem tells us a polynomial algebra k[X_1, ···, X_n] in finitely many variables is Noetherian. Similarly, the power series rings k[[X_1, ···, X_n]] are Noetherian.
Example. The universal enveloping algebras of finite-dimensional Lie algebras are the (associative!) algebras that underpin the representation theory of these Lie algebras.
Example. Some differential operator algebras are Noetherian. We assume char k = 0. Consider the polynomial ring k[X]. We have the operators "multiplication by X" and "differentiation with respect to X" on k[X]. We can form the algebra k[X, ∂/∂X] of differential operators on k[X], with multiplication given by the composition of operators. This is called the Weyl algebra A_1. We will show that this is a non-commutative Noetherian algebra.
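The defining relation of A_1 can be checked by letting both generators act on an actual polynomial; here is a small sketch using sympy (an illustration, not part of the notes):

```python
import sympy as sp

X = sp.symbols('X')

# Model the two generators of A_1 as operators on k[X] (char k = 0):
# multiplication by X, and d/dX.
mult_X = lambda f: sp.expand(X * f)
ddX = lambda f: sp.diff(f, X)

# The commutator [d/dX, X] acts as the identity: this is the relation
# that makes A_1 non-commutative.
f = X**3 + 5 * X + 7  # an arbitrary test polynomial
commutator = ddX(mult_X(f)) - mult_X(ddX(f))
assert sp.simplify(commutator - f) == 0  # [d/dX, X] f = f
```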
Example. Some group algebras are Noetherian. Clearly all group algebras of finite groups are Noetherian, but the group algebras of certain infinite groups are Noetherian as well. For example, we can take G to be the group of matrices

  ( 1 λ µ )
  ( 0 1 ν ) ,    λ, µ, ν ∈ Z,
  ( 0 0 1 )

and then kG is Noetherian. However, we shall not prove this.
We will see that all left Artinian algebras are left Noetherian. While there is
a general theory of non-commutative Noetherian algebras, it is not as useful as
the theory of commutative Noetherian algebras.
In the commutative case, we often look at Spec A, the set of prime ideals of A. However, in the non-commutative case there are sometimes very few prime ideals, and so Spec is not going to keep us busy.
Example. In the Weyl algebra A_1, the only prime ideals are 0 and A_1.
We will prove a theorem of Goldie:
Theorem (Goldie's theorem). Let A be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then A embeds in a finite direct sum of matrix algebras over division algebras.
Some types of Noetherian algebras can be thought of as non-commutative polynomial algebras and non-commutative power series, i.e. they are deformations of the analogous commutative algebras. For example, we say A_1 is a deformation of the polynomial algebra k[X, Y], where instead of having XY − YX = 0, we have XY − YX = 1. This also applies to enveloping algebras and Iwasawa algebras. We will study when one can deform the multiplication so that it remains
associative, and this is bound up with the cohomology theory of associative
algebras — Hochschild cohomology. The Hochschild complex has rich algebraic
structure, and this will allow us to understand how we can deform the algebra.
At the end, we shall quickly talk about bialgebras and Hopf algebras. In a
bialgebra, one also has a comultiplication map
A → A⊗A
, which in representation
theory is crucial in saying how to regard a tensor product of two representations
as a representation.
1 Artinian algebras
1.1 Artinian algebras
We continue writing down some definitions. We already defined left and right Artinian algebras in the introduction. Most examples we'll meet are in fact finite-dimensional vector spaces over k. However, there exist some more perverse examples:
Example. Let

  A = { ( r s ; 0 t ) : r ∈ Q, s, t ∈ R },

writing ( r s ; 0 t ) for the 2 × 2 matrix with rows (r, s) and (0, t). Then this is right Artinian but not left Artinian over Q. To see it is not left Artinian, note that there is an ideal

  I = { ( 0 s ; 0 0 ) : s ∈ R } ≅ R

of A, and a matrix ( r s ; 0 t ) acts on this on the left by sending ( 0 s′ ; 0 0 ) to ( 0 rs′ ; 0 0 ). Since R is an infinite-dimensional Q-vector space, one sees that there is an infinite strictly descending chain of ideals contained in I.
The fact that it is right Artinian is a direct verification. Indeed, it is not
difficult to enumerate all the right ideals, which is left as an exercise for the
reader.
As in the case of commutative algebra, we can study the modules of an
algebra.
Definition (Module). Let A be an algebra. A left A-module is a k-vector space M together with a bilinear map

  A ⊗ M → M,    a ⊗ m ↦ am,

such that (ab)m = a(bm) for all a, b ∈ A and m ∈ M. Right A-modules are defined similarly.
An A-A-bimodule is a vector space M that is both a left A-module and a right A-module, such that the two actions commute — for a, b ∈ A and x ∈ M, we have

  a(xb) = (ax)b.
Example. The algebra A itself is a left A-module. We write this as _A A, and call this the left regular representation. Similarly, the right action is denoted A_A.
These two actions are compatible by associativity, so A is an A-A-bimodule.
If we write End_k(A) for the k-linear maps A → A, then End_k(A) is naturally a k-algebra by composition, and we have a k-algebra homomorphism A → End_k(A) that sends a ∈ A to multiplication by a on the left. However, if we want to multiply on the right instead, it is no longer a k-algebra homomorphism A → End_k(A). Instead, it is a map A → End_k(A)^op, where
Definition (Opposite algebra). Let A be a k-algebra. We define the opposite algebra A^op to be the algebra with the same underlying vector space, but with multiplication given by

  x · y = yx.

Here on the left we have the multiplication in A^op and on the right we have the multiplication in A.

In general, a left A-module is a right A^op-module.
As in the case of ring theory, we can talk about prime ideals. However, we
will adopt a slightly different definition:
Definition (Prime ideal). An ideal P is prime if it is a proper ideal, and if I and J are ideals with IJ ⊆ P, then either I ⊆ P or J ⊆ P.
It is an exercise to check that this coincides in the commutative case with
the definition using elements.
Definition (Annihilator). Let M be a left A-module and m ∈ M. We define the annihilators to be

  Ann(m) = {a ∈ A : am = 0},
  Ann(M) = {a ∈ A : am = 0 for all m ∈ M} = ⋂_{m ∈ M} Ann(m).

Note that Ann(m) is a left ideal of A, and is in fact the kernel of the A-module homomorphism A → M given by x ↦ xm. We'll denote the image of this map by Am, a left submodule of M, and we have

  A/Ann(m) ≅ Am.

On the other hand, it is easy to see that Ann(M) is in fact a (two-sided) ideal.
Definition (Simple module). A non-zero module M is simple or irreducible if the only submodules of M are 0 and M.
It is easy to see that

Proposition. Let A be an algebra and I a left ideal. Then I is a maximal left ideal iff A/I is simple.
Example. Ann(m) is a maximal left ideal iff Am is irreducible.
Proposition. Let A be an algebra and M a simple module. Then M ≅ A/I for some (maximal) left ideal I of A.
Proof. Pick an arbitrary non-zero element m ∈ M, and define the A-module homomorphism ϕ : A → M by ϕ(a) = am. Then the image is a non-trivial submodule, and hence must be M. Then by the first isomorphism theorem, we have M ≅ A/ker ϕ.
Before we start doing anything, we note the following convenient lemma:
Lemma. Let M be a finitely-generated A-module. Then M has a maximal proper submodule M_0.
Proof. Let m_1, ···, m_k ∈ M be a minimal generating set. Then in particular N = ⟨m_1, ···, m_{k−1}⟩ is a proper submodule of M. Moreover, a submodule of M containing N is proper iff it does not contain m_k, and this property is preserved under increasing unions. So by Zorn's lemma, there is a maximal proper submodule.
Definition (Jacobson radical). The Jacobson radical J(A) of A is the intersection of all maximal left ideals.
This is in fact an ideal, and not just a left one, because

  J(A) = ⋂ {maximal left ideals} = ⋂_{m ∈ M, M simple} Ann(m) = ⋂_{M simple} Ann(M),

which we have established is an ideal. Yet, it is still not clear that this is independent of us saying "left" instead of "right". In fact it is independent, and this follows from the Nakayama lemma:
Lemma (Nakayama lemma). The following are equivalent for a left ideal I of A.

(i) I ≤ J(A).
(ii) For any finitely-generated left A-module M, if IM = M, then M = 0, where IM is the module generated by elements of the form am, with a ∈ I and m ∈ M.
(iii) G = {1 + a : a ∈ I} = 1 + I is a subgroup of the unit group of A.
In particular, this shows that the Jacobson radical is the largest ideal satisfying
(iii), which is something that does not depend on handedness.
Proof.

– (i) ⇒ (ii): Suppose I ≤ J(A) and M ≠ 0 is a finitely-generated A-module, and we'll see that IM ≠ M.

  Let N be a maximal submodule of M. Then M/N is a simple module, so for any m̄ ∈ M/N, we know Ann(m̄) is a maximal left ideal. So J(A) ≤ Ann(M/N). So IM ≤ J(A)M ≤ N < M.
– (ii) ⇒ (iii): Assume (ii). We let x ∈ I and set y = 1 + x. Hence 1 = y − x ∈ Ay + I. Since Ay + I is a left ideal, we know Ay + I = A. In other words, we know

    I · (A/Ay) = A/Ay.

  Now using (ii) on the finitely-generated module A/Ay (it is in fact generated by 1), we know that A/Ay = 0. So A = Ay. So there exists z ∈ A such that 1 = zy = z(1 + x). So (1 + x) has a left inverse, and this left inverse z lies in G, since we can write z = 1 − zx. So G is a subgroup of the unit group of A.
– (iii) ⇒ (i): Suppose I_1 is a maximal left ideal of A. Let x ∈ I. If x ∉ I_1, then I_1 + Ax = A by maximality of I_1. So 1 = y + zx for some y ∈ I_1 and z ∈ A. So y = 1 − zx ∈ G. So y is invertible. But y ∈ I_1. So I_1 = A. This is a contradiction. So we find that I ≤ I_1, and this is true for all maximal left ideals I_1. Hence I ≤ J(A).
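Condition (iii) can be seen concretely in a small example. For the algebra of upper-triangular 3 × 3 matrices, the Jacobson radical turns out to be the set of strictly upper-triangular matrices (a standard fact assumed here, not proved in the notes); the sketch below checks that 1 + a is then a unit, using that a is nilpotent:

```python
import numpy as np

# a is a strictly upper-triangular matrix, an element of the Jacobson
# radical of the upper-triangular matrix algebra (assumed fact).
a = np.array([[0., 2., 5.], [0., 0., 3.], [0., 0., 0.]])
assert np.allclose(np.linalg.matrix_power(a, 3), 0)  # a^3 = 0

# Since a is nilpotent, (1 + a)^{-1} = 1 - a + a^2 (geometric series).
one = np.eye(3)
inverse = one - a + a @ a
assert np.allclose((one + a) @ inverse, one)
assert np.allclose(inverse @ (one + a), one)
```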
We now come to the important definition:
Definition (Semisimple algebra). An algebra is semisimple if J(A) = 0.
We will very soon see that for Artinian algebras, being semi-simple is equivalent to a few other very nice properties, such as being completely reducible. For
now, we shall content ourselves with some examples.
Example. For any A, we know A/J(A) is always semisimple.
We can also define
Definition (Simple algebra). An algebra is simple if the only ideals are 0 and
A.
It is trivially true that any simple algebra is semi-simple — the Jacobson radical is an ideal, and it is not A. A particularly important example is the following:
Example. Consider M_n(k). We let e_i be the matrix with 1 in the (i, i)th entry and zero everywhere else. This is idempotent, i.e. e_i^2 = e_i. It is also straightforward to check that Ae_i consists of the matrices supported on a single column, i.e. those of the form

  ( 0 ··· 0 a_1 0 ··· 0 )
  ( 0 ··· 0 a_2 0 ··· 0 )
  (          ⋮          )
  ( 0 ··· 0 a_n 0 ··· 0 )

The non-zero column is, of course, the ith column. Similarly, e_i A consists of the matrices that are zero apart from in the ith row. These are left and right ideals respectively, and using them one checks that the only two-sided ideals are 0 and A.

As a left A-module, we can decompose A as

  _A A = ⊕_{i=1}^n Ae_i,

which is a decomposition into simple modules.
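The idempotents e_i and the column decomposition are easy to verify numerically; a small sketch (an illustration, not part of the notes):

```python
import numpy as np

n = 3

def e(i):
    """The idempotent e_i: 1 in the (i, i) entry, 0 elsewhere."""
    m = np.zeros((n, n))
    m[i, i] = 1.0
    return m

rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))

# e_i is idempotent, and M e_i keeps only the i-th column of M.
assert np.allclose(e(1) @ e(1), e(1))
assert np.allclose((M @ e(1))[:, 1], M[:, 1])
assert np.allclose(np.delete(M @ e(1), 1, axis=1), 0)

# The left regular module decomposes as A = Ae_0 + ... + Ae_{n-1}.
assert np.allclose(sum(M @ e(i) for i in range(n)), M)
```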
Definition (Completely reducible). A module M of A is completely reducible iff it is a sum of simple modules.
Here in the definition, we said “sum” instead of “direct sum”, but the
following proposition shows it doesn’t matter:
Proposition. Let M be an A-module. Then the following are equivalent:
(i) M is completely reducible.
(ii) M is the direct sum of simple modules.
(iii) Every submodule of M has a complement, i.e. for any submodule N of M, there is a complement N′ such that M = N ⊕ N′.
Often, the last condition is the most useful in practice.
Proof.
– (i) ⇒ (ii): Let M be completely reducible, and consider the set

    { {S_α ≤ M} : S_α are simple, ∑ S_α is a direct sum }.

  Notice this set is closed under increasing unions, since the property of being a direct sum is only checked on finitely many elements. So by Zorn's lemma, it has a maximal element, and let N be the sum of the elements.

  Suppose this were not all of M. Then there is some simple S ≤ M such that S ⊄ N. Then S ∩ N ⊊ S. By simplicity, they intersect trivially. So S + N is a direct sum, which is a contradiction. So we must have N = M, and M is the direct sum of simple modules.
– (ii) ⇒ (i) is trivial.
– (i) ⇒ (iii): Let N ≤ M be a submodule, and consider

    { {S_α ≤ M} : S_α are simple, N + ∑ S_α is a direct sum }.

  Again this set has a maximal element, and let P be the direct sum of those S_α. Again if P ⊕ N is not all of M, then pick a simple S ≤ M such that S is not contained in P ⊕ N. Then again S ⊕ P ⊕ N is a direct sum, which is a contradiction.
– (iii) ⇒ (i): It suffices to show that if N < M is a proper submodule, then there exists a simple module that intersects N trivially. Indeed, we can take N to be the sum of all simple submodules of M, and this forces N = M.

  To do so, pick an x ∉ N, and let P be a submodule of M maximal among those satisfying P ∩ N = 0 and x ∉ N ⊕ P. Then N ⊕ P is a proper submodule of M. Let S be a complement. We claim S is simple.

  If not, we can find a proper non-zero submodule S′ of S. Let Q be a complement of N ⊕ P ⊕ S′. Then we can write

    M = N ⊕ P ⊕ S′ ⊕ Q,    x = n + p + s + q.

  By assumption, s and q are not both zero. We wlog assume s is non-zero. Then P ⊕ Q is a larger submodule satisfying (P ⊕ Q) ∩ N = 0 and x ∉ N ⊕ (P ⊕ Q). This is a contradiction. So S is simple, and we are done.
Using these different characterizations, we can prove that completely reducible modules are closed under the familiar operations.
Proposition.
Sums, submodules and quotients of completely reducible modules
are completely reducible.
Proof. It is clear by definition that sums of completely reducible modules are completely reducible.

To see that quotients are completely reducible, let M be completely reducible and N a submodule. Writing M as a sum of simple modules, the image of each simple summand under the quotient map M → M/N is a quotient of a simple module, hence either zero or simple. So M/N is a sum of simple modules.

Finally, to see that submodules are completely reducible, let N ≤ M. By the previous proposition, we can write M = N ⊕ P for some P. Then N ≅ M/P, which is completely reducible by the above.
We will show that every left Artinian algebra is completely reducible over itself iff it is semi-simple. We can in fact prove a more general fact for A-modules. To do so, we need a generalization of the Jacobson radical.
Definition (Radical). For a module M, we write Rad(M) for the intersection of the maximal submodules of M, and call it the radical of M.
Thus, we have Rad(_A A) = J(A) = Rad(A_A).
Proposition. Let M be an A-module satisfying the descending chain condition on submodules. Then M is completely reducible iff Rad(M) = 0.
Proof. It is clear that if M is completely reducible, then Rad(M) = 0. Indeed, we can write

  M = ⊕_{α ∈ Λ} S_α,

where each S_α is simple. Then

  Rad(M) ≤ ⋂_{α ∈ Λ} ⊕_{β ∈ Λ∖{α}} S_β = {0},

since each ⊕_{β ≠ α} S_β is a maximal submodule.

Conversely, if Rad(M) = 0, we note that since M satisfies the descending chain condition on submodules, there must be a finite collection M_1, ···, M_n of maximal submodules whose intersection vanishes. Then consider the map

  M → ⊕_{i=1}^n M/M_i,    x ↦ (x + M_1, x + M_2, ···, x + M_n).

The kernel of this map is the intersection of the M_i, which is trivial. So this embeds M as a submodule of ⊕ M/M_i. But each M/M_i is simple, so M is a submodule of a completely reducible module, hence completely reducible.
Corollary. If A is a semi-simple left Artinian algebra, then _A A is completely reducible.
Corollary. If A is a semi-simple left Artinian algebra, then every left A-module is completely reducible.
Proof. Every A-module M is a quotient of sums of _A A. Explicitly, we have a map

  ⊕_{m ∈ M} _A A → M,    (a_m) ↦ ∑ a_m m.

This map is clearly surjective, and thus M is a quotient of ⊕_M (_A A).
If A is not semi-simple, then it turns out it is rather easy to figure out the radical of M, at least if M is finitely-generated.
Lemma. Let A be left Artinian, and M a finitely generated left A-module. Then J(A)M = Rad(M).
Proof. Let M_0 be a maximal submodule of M. Then M/M_0 is simple, and is in fact A/I for some maximal left ideal I. Then we have

  J(A) · (M/M_0) = 0,

since J(A) ≤ I. Therefore J(A)M ≤ M_0. So J(A)M ≤ Rad(M).

Conversely, we know M/J(A)M is an A/J(A)-module, and is hence completely reducible, as A/J(A) is semi-simple (and left Artinian). Since an A-submodule of M/J(A)M is the same as an A/J(A)-submodule, it follows that it is completely reducible as an A-module as well. So

  Rad(M/J(A)M) = 0,

and hence Rad(M) ≤ J(A)M.
Proposition. Let A be left Artinian. Then

(i) J(A) is nilpotent, i.e. there exists some r such that J(A)^r = 0.
(ii) If M is a finitely-generated left A-module, then it is both left Artinian and left Noetherian.
(iii) A is left Noetherian.
Proof.

(i) Since A is left Artinian, and {J(A)^r : r ∈ N} is a descending chain of ideals, it must eventually be constant. So J(A)^r = J(A)^{r+1} for some r. If this is non-zero, then again using the descending chain condition, we see there is a left ideal I with J(A)^r · I ≠ 0 that is minimal with this property (one such ideal exists, say J(A) itself).

    Now pick x ∈ I with J(A)^r · x ≠ 0. Since J(A)^{2r} = J(A)^r, it follows that J(A)^r (J(A)^r x) ≠ 0. So by minimality, J(A)^r x ≥ I. But the other inclusion clearly holds. So they are equal. So there exists some a ∈ J(A)^r with x = ax. So

      (1 − a)x = 0.

    But 1 − a is a unit, since a ∈ J(A). So x = 0. This is a contradiction. So J(A)^r = 0.
(ii) Let M_i = J(A)^i M. Then M_i/M_{i+1} is annihilated by J(A), and hence completely reducible (it is a module over the semi-simple algebra A/J(A)). Since M is a finitely generated left A-module over a left Artinian algebra, it satisfies the descending chain condition for submodules (exercise), and hence so does M_i/M_{i+1}.

    So we know M_i/M_{i+1} is a finite sum of simple modules, and therefore satisfies the ascending chain condition. So M_i/M_{i+1} is left Noetherian, and hence M is (exercise).
(iii) Follows from (ii) since A is a finitely-generated left A-module.
1.2 Artin–Wedderburn theorem
We are going to state the Artin–Wedderburn theorem for right (as opposed to
left) things, because this makes the notation easier for us.
Theorem (Artin–Wedderburn theorem). Let A be a semisimple right Artinian algebra. Then

  A = ⊕_{i=1}^r M_{n_i}(D_i)

for some division algebras D_i, and these factors are uniquely determined. A has exactly r isomorphism classes of simple (right) modules S_i, and

  End_A(S_i) = {A-module homomorphisms S_i → S_i} ≅ D_i,

and dim_{D_i}(S_i) = n_i. If A is simple, then r = 1.
If we had the left version instead, then we would need to insert op's somewhere.

Artin–Wedderburn is an easy consequence of two trivial lemmas. The key idea that leads to Artin–Wedderburn is the observation that the map A → End_A(A_A) sending a to left-multiplication by a is an isomorphism of algebras. So we need to first understand endomorphism algebras, starting with Schur's lemma.
Lemma (Schur's lemma). Let M_1, M_2 be simple right A-modules. Then either M_1 ≅ M_2, or Hom_A(M_1, M_2) = 0. If M is a simple A-module, then End_A(M) is a division algebra.
Proof. A non-zero A-module homomorphism M_1 → M_2 must be injective, as the kernel is a submodule. Similarly, the image has to be the whole thing, since the image is a submodule. So this must be an isomorphism, and in particular has an inverse. So the last part follows as well.
As mentioned, we are going to exploit the isomorphism A ≅ End_A(A_A). This is easy to see directly, but we can prove a slightly more general result, for the sake of it:
Lemma.

(i) If M is a right A-module and e is an idempotent in A, i.e. e^2 = e, then Me ≅ Hom_A(eA, M).
(ii) We have eAe ≅ End_A(eA).

In particular, we can take e = 1, and recover End_A(A_A) ≅ A.
Proof.

(i) We define maps

      f_1 : Me → Hom_A(eA, M),    me ↦ (ex ↦ mex),
      f_2 : Hom_A(eA, M) → Me,    α ↦ α(e).

    We note that α(e) = α(e^2) = α(e)e ∈ Me. So f_2 is well-defined. By inspection, these maps are inverse to each other. So we are done.

    Note that we might worry that we have to pick representatives me and ex for the map f_1, but in fact we can also write it as f_1(a)(y) = ay, since e is idempotent. So we are safe.

(ii) Immediate from the above by putting M = eA.
Lemma. Let M be a completely reducible right A-module. We write

  M = ⊕ S_i^{n_i},

where the S_i are distinct simple A-modules. Write D_i = End_A(S_i), which we already know is a division algebra. Then

  End_A(S_i^{n_i}) ≅ M_{n_i}(D_i),

and

  End_A(M) = ⊕ M_{n_i}(D_i).
Proof. The result for End_A(S_i^{n_i}) is just the familiar fact that a homomorphism S^n → S^m is given by an m × n matrix of maps S → S (in the case of vector spaces over a field k, we have End(k) ≅ k, so they are matrices with entries in k). Then by Schur's lemma, we have

  End_A(M) = ⊕_i End_A(S_i^{n_i}) ≅ ⊕ M_{n_i}(D_i).
We now prove Artin–Wedderburn.
Proof of Artin–Wedderburn. If A is semi-simple, then it is completely reducible as a right A-module. So we have

  A ≅ End(A_A) ≅ ⊕ M_{n_i}(D_i).
We now decompose each M_{n_i}(D_i) into a sum of simple modules. We know each M_{n_i}(D_i) is a non-trivial M_{n_i}(D_i)-module in the usual way, and the action of the other summands is trivial. We can simply decompose each M_{n_i}(D_i) as the sum of the submodules consisting of matrices supported on a single row, i.e. those of the form

  ( 0   0   ···  0         0     )
  (              ⋮               )
  ( a_1 a_2 ··· a_{n_i−1} a_{n_i} )
  (              ⋮               )
  ( 0   0   ···  0         0     )

and there are n_i such summands. We immediately see that if we write S_i for this submodule, then we have dim_{D_i}(S_i) = n_i.
Finally, we have to show that every simple module S of A is one of the S_i. We simply have to note that if S is a simple A-module, then there is a non-trivial map f : A → S (say by picking a non-zero x ∈ S and defining f(a) = xa). Then in the decomposition of A into a direct sum of simple submodules, there must be one factor S_i such that f|_{S_i} is non-trivial. Then by Schur's lemma, this is in fact an isomorphism S_i ≅ S.
This was for semi-simple algebras. For a general right Artinian algebra, we know that A/J(A) is semi-simple and inherits the Artinian property. Then Artin–Wedderburn applies to A/J(A).

Some of us might be scared of division algebras. Sometimes, we can get away with not talking about them. If A is not just Artinian, but finite-dimensional, then so are the D_i.
Now pick an arbitrary x ∈ D_i. Then the sub-algebra of D_i generated by x is commutative. So it is in fact a subfield, and finite-dimensionality means it is algebraic over k. Now if we assume that k is algebraically closed, then x must live in k. So we've shown that these D_i must be k itself. Thus we get
Corollary. If k is algebraically closed and A is a finite-dimensional semi-simple k-algebra, then

  A ≅ ⊕ M_{n_i}(k).
This is true, for example, when k = C.
We shall end this section by applying our results to group algebras. Recall
the following definition:
Definition (Group algebra). Let G be a group and k a field. The group algebra of G over k is

  kG = { ∑ λ_g g : g ∈ G, λ_g ∈ k }.

This has a bilinear multiplication given by the obvious formula

  (λ_g g)(µ_h h) = λ_g µ_h (gh).
The first thing to note is that group algebras are almost always semi-simple.
Theorem (Maschke's theorem). Let G be a finite group and p ∤ |G|, where p = char k, so that |G| is invertible in k. Then kG is semi-simple.
Proof. We show that any submodule V of a kG-module U has a complement. Let π : U → V be any k-vector space projection, and define a new map

  π′ = (1/|G|) ∑_{g ∈ G} g π g^{−1} : U → V.

It is easy to see that this is a kG-module homomorphism U → V, and is a projection. So we have

  U = V ⊕ ker π′,

and this gives a kG-module complement.
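The averaging trick in this proof can be carried out numerically; the following sketch (with G = Z/2 acting on k^2 by swapping coordinates, a choice of example not from the notes) shows a projection becoming kG-linear after averaging:

```python
import numpy as np

# G = Z/2 acts on U = k^2 by swapping coordinates; V = span{(1, 1)} is a
# submodule.
S = np.array([[0., 1.], [1., 0.]])   # the action of the non-identity g
P = np.array([[1., 0.], [1., 0.]])   # a k-vector space projection onto V

# Average over the group: pi' = (1/|G|) * sum_g g pi g^{-1}.
P_avg = (P + S @ P @ np.linalg.inv(S)) / 2

assert np.allclose(P_avg @ P_avg, P_avg)                  # still a projection
assert np.allclose(P_avg @ S, S @ P_avg)                  # now kG-linear
assert np.allclose(P_avg @ np.array([1., 1.]), [1., 1.])  # fixes V
# Its kernel span{(1, -1)} is then a kG-module complement of V.
assert np.allclose(P_avg @ np.array([1., -1.]), 0)
```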
There is a converse to Maschke’s theorem:
Theorem. Let G be finite and kG semi-simple. Then char k ∤ |G|.
Proof. We note that there is a simple kG-module S, given by the trivial module. This is a one-dimensional k-vector space. We have

  D = End_{kG}(S) = k.

Now suppose kG is semi-simple. Then by Artin–Wedderburn, there must be only one summand isomorphic to S in the decomposition of kG, since dim_D(S) = 1.
Consider the following two ideals of kG: we let

  I_1 = { ∑ λ_g g ∈ kG : ∑ λ_g = 0 }.

This is in fact a two-sided ideal of kG. We also have the central ideal spanned by the sum of all group elements, given by

  I_2 = { λ ∑_{g ∈ G} g : λ ∈ k }.
Now if char k | |G|, then I_2 ⊆ I_1. So we can write

  kG = kG/I_1 ⊕ I_1 = kG/I_1 ⊕ I_2 ⊕ ···.

But G acts trivially on kG/I_1 and on I_2, and they both have dimension 1, so each gives a copy of the trivial module S. This is a contradiction. So we must have char k ∤ |G|.
We can do a bit more representation theory. Recall that when k is algebraically closed and has characteristic zero, the number of simple kG-modules is the number of conjugacy classes of G. There is a more general result for a general characteristic p field:
Theorem. Let k be algebraically closed of characteristic p, and G be finite. Then the number of simple kG-modules (up to isomorphism) is equal to the number of conjugacy classes of elements of order not divisible by p. These are known as the p-regular elements.
We immediately deduce that
Corollary. If |G| = p^r for some r and p is prime, then the trivial module is the only simple kG-module, when char k = p.
Note that we can prove this directly rather than using the theorem, by showing that I = ker(kG → k) is a nilpotent ideal, and annihilates all simple modules.
Proof sketch of theorem. The number of simple kG-modules is just the number of simple kG/J(kG)-modules, as J(kG) acts trivially on every simple module. There is a useful trick to figure out the number of simple A-modules for a given semi-simple A. Suppose we have a decomposition

  A ≅ ⊕_{i=1}^r M_{n_i}(k).
Then we know r is the number of simple A-modules. We now consider [A, A], the k-subspace generated by elements of the form xy − yx. Then we see that

  A/[A, A] ≅ ⊕_{i=1}^r M_{n_i}(k)/[M_{n_i}(k), M_{n_i}(k)].
Now by linear algebra, we know [M_{n_i}(k), M_{n_i}(k)] is the space of trace zero matrices, and so we know

  dim_k M_{n_i}(k)/[M_{n_i}(k), M_{n_i}(k)] = 1.

Hence we know

  dim_k A/[A, A] = r.
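The linear algebra fact used here is easy to check by machine; the sketch below (an illustration, for n_i = 2) verifies that the commutators span exactly the trace-zero matrices:

```python
import itertools
import numpy as np

# Check that [M_2(k), M_2(k)] is the 3-dimensional space of trace-zero
# matrices, so that M_2(k)/[M_2(k), M_2(k)] has dimension 1.
def basis_matrix(i, j):
    m = np.zeros((2, 2))
    m[i, j] = 1.0
    return m

units = [basis_matrix(i, j) for i in range(2) for j in range(2)]
commutators = [x @ y - y @ x for x, y in itertools.product(units, units)]

# All commutators have trace zero...
assert all(abs(np.trace(c)) < 1e-12 for c in commutators)
# ...and they span a space of dimension 3 = 2^2 - 1.
flat = np.array([c.flatten() for c in commutators])
assert np.linalg.matrix_rank(flat) == 3
```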
Thus we need to compute

  dim_k (kG/J(kG)) / [kG/J(kG), kG/J(kG)].
We then note the following facts:
(i) For a general algebra A, we have

      (A/J(A)) / [A/J(A), A/J(A)] ≅ A / ([A, A] + J(A)).

(ii) Let g_1, ···, g_m be conjugacy class representatives of G. Then {g_i + [kG, kG]} forms a k-vector space basis of kG/[kG, kG].

(iii) If g_1, ···, g_r is a set of representatives of the p-regular conjugacy classes, then

      { g_i + ([kG, kG] + J(kG)) }

     forms a basis of kG/([kG, kG] + J(kG)).
Hence the result follows.
One may find it useful to note that [kG, kG] + J(kG) consists of the elements x ∈ kG such that x^{p^s} ∈ [kG, kG] for some s.
In this proof, we look at A/[A, A]. However, the usual proof of the result in characteristic zero looks at the centre Z(A). The relation between these two objects is that the first is the 0th Hochschild homology group of A, while the second is the 0th Hochschild cohomology group of A.
1.3 Crossed products
Number theorists are often interested in representations of Galois groups and $kG$-modules where $k$ is an algebraic number field, e.g. $k = \mathbb{Q}$. In this case, the $D_i$'s appearing in Artin–Wedderburn may be non-commutative.
We have already met one case of a non-commutative division ring, namely the quaternions $\mathbb{H}$. This is in fact an example of a general construction.
Definition (Crossed product). The crossed product of a $k$-algebra $B$ and a group $G$ is specified by the following data:
– A group homomorphism $\phi: G \to \operatorname{Aut}_k(B)$, written $\phi_g(\lambda) = \lambda^g$;
– A function $\Psi: G \times G \to B$.
The crossed product algebra has underlying set
\[
  \left\{ \sum \lambda_g g : \lambda_g \in B \right\},
\]
with multiplication defined by
\[
  \lambda g \cdot \mu h = \lambda \mu^g \Psi(g, h)(gh).
\]
The function $\Psi$ is required to be such that the resulting product is associative.
We should think of $\mu^g$ as specifying what happens when we move $g$ past $\mu$, and then $\Psi(g, h)(gh)$ is the product of $g$ and $h$ in the crossed product.
Usually, we take $B = K$, a Galois extension of $k$, and $G = \operatorname{Gal}(K/k)$. Then the action $\phi_g$ is the natural action of $G$ on the elements of $K$, and we restrict to maps $\Psi: G \times G \to K^\times$ only.
Example. Consider $B = K = \mathbb{C}$ and $k = \mathbb{R}$. Then $G = \operatorname{Gal}(\mathbb{C}/\mathbb{R}) \cong \mathbb{Z}/2\mathbb{Z} = \{e, g\}$, where $g$ is complex conjugation. The elements of $\mathbb{H}$ are of the form
\[
  \lambda_e e + \lambda_g g,
\]
where $\lambda_e, \lambda_g \in \mathbb{C}$, and we will write
\[
  1 \cdot g = j, \quad i \cdot g = k, \quad 1 \cdot e = 1, \quad i \cdot e = i.
\]
Now we want to impose
\[
  -1 = j^2 = 1g \cdot 1g = \Psi(g, g)e.
\]
So we set $\Psi(g, g) = -1$. We can similarly work out what we want the other values of $\Psi$ to be.
Note that in general, crossed products need not be division algebras.
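The quaternion example can be realized concretely in code. The sketch below implements the crossed product $(\mathbb{C}, \operatorname{Gal}(\mathbb{C}/\mathbb{R}), \Psi)$ with $\Psi(g, g) = -1$ and $\Psi = 1$ otherwise, representing elements as coefficient dictionaries (this encoding is my own choice), and checks the quaternion relations:

```python
# Elements are maps {e, g} -> C.  Multiplication follows the crossed product rule
# (lam x)(mu y) = lam * phi_x(mu) * Psi(x, y) * (xy), with phi_g = conjugation.
Psi = {('e', 'e'): 1, ('e', 'g'): 1, ('g', 'e'): 1, ('g', 'g'): -1}
mul_G = {('e', 'e'): 'e', ('e', 'g'): 'g', ('g', 'e'): 'g', ('g', 'g'): 'e'}

def phi(x, mu):
    return mu.conjugate() if x == 'g' else mu

def mult(a, b):
    out = {'e': 0j, 'g': 0j}
    for x, lam in a.items():
        for y, mu in b.items():
            out[mul_G[(x, y)]] += lam * phi(x, mu) * Psi[(x, y)]
    return out

i_ = {'e': 1j, 'g': 0j}       # the element i.e
j_ = {'e': 0j, 'g': 1 + 0j}   # the element 1.g, i.e. j
k_ = mult(i_, j_)             # k = ij, which comes out as i.g

# i^2 = j^2 = k^2 = -1, and ij = -ji:
print(mult(i_, i_), mult(j_, j_), mult(k_, k_))
print(mult(i_, j_), mult(j_, i_))
```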
The crossed product is not just a $k$-algebra. It has a natural structure of a $G$-graded algebra, in the sense that we can write it as a direct sum
\[
  BG = \bigoplus_{g \in G} Bg,
\]
and we have $Bg_1 \cdot Bg_2 \subseteq B g_1 g_2$.
Focusing on the case where $K/k$ is a Galois extension, we use the notation $(K, G, \Psi)$, where $\Psi: G \times G \to K^\times$. Associativity of these crossed products is equivalent to a 2-cocycle condition on $\Psi$, which you will be asked to make precise on the first example sheet.
Two crossed products $(K, G, \Psi_1)$ and $(K, G, \Psi_2)$ are isomorphic iff the map
\[
  G \times G \to K^\times, \quad (g, h) \mapsto \Psi_1(g, h)(\Psi_2(g, h))^{-1}
\]
satisfies a 2-coboundary condition, which is again left for the first example sheet. Therefore the second (group) cohomology
\[
  \frac{\{\text{2-cocycles } G \times G \to K^\times\}}{\{\text{2-coboundaries } G \times G \to K^\times\}}
\]
determines the isomorphism classes of (associative) crossed products $(K, G, \Psi)$.
Definition (Central simple algebra). A central simple $k$-algebra is a finite-dimensional $k$-algebra which is a simple algebra, and with center $Z(A) = k$.
Note that the center of a simple algebra is always a field: if $z \in Z(A)$ is non-zero, then $zA$ is a non-zero two-sided ideal, so $zA = A$ and $z$ is invertible. Hence any simple $k$-algebra can be made into a central simple algebra simply by enlarging the base field.
Example. $M_n(k)$ is a central simple algebra.
The point of talking about these is the following result:
Fact. Any central simple $k$-algebra is of the form $M_n(D)$ for some division algebra $D$ which is also a central simple $k$-algebra, and is a crossed product $(K, G, \Psi)$.
Note that when $K = \mathbb{C}$ and $k = \mathbb{R}$, the second cohomology group has 2 elements, and we get that the only central simple $\mathbb{R}$-algebras are $M_n(\mathbb{R})$ and $M_n(\mathbb{H})$.
For amusement, we also note the following theorem:
Fact (Wedderburn). Every finite division algebra is a field.
1.4 Projectives and blocks
In general, if $A$ is not semi-simple, then it is not possible to decompose $A$ as a direct sum of simple modules. However, what we can do is to decompose it as a direct sum of indecomposable projectives.
We begin with the definition of a projective module.
Definition (Projective module). An $A$-module $P$ is projective if, given modules $M$ and $M'$, a map $\theta: P \to M$, and a surjection $\alpha: M' \to M$ (i.e. $M' \xrightarrow{\alpha} M \to 0$ is exact), there exists a map $\beta: P \to M'$ such that the resulting triangle commutes, i.e.
\[
  \alpha \circ \beta = \theta.
\]
Equivalently, if we have a short exact sequence
\[
  0 \to N \to M' \to M \to 0,
\]
then the sequence
\[
  0 \to \operatorname{Hom}(P, N) \to \operatorname{Hom}(P, M') \to \operatorname{Hom}(P, M) \to 0
\]
is exact.
Note that we get exactness at $\operatorname{Hom}(P, N)$ and $\operatorname{Hom}(P, M')$ for any $P$ at all. Projectivity means the sequence is also exact at $\operatorname{Hom}(P, M)$.
Example. Free modules are always projective.
In general, projective modules are “like” free modules. We all know that free
modules are nice, and most of the time, when we want to prove things about
free modules, we are just using the property that they are projective. It is also
possible to understand projective modules in an algebro-geometric way — they
are “locally free” modules.
It is convenient to characterize projective modules as follows:
Lemma. The following are equivalent:
(i) $P$ is projective.
(ii) Every surjective map $\varphi: M \to P$ splits, i.e. $M \cong \ker \varphi \oplus N$ where $N \cong P$.
(iii) $P$ is a direct summand of a free module.
Proof.
– (i) $\Rightarrow$ (ii): Apply the lifting property to $\operatorname{id}: P \to P$ and the surjection $\varphi: M \to P$. The lifting gives an embedding of $P$ into $M$ that complements $\ker \varphi$ (by the splitting lemma, or by checking it directly).
– (ii) $\Rightarrow$ (iii): Every module admits a surjection from a free module (e.g. the free module generated by the elements of $P$).
– (iii) $\Rightarrow$ (i): It suffices to show that direct summands of projectives are projective. Suppose $P$ is projective, and $P \cong A \oplus B$. Given a map $A \to M$ and a surjection $M' \to M$, we can extend the map to $A \oplus B \to M$ by sending $B$ to $0$. Then since $A \oplus B \cong P$ is projective, we obtain a lifting $A \oplus B \to M'$, and restricting to $A$ gives the desired lifting.
Our objective is to understand the direct summands of a general Artinian $k$-algebra $A$, not necessarily semi-simple. Since $A$ is itself a free $A$-module, we know these direct summands are always projective.
Since $A$ is not necessarily semi-simple, it is in general impossible to decompose it as a direct sum of simples. What we can do, though, is to decompose it as a direct sum of indecomposable modules.
Definition (Indecomposable). A non-zero module $M$ is indecomposable if $M$ cannot be expressed as the direct sum of two non-zero submodules.
Note that since $A$ is (left) Artinian, it can always be decomposed as a finite sum of indecomposable (left) submodules. Sometimes, we are also interested in decomposing $A$ as a sum of (two-sided) ideals. These are called blocks.
Definition (Block). The blocks are the direct summands of $A$ that are indecomposable as ideals.
Example. If $A$ is semi-simple Artinian, then Artin–Wedderburn tells us
\[
  A = \bigoplus M_{n_i}(D_i),
\]
and the $M_{n_i}(D_i)$ are the blocks.
We already know that every Artinian module can be decomposed as a direct
sum of indecomposables. The first question to ask is whether this is unique. We
note the following definitions:
Definition (Local algebra). An algebra is local if it has a unique maximal left ideal, which is $J(A)$, and which is also the unique maximal right ideal.
If so, then $A/J(A)$ is a division algebra. This name, of course, comes from algebraic geometry (cf. local rings).
Definition (Unique decomposition property). A module $M$ has the unique decomposition property if $M$ is a finite direct sum of indecomposable modules, and whenever
\[
  M = \bigoplus_{i=1}^m M_i = \bigoplus_{i=1}^n M_i',
\]
then $n = m$, and, after reordering, $M_i \cong M_i'$.
We want to prove that $A$ as an $A$-module always has the unique decomposition property. The first step is the following criterion for determining the unique decomposition property.
Theorem (Krull–Schmidt theorem). Suppose $M$ is a finite sum of indecomposable $A$-modules $M_i$, with each $\operatorname{End}(M_i)$ local. Then $M$ has the unique decomposition property.
Proof. Let
\[
  M = \bigoplus_{i=1}^m M_i = \bigoplus_{i=1}^n M_i'.
\]
We prove this by induction on $m$. If $m = 1$, then $M$ is indecomposable. Then we must have $n = 1$ as well, and the result follows.
For $m > 1$, we consider the maps
\[
  \alpha_i: M_i' \hookrightarrow M \twoheadrightarrow M_1, \qquad \beta_i: M_1 \hookrightarrow M \twoheadrightarrow M_i'.
\]
We observe that
\[
  \operatorname{id}_{M_1} = \sum_{i=1}^n \alpha_i \circ \beta_i: M_1 \to M_1.
\]
Since $\operatorname{End}_A(M_1)$ is local, we know some $\alpha_i \circ \beta_i$ must be invertible, i.e. a unit, as they cannot all lie in the Jacobson radical. We may wlog assume $\alpha_1 \circ \beta_1$ is a unit. If this is the case, then both $\alpha_1$ and $\beta_1$ have to be invertible. So $M_1 \cong M_1'$.
Consider the map $\varphi = \operatorname{id} - \theta$, where $\theta$ is the composite
\[
  \theta: M \twoheadrightarrow M_1 \xrightarrow{\ \alpha_1^{-1}\ } M_1' \hookrightarrow M \twoheadrightarrow \bigoplus_{i=2}^m M_i \hookrightarrow M.
\]
Then $\varphi(M_1') = M_1$, so $\varphi|_{M_1'}$ looks like $\alpha_1$. Also
\[
  \varphi\left( \bigoplus_{i=2}^m M_i \right) = \bigoplus_{i=2}^m M_i,
\]
so $\varphi|_{\bigoplus_{i=2}^m M_i}$ looks like the identity map. So in particular, we see that $\varphi$ is surjective. However, if $\varphi(x) = 0$, this says $x = \theta(x)$, so
\[
  x \in \bigoplus_{i=2}^m M_i.
\]
But then $\theta(x) = 0$, and thus $x = 0$. Thus $\varphi$ is an automorphism of $M$ with $\varphi(M_1') = M_1$. So this gives an isomorphism
\[
  \bigoplus_{i=2}^m M_i \cong M/M_1 \cong M/M_1' \cong \bigoplus_{i=2}^n M_i',
\]
and so we are done by induction.
Now it remains to prove that the endomorphism rings are local. Recall the
following result from linear algebra.
Lemma (Fitting). Suppose $M$ is a module with both the ACC and DCC on submodules, and let $f \in \operatorname{End}_A(M)$. Then for large enough $n$, we have
\[
  M = \operatorname{im} f^n \oplus \ker f^n.
\]
Proof. By ACC and DCC, we may choose $n$ large enough so that
\[
  f^n: f^n(M) \to f^{2n}(M)
\]
is an isomorphism: as we keep iterating $f$, the images form a descending chain and the kernels an ascending chain, and these have to terminate.
If $m \in M$, then we can write
\[
  f^n(m) = f^{2n}(m_1)
\]
for some $m_1$. Then
\[
  m = f^n(m_1) + (m - f^n(m_1)) \in \operatorname{im} f^n + \ker f^n,
\]
and also
\[
  \operatorname{im} f^n \cap \ker f^n = \ker(f^n: f^n(M) \to f^{2n}(M)) = 0.
\]
So done.
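For matrices this is plain linear algebra. The following sketch (my own toy example, using numpy) exhibits the decomposition $M = \operatorname{im} f^n \oplus \ker f^n$ for an endomorphism that is nilpotent on one block and invertible on another:

```python
import numpy as np

# f acts on R^4: nilpotent on the first two coordinates, invertible on the last two.
f = np.block([
    [np.array([[0., 1.], [0., 0.]]), np.zeros((2, 2))],
    [np.zeros((2, 2)),               np.array([[2., 1.], [0., 3.]])],
])

n = 4  # large enough for the image/kernel chains to stabilize
fn = np.linalg.matrix_power(f, n)

# ker f^n from the SVD: right-singular vectors with (numerically) zero singular value.
_, s, vh = np.linalg.svd(fn)
ker = vh[s <= 1e-9].T
rank = int((s > 1e-9).sum())  # dim im f^n

# dims add up and im f^n + ker f^n spans all of R^4, so the sum is direct.
print(rank, ker.shape[1], np.linalg.matrix_rank(np.hstack([fn, ker])))  # 2 2 4
```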
Lemma. Suppose $M$ is an indecomposable module satisfying ACC and DCC on submodules. Then $B = \operatorname{End}_A(M)$ is local.
Proof. Choose a maximal left ideal $I$ of $B$. It's enough to show that if $x \notin I$, then $x$ is left invertible. By maximality of $I$, we know $B = Bx + I$. We write
\[
  1 = \lambda x + y
\]
for some $\lambda \in B$ and $y \in I$. Since $y \in I$, it has no left inverse, so it is not an isomorphism. By Fitting's lemma and the indecomposability of $M$, we see that $y^m = 0$ for some $m$. Thus
\[
  (1 + y + y^2 + \cdots + y^{m-1})\lambda x = (1 + y + \cdots + y^{m-1})(1 - y) = 1.
\]
So $x$ is left invertible.
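The key computation, that $1 - y$ is invertible when $y$ is nilpotent with inverse the finite geometric series, can be checked numerically (with a nilpotent matrix of my own choosing standing in for $y$):

```python
import numpy as np

# y is strictly upper triangular, hence nilpotent: y^3 = 0.
y = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])
I = np.eye(3)

# The finite geometric series 1 + y + ... + y^{m-1} inverts 1 - y on both sides.
inv = I + y + y @ y
print(np.allclose(inv @ (I - y), I), np.allclose((I - y) @ inv, I))  # True True
```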
Corollary. Let $A$ be a left Artinian algebra. Then $A$ has the unique decomposition property.
Proof. We know $A$ satisfies the ACC and DCC conditions. So ${}_A A$ is a finite direct sum of indecomposables, and each summand again satisfies ACC and DCC, so its endomorphism algebra is local by the previous lemma. Hence Krull–Schmidt applies.
So if $A$ is an Artinian algebra, we know $A$ can be uniquely decomposed as a direct sum of indecomposable projectives,
\[
  A = \bigoplus P_j.
\]
For convenience, we will work with right Artinian algebras and right modules instead of left ones. It turns out that rather than studying the projectives in $A$, we can study idempotent elements instead.
Recall that $\operatorname{End}(A_A) \cong A$. The projection onto $P_j$ is achieved by left multiplication by an idempotent $e_j$,
\[
  P_j = e_j A.
\]
The fact that $A$ decomposes as a direct sum of the $P_j$ translates to the condition
\[
  \sum e_j = 1, \qquad e_i e_j = 0
\]
for $i \neq j$.
Definition (Orthogonal idempotent). A collection of idempotents $\{e_i\}$ is orthogonal if $e_i e_j = 0$ for $i \neq j$.
The indecomposability of $P_j$ is equivalent to $e_j$ being primitive:
Definition (Primitive idempotent). An idempotent is primitive if it cannot be expressed as a sum
\[
  e = e' + e'',
\]
where $e', e''$ are orthogonal idempotents, both non-zero.
We see that giving a direct sum decomposition of $A$ is equivalent to finding an orthogonal collection of primitive idempotents that sum to 1. This is rather useful, because idempotents are easier to move around than projectives.
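As a concrete sketch (the algebra is my own example): take $A$ to be the upper triangular $2 \times 2$ matrices over $\mathbb{R}$, a right Artinian algebra that is not semi-simple. The diagonal matrix units are orthogonal primitive idempotents summing to 1, giving $A = e_1 A \oplus e_2 A$:

```python
import numpy as np

e1 = np.array([[1., 0.], [0., 0.]])
e2 = np.array([[0., 0.], [0., 1.]])

# Idempotent, orthogonal, and summing to the identity:
print(np.allclose(e1 @ e1, e1), np.allclose(e2 @ e2, e2))
print(np.allclose(e1 @ e2, 0), np.allclose(e2 @ e1, 0))
print(np.allclose(e1 + e2, np.eye(2)))
# Inside A (upper triangular matrices), e1*A = span{E11, E12} and e2*A = span{E22}.
```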
Our current plan is as follows — given an Artinian algebra $A$, we can quotient out by $J(A)$, and obtain a semi-simple algebra $A/J(A)$. By Artin–Wedderburn, we know how we can decompose $A/J(A)$, and we hope to be able to lift this decomposition to one of $A$. The point of talking about idempotents instead is that we know what it means to lift elements.
Proposition. Let $N$ be a nilpotent ideal in $A$, and let $f$ be an idempotent of $A/N \equiv \bar{A}$. Then there is an idempotent $e \in A$ with $f = \bar{e}$.
In particular, we know $J(A)$ is nilpotent, and this proposition applies. The proof involves a bit of magic.
Proof. We consider the quotients $A/N^i$ for $i \geq 1$. We will lift the idempotents successively as we increase $i$, and since $N$ is nilpotent, repeating this process will eventually land us in $A$.
i−1
∈ A/N
i−1
with
¯
f
i−1
=
f
. We
want to find f
i
∈ A/N
i
such that
¯
f
i
= f.
For
i >
1, we let
x
be an element of
A/N
i
with image
f
i−1
in
A/N
i−1
.
Then since
x
2
− x
vansishes in
A/N
i−1
, we know
x
2
− x ∈ N
i−1
/N
i
. Then in
particular,
(x
2
− x)
2
= 0 ∈ A/N
i
. (†)
We let
\[
  f_i = 3x^2 - 2x^3.
\]
Then by a direct computation using ($\dagger$), we find $f_i^2 = f_i$, and $f_i$ has image
\[
  3f_{i-1}^2 - 2f_{i-1}^3 = 3f_{i-1} - 2f_{i-1} = f_{i-1}
\]
in $A/N^{i-1}$ (alternatively, in characteristic $p$, we can use $f_i = x^p$). Since $N^k = 0$ for some $k$, this process gives us what we want.
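The "magic" formula can be checked symbolically: $f = 3x^2 - 2x^3$ satisfies both $f^2 - f \in (x^2 - x)^2 \mathbb{Z}[x]$ and $f - x \in (x^2 - x)\mathbb{Z}[x]$, which is exactly what the proof needs. A sketch using sympy:

```python
from sympy import expand, rem, symbols

x = symbols('x')
nil = x**2 - x            # x is idempotent modulo nil
f = 3*x**2 - 2*x**3       # the proposed lift

# f^2 - f is divisible by nil^2: f is idempotent whenever (x^2 - x)^2 = 0.
print(rem(expand(f**2 - f), expand(nil**2), x))  # 0
# f - x is divisible by nil: f has the same image as x modulo nil.
print(rem(expand(f - x), nil, x))                # 0
```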
Just being able to lift idempotents is not good enough. We want to lift
decompositions as projective indecomposables. So we need to do better.
Corollary. Let $N$ be a nilpotent ideal of $A$. Let
\[
  \bar{1} = f_1 + \cdots + f_r
\]
with $\{f_i\}$ orthogonal primitive idempotents in $A/N$. Then we can write
\[
  1 = e_1 + \cdots + e_r,
\]
with $\{e_i\}$ orthogonal primitive idempotents in $A$, and $\bar{e}_i = f_i$.
Proof. We define a sequence $e_i' \in A$ inductively. We set
\[
  e_1' = 1.
\]
Then for each $i > 1$, we pick $e_i'$ a lift of $f_i + \cdots + f_r$ to $e_{i-1}' A e_{i-1}'$, which is possible since by the inductive hypothesis we know that $f_i + \cdots + f_r \in \overline{e_{i-1}' A e_{i-1}'}$. Then
\[
  e_i' e_{i+1}' = e_{i+1}' = e_{i+1}' e_i'.
\]
We let
\[
  e_i = e_i' - e_{i+1}'
\]
(with $e_{r+1}' = 0$). Then
\[
  \bar{e}_i = f_i.
\]
Also, if $j > i$, then
\[
  e_j = e_{i+1}' e_j e_{i+1}',
\]
and so
\[
  e_i e_j = (e_i' - e_{i+1}') e_{i+1}' e_j e_{i+1}' = 0.
\]
Similarly $e_j e_i = 0$.
We now apply this lifting of idempotents to $N = J(A)$, which we know is nilpotent. We know $A/N$ is the direct sum of simple modules, and this decomposition corresponds to
\[
  \bar{1} = f_1 + \cdots + f_t \in A/J(A),
\]
where the $f_i$ are orthogonal primitive idempotents. Idempotent lifting then gives
\[
  1 = e_1 + \cdots + e_t \in A,
\]
and these are orthogonal primitive idempotents. So we can write
and these are orthogonal primitive idempotents. So we can write
A =
M
e
j
A =
M
P
i
,
where
P
i
=
e
i
A
are indecomposable projectives, and
P
i
/P
i
J
(
A
) =
S
i
is simple.
By Krull–Schmidt, any indecomposable projective isomorphic to one of these
P
j
.
The final piece of the picture is to figure out when two indecomposable projectives lie in the same block. Recall that if $M$ is a right $A$-module and $e$ is an idempotent, then
\[
  Me \cong \operatorname{Hom}_A(eA, M).
\]
In particular, if $M = fA$ for some idempotent $f$, then
\[
  \operatorname{Hom}(eA, fA) \cong fAe.
\]
However, if $e$ and $f$ are in different blocks, say $B_1$ and $B_2$, then
\[
  fAe \subseteq B_1 \cap B_2 = 0,
\]
since $B_1$ and $B_2$ are (two-sided!) ideals. So we know
\[
  \operatorname{Hom}(eA, fA) = 0.
\]
So if $\operatorname{Hom}(eA, fA) \neq 0$, then $e$ and $f$ are in the same block. The existence of a homomorphism can alternatively be expressed in terms of composition factors. We have seen that each indecomposable projective $P$ has a simple "top"
\[
  P/PJ(A) \cong S.
\]
Definition (Composition factor). A simple module $S$ is a composition factor of a module $M$ if there are submodules $M_1 \leq M_2$