We now arrive at the main theorem we were working towards.
Theorem (Hodge Decomposition Theorem)
Let (E∗,L) be an elliptic complex with D=L+L∗ and Δ=D² as before. Then
Γ(M,E)=kerD⊕imD.
Moreover,
(1) kerD is finite-dimensional.
(2) kerD=kerΔ=kerL∩kerL∗.
(3) imΔ=imD=imLL∗⊕imL∗L=imL⊕imL∗.
(4) kerL=imL⊕kerΔ, and kerΔ→H(E∗) is an isomorphism.
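For example, applied to the de Rham complex (Ω∗(M),d) of a closed Riemannian manifold (so L=d), part (4) recovers the classical Hodge theorem: every de Rham cohomology class has a unique harmonic representative,
Hᵏ(M)≅{ω∈Ωᵏ(M):Δω=0}.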
Proof
(1) follows from regularity. (2) follows from the identities
⟨u,Δu⟩=⟨Du,Du⟩=⟨Lu,Lu⟩+⟨L∗u,L∗u⟩, u∈Γ(M,Eᵢ).
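Indeed, if Δu=0, then ⟨Lu,Lu⟩+⟨L∗u,L∗u⟩=⟨u,Δu⟩=0, so Lu=L∗u=0; combined with the obvious inclusions kerL∩kerL∗⊆kerD⊆kerΔ, this gives (2).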
For (3), we can also decompose Γ(M,E)=kerΔ⊕imΔ, since Δ is also elliptic and formally self-adjoint. As kerΔ=kerD by (2) and imΔ⊆imD, the two decompositions force imΔ=imD. Moreover,
imD⊆im(LL∗)⊕im(L∗L)⊆imL⊕imL∗,
but clearly imL⊕imL∗⊥kerΔ since
⟨Lu+L∗v,w⟩=⟨u+v,Dw⟩=0 whenever w∈kerΔ=kerD.
So we must have equality throughout. Moreover, imL is orthogonal to imL∗, since ⟨Lu,L∗v⟩=⟨L²u,v⟩=0. For (4), it is clear that kerL⊇imL⊕kerΔ, and since kerL⊥imL∗, this must be an equality. The natural map kerΔ→H(E∗)=kerL/imL is then an isomorphism. □
Theorem (Spectral theorem)
Let M be a closed Riemannian manifold and E→M a Hermitian vector bundle. Let D:Γ(M,E)→Γ(M,E) be a formally self-adjoint elliptic operator of order k≥1. Then we have an orthogonal decomposition
L²(M,E)=⨁_{λ∈ℝ}ker(D−λ).
Moreover, each ker(D−λ) is finite-dimensional, and for any Λ, there are only finitely many eigenvalues of magnitude <Λ.
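For instance, taking E to be the trivial line bundle and D the Laplace–Beltrami operator (formally self-adjoint, elliptic, of order 2), this specializes to the classical decomposition of L²(M) into finite-dimensional eigenspaces of the Laplacian.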
The idea is to apply the spectral theorem for compact self-adjoint operators to the inverse of D. Of course, D need not be invertible. So we do the following:
Proof
Consider the operator L=1+D²:Γ(M,E)→Γ(M,E). It is then clear that L=L∗ is elliptic and injective. So L:H²ᵏ→L² is invertible (since the orthogonal complement of the image is kerL∗=0), with inverse S:L²→H²ᵏ. Since L induces a bijection between the smooth sections, so does S. Let T be the composite of S:L²→H²ᵏ with the inclusion H²ᵏ↪L². Then T is compact (the inclusion H²ᵏ↪L² is compact) and self-adjoint (this can be checked on smooth sections, using that its “inverse” L is formally self-adjoint).
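For instance, injectivity (indeed, positivity) of L on smooth sections can be seen directly:
⟨Lu,u⟩=⟨u,u⟩+⟨D²u,u⟩=‖u‖²+‖Du‖²≥‖u‖², u∈Γ(M,E).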
By the spectral theorem for compact self-adjoint operators (and positivity of T),
L²(M;E)=⨁_{μ>0}ker(T−μ).
Moreover, each factor is finite-dimensional, and 0 is the only accumulation point of the spectrum.
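Note also that T is injective, since S:L²→H²ᵏ is a bijection and the inclusion H²ᵏ↪L² is injective; so 0 is not an eigenvalue of T, and the direct sum over μ>0 really does exhaust L²(M;E).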
We will show that ker(T−μ) decomposes as a sum of eigenspaces for D. We first establish that
ker(T−μ)=im(1−μL)⊥=ker(1−μL).
Since L is self-adjoint, the second equality follows by elliptic regularity. The first equality follows from the computation
⟨(T−μ)u,Lv⟩=⟨u,(T−μ)Lv⟩=⟨u,(1−μL)v⟩, v∈Γ(M,E),
plus the density of Γ(M,E) and surjectivity of L:Γ(M,E)→Γ(M,E).
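In more detail: since L:Γ(M,E)→Γ(M,E) is surjective and Γ(M,E) is dense in L²(M;E), we have (T−μ)u=0 if and only if ⟨(T−μ)u,Lv⟩=0 for all v∈Γ(M,E), which by the computation above is exactly the condition u⊥im(1−μL).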
Now since D commutes with L, we know D acts as a self-adjoint operator on the finite-dimensional vector space ker(T−μ)=ker(1−μL). Moreover, restricted to this subspace, we have
D²=μ⁻¹−1.
So by linear algebra, ker(T−μ) decomposes into eigenspaces of D with eigenvalues ±√(μ⁻¹−1), and the theorem follows: an eigenvalue λ of D corresponds to the eigenvalue μ=(1+λ²)⁻¹ of T, so finite-dimensionality of the ker(D−λ) and the finiteness of the number of eigenvalues of magnitude <Λ are inherited from the corresponding properties of T. □