
4 Hodge Theory

We now arrive at the main theorem we were working towards.

Theorem (Hodge Decomposition Theorem)

Let $(E_*, L)$ be an elliptic complex with $D = L + L^*$ and $\Delta = D^2$ as before. Then

$$\Gamma(M, E) = \ker D \oplus \operatorname{im} D.$$

Moreover,

  1. $\ker D$ is finite-dimensional.

  2. $\ker D = \ker \Delta = \ker L \cap \ker L^*$.

  3. $\operatorname{im}\Delta = \operatorname{im}D = \operatorname{im}(LL^*) \oplus \operatorname{im}(L^*L) = \operatorname{im}L \oplus \operatorname{im}L^*$.

  4. $\ker L = \operatorname{im}L \oplus \ker \Delta$, and $\ker \Delta \to H(E_*)$ is an isomorphism.
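
For example, applied to the de Rham complex $(\Omega^*(M), \mathrm{d})$ of a closed Riemannian manifold, this specializes to the classical Hodge decomposition

$$\Omega^k(M) = \mathcal{H}^k(M) \oplus \operatorname{im}\mathrm{d} \oplus \operatorname{im}\mathrm{d}^*, \qquad \mathcal{H}^k(M) \cong H^k_{\mathrm{dR}}(M),$$

where $\mathcal{H}^k(M) = \ker\Delta \cap \Omega^k(M)$ denotes the harmonic $k$-forms.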

Proof
(1) follows from regularity. (2) follows from the identity

$$\langle u, \Delta u\rangle = \langle Du, Du\rangle = \langle Lu, Lu\rangle + \langle L^* u, L^* u\rangle, \quad u \in \Gamma(M, E_i).$$
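
Here the cross terms vanish for degree reasons: for $u \in \Gamma(M, E_i)$, the sections $Lu \in \Gamma(M, E_{i+1})$ and $L^*u \in \Gamma(M, E_{i-1})$ lie in different summands, so $\langle Lu, L^*u\rangle = 0$. Moreover, since $L^2 = (L^*)^2 = 0$, the Laplacian expands as

$$\Delta = (L + L^*)^2 = LL^* + L^*L,$$

which is also where the terms $\operatorname{im}(LL^*)$ and $\operatorname{im}(L^*L)$ in (3) come from.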

For (3), we can also decompose $\Gamma(M, E) = \ker \Delta \oplus \operatorname{im}\Delta$. Since $\operatorname{im}\Delta \subseteq \operatorname{im}D$ and $\ker \Delta = \ker D$ by (2), they must be equal. Moreover,

$$\operatorname{im}D \subseteq \operatorname{im}(LL^*) \oplus \operatorname{im}(L^*L) \subseteq \operatorname{im}L \oplus \operatorname{im}L^*,$$

but clearly $\operatorname{im}L \oplus \operatorname{im}L^* \perp \ker \Delta$, since

$$\langle Lu + L^* v, w\rangle = \langle u + v, Dw\rangle,$$

and $Dw = 0$ for $w \in \ker \Delta = \ker D$. So we must have equality throughout, and $\operatorname{im}L$ is clearly orthogonal to $\operatorname{im}L^*$. For (4), it is clear that $\ker L \supseteq \operatorname{im}L \oplus \ker \Delta$, and since $\ker L \perp \operatorname{im}L^*$, that must be an equality.
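
Unwinding the isomorphism in (4): given $u \in \ker L$, the decomposition just established writes

$$u = h + Lv, \qquad h \in \ker \Delta,$$

with no $\operatorname{im}L^*$ component since $\ker L \perp \operatorname{im}L^*$. So $h$ is a harmonic representative of $[u] \in H(E_*)$, and it is the unique one because $\ker \Delta \cap \operatorname{im}L = 0$.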


Theorem (Spectral theorem)

Let $M$ be a closed Riemannian manifold and $E \to M$ a Hermitian vector bundle. Let $D: \Gamma(M, E) \to \Gamma(M, E)$ be elliptic and formally self-adjoint of order $k \geq 1$. Then we have an orthogonal decomposition

$$L^2(M, E) = \bigoplus_{\lambda \in \mathbb{R}} \ker(D - \lambda).$$

Moreover, each $\ker(D - \lambda)$ is finite-dimensional, and for any $\Lambda$, there are only finitely many eigenvalues of magnitude $< \Lambda$.
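
A familiar instance for orientation: taking $M = S^1$, $E$ the trivial line bundle, and $D = -i\,\frac{\mathrm{d}}{\mathrm{d}\theta}$, the decomposition is exactly the theory of Fourier series, since

$$L^2(S^1) = \bigoplus_{n \in \mathbb{Z}} \mathbb{C}\, e^{in\theta}, \qquad D\, e^{in\theta} = n\, e^{in\theta},$$

with each eigenspace one-dimensional and the eigenvalues escaping to infinity.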

The idea is to apply the spectral theorem for compact self-adjoint operators to the inverse of $D$. Of course, $D$ need not be invertible. So we do the following:

Proof
Consider the operator $L = 1 + D^2: \Gamma(M, E) \to \Gamma(M, E)$. It is then clear that $L = L^*$ is elliptic and injective. So $L: H^{2k} \to L^2$ is invertible (since the complement of the image is $\ker L^*$), with inverse $S: L^2 \to H^{2k}$. Since $L$ induces a bijection between the smooth sections, so does $S$. Let $T$ be the composition $L^2 \overset{S}{\to} H^{2k} \hookrightarrow L^2$. Then this is compact (the inclusion $H^{2k} \hookrightarrow L^2$ is compact by Rellich–Kondrachov) and self-adjoint (can check this for smooth sections, and use that its “inverse” $L$ is formally self-adjoint).
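
Injectivity is a one-line estimate: for $u \in \Gamma(M, E)$, formal self-adjointness gives

$$\langle Lu, u\rangle = \|u\|^2 + \langle D^2 u, u\rangle = \|u\|^2 + \|Du\|^2 \geq \|u\|^2,$$

so $Lu = 0$ forces $u = 0$.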

By the spectral theorem for compact self-adjoint operators (and positivity of $T$),

$$L^2(M, E) = \bigoplus_{\mu > 0} \ker(T - \mu).$$

Moreover, each factor is finite-dimensional, and $0$ is the only accumulation point of the spectrum.

We will show that $\ker(T - \mu)$ decomposes as a sum of eigenspaces for $D$. We first establish that

$$\ker(T - \mu) = \operatorname{im}(1 - \mu L)^\perp = \ker(1 - \mu L).$$

Since $L$ is self-adjoint, the second equality follows by elliptic regularity. The first equality follows from the computation

$$\langle x, (1 - \mu L)u\rangle = \langle x, u\rangle - \mu\langle x, Lu\rangle = \langle Tx, Lu\rangle - \mu\langle x, Lu\rangle = \langle (T - \mu)x, Lu\rangle,$$

plus the density of $\Gamma(M, E)$ and the surjectivity of $L: \Gamma(M, E) \to \Gamma(M, E)$.
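
The middle step uses that $T(Lu) = u$ for smooth $u$, together with the self-adjointness of $T$:

$$\langle x, u\rangle = \langle x, T(Lu)\rangle = \langle Tx, Lu\rangle.$$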

Now since $D$ commutes with $L$, we know $D$ acts as a self-adjoint operator on the finite-dimensional vector space $\ker(T - \mu) = \ker(1 - \mu L)$. Moreover, restricted to this subspace, we have

$$D^2 = \frac{1}{\mu} - 1.$$
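
Indeed, $x \in \ker(1 - \mu L)$ means $(1 + D^2)x = Lx = \frac{1}{\mu}x$, so that

$$D^2 x = \left(\frac{1}{\mu} - 1\right)x.$$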

So by linear algebra, $\ker(T - \mu)$ decomposes into eigenspaces of $D$ of eigenvalues $\pm\sqrt{\frac{1}{\mu} - 1}$, and the theorem follows.
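
Concretely, if $\mu < 1$, set $\lambda = \sqrt{1/\mu - 1} > 0$; since $D^2 = \lambda^2$ on this subspace, the operators

$$P_\pm = \frac{1}{2}\left(1 \pm \frac{D}{\lambda}\right)$$

are complementary orthogonal projections onto the $(\pm\lambda)$-eigenspaces (if $\mu = 1$, then $D^2 = 0$, and self-adjointness forces $D = 0$ on the subspace). For the final counting claim, an eigenvalue $\lambda$ of $D$ corresponds to the eigenvalue $\mu = (1 + \lambda^2)^{-1}$ of $T$, so $|\lambda| < \Lambda$ forces $\mu > (1 + \Lambda^2)^{-1}$, and there are only finitely many such $\mu$.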
