Chern–Weil Forms and Abstract Homotopy Theory — The proof

2 The proof

We now prove Theorem 12. The $p = 0$ case is trivial, so assume $p > 0$.

Recall that we have to show that any natural transformation

\[
  \omega_M \colon \Omega^1(M; V) \to \Omega^p(M)
\]

is (uniquely) a linear combination of transformations of the form

\[
  \sum \alpha_i \otimes v_i \mapsto \sum_{I, J} M_{I, J}(v_{i_1}, \ldots, v_{i_k}, v_{j_1}, \ldots, v_{j_\ell})\, \alpha_{i_1} \wedge \cdots \wedge \alpha_{i_k} \wedge \mathrm{d}\alpha_{j_1} \wedge \cdots \wedge \mathrm{d}\alpha_{j_\ell}.
\]

The uniqueness part is easy to see, since we can extract $M_{I, J}$ by evaluating $\omega_M(\alpha)$ for $M$ of large enough dimension. So we have to show every $\omega_M$ is of this form.
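For illustration (this spelled-out special case is not displayed in the original), degree counting gives $k + 2\ell = p$, since each $\alpha$ contributes degree $1$ and each $\mathrm{d}\alpha$ degree $2$. So for $p = 1$ and $p = 2$ the classification reads

\[
  \omega_M\Bigl(\sum_i \alpha_i \otimes v_i\Bigr) = \sum_i M(v_i)\, \alpha_i \qquad (p = 1),
\]
\[
  \omega_M\Bigl(\sum_i \alpha_i \otimes v_i\Bigr) = \sum_{i_1, i_2} M(v_{i_1}, v_{i_2})\, \alpha_{i_1} \wedge \alpha_{i_2} + \sum_j N(v_j)\, \mathrm{d}\alpha_j \qquad (p = 2),
\]

where $M$ and $N$ are arbitrary multilinear maps on $V$.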

The idea of the proof is to first use naturality to show that for $x \in M$, the form $\omega_M(\alpha)_x$ depends only on the $N$-jet of $\alpha$ at $x$ for some large but finite number $N$ (of course, a posteriori, $N = 1$ suffices). Once we know this, the problem is reduced to one of finite-dimensional linear algebra and invariant theory.

Lemma 15

For $\omega \in \Omega^p(\Omega^1 \otimes V)$ and $\alpha \in \Omega^1(M; V)$, the value of $\omega_M(\alpha)$ at $x \in M$ depends only on the $N$-jet of $\alpha$ at $x$ for some $N$. In fact, $N = p$ suffices.

We elect to introduce the constant $N$, despite it being equal to $p$, because the precise value does not matter.

Proof

Suppose $\alpha$ and $\alpha'$ have identical $p$-jets at $x$. Then there are functions $f_0, f_1, \ldots, f_p$ vanishing at $x$ and $\beta \in \Omega^1(M; V)$ such that

\[
  \alpha' = \alpha + f_0 f_1 \cdots f_p \beta.
\]

The first step is to replace the $f_i$ with more easily understood coordinate functions. Consider the maps

\begin{useimager} 
    \[
      \begin{tikzcd}[column sep=6em]
        M \ar[r, "{1_M \times (f_0, \ldots, f_p)}"] & M \times \R^{p + 1} \ar[r, "\mathrm{pr}_1"] & M.
      \end{tikzcd}
    \]
  \end{useimager}

Let $\tilde{\alpha}, \tilde{\beta}$ be the pullbacks of the corresponding forms under $\mathrm{pr}_1$, and $t_0, \ldots, t_p$ the standard coordinates on $\mathbb{R}^{p + 1}$. Then $\alpha$ and $f_0 f_1 \cdots f_p \beta$ are the pullbacks of $\tilde{\alpha}$ and $t_0 t_1 \cdots t_p \tilde{\beta}$ under the first map.

So it suffices to show that $\omega_{M \times \mathbb{R}^{p + 1}}(\tilde{\alpha})$ and $\omega_{M \times \mathbb{R}^{p + 1}}(\tilde{\alpha} + t_0 t_1 \cdots t_p \tilde{\beta})$ agree as $p$-forms at $(x, 0)$.

The point now is that by multilinearity of a $p$-form, it suffices to evaluate these $p$-forms on $p$-tuples of standard basis vectors (after choosing a chart for $M$), and there is at least one $i$ for which $\partial_{t_i}$ is not in the list. So by naturality we can perform this evaluation in the submanifold defined by $t_i = 0$, on which these two $p$-forms agree.

By naturality, we may assume $M = W$ is a vector space and $x$ is the origin. The value of $\omega_W(\alpha)$ at the origin is given by a map

\[
  \tilde{\omega}_W \colon J^N(W; W^* \otimes V) \to {\textstyle\bigwedge}^p W^*,
\]

where $J^N(W; W^* \otimes V)$ is the space of $N$-jets of elements of $\Omega^1(W; V)$. This is a finite-dimensional vector space, given explicitly by

\[
  J^N(W; W^* \otimes V) = \bigoplus_{j = 0}^N \operatorname{Sym}^j W^* \otimes W^* \otimes V.
\]

Under this decomposition, the $j$th piece captures the $j$th derivatives of $\alpha$. Throughout the proof, we view $\operatorname{Sym}^j W^*$ as a quotient of $(W^*)^{\otimes j}$, hence every function on $\operatorname{Sym}^j W^*$ is in particular a function on $(W^*)^{\otimes j}$.
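As a sanity check on finite-dimensionality, one can count dimensions using $\dim \operatorname{Sym}^j W^* = \binom{n + j - 1}{j}$ for $n = \dim W$. The helper below is an illustrative script, not part of the original text:

```python
from math import comb

def jet_space_dim(n: int, d: int, N: int) -> int:
    """Dimension of J^N(W; W* ⊗ V) = ⊕_{j=0}^N Sym^j W* ⊗ W* ⊗ V,
    where n = dim W and d = dim V."""
    # dim Sym^j W* = C(n + j - 1, j); each summand is tensored with W* ⊗ V,
    # contributing an extra factor of n * d.
    return sum(comb(n + j - 1, j) * n * d for j in range(N + 1))

# Example: W = R^2, V = R, 1-jets: (1 + 2) * 2 * 1 = 6.
print(jet_space_dim(2, 1, 1))  # → 6
```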

At this point, everything else follows from the fact that $\tilde{\omega}_W$ is functorial in $W$, and in particular $\mathrm{GL}(W)$-invariant.

Lemma 16

$\tilde{\omega}_W$ is a polynomial function.

This lemma is true in much greater generality: it holds for any set-theoretic natural transformation between “polynomial functors” $\mathsf{Vec} \to \mathsf{Vec}$. Here a set-theoretic natural transformation is a natural transformation of the underlying set-valued functors. This is a polynomial version of the fact that a natural transformation between additive functors is necessarily additive, because being additive is a property and not a structure.

Proof

Write

\[
  F(W) = \bigoplus_{j = 0}^N \operatorname{Sym}^j W^* \otimes W^* \otimes V, \quad G(W) = {\textstyle\bigwedge}^p W^*.
\]

We think of these as functors $\mathsf{Vec} \to \mathsf{Vec}$ (with $V$ fixed). The point is that for $f \in \operatorname{Hom}_{\mathsf{Vec}}(W, W')$, the functions $F(f), G(f)$ are polynomial in $f$. This together with naturality will force $\tilde{\omega}_W$ to be polynomial as well.

To show that $\tilde{\omega}_W$ is polynomial, we have to show that if $v_1, \ldots, v_n \in F(W)$, then $\tilde{\omega}_W(\sum \lambda_i v_i)$ is a polynomial function in $\lambda_1, \ldots, \lambda_n$. Without loss of generality, we may assume each $v_i$ lives in the $(j_i - 1)$th summand (so that the summand has $j_i$ tensor powers of $W^*$).

Fix a number $j$ such that $j_i \mid j$ for all $i$. We first show that $\tilde{\omega}_W(\sum \lambda_i^j v_i)$ is a polynomial function in the $\lambda_i$'s.

Let $f \colon W^{\oplus n} \to W^{\oplus n}$ be the map that multiplies by $\lambda_i^{j/j_i}$ on the $i$th factor, and $\Sigma \colon W^{\oplus n} \to W$ be the sum map. Consider the commutative diagram

\begin{useimager} 
    \[
      \begin{tikzcd}
        F(W^{\oplus n}) \ar[r, "F(f)"] \ar[d, "\tilde{\omega}_{W^{\oplus n}}"] & F(W^{\oplus n}) \ar[r, "F(\Sigma)"]\ar[d, "\tilde{\omega}_{W^{\oplus n}}"] & F(W)\ar[d, "\tilde{\omega}_{W}"] \\
        G(W^{\oplus n}) \ar[r, "G(f)"] & G(W^{\oplus n}) \ar[r, "G(\Sigma)"] & G(W)
      \end{tikzcd}
    \]
  \end{useimager}

Let $\tilde{v}_i \in F(W^{\oplus n})$ be the image of $v_i$ under the inclusion of the $i$th summand. Then $x = \sum \tilde{v}_i$ gets sent along the top row to $\sum \lambda_i^j v_i$. On the other hand, $\tilde{\omega}_{W^{\oplus n}}(x)$ is some element of $G(W^{\oplus n})$, and whatever it might be, its image along the bottom row is a polynomial function in the $\lambda_i^{j/j_i}$, hence in the $\lambda_i$. So we are done.
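The exponent bookkeeping in the top row, spelled out (this step is implicit in the original): since $\tilde{v}_i$ has $j_i$ tensor factors of $W^*$ supported on the $i$th copy of $W$, the scaling map $f$ acts on it by

\[
  F(f)\, \tilde{v}_i = \bigl(\lambda_i^{j/j_i}\bigr)^{j_i}\, \tilde{v}_i = \lambda_i^j\, \tilde{v}_i,
\]

and $F(\Sigma)$ then sends each $\tilde{v}_i$ back to $v_i$.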

We now know that for any finite set $v_1, \ldots, v_n$, we can write

\[
  \tilde{\omega}_W(\lambda_1^j v_1 + \cdots + \lambda_n^j v_n) = \sum_{r_1, \ldots, r_n} a_R \lambda_1^{r_1} \cdots \lambda_n^{r_n}.
\]

We claim each $r_i$ is a multiple of $j$ (if the corresponding $a_R$ is non-zero). Indeed, if we set $\lambda_i = (\mu_i^j - \nu_i^j)^{1/j}$, then the result must be a polynomial in the $\mu_i$ and $\nu_i$ as well, since it is of the form $\tilde{\omega}_W(\sum \mu_i^j v_i - \nu_i^j v_i)$. But $\sum a_R (\mu_1^j - \nu_1^j)^{r_1/j} \cdots (\mu_n^j - \nu_n^j)^{r_n/j}$ is polynomial in $\mu_i, \nu_i$ if and only if $j \mid r_i$.
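The divisibility criterion can be checked on a toy instance: for $j = 2$, the substitution $\lambda = (\mu^j - \nu^j)^{1/j}$ makes $\lambda^r$ polynomial exactly when $j \mid r$. A small illustrative check with sympy (not part of the proof):

```python
import sympy as sp

mu, nu = sp.symbols('mu nu')
j = 2

# lambda = (mu^j - nu^j)^(1/j), so lambda^r = (mu^j - nu^j)^(r/j).
# r = 4 (divisible by j): a genuine polynomial in mu, nu.
lam_r4 = (mu**j - nu**j) ** sp.Rational(4, j)
assert sp.expand(lam_r4).is_polynomial(mu, nu)

# r = 3 (not divisible by j): a square root survives, so not polynomial.
lam_r3 = (mu**j - nu**j) ** sp.Rational(3, j)
assert not lam_r3.is_polynomial(mu, nu)

print("divisibility check passed")
```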

Now by taking $j$th roots, we know $\tilde{\omega}_W(\sum \lambda_i v_i)$ is polynomial in the $\lambda_i$ when $\lambda_i \geq 0$. That is, it is polynomial when restricted to the cone spanned by the $v_i$'s. But since the $v_i$'s are arbitrary, this implies it is polynomial everywhere.

Lemma 17

Any non-zero $\mathrm{GL}(W)$-invariant linear map $\bigotimes^M W^* \to {\textstyle\bigwedge}^p W^*$ has $M = p$ and is a multiple of the anti-symmetrization map. In particular, any such map is anti-symmetric.

Proof

For convenience of notation, replace $W^*$ with $W$. Since the map is in particular invariant under $\mathbb{R}^\times \subseteq \mathrm{GL}(W)$, we must have $M = p$. By Schur's lemma, the second part of the lemma is equivalent to claiming that if we decompose $W^{\otimes p}$ as a direct sum of irreducible $\mathrm{GL}(W)$-representations, then ${\textstyle\bigwedge}^p W$ appears exactly once. In fact, we know the complete decomposition of $W^{\otimes p}$ by Schur–Weyl duality.

Let $\{V_\lambda\}$ be the set of irreducible representations of $S_p$. Then as an $S_p \times \mathrm{GL}(W)$-representation, we have

\[
  W^{\otimes p} = \bigoplus_\lambda V_\lambda \otimes W_\lambda,
\]

where $W_\lambda = \operatorname{Hom}_{S_p}(V_\lambda, W^{\otimes p})$ is either zero or irreducible, and the non-zero ones are distinct for different $\lambda$. Under this decomposition, ${\textstyle\bigwedge}^p W$ corresponds to the sign representation of $S_p$.
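The equivariance underlying Lemma 17 can be probed numerically in the case $p = 2$: anti-symmetrization $T \mapsto (T - T^{\mathsf{T}})/2$ of $W^{\otimes 2}$ commutes with the $\mathrm{GL}(W)$-action $T \mapsto g T g^{\mathsf{T}}$. A minimal numerical sketch (illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def antisymmetrize(T):
    """Projection of W ⊗ W onto ⋀^2 W (the p = 2 case of the lemma)."""
    return (T - T.T) / 2

g = rng.standard_normal((n, n))  # generic element of GL(W)
T = rng.standard_normal((n, n))  # element of W ⊗ W as an n×n array

# GL(W) acts on W ⊗ W by T ↦ g T gᵀ; the projection is equivariant:
lhs = antisymmetrize(g @ T @ g.T)
rhs = g @ antisymmetrize(T) @ g.T
assert np.allclose(lhs, rhs)
print("antisymmetrizer is GL(W)-equivariant on this sample")
```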

So we know $\tilde{\omega}_W$ is a polynomial in $\bigoplus_j \operatorname{Sym}^j W^* \otimes W^* \otimes V$, and is anti-symmetric in the $W^*$. So the only terms that can contribute are those with $j = 0$ or $j = 1$, since a tensor that is symmetric in two slots and anti-symmetric in the same two slots vanishes. In the $j = 1$ case, it has to factor through ${\textstyle\bigwedge}^2 W^* \otimes V$. So $\tilde{\omega}_W$ is a polynomial in $(W^* \otimes V) \oplus ({\textstyle\bigwedge}^2 W^* \otimes V)$. This exactly says $\omega_W(\alpha)$ is given by wedging together $\alpha$ and $\mathrm{d}\alpha$ (and pairing with elements of $V^*$).