2 The proof
We now prove Theorem 12. The p = 0 case is trivial, so assume p > 0.
Recall that we have to show that any natural transformation
\[
\omega_M \colon \Omega^1(M; V) \to \Omega^p(M)
\]
is (uniquely) a linear combination of transformations of the form
\[
\sum_i \alpha_i \otimes v_i \mapsto \sum_{I,J} M_{I,J}(v_{i_1}, \ldots, v_{i_k}, v_{j_1}, \ldots, v_{j_\ell})\, \alpha_{i_1} \wedge \cdots \wedge \alpha_{i_k} \wedge d\alpha_{j_1} \wedge \cdots \wedge d\alpha_{j_\ell}.
\]
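Note a degree constraint implicit in this formula: each α_{i} contributes degree 1 and each dα_{j} contributes degree 2, so only multi-indices with
\[
k + 2\ell = p
\]
can contribute.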
The uniqueness part is easy to see, since we can extract M_{I,J} by evaluating ω_M(α) for M of dimension large enough. So we have to show every ω_M is of this form.
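To sketch how the extraction goes in the simplest case: for p = 1 the degree count forces (k, ℓ) = (1, 0), so the claimed form reduces to ω_M(∑ α_i ⊗ v_i) = ∑ L(v_i) α_i for a single linear map L: V → ℝ, and taking M = ℝ with α = dt ⊗ v recovers L from
\[
\omega_{\mathbb{R}}(dt \otimes v) = L(v)\, dt.
\]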
The idea of the proof is to first use naturality to show that for x ∈ M, the form ω_M(α)_x depends only on the N-jet of α at x for some large but finite number N (of course, a posteriori, N = 1 suffices). Once we know this, the problem is reduced to one of finite-dimensional linear algebra and invariant theory.
Lemma 15
For ω ∈ Ω^p(Ω^1 ⊗ V) and α ∈ Ω^1(M; V), the value of ω_M(α) at x ∈ M depends only on the N-jet of α at x, for some N. In fact, N = p suffices.
We elect to introduce the constant N, despite it being equal to p, because the precise value does not matter.
Suppose α and α′ have identical p-jets at x. Then there are functions f_0, f_1, …, f_p vanishing at x and β ∈ Ω^1(M; V) such that
\[
\alpha' = \alpha + f_0 f_1 \cdots f_p \beta.
\]
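(Strictly speaking, Taylor's theorem produces a finite sum of such terms, which is just as good since the argument below applies to each term separately: one way to see this is that in a chart centred at x with coordinates x^1, …, x^n, the vanishing of the p-jet of α′ − α gives
\[
\alpha' - \alpha = \sum_{|\gamma| = p+1} x^{\gamma}\, \eta_{\gamma}, \qquad \eta_{\gamma} \in \Omega^1(U; V),
\]
and each monomial x^γ with |γ| = p + 1 is a product of p + 1 coordinate functions vanishing at x.)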
The first step is to replace the f_i with more easily understood coordinate functions. Consider the maps
\[
M \xrightarrow{(\mathrm{id},\, f_0, \ldots, f_p)} M \times \mathbb{R}^{p+1} \xrightarrow{\mathrm{pr}_1} M.
\]
Let α̃, β̃ be the pullbacks of the corresponding forms under pr_1, and t_0, …, t_p the standard coordinates on ℝ^{p+1}. Then α, f_0 f_1 ⋯ f_p β are the pullbacks of α̃, t_0 t_1 ⋯ t_p β̃ under the first map.
So it suffices to show that ω_{M × ℝ^{p+1}}(α̃) and ω_{M × ℝ^{p+1}}(α̃ + t_0 t_1 ⋯ t_p β̃) agree as p-forms at (x, 0).
The point now is that by multilinearity of a p-form, it suffices to evaluate these p-forms on p-tuples of standard basis vectors (after choosing a chart for M), and there is at least one i for which ∂_{t_i} is not in the list. So by naturality we can perform this evaluation in the submanifold defined by t_i = 0, in which these two p-forms agree.
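To spell out the last step in the notation above: if ι denotes the inclusion of the submanifold {t_i = 0}, then the factor t_i kills the correction term,
\[
\iota^*\bigl(\tilde\alpha + t_0 t_1 \cdots t_p \tilde\beta\bigr) = \iota^* \tilde\alpha,
\]
so naturality of ω applied to ι shows that the two p-forms agree on any p-tuple of vectors tangent to this submanifold.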
We now return to the theorem itself. By naturality, we may assume M = W is a vector space and x is the origin. The value of ω_W(α) at the origin is given by a map
\[
\tilde\omega_W \colon J^N(W;\, W^* \otimes V) \to \textstyle\bigwedge^p W^*,
\]
where J^N(W; W^* ⊗ V) is the space of N-jets of elements of Ω^1(W; V). This is a finite-dimensional vector space, given explicitly by
\[
J^N(W;\, W^* \otimes V) = \bigoplus_{j=0}^{N} \mathrm{Sym}^j W^* \otimes W^* \otimes V.
\]
Under this decomposition, the jth piece captures the jth derivatives of α. Throughout the proof, we view Sym^j W^* as a quotient of (W^*)^{⊗j}, hence every function on Sym^j W^* is in particular a function on (W^*)^{⊗j}.
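Concretely (a sketch of the identification, up to reordering the tensor factors): choosing linear coordinates on W and writing α = ∑_a α_a dx^a with α_a ∈ C^∞(W; V), the jth piece of the N-jet at 0 is
\[
\sum_a \tfrac{1}{j!}\, D^j \alpha_a(0) \otimes dx^a, \qquad D^j \alpha_a(0) \in \mathrm{Sym}^j W^* \otimes V,
\]
the jth Taylor coefficient of the coefficient functions.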
At this point, everything else follows from the fact that ω̃_W is functorial in W, and in particular GL(W)-invariant.
Lemma 16
ω̃_W is a polynomial function.
This lemma is true in much greater generality: it holds for any set-theoretic natural transformation between “polynomial functors” Vec → Vec. Here a set-theoretic natural transformation is a natural transformation of the underlying set-valued functors. This is a polynomial version of the fact that a natural transformation between additive functors is necessarily additive, because being additive is a property and not a structure.
Write
\[
F(W) = \bigoplus_{j=0}^{N} \mathrm{Sym}^j W^* \otimes W^* \otimes V, \qquad G(W) = \textstyle\bigwedge^p W^*.
\]
We think of these as functors Vec → Vec (with V fixed). The point is that for f ∈ Hom_{Vec}(W, W′), the maps F(f), G(f) are polynomial in f. This together with naturality will force ω̃_W to be polynomial as well.
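For instance, writing f^* for the dual of a linear map f, the induced maps are
\[
F(f) = \bigoplus_{j=0}^{N} \mathrm{Sym}^j(f^*) \otimes f^* \otimes \mathrm{id}_V, \qquad G(f) = \textstyle\bigwedge^p(f^*),
\]
whose matrix entries are homogeneous polynomials (of degrees j + 1 and p respectively) in the entries of f.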
To show that ω̃_W is polynomial, we have to show that if v_1, …, v_n ∈ F(W), then ω̃_W(∑ λ_i v_i) is a polynomial function in λ_1, …, λ_n. Without loss of generality, we may assume each v_i lives in the (j_i − 1)th summand (so that the summand has j_i tensor powers of W^*).
Fix a number j such that j_i ∣ j for all i. We first show that ω̃_W(∑ λ_i^j v_i) is a polynomial function in the λ_i's.
Let f: W^{⊕n} → W^{⊕n} be the map that multiplies by λ_i^{j/j_i} on the ith factor, and Σ: W^{⊕n} → W be the sum map. Consider the commutative diagram
\[
\begin{array}{ccc}
F(W^{\oplus n}) & \longrightarrow & F(W)\\
{\scriptstyle \tilde\omega_{W^{\oplus n}}} \big\downarrow & & \big\downarrow {\scriptstyle \tilde\omega_W}\\
G(W^{\oplus n}) & \longrightarrow & G(W)
\end{array}
\]
in which the horizontal maps are induced by Σ ∘ f and which commutes by naturality of ω̃.
Let ṽ_i ∈ F(W^{⊕n}) be the image of v_i under the inclusion of the ith summand. Then x = ∑ ṽ_i gets sent along the top row to ∑ λ_i^j v_i. On the other hand, ω̃_{W^{⊕n}}(x) is some element of G(W^{⊕n}), and whatever it might be, its image along the bottom row is a polynomial function in the λ_i^{j/j_i}, hence in the λ_i. So we are done.
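As a sanity check, the simplest instance (n = 1, so that Σ ∘ f = λ^{j/j_1} · id_W) reads
\[
\tilde\omega_W(\lambda^{j} v_1) = G\bigl(\lambda^{j/j_1}\,\mathrm{id}_W\bigr)\bigl(\tilde\omega_W(v_1)\bigr) = \lambda^{pj/j_1}\, \tilde\omega_W(v_1),
\]
which is visibly polynomial in λ since j_1 ∣ j.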
We now know that for any finite set v_1, …, v_n, we can write
\[
\tilde\omega_W(\lambda_1^j v_1 + \cdots + \lambda_n^j v_n) = \sum_{R = (r_1, \ldots, r_n)} a_R\, \lambda_1^{r_1} \cdots \lambda_n^{r_n}.
\]
We claim each r_i is a multiple of j (if the corresponding a_R is non-zero). Indeed, if we set λ_i = (μ_i^j − ν_i^j)^{1/j}, then the result must be a polynomial in the μ_i and ν_i as well, since it is of the form ω̃_W(∑ (μ_i^j − ν_i^j) v_i). But ∑ a_R (μ_1^j − ν_1^j)^{r_1/j} ⋯ (μ_n^j − ν_n^j)^{r_n/j} is polynomial in the μ_i, ν_i if and only if j ∣ r_i.
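For a concrete instance of the last claim: with n = 1, j = 2 and r_1 = 1, the function
\[
(\mu_1^2 - \nu_1^2)^{1/2}
\]
is not polynomial in μ_1, ν_1, so the corresponding a_R must vanish.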
Now by taking jth roots, we know ω̃_W(∑ λ_i v_i) is polynomial in the λ_i when λ_i ≥ 0. That is, it is polynomial when restricted to the cone spanned by the v_i's. But since the v_i's are arbitrary, this implies it is polynomial everywhere; for example, running the argument with the longer list v_1, …, v_n, −v_1, …, −v_n covers every sign pattern.
Lemma 17
Any non-zero GL(W)-invariant linear map ⨂^M W^* → ⋀^p W^* has M = p and is a multiple of the anti-symmetrization map. In particular, any such map is anti-symmetric.
For convenience of notation, replace W^* with W. Since the map is in particular invariant under ℝ^× ⊆ GL(W), which acts by t^M on ⨂^M W and by t^p on ⋀^p W, we must have M = p. By Schur's lemma, the second part of the lemma is equivalent to claiming that if we decompose W^{⊗p} as a direct sum of irreducible GL(W)-representations, then ⋀^p W appears exactly once. In fact, we know the complete decomposition of W^{⊗p} by Schur–Weyl duality.
Let {V_λ} be the set of irreducible representations of S_p. Then as an S_p × GL(W)-representation, we have
\[
W^{\otimes p} = \bigoplus_{\lambda} V_\lambda \otimes W_\lambda,
\]
where W_λ = Hom_{S_p}(V_λ, W^{⊗p}) is either zero or irreducible, and the non-zero W_λ are distinct for different λ. Under this decomposition, ⋀^p W corresponds to the sign representation of S_p.
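For example, when p = 2 this is the familiar decomposition
\[
W^{\otimes 2} = \mathrm{Sym}^2 W \oplus \textstyle\bigwedge^2 W,
\]
with Sym^2 W paired with the trivial representation of S_2 and ⋀^2 W with the sign representation.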
So we know ω̃_W is a polynomial in ⨁_j Sym^j W^* ⊗ W^* ⊗ V, and is anti-symmetric in the W^*. So the only terms that can contribute are those with j = 0 or j = 1, since for j ≥ 2 the Sym^j factor is symmetric in at least two copies of W^*. In the j = 1 case, it has to factor through ⋀^2 W^* ⊗ V. So ω̃_W is polynomial in (W^* ⊗ V) ⊕ (⋀^2 W^* ⊗ V). This exactly says ω_W(α) is given by wedging together α and dα (and pairing with elements of V^*).
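For instance, when p = 2 the degree count allows only (k, ℓ) = (2, 0) and (k, ℓ) = (0, 1), so (assuming the indexing conventions above) every natural transformation takes the shape
\[
\omega_M\Bigl(\sum_i \alpha_i \otimes v_i\Bigr) = \sum_{i_1, i_2} A(v_{i_1}, v_{i_2})\, \alpha_{i_1} \wedge \alpha_{i_2} + \sum_{j} B(v_j)\, d\alpha_j
\]
for a bilinear A: V × V → ℝ (only its anti-symmetric part contributes) and a linear B: V → ℝ.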