# 2 The proof

We now prove Theorem 12. The $p = 0$ case is trivial, so assume $p > 0$.

Recall that we have to show that any natural transformation

$\omega_M: \Omega^1(M; V) \to \Omega^p(M)$

is (uniquely) a linear combination of transformations of the form

$\sum \alpha_i \otimes v_i \mapsto \sum_{I, J} M_{I, J}(v_{i_1}, \ldots, v_{i_k}, v_{j_1}, \ldots, v_{j_\ell})\, \alpha_{i_1} \wedge \cdots \wedge \alpha_{i_k} \wedge \mathrm{d}\alpha_{j_1} \wedge \cdots \wedge \mathrm{d}\alpha_{j_\ell}.$

The uniqueness part is easy to see, since we can extract $M_{I, J}$ by evaluating $\omega_M(\alpha)$ for $M$ of dimension large enough. So we have to show every $\omega_M$ is of this form.
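For orientation, here is what this general form looks like in low degrees (a worked instance for illustration; the parameters $\varphi, \varphi' \in V^*$ and $B \in (V \otimes V)^*$ are arbitrary, not data from the theorem):

```latex
% p = 1: only k = 1, \ell = 0 is possible:
\omega_M\Big(\sum_i \alpha_i \otimes v_i\Big) = \sum_i \varphi(v_i)\, \alpha_i .

% p = 2: either k = 2, \ell = 0 or k = 0, \ell = 1:
\omega_M\Big(\sum_i \alpha_i \otimes v_i\Big)
  = \sum_{i_1, i_2} B(v_{i_1}, v_{i_2})\, \alpha_{i_1} \wedge \alpha_{i_2}
  + \sum_j \varphi'(v_j)\, \mathrm{d}\alpha_j .
```

Note the bookkeeping: each $\alpha$ contributes degree $1$ and each $\mathrm{d}\alpha$ contributes degree $2$, so the allowed terms satisfy $k + 2\ell = p$.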

The idea of the proof is to first use naturality to show that for $x \in M$, the form $\omega _ M(\alpha )_ x$ depends only on the $N$-jet of $\alpha$ at $x$ for some large but finite number $N$ (of course, *a posteriori*, $N = 1$ suffices). Once we know this, the problem is reduced to one of finite dimensional linear algebra and invariant theory.

For $\omega \in \Omega^p(\Omega^1 \otimes V)$ and $\alpha \in \Omega^1(M; V)$, the value of $\omega_M(\alpha)$ at $x \in M$ depends only on the $N$-jet of $\alpha$ at $x$ for some $N$. In fact, $N = p$ suffices.

Suppose $\alpha$ and $\alpha'$ have identical $p$-jets at $x$. Then there are functions $f_0, f_1, \ldots, f_p$ vanishing at $x$ and $\beta \in \Omega^1(M; V)$ such that

$\alpha' = \alpha + f_0 f_1 \cdots f_p \beta.$

The first step is to replace the $f_i$ with more easily understood coordinate functions. Consider the maps

$M \xrightarrow{\ (\mathrm{id},\, f_0, \ldots, f_p)\ } M \times \mathbb{R}^{p + 1} \xrightarrow{\ \mathrm{pr}_1\ } M.$

Let $\tilde{\alpha }, \tilde{\beta }$ be the pullbacks of the corresponding forms under $\mathrm{pr}_1$, and $t_0, \ldots , t_ p$ the standard coordinates on $\mathbb {R}^{p + 1}$. Then $\alpha , f_0 f_1\cdots f_ p \beta$ are the pullbacks of $\tilde{\alpha }, t_0 t_1\cdots t_ p \tilde{\beta }$ under the first map.

So it suffices to show that $\omega _{M \times \mathbb {R}^{p + 1}}(\tilde{\alpha })$ and $\omega _{M \times \mathbb {R}^{p + 1}}(\tilde{\alpha } + t_0 t_1 \cdots t_ p \tilde{\beta })$ agree as $p$-forms at $(x, 0)$.

The point now is that by multilinearity of a $p$-form, it suffices to evaluate these $p$-forms on $p$-tuples of standard basis vectors (after choosing a chart for $M$). Since only $p$ vectors appear in such a tuple but there are $p + 1$ coordinates $t_0, \ldots, t_p$, there is at least one $i$ for which $\partial_{t_i}$ is not in the list. So by naturality we can perform this evaluation in the submanifold defined by $t_i = 0$, on which these two $p$-forms agree, since $t_0 t_1 \cdots t_p$ vanishes there.

□

By naturality, we may assume $M = W$ is a vector space and $x$ is the origin. The value of $\omega_W(\alpha)$ at the origin is given by a map

$\tilde{\omega}_W: J^N(W; W^* \otimes V) \to {\textstyle\bigwedge}^p W^*,$

where $J^N(W; W^* \otimes V)$ is the space of $N$-jets of elements of $\Omega^1(W; V)$. This is a finite dimensional vector space, given explicitly by

$J^N(W; W^* \otimes V) = \bigoplus_{j = 0}^N \operatorname{Sym}^j W^* \otimes W^* \otimes V.$

Under this decomposition, the $j$th piece captures the $j$th derivatives of $\alpha$. Throughout the proof, we view $\operatorname{Sym}^j W^*$ as a *quotient* of $(W^*)^{\otimes j}$, hence every function on $\operatorname{Sym}^j W^*$ is in particular a function on $(W^*)^{\otimes j}$.
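As a quick sanity check that this is indeed finite dimensional, the dimension can be computed from $\dim \operatorname{Sym}^j W^* = \binom{j + d - 1}{j}$ with $d = \dim W$. A minimal sketch (the function name `jet_space_dim` is my own, not from the text):

```python
from math import comb

def jet_space_dim(d, dim_V, N):
    """Dimension of J^N(W; W* ⊗ V) for d = dim W.

    The j-th summand Sym^j W* ⊗ W* ⊗ V has dimension
    C(j + d - 1, j) * d * dim_V.
    """
    return sum(comb(j + d - 1, j) for j in range(N + 1)) * d * dim_V
```

For instance, with $\dim W = 2$, $\dim V = 1$ and $N = 1$ this gives $(1 + 2) \cdot 2 = 6$.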

At this point, everything else follows from the fact that $\tilde{\omega }_ W$ is functorial in $W$, and in particular $\mathrm{GL}(W)$-invariant.

$\tilde{\omega }_ W$ is a polynomial function.

This lemma is true in much greater generality — it holds for any set-theoretic natural transformation between “polynomial functors” $\mathsf{Vec} \to \mathsf{Vec}$. Here a set-theoretic natural transformation is a natural transformation of the underlying set-valued functors. This is a polynomial version of the fact that a natural transformation between additive functors is necessarily additive, because being additive is a *property* and not a structure.

Write

$F(W) = \bigoplus_{j = 0}^N \operatorname{Sym}^j W^* \otimes W^* \otimes V, \quad G(W) = {\textstyle\bigwedge}^p W.$

We think of these as functors $\mathsf{Vec} \to \mathsf{Vec}$ (with $V$ fixed). The point is that for $f \in \operatorname{Hom}_{\mathsf{Vec}}(W, W')$, the functions $F(f), G(f)$ are polynomial in $f$. This together with naturality will force $\tilde{\omega}_W$ to be polynomial as well.
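To see the polynomiality concretely in the case of $G$: picking bases and writing $f$ as a matrix $(f_{ab})$, we have

```latex
G(f)(w_1 \wedge \cdots \wedge w_p) = f(w_1) \wedge \cdots \wedge f(w_p),
```

so the matrix entries of $G(f)$ are the $p \times p$ minors of $(f_{ab})$, which are degree-$p$ polynomials in the entries of $f$. Similarly, on the $j$th summand of $F$ there are $j + 1$ factors dual to $W$, so the entries of $F(f)$ there are polynomials of degree $j + 1$ in the entries of $f$.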

To show that $\tilde{\omega }_ W$ is polynomial, we have to show that if $v_1, \ldots , v_ n \in F(W)$, then $\tilde{\omega }_ W(\sum \lambda _ i v_ i)$ is a polynomial function in $\lambda _1, \ldots , \lambda _ n$. Without loss of generality, we may assume each $v_ i$ lives in the $(j_ i - 1)$th summand (so that the summand has $j_ i$ tensor powers of $W^*$).

Fix a number $j$ such that $j_ i \mid j$ for all $i$. We first show that $\tilde{\omega }_ W(\sum \lambda _ i^ j v_ i)$ is a polynomial function in the $\lambda _ i$'s.

Let $f: W^{\oplus n} \to W^{\oplus n}$ be the map that multiplies by $\lambda_i^{j / j_i}$ on the $i$th factor, and $\Sigma: W^{\oplus n} \to W$ be the sum map. Consider the commutative diagram

$\begin{array}{ccccc} F(W^{\oplus n}) & \xrightarrow{\ F(f)\ } & F(W^{\oplus n}) & \xrightarrow{\ F(\Sigma)\ } & F(W) \\ {\scriptstyle \tilde{\omega}_{W^{\oplus n}}} \big\downarrow & & {\scriptstyle \tilde{\omega}_{W^{\oplus n}}} \big\downarrow & & \big\downarrow {\scriptstyle \tilde{\omega}_W} \\ G(W^{\oplus n}) & \xrightarrow{\ G(f)\ } & G(W^{\oplus n}) & \xrightarrow{\ G(\Sigma)\ } & G(W) \end{array}$

Let $\tilde{v}_ i \in F(W^{\oplus n})$ be the image of $v_ i$ under the inclusion of the $i$th summand. Then $x = \sum \tilde{v}_ i$ gets sent along the top row to $\sum \lambda _ i^ j v_ i$. On the other hand, $\tilde{\omega }_{W^{\oplus n}}(x)$ is some element in $G(W^{\oplus n})$, and whatever it might be, the image along the bottom row gives a polynomial function in the $\lambda _ i^{j/j_ i}$, hence in the $\lambda _ i$. So we are done.

We now know that for any finite set $v_1, \ldots , v_ n$, we can write

$\tilde{\omega}_W(\lambda_1^j v_1 + \cdots + \lambda_n^j v_n) = \sum_{R = (r_1, \ldots, r_n)} a_R \lambda_1^{r_1} \cdots \lambda_n^{r_n}.$

We claim each $r_i$ is a multiple of $j$ (if the corresponding $a_R$ is non-zero). Indeed, if we set $\lambda_i = (\mu_i^j - \nu_i^j)^{1/j}$, then the result must be a polynomial in the $\mu_i$ and $\nu_i$ as well, since it is of the form $\tilde{\omega}_W(\sum \mu_i^j v_i - \nu_i^j v_i)$. But $\sum a_R (\mu_1^j - \nu_1^j)^{r_1/j} \cdots (\mu_n^j - \nu_n^j)^{r_n/j}$ is polynomial in the $\mu_i, \nu_i$ if and only if $j \mid r_i$ for all $i$.
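To illustrate the last step in the case $j = 2$:

```latex
(\mu^2 - \nu^2)^{r/2} =
\begin{cases}
  \displaystyle \sum_{s = 0}^{r/2} \binom{r/2}{s} (-1)^s\, \mu^{r - 2s} \nu^{2s}
    & \text{if } 2 \mid r, \\[6pt]
  \text{not a polynomial in } (\mu, \nu)
    & \text{if } 2 \nmid r .
\end{cases}
```

For odd $r$, a polynomial $P$ with $P^2 = (\mu^2 - \nu^2)^r$ is impossible: the irreducible factor $\mu - \nu$ would occur in $P^2$ with odd multiplicity $r$.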

Now by taking $j$th roots, we know $\tilde{\omega }_ W(\sum \lambda _ i v_ i)$ is polynomial in the $\lambda _ i$ when $\lambda _ i \geq 0$. That is, it is polynomial when restricted to the cone spanned by the $v_ i$'s. But since the $v_ i$'s are arbitrary, this implies it is polynomial everywhere.

□

Any non-zero $\mathrm{GL}(W)$-invariant linear map $\bigotimes^M W^* \to {\textstyle\bigwedge}^p W^*$ has $M = p$ and is a multiple of the anti-symmetrization map. In particular, any such map is anti-symmetric.

For convenience of notation, replace $W^*$ with $W$. Since the map is in particular invariant under $\mathbb {R}^\times \subseteq \mathrm{GL}(W)$, we must have $M = p$. By Schur's lemma, the second part of the lemma is equivalent to claiming that if we decompose $W^{\otimes p}$ as a direct sum of irreducible $\mathrm{GL}(W)$ representations, then ${\textstyle \bigwedge }^ p W$ appears exactly once. In fact, we know the complete decomposition of $W^{\otimes p}$ by Schur–Weyl duality.
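As a numerical sanity check of the equivariance of anti-symmetrization in the $p = 2$ case (a minimal sketch, assuming `numpy`; we identify $W \otimes W$ with $d \times d$ matrices, on which $g \in \mathrm{GL}(W)$ acts by $x \mapsto g x g^{\mathsf{T}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
g = rng.standard_normal((d, d))   # a generic (hence invertible) element of GL(W)
x = rng.standard_normal((d, d))   # a random tensor in W ⊗ W, as a matrix

def antisym(t):
    # the anti-symmetrization map ⊗²W → ∧²W, realized as
    # the antisymmetric part of a matrix
    return (t - t.T) / 2

# Equivariance: anti-symmetrizing commutes with the GL(W)-action.
assert np.allclose(antisym(g @ x @ g.T), g @ antisym(x) @ g.T)
```

The assertion holds identically, since $(g x g^{\mathsf{T}} - g x^{\mathsf{T}} g^{\mathsf{T}})/2 = g \frac{x - x^{\mathsf{T}}}{2} g^{\mathsf{T}}$.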

Let $\{ V_\lambda \}$ be the set of irreducible representations of $S_ p$. Then as an $S_ p \times \mathrm{GL}(W)$-representation, we have

$W^{\otimes p} = \bigoplus_\lambda V_\lambda \otimes W_\lambda,$

where $W_\lambda = \operatorname{Hom}_{S_p}(V_\lambda, W^{\otimes p})$ is either zero or irreducible, and the non-zero $W_\lambda$ are pairwise non-isomorphic. Under this decomposition, ${\textstyle\bigwedge}^p W$ corresponds to the sign representation of $S_p$.
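One can check numerically that the sign-isotypic part of $W^{\otimes p}$ has dimension $\binom{d}{p} = \dim {\textstyle\bigwedge}^p W$, consistent with ${\textstyle\bigwedge}^p W$ appearing exactly once as the partner of the sign representation. A hedged sketch (function names are my own), computing the multiplicity of the sign representation via the character inner product, using that $W^{\otimes p}$ has character $\chi(\sigma) = d^{\#\mathrm{cycles}(\sigma)}$:

```python
from itertools import permutations
from math import comb, factorial

def cycle_count(perm):
    # number of cycles of a permutation of {0, ..., p-1},
    # given as a tuple of images
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def sign_multiplicity(d, p):
    # multiplicity of the sign representation of S_p in (R^d)^{⊗p}:
    # (1/p!) * sum over sigma of sgn(sigma) * d^{#cycles(sigma)},
    # where sgn(sigma) = (-1)^{p - #cycles(sigma)}
    total = 0
    for perm in permutations(range(p)):
        c = cycle_count(perm)
        total += (-1) ** (p - c) * d ** c
    return total // factorial(p)

# agrees with dim ∧^p W = C(d, p)
assert sign_multiplicity(4, 2) == comb(4, 2)
```

For $p > d$ the multiplicity is $0$, matching ${\textstyle\bigwedge}^p W = 0$, which is the "$W_\lambda$ is either zero or irreducible" phenomenon above.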

□

So we know $\tilde{\omega}_W$ is a polynomial function on $\bigoplus_j \operatorname{Sym}^j W^* \otimes W^* \otimes V$, and is anti-symmetric in the $W^*$ factors. Since the factors coming from $\operatorname{Sym}^j W^*$ are symmetric, the only terms that can contribute are those with $j = 0$ or $j = 1$. In the $j = 1$ case, the term has to factor through ${\textstyle\bigwedge}^2 W^* \otimes V$. So $\tilde{\omega}_W$ is a polynomial in $(W^* \otimes V) \oplus ({\textstyle\bigwedge}^2 W^* \otimes V)$. This exactly says $\omega_W(\alpha)$ is given by wedging together $\alpha$ and $\mathrm{d}\alpha$ (and pairing with elements of $V^*$).