# 2 The proof

We now prove Theorem 12. The $p = 0$ case is trivial, so assume $p > 0$.

Recall that we have to show that any natural transformation

$\omega _M: \Omega ^1(M; V) \to \Omega ^p(M)$ is (uniquely) a linear combination of transformations of the form

$\sum \alpha _i \otimes v_i \mapsto \sum _{I, J} M_{I, J}(v_{i_1}, \ldots , v_{i_k}, v_{j_1}, \ldots , v_{j_\ell })\, \alpha _{i_1} \wedge \cdots \wedge \alpha _{i_k} \wedge \mathrm{d}\alpha _{j_1} \wedge \cdots \wedge \mathrm{d}\alpha _{j_\ell }.$

The uniqueness part is easy to see, since we can extract $M_{I, J}$ by evaluating $\omega _M(\alpha )$ for $M$ of dimension large enough. So we have to show that every $\omega _M$ is of this form.
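To fix ideas, here are the smallest non-trivial instances of this form: for $p = 1$, every linear functional $v^* \in V^*$ gives the transformation $\sum \alpha _i \otimes v_i \mapsto \sum _i v^*(v_i)\, \alpha _i$, and for $p = 3$, every bilinear form $B: V \times V \to \mathbb {R}$ gives

$\sum \alpha _i \otimes v_i \mapsto \sum _{i, j} B(v_i, v_j)\, \alpha _i \wedge \mathrm{d}\alpha _j$

(the case $k = \ell = 1$). When $V = \mathbb {R}$, the latter is just $\alpha \mapsto c\, \alpha \wedge \mathrm{d}\alpha$.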

The idea of the proof is to first use naturality to show that for $x \in M$, the form $\omega _M(\alpha )_x$ depends only on the $N$-jet of $\alpha$ at $x$ for some large but finite number $N$ (of course, *a posteriori*, $N = 1$ suffices). Once we know this, the problem is reduced to one of finite dimensional linear algebra and invariant theory.

For $\omega \in \Omega ^p(\Omega ^1 \otimes V)$ and $\alpha \in \Omega ^1(M; V)$, the value of $\omega _M(\alpha )$ at $x \in M$ depends only on the $N$-jet of $\alpha$ at $x$ for some $N$. In fact, $N = p$ suffices.

Suppose $\alpha , \alpha ' \in \Omega ^1(M; V)$ have the same $p$-jet at $x$. Then, in a neighbourhood of $x$ (which suffices, since by naturality under open embeddings $\omega _M(\alpha )_x$ depends only on $\alpha$ near $x$), the difference $\alpha ' - \alpha$ can be written as a finite sum of terms of the form $f_0 f_1 \cdots f_p\, \beta$, where $\beta \in \Omega ^1(M; V)$ and the $f_i$ are functions vanishing at $x$. So it suffices to show that $\omega _M(\alpha )$ and $\omega _M(\alpha + f_0 f_1 \cdots f_p\, \beta )$ agree at $x$.

The first step is to replace the $f_i$ with more easily understood coordinate functions. Consider the maps

$M \xrightarrow {\;(\mathrm{id}, f_0, \ldots , f_p)\;} M \times \mathbb {R}^{p + 1} \xrightarrow {\;\mathrm{pr}_1\;} M.$

Let $\tilde{\alpha }, \tilde{\beta }$ be the pullbacks of the corresponding forms under $\mathrm{pr}_1$, and $t_0, \ldots , t_p$ the standard coordinates on $\mathbb {R}^{p + 1}$. Then $\alpha , f_0 f_1\cdots f_p \beta$ are the pullbacks of $\tilde{\alpha }, t_0 t_1\cdots t_p \tilde{\beta }$ under the first map, which sends $x$ to $(x, 0)$ since the $f_i$ vanish at $x$.

So it suffices to show that $\omega _{M \times \mathbb {R}^{p + 1}}(\tilde{\alpha })$ and $\omega _{M \times \mathbb {R}^{p + 1}}(\tilde{\alpha } + t_0 t_1 \cdots t_p \tilde{\beta })$ agree as $p$-forms at $(x, 0)$.

The point now is that, by multilinearity of a $p$-form, it suffices to evaluate these $p$-forms on $p$-tuples of standard basis vectors (after choosing a chart for $M$), and since each tuple contains only $p$ vectors while there are $p + 1$ coordinates $t_i$, there is at least one $i$ for which $\partial _{t_i}$ is not in the list. So by naturality we can perform this evaluation in the submanifold defined by $t_i = 0$, on which these two $p$-forms agree.

By naturality, we may assume $M = W$ is a vector space and $x$ is the origin. The value of $\omega _W(\alpha )$ at the origin is given by a map

$\tilde{\omega }_W: J^N(W; W^* \otimes V) \to {\textstyle \bigwedge }^p W^*,$

where $J^N(W; W^* \otimes V)$ is the space of $N$-jets at the origin of elements of $\Omega ^1(W; V)$. This is a finite-dimensional vector space, given explicitly by

$J^N(W; W^* \otimes V) = \bigoplus _{j = 0}^N \operatorname{Sym}^j W^* \otimes W^* \otimes V.$ Under this decomposition, the $j$^{th} piece captures the $j$^{th} derivatives of $\alpha$. Throughout the proof, we view $\operatorname{Sym}^j W^*$ as a *quotient* of $(W^*)^{\otimes j}$, hence every function on $\operatorname{Sym}^j W^*$ is in particular a function on $(W^*)^{\otimes j}$.
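Concretely, if we pick linear coordinates $x_1, \ldots , x_n$ on $W$ and write $\alpha = \sum _a g_a\, \mathrm{d}x_a$ with $g_a: W \to V$ smooth, then the $N$-jet of $\alpha$ at the origin records the Taylor coefficients of the $g_a$: up to reordering the tensor factors, its $j$^{th} component is

$\sum _a \tfrac{1}{j!}\, D^j g_a(0) \otimes \mathrm{d}x_a, \qquad D^j g_a(0) \in \operatorname{Sym}^j W^* \otimes V.$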

At this point, everything else follows from the fact that $\tilde{\omega }_W$ is functorial in $W$, and in particular $\mathrm{GL}(W)$-invariant.

$\tilde{\omega }_W$ is a polynomial function.

This lemma is true in much greater generality — it holds for any set-theoretic natural transformation between “polynomial functors” $\mathsf{Vec}\to \mathsf{Vec}$. Here a set-theoretic natural transformation is a natural transformation of the underlying set-valued functors. This is a polynomial version of the fact that a natural transformation between additive functors is necessarily additive, because being additive is a *property* and not a structure.

Write $F(W) = J^N(W; W^* \otimes V)$ and $G(W) = {\textstyle \bigwedge }^p W^*$. We think of these as functors $\mathsf{Vec}\to \mathsf{Vec}$ (with $V$ fixed). The point is that for $f \in \operatorname{Hom}_\mathsf{Vec}(W, W')$, the induced maps $F(f), G(f)$ are polynomial in $f$. This together with naturality will force $\tilde{\omega }_W$ to be polynomial as well.
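A toy example of the phenomenon in question: the set-theoretic natural transformation

$\eta _W: W \to \operatorname{Sym}^2 W, \qquad \eta _W(w) = w \cdot w$

is indeed natural, since for any linear $f: W \to W'$ we have $\operatorname{Sym}^2(f)(w \cdot w) = f(w) \cdot f(w) = \eta _{W'}(f(w))$, but it is quadratic rather than linear. The lemma says that polynomial maps like this are the only set-theoretic natural transformations that can occur.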

To show that $\tilde{\omega }_W$ is polynomial, we have to show that if $v_1, \ldots , v_n \in F(W)$, then $\tilde{\omega }_W(\sum \lambda _i v_i)$ is a polynomial function in $\lambda _1, \ldots , \lambda _n$. Without loss of generality, we may assume each $v_i$ lives in the $(j_i - 1)$th summand (so that the summand has $j_i$ tensor powers of $W^*$).

Fix a number $j$ such that $j_i \mid j$ for all $i$. We first show that $\tilde{\omega }_W(\sum \lambda _i^j v_i)$ is a polynomial function in the $\lambda _i$'s.

Let $f: W^{\oplus n} \to W^{\oplus n}$ be the map that multiplies by $\lambda _i^{j / j_i}$ on the $i$th factor, and $\Sigma : W^{\oplus n} \to W$ the sum map. Naturality of $\tilde{\omega }$ with respect to the maps induced by $\Sigma \circ f$ gives a commutative diagram

$\begin{array}{ccc} F(W^{\oplus n}) & \longrightarrow & F(W) \\ \downarrow & & \downarrow \\ G(W^{\oplus n}) & \longrightarrow & G(W), \end{array}$

where the vertical maps are $\tilde{\omega }_{W^{\oplus n}}$ and $\tilde{\omega }_W$, and the horizontal maps are induced by $\Sigma \circ f$.

Let $\tilde{v}_i \in F(W^{\oplus n})$ be the image of $v_i$ under the inclusion of the $i$th summand. Then $x = \sum \tilde{v}_i$ gets sent along the top row to $\sum \lambda _i^j v_i$. On the other hand, $\tilde{\omega }_{W^{\oplus n}}(x)$ is some element in $G(W^{\oplus n})$, and whatever it might be, the image along the bottom row gives a polynomial function in the $\lambda _i^{j/j_i}$, hence in the $\lambda _i$. So we are done.

We now know that for any finite set $v_1, \ldots , v_n$, we can write

$\tilde{\omega }_W(\lambda _1^j v_1 + \cdots + \lambda _n^j v_n) = \sum _{R = (r_1, \ldots , r_n)} a_R\, \lambda _1^{r_1} \cdots \lambda _n^{r_n}.$

We claim each $r_i$ is a multiple of $j$ (whenever the corresponding $a_R$ is non-zero). Indeed, if we set $\lambda _i = (\mu _i^j - \nu _i^j)^{1/j}$, then the result must be a polynomial in the $\mu _i$ and $\nu _i$ as well, since it is of the form $\tilde{\omega }_W(\sum \mu _i^j v_i - \nu _i^j v_i)$ and the previous step applies to the vectors $v_1, \ldots , v_n, -v_1, \ldots , -v_n$. But $\sum a_R (\mu _1^j - \nu _1^j)^{r_1/ j} \cdots (\mu _n^j - \nu _n^j)^{r_n/ j}$ is polynomial in the $\mu _i, \nu _i$ if and only if $j \mid r_i$ for all $i$.
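The one-variable case $n = 1$, $j = 2$ already shows the mechanism: if $\tilde{\omega }_W(\lambda ^2 v) = \sum _r a_r \lambda ^r$, then substituting $\lambda ^2 = \mu ^2 - \nu ^2$ gives

$\sum _r a_r\, (\mu ^2 - \nu ^2)^{r/2},$

and $(\mu ^2 - \nu ^2)^{r/2}$ is a polynomial in $\mu , \nu$ precisely when $r$ is even — $r = 2$ gives $\mu ^2 - \nu ^2$, while $r = 1$ gives the non-polynomial $\sqrt{\mu ^2 - \nu ^2}$.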

Now by taking $j$th roots, we know $\tilde{\omega }_W(\sum \lambda _i v_i)$ is polynomial in the $\lambda _i$ when $\lambda _i \geq 0$. That is, it is polynomial when restricted to the cone spanned by the $v_i$'s. But since the $v_i$'s are arbitrary (for example, we may run the argument with $\pm v_1, \ldots , \pm v_n$ to cover every orthant), this implies it is polynomial everywhere.

Any non-zero $\mathrm{GL}(W)$-invariant linear map $\bigotimes ^M W^* \to {\textstyle \bigwedge }^p W^*$ has $M = p$ and is a multiple of the anti-symmetrization map. In particular, any such map is anti-symmetric.

Let $\{ V_\lambda \}$ be the set of irreducible representations of $S_p$. Then as an $S_p \times \mathrm{GL}(W)$-representation, we have

$W^{\otimes p} = \bigoplus _\lambda V_\lambda \otimes W_\lambda ,$

where $W_\lambda = \operatorname{Hom}_{S_p} (V_\lambda , W^{\otimes p})$ is either zero or irreducible, and the non-zero $W_\lambda$ are pairwise non-isomorphic for different $\lambda$. Under this decomposition, ${\textstyle \bigwedge }^p W$ corresponds to the sign representation of $S_p$.
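For $p = 2$ this is the familiar decomposition

$W^{\otimes 2} = \operatorname{Sym}^2 W \oplus {\textstyle \bigwedge }^2 W,$

with $S_2$ acting trivially on the first summand and by the sign representation on the second; the unique (up to scalar) equivariant projection onto the second summand is the anti-symmetrization $w_1 \otimes w_2 \mapsto \frac{1}{2}(w_1 \otimes w_2 - w_2 \otimes w_1)$.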

So we know $\tilde{\omega }_W$ is a polynomial function on $\bigoplus _j \operatorname{Sym}^j W^* \otimes W^* \otimes V$ which is anti-symmetric in the $W^*$ factors. A term involving the $j$th summand contributes $j + 1$ tensor powers of $W^*$ that are symmetric in the first $j$ of them, so anti-symmetry forces $j \leq 1$: the only terms that can contribute are $j = 0$ and $j = 1$. In the $j = 1$ case, the term has to factor through ${\textstyle \bigwedge }^2 W^* \otimes V$. So $\tilde{\omega }_W$ is a polynomial function on $(W^* \otimes V) \oplus ({\textstyle \bigwedge }^2 W^* \otimes V)$. This says exactly that $\omega _W(\alpha )$ is given by wedging together $\alpha$ and $\mathrm{d}\alpha$ (and pairing with elements of $V^*$).
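Concretely, writing $\alpha = \sum _a g_a\, \mathrm{d}x_a$ in linear coordinates on $W$, the $j = 1$ component of the jet is the matrix of first derivatives $\partial _b g_a(0)$, and its image in ${\textstyle \bigwedge }^2 W^* \otimes V$ is exactly $\mathrm{d}\alpha$ at the origin:

$\mathrm{d}\alpha |_0 = \sum _{a < b} \left( \partial _a g_b(0) - \partial _b g_a(0) \right) \mathrm{d}x_a \wedge \mathrm{d}x_b.$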