1 Integrability of ODE's
1.1 Vector fields and flow maps
In the first section, we are going to look at the integrability of ODE's. Here we are going to consider a general $m$-dimensional first-order non-linear ODE. As always, restricting to first-order ODE's is not an actual restriction, since any higher-order ODE can be written as a system of first-order ODE's. At the end, we will be concerned with a special kind of ODE given by a Hamiltonian system. However, in this section, we first give a quick overview of the general theory of ODE's.
An $m$-dimensional ODE is specified by a vector field $V: \mathbb{R}^m \to \mathbb{R}^m$ and an initial condition $x_0 \in \mathbb{R}^m$. The objective is to find some $x(t) \in \mathbb{R}^m$, which is a function of $t \in (a, b)$ for some interval $(a, b)$ containing $0$, satisfying
\[
  \dot{x} = V(x), \quad x(0) = x_0.
\]
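As a concrete illustration (not part of the notes themselves), here is a minimal numerical sketch of such an ODE, assuming Python with NumPy and SciPy, and using the illustrative rotation field $V(x) = (-x_2, x_1)$ chosen purely as an example:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-dimensional vector field V(x) = (-x2, x1) (an assumed example);
# its flow rotates points about the origin.
def V(t, x):
    return np.array([-x[1], x[0]])

x0 = np.array([1.0, 0.0])          # initial condition x(0) = x0
sol = solve_ivp(V, (0.0, 2 * np.pi), x0, dense_output=True, rtol=1e-10, atol=1e-10)

# For this field the solution is a circle, so after time 2*pi we return to x0.
print(sol.sol(2 * np.pi))          # approximately [1, 0]

The solver only approximates $x(t)$, but for this particular field the trajectory is a circle of period $2\pi$, which gives a simple check of the output.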
In this course, we will assume the vector field $V$ is sufficiently “nice”, so that the following result holds:
Fact. For a “nice” vector field $V$ and any initial condition $x_0$, there is always a unique solution to $\dot{x} = V(x)$, $x(0) = x_0$. Moreover, this solution depends smoothly (i.e. infinitely differentiably) on $t$ and $x_0$.
It is convenient to write the solution as
\[
  x(t) = g^t x_0,
\]
where $g^t: \mathbb{R}^m \to \mathbb{R}^m$ is called the flow map. Since $V$ is nice, we know this is a smooth map. This flow map has some nice properties:
Proposition.
(i) $g^0 = \mathrm{id}$
(ii) $g^{t+s} = g^t g^s$
(iii) $(g^t)^{-1} = g^{-t}$
If one knows group theory, then this says that $g$ is a group homomorphism from $\mathbb{R}$ to the group of diffeomorphisms of $\mathbb{R}^m$, i.e. the group of smooth invertible maps $\mathbb{R}^m \to \mathbb{R}^m$.
Proof. The equality $g^0 = \mathrm{id}$ is by definition of $g$, and the last equality follows from the first two since $t + (-t) = 0$. To see the second, we need to show that
\[
  g^{t+s} x_0 = g^t (g^s x_0)
\]
for any $x_0$. To do so, we see that both of them, as a function of $t$, are solutions to
\[
  \dot{x} = V(x), \quad x(0) = g^s x_0.
\]
So the result follows since solutions are unique.
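To make the group property concrete, the following sketch (same assumed Python/SciPy setup and example field as above) checks $g^{t+s} x_0 = g^t(g^s x_0)$ numerically, up to integration error:

import numpy as np
from scipy.integrate import solve_ivp

def V(t, x):
    # the same illustrative rotation field as in the earlier sketch
    return np.array([-x[1], x[0]])

def flow(t, x0):
    # Approximate g^t x0 by integrating x' = V(x) from time 0 to t.
    if t == 0:
        return np.asarray(x0, dtype=float)
    return solve_ivp(V, (0.0, t), x0, rtol=1e-12, atol=1e-12).y[:, -1]

x0 = np.array([0.3, -1.2])
t, s = 0.7, 1.9
lhs = flow(t + s, x0)              # g^{t+s} x0
rhs = flow(t, flow(s, x0))         # g^t (g^s x0)
print(np.max(np.abs(lhs - rhs)))   # ~ 0, up to integration error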
We say that $V$ is the infinitesimal generator of the flow $g^t$. This is because we can Taylor expand:
\[
  x(\varepsilon) = g^\varepsilon x_0 = x(0) + \varepsilon \dot{x}(0) + o(\varepsilon) = x_0 + \varepsilon V(x_0) + o(\varepsilon).
\]
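One can also observe this numerically: as $\varepsilon$ shrinks, $g^\varepsilon x_0$ should agree with $x_0 + \varepsilon V(x_0)$ up to $o(\varepsilon)$. A small sketch under the same assumptions as before:

import numpy as np
from scipy.integrate import solve_ivp

def V(t, x):
    return np.array([-x[1], x[0]])    # illustrative rotation field again

x0 = np.array([0.3, -1.2])
for eps in (1e-1, 1e-2, 1e-3):
    g_eps = solve_ivp(V, (0.0, eps), x0, rtol=1e-12, atol=1e-12).y[:, -1]
    taylor = x0 + eps * V(0.0, x0)    # x0 + eps * V(x0)
    # The difference shrinks faster than eps, consistent with the o(eps) term.
    print(eps, np.linalg.norm(g_eps - taylor))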
Given vector fields $V_1, V_2$, one natural question to ask is whether their flows commute, i.e. if they generate $g_1^t$ and $g_2^s$, then must we have
\[
  g_1^t g_2^s x_0 = g_2^s g_1^t x_0
\]
for all $x_0$? In general, this need not be true, so we might be interested to find out if this happens to be true for particular $V_1, V_2$. However, this is often difficult to check directly, because differential equations are generally hard to solve, and we would have a lot of trouble trying to find explicit expressions for $g_1$ and $g_2$.
Thus, we would want to be able to consider this problem at an infinitesimal level, i.e. just by looking at $V_1, V_2$ themselves. It turns out the answer is given by the commutator:
Definition (Commutator). For two vector fields $V_1, V_2: \mathbb{R}^m \to \mathbb{R}^m$, we define a third vector field, called the commutator, by
\[
  [V_1, V_2] = V_1 \cdot \frac{\partial}{\partial x} V_2 - V_2 \cdot \frac{\partial}{\partial x} V_1,
\]
where we write
\[
  \frac{\partial}{\partial x} = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_m}\right)^T.
\]
More explicitly, the $i$th component is given by
\[
  [V_1, V_2]_i = \sum_{j=1}^m \left((V_1)_j \frac{\partial}{\partial x_j} (V_2)_i - (V_2)_j \frac{\partial}{\partial x_j} (V_1)_i\right).
\]
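This componentwise formula is easy to mechanise. The following sketch computes the commutator symbolically, assuming Python with SymPy, for two illustrative fields on $\mathbb{R}^2$ (the radial field and the rotation field, chosen here as examples rather than taken from the notes):

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

# Two illustrative vector fields on R^2 (assumed examples):
V1 = sp.Matrix([x1, x2])              # radial (scaling) field
V2 = sp.Matrix([-x2, x1])             # rotation field

def commutator(A, B, X):
    # [A, B]_i = sum_j (A)_j d(B)_i/dx_j - (B)_j d(A)_i/dx_j,
    # i.e. (DB) A - (DA) B, with D denoting the Jacobian.
    return sp.simplify(B.jacobian(X) * A - A.jacobian(X) * B)

print(commutator(V1, V2, X))          # Matrix([[0], [0]]): these two fields commute

In matrix form the componentwise formula reads $[V_1, V_2] = (DV_2)V_1 - (DV_1)V_2$, where $DV$ is the Jacobian, which is what the jacobian calls implement.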
The result we have is

Proposition. Let $V_1, V_2$ be vector fields with flows $g_1^t$ and $g_2^s$. Then we have
\[
  [V_1, V_2] = 0 \iff g_1^t g_2^s = g_2^s g_1^t.
\]
Proof. See example sheet 1.
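While the proof is left to the example sheet, the statement can at least be observed numerically. The sketch below (same assumed Python/SciPy setup) composes, in both orders, the flows of the commuting pair from the previous sketch and of a pair of linear fields that do not commute:

import numpy as np
from scipy.integrate import solve_ivp

def flow(V, t, x0):
    # Approximate the time-t flow of x' = V(x).
    return solve_ivp(lambda _, x: V(x), (0.0, t), x0, rtol=1e-12, atol=1e-12).y[:, -1]

V1 = lambda x: np.array([x[0], x[1]])     # radial field: commutes with the rotation field
V2 = lambda x: np.array([-x[1], x[0]])    # rotation field
V3 = lambda x: np.array([x[1], 0.0])      # a linear field that does NOT commute with V2

x0 = np.array([0.5, 0.2])
t, s = 0.4, 0.9
print(np.max(np.abs(flow(V1, t, flow(V2, s, x0)) - flow(V2, s, flow(V1, t, x0)))))  # ~ 0
print(np.max(np.abs(flow(V2, t, flow(V3, s, x0)) - flow(V3, s, flow(V2, t, x0)))))  # clearly nonzero

For linear fields $V(x) = Ax$ and $W(x) = Bx$ the commutator is $[V, W](x) = (BA - AB)x$, so commuting flows correspond exactly to commuting matrices, which is why the second pair above fails to commute.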