1.2 Hamiltonian dynamics
From now on, we are going to restrict to a very special kind of ODE, known as a Hamiltonian system. To write down a general ODE, the background setting is just the space $\mathbb{R}^n$. We then pick a vector field, and then we get an ODE. To write down a Hamiltonian system, we need more things in the background, but conversely we need to supply less information to get the system. These Hamiltonian systems are very useful in classical dynamics, and our results here have applications in classical dynamics, but we will not go into the physical applications here.
The background setting of a Hamiltonian system is a phase space $M = \mathbb{R}^{2n}$. Points on $M$ are described by coordinates
\[
  (q, p) = (q_1, \cdots, q_n, p_1, \cdots, p_n).
\]
We tend to think of the $q_i$ as the "generalized positions" of particles, and the $p_i$ as the "generalized momentum" coordinates. We will often write
\[
  x = (q, p)^T.
\]
It is very important to note that here we have "paired up" each $q_i$ with the corresponding $p_i$. In normal $\mathbb{R}^n$, all the coordinates are equal, but this is no longer the case here. To encode this information, we define the $2n \times 2n$ anti-symmetric matrix
\[
  J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}.
\]
We call this the symplectic form, and this is the extra structure we have for a phase space. We will later see that all the things we care about can be written in terms of $J$, but for practical purposes, we will often express them in terms of $p$ and $q$ instead.
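As a quick sanity check (my own sketch, not part of the notes), we can build $J$ numerically for $n = 2$ and confirm the two algebraic facts we will implicitly use: $J$ is antisymmetric and $J^2 = -I_{2n}$.

```python
# Minimal numpy sketch (assumed illustration): construct J and check its
# antisymmetry and that it squares to minus the identity.
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

assert np.array_equal(J.T, -J)              # J is antisymmetric
assert np.array_equal(J @ J, -np.eye(2*n))  # J^2 = -I_{2n}
```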
The first example is the Poisson bracket:

Definition (Poisson bracket). For any two functions $f, g: M \to \mathbb{R}$, we define the Poisson bracket by
\[
  \{f, g\} = \left(\frac{\partial f}{\partial x}\right)^T J \frac{\partial g}{\partial x} = \frac{\partial f}{\partial q} \cdot \frac{\partial g}{\partial p} - \frac{\partial f}{\partial p} \cdot \frac{\partial g}{\partial q}.
\]
This has some obvious and not-so-obvious properties:

Proposition.
(i) This is linear in each argument.
(ii) This is antisymmetric, i.e. $\{f, g\} = -\{g, f\}$.
(iii) This satisfies the Leibniz property:
\[
  \{f, gh\} = \{f, g\}h + \{f, h\}g.
\]
(iv) This satisfies the Jacobi identity:
\[
  \{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0.
\]
(v) We have
\[
  \{q_i, q_j\} = \{p_i, p_j\} = 0, \quad \{q_i, p_j\} = \delta_{ij}.
\]
Proof.
Just write out the definitions. In particular, you will be made to write
out the 24 terms of the Jacobi identity in the first example sheet.
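If writing out all the terms by hand sounds unappealing, here is a small sympy sketch (my own illustration, not from the notes; the sample functions $f, g, h$ are arbitrary choices) that checks property (v) directly from the definition and verifies the Jacobi identity for those sample functions with $n = 2$.

```python
# Sympy sketch (assumed illustration): check the canonical relations and
# the Jacobi identity for the Poisson bracket with n = 2.
import sympy as sp

n = 2
q = sp.symbols(f'q1:{n+1}')   # (q1, q2)
p = sp.symbols(f'p1:{n+1}')   # (p1, p2)

def pb(f, g):
    """Poisson bracket {f, g} = df/dq . dg/dp - df/dp . dg/dq."""
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in range(n))

# Property (v): {q_i, q_j} = {p_i, p_j} = 0 and {q_i, p_j} = delta_ij
for i in range(n):
    for j in range(n):
        assert pb(q[i], q[j]) == 0 and pb(p[i], p[j]) == 0
        assert pb(q[i], p[j]) == (1 if i == j else 0)

# Property (iv): the Jacobi identity, checked on three sample functions
f, g, h = q[0] * p[1], p[0]**2 + q[1], sp.sin(q[0]) * p[0]
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
assert sp.simplify(jacobi) == 0
```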
We will be interested in problems on M of the following form:
Definition (Hamilton's equation). Hamilton's equation is an equation of the form
\[
  \dot{q} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial q}
\]
for some function $H: M \to \mathbb{R}$ called the Hamiltonian.
Just as we think of $q$ and $p$ as generalized position and momentum, we tend to think of $H$ as generalized energy.
Note that given the phase space $M$, all we need to specify a Hamiltonian system is just a Hamiltonian function $H: M \to \mathbb{R}$, which is much less information than that needed to specify a vector field.
In terms of $J$, we can write Hamilton's equation as
\[
  \dot{x} = J \frac{\partial H}{\partial x}.
\]
We can imagine Hamilton's equation as specifying the trajectory of a particle. In this case, we might want to ask how, say, the speed of the particle changes as it evolves. In general, suppose we have a smooth function $f: M \to \mathbb{R}$. We want to find the value of $\frac{\mathrm{d}f}{\mathrm{d}t}$. We simply have to apply the chain rule to obtain
\[
  \frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}t} f(x(t)) = \frac{\partial f}{\partial x} \cdot \dot{x} = \left(\frac{\partial f}{\partial x}\right)^T J \frac{\partial H}{\partial x} = \{f, H\}.
\]
We record this result:
Proposition. Let $f: M \to \mathbb{R}$ be a smooth function. If $x(t)$ evolves according to Hamilton's equation, then
\[
  \frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\}.
\]
In particular, a function $f$ is constant along trajectories if and only if $\{f, H\} = 0$. This is very convenient. Without a result like this, if we want to see if $f$ is a conserved quantity of the particle (i.e. $\frac{\mathrm{d}f}{\mathrm{d}t} = 0$), we might have to integrate the equations of motion, and then try to find explicitly what is conserved, or perhaps mess around with the equations of motion to somehow find that $\frac{\mathrm{d}f}{\mathrm{d}t}$ vanishes. However, we now have a very systematic way of figuring out if $f$ is a conserved quantity: we just compute $\{f, H\}$.
In particular, we automatically find that the Hamiltonian is conserved:
\[
  \frac{\mathrm{d}H}{\mathrm{d}t} = \{H, H\} = 0.
\]
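To see this conservation in action, here is a short numerical sketch (an assumed example, not from the notes) for the harmonic oscillator $H = \frac{1}{2}(p^2 + q^2)$ with $n = 1$: integrating Hamilton's equations and checking that $H$ stays constant along the trajectory up to integration error.

```python
# Numerical sketch (assumed example): H = (p^2 + q^2)/2, Hamilton's
# equations qdot = dH/dp = p, pdot = -dH/dq = -q, integrated with scipy.
import numpy as np
from scipy.integrate import solve_ivp

def hamilton(t, x):
    q, p = x
    return [p, -q]

sol = solve_ivp(hamilton, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
q, p = sol.y
H = 0.5 * (p**2 + q**2)
print(H.max() - H.min())  # small (of the order of the tolerance): H is conserved
```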
Example. Consider a particle (of unit mass) with position $q = (q_1, q_2, q_3)$ (in Cartesian coordinates) moving under the influence of a potential $U(q)$. By Newton's second law, we have
\[
  \ddot{q} = -\frac{\partial U}{\partial q}.
\]
This is actually a Hamiltonian system. We define the momentum variables by
\[
  p_i = \dot{q}_i.
\]
Then we have
\[
  \dot{x} = \begin{pmatrix} \dot{q} \\ \dot{p} \end{pmatrix} = \begin{pmatrix} p \\ -\frac{\partial U}{\partial q} \end{pmatrix} = J \frac{\partial H}{\partial x},
\]
with
\[
  H = \frac{1}{2}|p|^2 + U(q).
\]
This is just the usual energy! Indeed, we can compute
\[
  \frac{\partial H}{\partial p} = p, \quad \frac{\partial H}{\partial q} = \frac{\partial U}{\partial q}.
\]
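As a symbolic cross-check (my own illustration, not in the notes; the arbitrary potential $U$ is left as an unspecified sympy function), we can let sympy compute $\partial H/\partial p$ and $\partial H/\partial q$ for $H = \frac{1}{2}|p|^2 + U(q)$ and confirm that Hamilton's equations reproduce Newton's second law.

```python
# Sympy sketch (assumed illustration): H = |p|^2/2 + U(q) gives back
# Newton's second law via Hamilton's equations, for n = 3.
import sympy as sp

q = sp.symbols('q1:4')
p = sp.symbols('p1:4')
U = sp.Function('U')(*q)                      # arbitrary potential U(q)
H = sum(pi**2 for pi in p) / 2 + U

qdot = [sp.diff(H, pi) for pi in p]           # dH/dp_i
pdot = [-sp.diff(H, qi) for qi in q]          # -dH/dq_i

assert qdot == list(p)                        # qdot_i = p_i
assert pdot == [-sp.diff(U, qi) for qi in q]  # pdot_i = -dU/dq_i, i.e. Newton
```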
Definition (Hamiltonian vector field). Given a Hamiltonian function $H$, the Hamiltonian vector field is given by
\[
  V_H = J \frac{\partial H}{\partial x}.
\]
We then see that by definition, the Hamiltonian vector field generates the Hamiltonian flow. More generally, for any $f: M \to \mathbb{R}$, we call
\[
  V_f = J \frac{\partial f}{\partial x}
\]
the Hamiltonian vector field with respect to $f$.
We now have two bracket-like things we can form. Given two functions $f, g$, we can take the Poisson bracket to get $\{f, g\}$, and consider its Hamiltonian vector field $V_{\{f,g\}}$. On the other hand, we can first form $V_f$ and $V_g$, and then take the commutator of the vector fields. It turns out these are not equal, but differ by a sign.
Proposition. We have
\[
  [V_f, V_g] = -V_{\{f,g\}}.
\]
Proof. See first example sheet.
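For the skeptical, here is a sympy sketch (my own, not from the notes) verifying the sign for one particular pair of functions, using the conventions above with $n = 1$, so that $V_f = (\partial f/\partial p, -\partial f/\partial q)$. The specific $f$ and $g$ are arbitrary choices.

```python
# Sympy sketch (assumed illustration): check [V_f, V_g] = -V_{f,g} for n = 1.
import sympy as sp

q, p = sp.symbols('q p')
x = sp.Matrix([q, p])

def pb(f, g):
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

def V(f):
    """Hamiltonian vector field of f: V_f = J df/dx = (df/dp, -df/dq)."""
    return sp.Matrix([sp.diff(f, p), -sp.diff(f, q)])

def commutator(v, w):
    """[v, w]^i = v^j d_j w^i - w^j d_j v^i, with x = (q, p)."""
    return w.jacobian(x) * v - v.jacobian(x) * w

f = q**2 * p
g = sp.cos(q) + p**3

lhs = commutator(V(f), V(g))
rhs = -V(pb(f, g))
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 1)
```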
Definition (First integral). Given a phase space $M$ with a Hamiltonian $H$, we call $f: M \to \mathbb{R}$ a first integral of the Hamiltonian system if
\[
  \{f, H\} = 0.
\]
The reason for the term "first integral" is historical: when we solve a differential equation, we integrate the equation. Every time we integrate it, we obtain a new constant. And the first constant we obtain when we integrate is known as the first integral. However, for our purposes, we can just as well think of it as a constant of motion.
Example. Consider the two-body problem: the Sun is fixed at the origin, and a planet has Cartesian coordinates $q = (q_1, q_2, q_3)$. The equation of motion will be
\[
  \ddot{q} = -\frac{q}{|q|^3}.
\]
This is equivalent to the Hamiltonian system $p = \dot{q}$, with
\[
  H = \frac{1}{2}|p|^2 - \frac{1}{|q|}.
\]
We have an angular momentum given by
\[
  L = q \times p.
\]
Working with coordinates, we have
\[
  L_i = \varepsilon_{ijk} q_j p_k.
\]
We then have (with implicit summation)
\[
  \{L_i, H\} = \frac{\partial L_i}{\partial q_\ell} \frac{\partial H}{\partial p_\ell} - \frac{\partial L_i}{\partial p_\ell} \frac{\partial H}{\partial q_\ell} = \varepsilon_{ijk} \left( p_k \delta_{\ell j} p_\ell - \frac{q_j q_k}{|q|^3} \right) = \varepsilon_{ijk} \left( p_k p_j - \frac{q_j q_k}{|q|^3} \right) = 0,
\]
where we know the thing vanishes because we contracted a symmetric tensor with an antisymmetric one. So this is a first integral.
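The same calculation can also be delegated to a computer; the following sympy sketch (my own illustration, not part of the notes) checks that each component of $L$ Poisson-commutes with the Kepler Hamiltonian above.

```python
# Sympy sketch (assumed illustration): {L_i, H} = 0 for H = |p|^2/2 - 1/|q|
# and L = q x p, with n = 3.
import sympy as sp

q = sp.Matrix(sp.symbols('q1:4'))
p = sp.Matrix(sp.symbols('p1:4'))

def pb(f, g):
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in range(3))

H = p.dot(p) / 2 - 1 / sp.sqrt(q.dot(q))
L = q.cross(p)

assert all(sp.simplify(pb(L[i], H)) == 0 for i in range(3))
```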
Less interestingly, we know $H$ is also a first integral. In general, some Hamiltonians have many, many first integrals.
Our objective for the remainder of the chapter is to show that if our Hamiltonian system has enough first integrals, then we can find a change of coordinates so that the equations of motion are "trivial". However, we need to impose some constraints on the integrals for this to be true. We will need to know about the following words:
Definition (Involution). We say that two first integrals $F, G$ are in involution if $\{F, G\} = 0$ (so $F$ and $G$ "Poisson commute").
Definition (Independent first integrals). A collection of functions $f_i: M \to \mathbb{R}$ are independent if at each $x \in M$, the vectors $\frac{\partial f_i}{\partial x}$ for $i = 1, \cdots, n$ are independent.
In general, we will say a system is "integrable" if we can find a change of coordinates so that the equations of motion become "trivial" and we can just integrate them up. This is a bit vague, so we will define integrability in terms of the existence of first integrals, and then we will later see that if these conditions are satisfied, then we can indeed integrate the system up:
Definition (Integrable system). A $2n$-dimensional Hamiltonian system $(M, H)$ is integrable if there exist $n$ first integrals $\{f_i\}_{i=1}^n$ that are independent and in involution (i.e. $\{f_i, f_j\} = 0$ for all $i, j$).
The word independent is very important, or else people will cheat, e.g. take $H, 2H, e^H, H^2, \cdots$.
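A quick way to see why this cheating fails (my own illustration, not from the notes; the sample Hamiltonian $H = \frac{1}{2}(p^2 + q^2)$ with $n = 1$ is an assumption): the gradients of $H$, $H^2$ and $e^H$ are all proportional to $\partial H/\partial x$, so the collection is not independent.

```python
# Sympy sketch (assumed illustration): the gradients of H, H^2 and e^H are
# everywhere parallel, so these functions fail the independence requirement.
import sympy as sp

q, p = sp.symbols('q p')
H = (p**2 + q**2) / 2

def grad(f):
    return sp.Matrix([sp.diff(f, q), sp.diff(f, p)])

G = sp.Matrix.hstack(grad(H), grad(H**2), grad(sp.exp(H)))
print(G.rank())  # 1 (generically): every column is a multiple of grad(H)
```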
Example. Two-dimensional Hamiltonian systems are always integrable: here $n = 1$, and $H$ itself serves as the single required first integral, since $\{H, H\} = 0$.