4 Structure of integrable PDEs

II Integrable Systems



4.1 Infinite-dimensional Hamiltonian systems
When we did ODEs, our integrable ODEs were not just random ODEs. They
came from some (finite-dimensional) Hamiltonian systems. If we view PDEs
as infinite-dimensional ODEs, then it is natural to ask if we can generalize the
notion of Hamiltonian systems to infinite-dimensional ones, and then see if we
can put our integrable PDEs in the form of a Hamiltonian system. It turns
out we can, and nice properties of the PDE fall out of this formalism.
We recall that a (finite-dimensional) phase space is given by $M = \mathbb{R}^{2n}$
and a non-degenerate anti-symmetric matrix $J$. Given a Hamiltonian function
$H: M \to \mathbb{R}$, the equation of motion for $x(t) \in M$ becomes
\[
  \frac{\mathrm{d}x}{\mathrm{d}t} = J \frac{\partial H}{\partial x},
\]
where $x(t)$ is a vector of length $2n$, $J$ is a non-degenerate anti-symmetric matrix,
and $H = H(x)$ is the Hamiltonian.
In the infinite-dimensional case, instead of having $2n$ coordinates $x_i(t)$, we
have a function $u(x, t)$ that depends continuously on the parameter $x$. When
promoting finite-dimensional things to infinite-dimensional versions, we think
of $x$ as a continuous version of $i$. We now proceed to generalize the notions we
used to have from the finite-dimensional setting to infinite-dimensional ones.
The first is the inner product. In the finite-dimensional case, we could take
the inner product of two vectors by
\[
  \mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i.
\]
Here we have an analogous inner product, but we replace the sum with an
integral.
Notation. For functions $u(x)$ and $v(x)$, we write
\[
  \langle u, v\rangle = \int_{\mathbb{R}} u(x) v(x)\, \mathrm{d}x.
\]
If $u, v$ are functions of time as well, then so is the inner product.
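As a quick numerical sanity check (our own illustration, not part of the notes), we can approximate this inner product by a Riemann sum. For two Gaussians $u = v = e^{-x^2}$, the exact value $\int e^{-2x^2}\, \mathrm{d}x = \sqrt{\pi/2}$ is known in closed form:

```python
import numpy as np

# Grid wide enough that the Gaussians below vanish at the endpoints.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

def inner(u, v):
    """<u, v> = int u(x) v(x) dx, approximated by a Riemann sum."""
    return float(np.sum(u * v) * dx)

u = np.exp(-x**2)
v = np.exp(-x**2)

# Exact value: int e^{-2x^2} dx = sqrt(pi/2) ~ 1.2533
print(inner(u, v), np.sqrt(np.pi / 2))
```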
For finite-dimensional phase spaces, we talked about functions of $x$. In
particular, we had the Hamiltonian $H(x)$. In the case of infinite-dimensional
phase spaces, we will not consider arbitrary functions of $u$, but only functionals:

Definition (Functional). A functional $F$ is a real-valued function (on some
function space) of the form
\[
  F[u] = \int_{\mathbb{R}} f(x, u, u_x, u_{xx}, \cdots)\, \mathrm{d}x.
\]
Again, if $u$ is a function of time as well, then $F[u]$ is a function of time.
We used to be able to talk about the derivatives of functions. Time derivatives
of $F$ would work just as well, but differentiating with respect to $u$ will involve
the functional derivative, which you may have met in IB Variational Principles.

Definition (Functional derivative/Euler-Lagrange derivative). The functional
derivative of $F = F[u]$ at $u$ is the unique function $\delta F$ satisfying
\[
  \langle \delta F, \eta\rangle = \lim_{\varepsilon \to 0} \frac{F[u + \varepsilon \eta] - F[u]}{\varepsilon}
\]
for all smooth $\eta$ with compact support.

Alternatively, we have
\[
  F[u + \varepsilon \eta] = F[u] + \varepsilon \langle \delta F, \eta\rangle + o(\varepsilon).
\]
Note that $\delta F$ is another function, depending on $u$.
Example. Set
\[
  F[u] = \frac{1}{2} \int u_x^2\, \mathrm{d}x.
\]
We then have
\begin{align*}
  F[u + \varepsilon \eta] &= \frac{1}{2} \int (u_x + \varepsilon \eta_x)^2\, \mathrm{d}x \\
  &= \frac{1}{2} \int u_x^2\, \mathrm{d}x + \varepsilon \int u_x \eta_x\, \mathrm{d}x + o(\varepsilon) \\
  &= F[u] + \varepsilon \langle u_x, \eta_x\rangle + o(\varepsilon).
\end{align*}
This is no good, because we want something of the form $\langle \delta F, \eta\rangle$, not an inner
product with $\eta_x$. When in doubt, integrate by parts! This is just equal to
\[
  F[u] + \varepsilon \langle -u_{xx}, \eta\rangle + o(\varepsilon).
\]
Note that when integrating by parts, we don't have to mess with the boundary
terms, because $\eta$ is assumed to have compact support. So we have
\[
  \delta F = -u_{xx}.
\]
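We can also check this numerically. The sketch below (our own illustration, not from the notes) compares the difference quotient $(F[u + \varepsilon\eta] - F[u])/\varepsilon$ against $\langle -u_{xx}, \eta\rangle$ for a concrete choice of $u$ and $\eta$:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

def inner(u, v):
    return float(np.sum(u * v) * dx)

def F(u):
    # F[u] = (1/2) int u_x^2 dx, with u_x by central differences
    ux = np.gradient(u, dx)
    return 0.5 * inner(ux, ux)

u = np.exp(-x**2)              # smooth, rapidly decaying profile
eta = np.exp(-(x - 1)**2)      # test function, effectively compactly supported

eps = 1e-6
lhs = (F(u + eps * eta) - F(u)) / eps

# delta F = -u_xx; for u = e^{-x^2} we have u_xx = (4x^2 - 2) e^{-x^2}
rhs = inner(-(4 * x**2 - 2) * np.exp(-x**2), eta)
print(lhs, rhs)
```

The two numbers agree up to discretisation and the $o(\varepsilon)$ error.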
In general, from IB Variational Principles, we know that if
\[
  F[u] = \int f(x, u, u_x, u_{xx}, \cdots)\, \mathrm{d}x,
\]
then we have
\[
  \delta F = \frac{\partial f}{\partial u} - D_x \frac{\partial f}{\partial u_x} + D_x^2 \frac{\partial f}{\partial u_{xx}} - \cdots.
\]
Here $D_x$ is the total derivative, which is different from the partial derivative.
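If SymPy is available, its `euler_equations` helper computes exactly this alternating sum of total derivatives. As a sketch (assuming the `sympy` package), we can recover $\delta F = -u_{xx}$ for $f = u_x^2/2$, and $\delta F = u_{xxxx}$ for $f = u_{xx}^2/2$, where the $D_x^2$ term kicks in:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
u = sp.Function('u')(x)

# f = u_x^2 / 2: the formula gives delta F = -u_xx
eq1 = euler_equations(u.diff(x)**2 / 2, u, x)[0]
print(eq1)

# f = u_xx^2 / 2: the D_x^2 term gives delta F = u_xxxx
eq2 = euler_equations(u.diff(x, 2)**2 / 2, u, x)[0]
print(eq2)
```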
Definition (Total derivative). Consider a function $f(x, u, u_x, \cdots)$. For any
given function $u(x)$, the total derivative with respect to $x$ is
\[
  \frac{\mathrm{d}}{\mathrm{d}x} f(x, u(x), u_x(x), \cdots) = \frac{\partial f}{\partial x} + u_x \frac{\partial f}{\partial u} + u_{xx} \frac{\partial f}{\partial u_x} + \cdots
\]

Example.
\[
  \frac{\partial}{\partial x}(xu) = u, \quad D_x(xu) = u + x u_x.
\]
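The same distinction can be seen in SymPy (again our own sketch, assuming the `sympy` package): differentiating $xu$ with $u$ declared as a function of $x$ gives the total derivative, while treating $u$ as an independent symbol gives the partial derivative.

```python
import sympy as sp

x, U = sp.symbols('x u')       # U plays the role of u held fixed
u = sp.Function('u')(x)        # u as a function of x

total = (x * u).diff(x)        # D_x(xu) = u + x*u_x
partial = (x * U).diff(x)      # partial d/dx at fixed u: just u

print(total)
print(partial)
```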
Finally, we need to figure out an alternative for $J$. In the case of a finite-dimensional
Hamiltonian system, it is healthy to think of it as an anti-symmetric
bilinear form, so that $\mathbf{v} J \mathbf{w}$ is $J$ applied to $\mathbf{v}$ and $\mathbf{w}$. However, since we also have
an inner product given by the dot product, we can alternatively think of $J$ as a
linear map $\mathbb{R}^{2n} \to \mathbb{R}^{2n}$, so that we apply it as
\[
  \mathbf{v} \cdot J\mathbf{w} = \mathbf{v}^T J \mathbf{w}.
\]
Using this $J$, we can define the Poisson bracket of $f = f(x)$, $g = g(x)$ by
\[
  \{f, g\} = \frac{\partial f}{\partial x} \cdot J \frac{\partial g}{\partial x}.
\]
We know this is bilinear, antisymmetric and satisfies the Jacobi identity.
How do we promote this to infinite-dimensional Hamiltonian systems? We
can just replace $\frac{\partial f}{\partial x}$ with the functional derivative and the dot product with the
inner product. What we need is a replacement for $J$, which we will still call $J$,
but which is now a linear operator on functions. There is no obvious candidate for $J$,
but assuming we have found a reasonable linear and antisymmetric candidate,
we can make the following definition:
Definition (Poisson bracket for infinite-dimensional Hamiltonian systems). We
define the Poisson bracket of two functionals to be
\[
  \{F, G\} = \langle \delta F, J \delta G\rangle = \int \delta F(x)\, J \delta G(x)\, \mathrm{d}x.
\]

Since $J$ is linear and antisymmetric, we know that this Poisson bracket is
bilinear and antisymmetric. The annoying part is the Jacobi identity
\[
  \{F, \{G, H\}\} + \{G, \{H, F\}\} + \{H, \{F, G\}\} = 0.
\]
This is not automatically satisfied. We need conditions on $J$. The simplest
antisymmetric linear map we can think of would be $J = \frac{\partial}{\partial x}$, and this works, i.e.
the Jacobi identity is satisfied. Proving that is easy, but painful.
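While verifying the Jacobi identity is painful, antisymmetry of $J = \partial/\partial x$ is just integration by parts: $\langle u, v_x\rangle = -\langle u_x, v\rangle$ when the boundary terms vanish. A quick numerical check (our own illustration, not from the notes):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

def inner(u, v):
    return float(np.sum(u * v) * dx)

def J(u):
    # J = d/dx, approximated by central differences
    return np.gradient(u, dx)

u = np.exp(-x**2)
v = np.exp(-(x - 1)**2)

# Antisymmetry <u, J v> = -<J u, v>: no boundary terms since u, v decay
print(inner(u, J(v)), -inner(J(u), v))
```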
Finally, we get to the equations of motion. Recall that for finite-dimensional
systems, our equation of evolution is given by
\[
  \frac{\mathrm{d}x}{\mathrm{d}t} = J \frac{\partial H}{\partial x}.
\]
We make the obvious analogues here:
Definition (Hamiltonian form). An evolution equation for $u = u(x, t)$ is in
Hamiltonian form if it can be written as
\[
  u_t = J \frac{\delta H}{\delta u}
\]
for some functional $H = H[u]$ and some linear, antisymmetric $J$ such that the
Poisson bracket
\[
  \{F, G\} = \langle \delta F, J \delta G\rangle
\]
obeys the Jacobi identity. Such a $J$ is known as a Hamiltonian operator.
Definition (Hamiltonian operator). A Hamiltonian operator is a linear, antisymmetric
operator $J$ on the space of functions such that the induced Poisson
bracket obeys the Jacobi identity.
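As a toy illustration (our own choice of Hamiltonian, not one from the notes), take $J = \partial/\partial x$ and $H[u] = \frac{1}{2}\int u^2\, \mathrm{d}x$. Since the density has no derivative terms, $\delta H = u$, and the Hamiltonian form $u_t = J\,\delta H$ becomes the transport equation $u_t = u_x$. A SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

# Toy Hamiltonian density u^2/2: no u_x, u_xx terms, so
# delta H / delta u is just the partial derivative w.r.t. u
deltaH = sp.diff(u**2 / 2, u)
ut = deltaH.diff(x)            # apply J = d/dx

print(deltaH)                  # u(x)
print(ut)                      # Derivative(u(x), x), i.e. u_t = u_x
```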
Recall that for a finite-dimensional Hamiltonian system, if $f = f(x)$ is any
function, then we had
\[
  \frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\}.
\]
This generalizes to the infinite-dimensional case.

Proposition. If $u_t = J \delta H$ and $I = I[u]$, then
\[
  \frac{\mathrm{d}I}{\mathrm{d}t} = \{I, H\}.
\]
In particular, $I[u]$ is a first integral of $u_t = J \delta H$ iff $\{I, H\} = 0$.

The proof is the same.
Proof.
\[
  \frac{\mathrm{d}I}{\mathrm{d}t} = \lim_{\varepsilon \to 0} \frac{I[u + \varepsilon u_t] - I[u]}{\varepsilon} = \langle \delta I, u_t\rangle = \langle \delta I, J \delta H\rangle = \{I, H\}.
\]
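For instance (a toy check of ours, not in the notes), with $J = \partial_x$ and $H[u] = \frac{1}{2}\int u^2\, \mathrm{d}x$, the flow is $u_t = u_x$, solved by $u(x, t) = f(x + t)$. Taking $I[u] = \int u^2\, \mathrm{d}x$, we have $\delta I = 2u$, so $\{I, H\} = \langle 2u, u_x\rangle = \int (u^2)_x\, \mathrm{d}x = 0$, and $I$ should be a first integral. Numerically:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 100001)
dx = x[1] - x[0]

def I(u):
    # I[u] = int u^2 dx, by a Riemann sum
    return float(np.sum(u**2) * dx)

f = lambda y: np.exp(-y**2)   # initial profile

# u_t = u_x is solved by u(x, t) = f(x + t); I should be constant in t
vals = [I(f(x + t)) for t in (0.0, 1.0, 5.0)]
print(vals)
```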
In summary, we have the following correspondence:

  $2n$-dimensional phase space                      infinite-dimensional phase space
  $x_i(t)$, $i = 1, \cdots, 2n$                     $u(x, t)$, $x \in \mathbb{R}$
  $\mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i$    $\langle u, v\rangle = \int u(x, t) v(x, t)\, \mathrm{d}x$
  $\frac{\mathrm{d}}{\mathrm{d}t}$                  $\frac{\partial}{\partial t}$
  $\frac{\partial}{\partial x_i}$                   $\frac{\delta}{\delta u}$
  anti-symmetric matrix $J$                         anti-symmetric linear operator $J$
  functions $f = f(x)$                              functionals $F = F[u]$