3 Transient growth
III Hydrodynamic Stability

3.3 A general mathematical framework
Let's try to put this phenomenon in a general framework, which would be helpful since the vector spaces we deal with in fluid dynamics are not even finite-dimensional. Suppose $x$ evolves under an equation
\[
  \dot{x} = Ax.
\]
Given an inner product $\langle \cdot, \cdot \rangle$ on our vector space, we can define the adjoint of $A$ by requiring
\[
  \langle x, Ay \rangle = \langle A^\dagger x, y \rangle
\]
for all $x, y$. To see why we should care about the adjoint, note that in our previous example, the optimal perturbation came from an eigenvector of $A^\dagger$ for $\lambda_1 = -\epsilon$, namely
\[
  \frac{1}{\sqrt{1 + \epsilon^2}} \begin{pmatrix} \epsilon \\ 1 \end{pmatrix},
\]
and we might conjecture that this is a general phenomenon.
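As a concrete sanity check of the defining property of the adjoint: for the Euclidean inner product on $\mathbb{R}^n$, the adjoint of a real matrix is simply its transpose. The sketch below verifies $\langle x, Ay \rangle = \langle A^\dagger x, y \rangle$ for a hypothetical non-normal test matrix (the specific entries are an assumption for illustration, not the example from the text):

```python
import math

def dot(u, v):
    """Euclidean inner product <u, v>."""
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    """Matrix-vector product M v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def adjoint(M):
    """For the Euclidean inner product on R^n, the adjoint is the transpose."""
    return [list(col) for col in zip(*M)]

# Hypothetical non-normal test matrix, chosen only for illustration.
eps = 0.01
A = [[-eps, 1.0], [0.0, -2.0 * eps]]

x, y = [1.0, 2.0], [3.0, -1.0]
lhs = dot(x, matvec(A, y))            # <x, A y>
rhs = dot(matvec(adjoint(A), x), y)   # <A^dagger x, y>
assert math.isclose(lhs, rhs)
```

In the complex case the adjoint is the conjugate transpose instead, but the same identity holds.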
For our purposes, we assume $A$, and hence $A^\dagger$, have a basis of eigenvectors. First of all, observe that the eigenvalues of $A^\dagger$ are the complex conjugates of the eigenvalues of $A$. Indeed, let $v_1, \ldots, v_n$ be a basis of eigenvectors of $A$ with eigenvalues $\lambda_1, \ldots, \lambda_n$, and $w_1, \ldots, w_n$ a basis of eigenvectors of $A^\dagger$ with eigenvalues $\mu_1, \ldots, \mu_n$. Then we have
\[
  \lambda_j \langle w_i, v_j \rangle = \langle w_i, A v_j \rangle = \langle A^\dagger w_i, v_j \rangle = \bar{\mu}_i \langle w_i, v_j \rangle.
\]
But since the inner product is non-degenerate, for each $i$, it cannot be that $\langle w_i, v_j \rangle = 0$ for all $j$. So there must be some $j$ such that $\lambda_j = \bar{\mu}_i$.
By picking an appropriate basis for each eigenspace, we can arrange the eigenvectors so that $\langle w_i, v_j \rangle = 0$ unless $i = j$, and $\|w_i\| = \|v_i\| = 1$. This is the biorthogonality property. Crucially, the basis $v_1, \ldots, v_n$ is not orthonormal.
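A minimal numerical illustration of biorthogonality, using a hypothetical $2 \times 2$ toy matrix $A = \begin{pmatrix} -\epsilon & 1 \\ 0 & -2\epsilon \end{pmatrix}$ (an assumption for illustration; its eigenvectors below were computed by hand):

```python
import math

# Hypothetical toy matrix A = [[-eps, 1], [0, -2*eps]]: non-normal, with
# eigenvalues -eps and -2*eps. The adjoint is A^T = [[-eps, 0], [1, -2*eps]].
eps = 0.01

def unit(v):
    n = math.hypot(*v)
    return [c / n for c in v]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Normalised eigenvectors of A (hand-computed for this matrix):
v1 = unit([1.0, 0.0])        # eigenvalue -eps
v2 = unit([1.0, -eps])       # eigenvalue -2*eps
# Normalised eigenvectors of the adjoint A^T:
w1 = unit([eps, 1.0])        # eigenvalue -eps
w2 = unit([0.0, 1.0])        # eigenvalue -2*eps

# Biorthogonality: <w_i, v_j> = 0 for i != j ...
assert abs(dot(w1, v2)) < 1e-12 and abs(dot(w2, v1)) < 1e-12
# ... yet <v_i, w_i> can be tiny even though every vector has norm 1:
assert abs(dot(v1, w1)) < 2 * eps
```

The last assertion is the key point: because the $v_i$ are nearly parallel, each $w_i$ is nearly orthogonal to its partner $v_i$.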
Now suppose we are given an initial condition $x_0$, and we want to solve the equation $\dot{x} = Ax$. Note that this is trivial to solve if $x_0 = v_j$ for some $j$: the solution is simply
\[
  x = e^{\lambda_j t} v_j.
\]
Thus, for a general $x_0$, we should express $x_0$ as a linear combination of the $v_j$'s. If we want to write
\[
  x_0 = \sum_{i=1}^{n} \alpha_i v_i,
\]
then using the biorthogonality condition, we should set
\[
  \alpha_i = \frac{\langle x_0, w_i \rangle}{\langle v_i, w_i \rangle}.
\]
Note that since we normalized our eigenvectors so that each eigenvector has norm 1, the denominator $\langle v_i, w_i \rangle$ can be small, and hence $\alpha_i$ can be large, even if the norm of $x_0$ is quite small. As we have previously seen, this gives rise to transient growth if the eigenvalue of $v_i$ is also larger than the other eigenvalues.
In our toy example, our condition for the existence of transient growth is that we have two eigenvectors that are very close to each other. In this formulation, the requirement is that $\langle v_i, w_i \rangle$ is very small, i.e. $v_i$ and $w_i$ are close to being orthogonal. But these are essentially the same, since by the biorthogonality conditions, $w_i$ is normal to all other eigenvectors of $A$. So if there is some eigenvector of $A$ that is very close to $v_i$, then $w_i$ must be very close to being orthogonal to $v_i$.
Now assuming we have transient growth, the natural question to ask is how large this growth is. We can write the solution to the initial value problem as
\[
  x(t) = e^{At} x_0.
\]
The maximum gain at time $t$ is given by
\[
  G(t) = \max_{x_0 \neq 0} \frac{\|x(t)\|^2}{\|x_0\|^2} = \max_{x_0 \neq 0} \frac{\|e^{At} x_0\|^2}{\|x_0\|^2}.
\]
This is, by definition, the square of the matrix norm of $B = e^{At}$.
Definition (Matrix norm). Let $B$ be an $n \times n$ matrix. Then the matrix norm is
\[
  \|B\| = \max_{v \neq 0} \frac{\|Bv\|}{\|v\|}.
\]
To understand the matrix norm, we may consider the eigenvalues of the matrix. Order the eigenvalues of $A$ by their real parts, so that
\[
  \operatorname{Re}(\lambda_1) \geq \cdots \geq \operatorname{Re}(\lambda_n).
\]
Then the gain is clearly bounded below by
\[
  G(t) \geq e^{2 \operatorname{Re}(\lambda_1) t},
\]
achieved by taking $x_0$ to be the associated eigenvector.
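For a non-normal matrix this lower bound can be very far from tight. The sketch below evaluates $G(t) = \|e^{At}\|^2$ for the hypothetical toy matrix used earlier (an assumed example with both eigenvalues negative, so the eigenvalue bound predicts decay) and finds a gain well above 1:

```python
import math

# Hypothetical toy matrix A = [[-eps, 1], [0, -2*eps]] (assumed example).
# Both eigenvalues are negative, so e^{2 Re(lambda_1) t} < 1 for t > 0,
# yet the gain G(t) = ||e^{At}||^2 rises well above 1: transient growth.
eps, t = 0.01, 10.0
a, b, d = -eps, 1.0, -2.0 * eps

# e^{At} for an upper-triangular 2x2 matrix, in closed form.
E = [[math.exp(a * t), b * (math.exp(a * t) - math.exp(d * t)) / (a - d)],
     [0.0, math.exp(d * t)]]

# Spectral norm ||E|| = largest singular value, via the Gram matrix E^T E.
S = [[sum(E[k][i] * E[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
tr = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
G = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # = sigma_1^2 = ||E||^2

lower_bound = math.exp(2 * a * t)             # e^{2 Re(lambda_1) t} < 1
assert G >= lower_bound                       # the bound holds ...
assert G > 1.0                                # ... but is far from tight here
```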
If $A$ is normal, then this is the complete answer. We know the eigenvectors form an orthonormal basis, so we can write
\[
  A = V \Lambda V^{-1}, \quad \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n),
\]
with $V$ unitary. Then we have
\[
  G(t) = \|e^{At}\|^2 = \|V e^{\Lambda t} V^{-1}\|^2 = \|e^{\Lambda t}\|^2 = e^{2 \operatorname{Re}(\lambda_1) t}.
\]
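This equality can be checked numerically for a normal matrix. The sketch below uses a hypothetical symmetric (hence normal) $2 \times 2$ matrix, a Taylor-series matrix exponential, and the Gram-matrix formula for the spectral norm:

```python
import math

# A symmetric, hence normal, matrix (hypothetical example): the gain is
# exactly e^{2 Re(lambda_1) t}, with no transient growth on top.
A = [[-1.0, 0.3], [0.3, -2.0]]
t = 1.0

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=40):
    """Matrix exponential by Taylor series (fine for small, mild matrices)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[P[i][j] / k for j in range(2)] for i in range(2)]
        P = matmul(P, M)
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

E = expm([[A[i][j] * t for j in range(2)] for i in range(2)])

# G(t) = ||E||^2 from the Gram matrix E^T E.
S = matmul([list(r) for r in zip(*E)], E)
tr, det = S[0][0] + S[1][1], S[0][0] * S[1][1] - S[0][1] * S[1][0]
G = (tr + math.sqrt(tr * tr - 4 * det)) / 2

# Largest eigenvalue of the symmetric matrix A, in closed form.
lam1 = (A[0][0] + A[1][1] + math.sqrt((A[0][0] - A[1][1]) ** 2
                                      + 4 * A[0][1] ** 2)) / 2
assert math.isclose(G, math.exp(2 * lam1 * t), rel_tol=1e-9)
```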
But life gets enormously more complicated when matrices are non-normal. As a simple example, the matrix
\[
  \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\]
has $1$ as its unique eigenvalue, but applying it to $(0, 1)$ results in the vector $(1, 1)$, which has length $\sqrt{2}$.
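A one-line check of this example:

```python
import math

# The non-normal matrix from the text: unique eigenvalue 1, yet it maps
# the unit vector (0, 1) to (1, 1), which has length sqrt(2) > 1.
B = [[1.0, 1.0], [0.0, 1.0]]
v = [0.0, 1.0]
Bv = [B[0][0] * v[0] + B[0][1] * v[1],
      B[1][0] * v[0] + B[1][1] * v[1]]
assert Bv == [1.0, 1.0]
assert math.isclose(math.hypot(*Bv), math.sqrt(2))
```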
In the non-normal case, it would still be convenient to be able to diagonalize $e^{At}$ in some sense, so that we can read off its norm. To do so, we must relax what we mean by diagonalizing. Instead of finding a $U$ such that $U^\dagger e^{At} U$ is diagonal, we find unitary matrices $U, V$ such that
\[
  U^\dagger e^{At} V = \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_n),
\]
where $\sigma_i \in \mathbb{R}$ and $\sigma_1 \geq \cdots \geq \sigma_n \geq 0$. We can always do this. This is known as the singular value decomposition, and the diagonal entries $\sigma_i$ are called the singular values. We then have
\[
  G(t) = \|e^{At}\|^2
       = \max_{x \neq 0} \frac{\langle e^{At} x, e^{At} x \rangle}{\langle x, x \rangle}
       = \max_{x \neq 0} \frac{\langle U \Sigma V^\dagger x, U \Sigma V^\dagger x \rangle}{\langle x, x \rangle}
       = \max_{x \neq 0} \frac{\langle \Sigma V^\dagger x, \Sigma V^\dagger x \rangle}{\langle x, x \rangle}
       = \max_{y \neq 0} \frac{\langle \Sigma y, \Sigma y \rangle}{\langle y, y \rangle}
       = \sigma_1^2(t),
\]
where we substituted $y = V^\dagger x$ and used the fact that $U$ and $V$ are unitary.
If we have an explicit singular value decomposition, then this tells us the optimal initial condition for maximizing $G(t)$: namely, the first column of $V$.
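The sketch below verifies this for a hypothetical $2 \times 2$ matrix $B$ standing in for $e^{At}$ at some fixed $t$ (the entries are assumed for illustration): the gain at the first column of $V$, computed here as the top eigenvector of $B^\dagger B$, equals $\sigma_1^2$ and beats every other direction.

```python
import math

# B stands in for e^{At} at a fixed t; the entries are a hypothetical
# example. sigma_1^2 = max ||B x||^2 / ||x||^2, attained at the first
# column of V, i.e. the top eigenvector of B^T B.
B = [[0.905, 8.61], [0.0, 0.819]]

S = [[sum(B[k][i] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                              # S = B^T B
tr, det = S[0][0] + S[1][1], S[0][0] * S[1][1] - S[0][1] * S[1][0]
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2       # top eigenvalue of S
sigma1 = math.sqrt(lam1)                             # largest singular value

# First column of V = unit eigenvector of S for lam1.
v = [S[0][1], lam1 - S[0][0]]
n = math.hypot(*v)
v = [c / n for c in v]

Bv = [B[0][0] * v[0] + B[0][1] * v[1],
      B[1][0] * v[0] + B[1][1] * v[1]]
gain_at_v = Bv[0] ** 2 + Bv[1] ** 2                  # ||B v||^2, ||v|| = 1
assert math.isclose(gain_at_v, sigma1 ** 2, rel_tol=1e-9)

# No other unit vector does better (coarse scan over directions).
for k in range(720):
    th = math.pi * k / 720
    u = [math.cos(th), math.sin(th)]
    Bu = [B[0][0] * u[0] + B[0][1] * u[1],
          B[1][0] * u[0] + B[1][1] * u[1]]
    assert Bu[0] ** 2 + Bu[1] ** 2 <= gain_at_v + 1e-9
```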