5 Distributions

5.1 Distributions
When performing separation of variables, we also run into the problem of convergence. To perform separation of variables, we first find some particular solutions of the form, say, X(x)Y(y)Z(z). We know that these solve, say, the wave equation.
However, what we do next is take an infinite sum of these functions. First of
all, how can we be sure that this converges at all? Even if it did, how do we
know that the sum satisfies the differential equation? As we have seen in Fourier
series, an infinite sum of continuous functions can be discontinuous. If it is not
even continuous, how can we say it is a solution of a differential equation?
Hence, at first people were rather skeptical of this method. They thought that while these methods worked, they only worked on a small, restricted class of problems. However, people later realized that this method is indeed rather
general, as long as we allow some more “generalized” functions that are not
functions in the usual sense. These are known as distributions.
To define a distribution, we first pick a class of “nice” test functions, where
“nice” means we can do whatever we want to them (e.g. differentiate, integrate
etc.) A main example is the set of infinitely smooth functions with compact support on some set K ⊆ Ω, written C^∞_cpt(Ω) or D(Ω). For example, we can have the bump function defined by

φ(x) = e^{−1/(1−x²)} for |x| < 1, and φ(x) = 0 otherwise.
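As a quick numerical sanity check (a sketch, not part of the course; the name phi is just a label), we can evaluate the bump function and see how violently it decays towards the edge of its support:

```python
import numpy as np

def phi(x):
    """The bump function: exp(-1/(1 - x^2)) for |x| < 1, and 0 otherwise."""
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

# The values decay faster than any polynomial as |x| -> 1, which is
# why phi can be infinitely smooth despite having compact support.
for x in [0.0, 0.5, 0.9, 0.99, 1.0, 2.0]:
    print(f"phi({x}) = {phi(x):.3e}")
```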
We now define a distribution T to be a linear map T : D(Ω) → ℝ. For those who are doing IB Linear Algebra, this is the dual space of the space of (“nice”) real-valued functions. For φ ∈ D(Ω), we often write its image under T as T[φ].
Example. The simplest example is just an ordinary function f that is integrable over any compact region. Then we define the distribution T_f as

T_f[φ] = ∫ f(x)φ(x) dx.
Note that this is a linear map since integration is linear (and multiplication
is commutative and distributes over addition). Hence every function “is” a
distribution.
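Concretely, T_f[φ] is just a number computed by an ordinary integral. A minimal sketch (assuming, purely for illustration, f(x) = x² and the bump function above as the test function):

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    """Bump test function, supported on [-1, 1]."""
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

def f(x):
    """Any function integrable over compact regions induces a distribution T_f."""
    return x**2

# T_f[phi] = integral of f(x) phi(x) dx; the integrand vanishes outside
# the support of phi, so integrating over [-1, 1] captures everything.
value, error = quad(lambda x: f(x) * phi(x), -1.0, 1.0)
print("T_f[phi] =", value)
```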
Of course, this would not be interesting if we only had ordinary functions.
The most important example of a distribution (that is not an ordinary function)
is the Dirac delta “function”.
Definition (Dirac-delta). The Dirac-delta is a distribution defined by
δ[φ] = φ(0).
By analogy with the first case, we often abuse notation and write

δ[φ] = ∫ δ(x)φ(x) dx,

and pretend δ(x) is an actual function. Of course, it is not really a function, i.e. it is not a map ℝ → ℝ. If it were, then we must have δ(x) = 0 whenever x ≠ 0, since δ[φ] = φ(0) only depends on what happens at 0. But then this integral will just give 0 if δ(0) ∈ ℝ. Some people like to think of this as a function that is zero everywhere except at 0, where it is “infinitely large”. Formally, though, we have to think of it as a distribution.
Although distributions can be arbitrarily singular and insane, we can nonetheless define all their derivatives, by setting

T′[φ] = −T[φ′].

This is motivated by the case of regular functions, where we get, after integrating by parts,

∫ f′(x)φ(x) dx = −∫ f(x)φ′(x) dx,

with no boundary terms since φ has compact support. Since φ is infinitely differentiable, we can take arbitrary derivatives of distributions.
So even though distributions can be crazy and singular, everything can be
differentiated. This is good.
Generalized functions can occur as limits of sequences of normal functions. For example, the family of functions

G_n(x) = (n/√π) exp(−n²x²)

are smooth for any finite n, and G_n[φ] → δ[φ] for any φ.
It thus makes sense to define

δ′[φ] = −δ[φ′] = −φ′(0),

as this is the limit of the sequence

lim_{n→∞} ∫ G_n′(x)φ(x) dx.
It is often convenient to think of δ(x) as lim_{n→∞} G_n(x), and δ′(x) = lim_{n→∞} G_n′(x) etc., despite the fact that these limits do not exist as functions.
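We can check both limits numerically. The following sketch uses an arbitrary choice of smooth test function (it is not compactly supported, but it decays fast enough for the integrals to make sense) and shows G_n[φ] → φ(0) and ∫ G_n′(x)φ(x) dx → −φ′(0):

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    # Arbitrary smooth, rapidly decaying test function.
    return np.exp(-x**2) * np.sin(x + 1.0)

def G(n, x):
    """G_n(x) = (n/sqrt(pi)) exp(-n^2 x^2); integrates to 1, peaks at 0."""
    return n / np.sqrt(np.pi) * np.exp(-(n * x)**2)

def dG(n, x):
    """G_n'(x), which approximates delta'."""
    return -2.0 * n**3 * x / np.sqrt(np.pi) * np.exp(-(n * x)**2)

for n in [1, 10, 100]:
    val, _ = quad(lambda x: G(n, x) * phi(x), -8, 8, points=[0.0], limit=500)
    dval, _ = quad(lambda x: dG(n, x) * phi(x), -8, 8, points=[0.0], limit=500)
    print(f"n = {n:3d}:  G_n[phi] = {val:.6f},  integral of G_n' phi = {dval:.6f}")

print("phi(0)   =", np.sin(1.0))   # limit of G_n[phi]
print("-phi'(0) =", -np.cos(1.0))  # limit of integral of G_n' phi
```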
We can look at some properties of δ(x):

Translation:

∫_{−∞}^{∞} δ(x − a)φ(x) dx = ∫_{−∞}^{∞} δ(y)φ(y + a) dy = φ(a).

Scaling:

∫_{−∞}^{∞} δ(cx)φ(x) dx = ∫_{−∞}^{∞} δ(y)φ(y/c) dy/|c| = (1/|c|) φ(0).
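Again we can test the scaling property numerically by standing in a narrow Gaussian G_n for δ (a sketch; the test function and c = −3 are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    return np.exp(-x**2) * np.cos(x)  # arbitrary smooth, decaying test function

def G(n, x):
    return n / np.sqrt(np.pi) * np.exp(-(n * x)**2)

c, n = -3.0, 200  # arbitrary scale factor; large n to mimic delta
val, _ = quad(lambda x: G(n, c * x) * phi(x), -8, 8, points=[0.0], limit=500)
print("integral of delta(cx) phi(x) dx ~", val)
print("phi(0) / |c|                    =", phi(0.0) / abs(c))
```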
These are both special cases of the following: suppose f(x) is a continuously differentiable function with isolated simple zeros at x_i. Then near any of its zeros x_i, we have f(x) ≈ (x − x_i)f′(x_i). Then

∫_{−∞}^{∞} δ(f(x))φ(x) dx = Σ_{i=1}^{n} ∫_{−∞}^{∞} δ((x − x_i)f′(x_i))φ(x) dx = Σ_{i=1}^{n} (1/|f′(x_i)|) φ(x_i).
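The same trick verifies this formula. A sketch with the arbitrary choice f(x) = x² − 1, which has simple zeros at x = ±1 with |f′(±1)| = 2:

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    return np.exp(-x**2) * (x + 2.0)  # arbitrary smooth, decaying test function

def G(n, x):
    return n / np.sqrt(np.pi) * np.exp(-(n * x)**2)

def f(x):
    return x**2 - 1.0  # simple zeros at x = -1 and x = +1, f'(x) = 2x

n = 200  # large n, so G_n(f(x)) concentrates near the zeros of f
val, _ = quad(lambda x: G(n, f(x)) * phi(x), -8, 8,
              points=[-1.0, 1.0], limit=500)
expected = (phi(-1.0) + phi(1.0)) / 2.0  # sum of phi(x_i) / |f'(x_i)|
print("integral of delta(f(x)) phi(x) dx ~", val)
print("sum over zeros phi(x_i)/|f'(x_i)| =", expected)
```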
We can also expand the δ-function in a basis of eigenfunctions. Suppose we live in the interval [−L, L], and write a Fourier expansion

δ(x) = Σ_{n∈ℤ} δ̂_n e^{inπx/L}

with

δ̂_n = (1/2L) ∫_{−L}^{L} e^{−inπx/L} δ(x) dx = 1/(2L).
So we have

δ(x) = (1/2L) Σ_{n∈ℤ} e^{inπx/L}.
This does make sense as a distribution. We let

S_N δ(x) = (1/2L) Σ_{n=−N}^{N} e^{inπx/L}.
Then

lim_{N→∞} ∫_{−L}^{L} S_N δ(x)φ(x) dx = lim_{N→∞} (1/2L) ∫_{−L}^{L} Σ_{n=−N}^{N} e^{inπx/L} φ(x) dx
= lim_{N→∞} Σ_{n=−N}^{N} [(1/2L) ∫_{−L}^{L} e^{inπx/L} φ(x) dx]
= lim_{N→∞} Σ_{n=−N}^{N} φ̂_n
= lim_{N→∞} Σ_{n=−N}^{N} φ̂_n e^{inπ·0/L}
= φ(0),

since the Fourier series of the smooth function φ(x) does converge for all x ∈ [−L, L].
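Numerically, this convergence is visible already for small N. A sketch (with L = 1 and the arbitrary smooth 2L-periodic test function φ(x) = e^{cos(πx/L)}):

```python
import numpy as np
from scipy.integrate import quad

L = 1.0

def phi(x):
    return np.exp(np.cos(np.pi * x / L))  # smooth and 2L-periodic

def S_N(N, x):
    """Partial sum (1/2L) sum_{n=-N}^{N} e^{i n pi x/L}, written as a real sum."""
    n = np.arange(1, N + 1)
    return (1.0 + 2.0 * np.sum(np.cos(n * np.pi * x / L))) / (2.0 * L)

for N in [2, 5, 10]:
    val, _ = quad(lambda x: S_N(N, x) * phi(x), -L, L, limit=500)
    print(f"N = {N:2d}: integral of S_N delta(x) phi(x) dx = {val:.8f}")

print("phi(0) =", phi(0.0))  # e = 2.71828..., the value being recovered
```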
We can equally well expand δ(x) in terms of any other set of orthonormal eigenfunctions. Let {y_n(x)} be a complete set of eigenfunctions on [a, b] that are orthogonal with respect to a weight function w(x). Then we can write

δ(x − ξ) = Σ_n c_n y_n(x),

with

c_n = ∫_a^b y_n(x) δ(x − ξ) w(x) dx = y_n(ξ) w(ξ).
So

δ(x − ξ) = w(ξ) Σ_n y_n(ξ) y_n(x) = w(x) Σ_n y_n(ξ) y_n(x),

using the fact that

δ(x − ξ) = (w(ξ)/w(x)) δ(x − ξ).
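As a final check, here is a sketch of this expansion on a concrete example (an arbitrary choice: the Dirichlet eigenfunctions y_n(x) = √2 sin(nπx) on [0, 1], which are orthonormal with weight w(x) = 1), pairing the truncated series for δ(x − ξ) with a test function:

```python
import numpy as np
from scipy.integrate import quad

# Orthonormal eigenfunctions of -d^2/dx^2 on [0, 1] with y(0) = y(1) = 0,
# taken with weight w(x) = 1 for simplicity.
def y(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

def phi(x):
    return x * (1.0 - x) * np.exp(x)  # arbitrary smooth test function

xi = 0.3  # expand delta(x - xi) about this point; c_n = y_n(xi) since w = 1

# Pairing delta(x - xi) = sum_n c_n y_n(x) with phi term by term
# should converge to phi(xi).
total = 0.0
for n in range(1, 101):
    coeff, _ = quad(lambda x: y(n, x) * phi(x), 0.0, 1.0, limit=400)
    total += y(n, xi) * coeff
    if n in (5, 20, 100):
        print(f"first {n:3d} terms: {total:.5f}")

print("phi(xi) =", phi(xi))
```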