2.3 Lagrange duality
Consider the problem
\[
  \text{minimize } f(x) \text{ subject to } h(x) = b,\ x \in X.
\]
Denote this as $(P)$. The Lagrangian is
\[
  L(x, \lambda) = f(x) - \lambda^T (h(x) - b).
\]
Define the dual function $g: \mathbb{R}^m \to \mathbb{R}$ as
\[
  g(\lambda) = \inf_{x \in X} L(x, \lambda),
\]
i.e. we fix $\lambda$, and see how small we can get $L$ to be. As before, let
\[
  Y = \{\lambda \in \mathbb{R}^m : g(\lambda) > -\infty\}.
\]
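As a quick illustration of this definition (a toy computation, not part of the notes; the choices $f(x) = x^2$, $h(x) = x$, $X = \mathbb{R}$ are ours), consider minimizing $x^2$ subject to $x = b$:
\[
  L(x, \lambda) = x^2 - \lambda(x - b), \qquad \inf_{x \in \mathbb{R}} L(x, \lambda) \text{ is attained at } x = \tfrac{\lambda}{2},
\]
so
\[
  g(\lambda) = \tfrac{\lambda^2}{4} - \lambda\left(\tfrac{\lambda}{2} - b\right) = \lambda b - \tfrac{\lambda^2}{4},
\]
which is finite for every $\lambda$, so $Y = \mathbb{R}$; maximizing over $\lambda$ gives $\sup_\lambda g(\lambda) = b^2$ at $\lambda = 2b$, matching the primal optimum.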
Then we have
Theorem (Weak duality). If $x \in X(b)$ (i.e. $x$ satisfies both the functional and regional constraints) and $\lambda \in Y$, then
\[
  g(\lambda) \leq f(x).
\]
In particular,
\[
  \sup_{\lambda \in Y} g(\lambda) \leq \inf_{x \in X(b)} f(x).
\]
Proof.
\begin{align*}
  g(\lambda) &= \inf_{x' \in X} L(x', \lambda)\\
  &\leq L(x, \lambda)\\
  &= f(x) - \lambda^T(h(x) - b)\\
  &= f(x),
\end{align*}
where the last equality holds since $h(x) = b$ for $x \in X(b)$.
This suggests that we can solve a dual problem: instead of minimizing $f$, we can maximize $g$ subject to $\lambda \in Y$. Denote this problem as $(D)$. The original problem $(P)$ is called the primal.
Definition (Strong duality). $(P)$ and $(D)$ are said to satisfy strong duality if
\[
  \sup_{\lambda \in Y} g(\lambda) = \inf_{x \in X(b)} f(x).
\]
It turns out that the problems satisfying strong duality are exactly those for which the method of Lagrange multipliers works.
Example. Again consider the problem to minimize $x_1 - x_2 - 2x_3$ subject to
\begin{align*}
  x_1 + x_2 + x_3 &= 5\\
  x_1^2 + x_2^2 &= 4.
\end{align*}
We saw that
\[
  Y = \{\lambda \in \mathbb{R}^2 : \lambda_1 = -2,\ \lambda_2 < 0\}
\]
and
\[
  x^*(\lambda) = \left(\frac{3}{2\lambda_2},\ \frac{1}{2\lambda_2},\ 5 - \frac{4}{2\lambda_2}\right).
\]
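As a reminder of where these come from (a sketch reconstructing the earlier computation, using the Lagrangian as defined above):
\[
  L(x, \lambda) = x_1 - x_2 - 2x_3 - \lambda_1(x_1 + x_2 + x_3 - 5) - \lambda_2(x_1^2 + x_2^2 - 4).
\]
For $\inf_x L$ to be finite, the coefficient of $x_3$ must vanish, giving $\lambda_1 = -2$, and we need $\lambda_2 < 0$ so that the quadratic terms are bounded below. Setting $\partial L/\partial x_1 = 3 - 2\lambda_2 x_1 = 0$ and $\partial L/\partial x_2 = 1 - 2\lambda_2 x_2 = 0$ gives the first two components of $x^*(\lambda)$; the third is chosen to satisfy the first constraint (its value does not affect $L$ once $\lambda_1 = -2$).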
The dual function is
\[
  g(\lambda) = \inf_{x \in X} L(x, \lambda) = L(x^*(\lambda), \lambda) = \frac{10}{4\lambda_2} + 4\lambda_2 - 10.
\]
The dual is the problem to
\[
  \text{maximize } \frac{10}{4\lambda_2} + 4\lambda_2 - 10 \text{ subject to } \lambda_2 < 0.
\]
The maximum is attained for
\[
  \lambda_2 = -\sqrt{\frac{5}{8}}.
\]
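To see this (a quick verification of the stationary point, not spelled out above): setting the derivative of the dual objective to zero gives
\[
  \frac{\mathrm{d}}{\mathrm{d}\lambda_2}\left(\frac{10}{4\lambda_2} + 4\lambda_2 - 10\right) = -\frac{10}{4\lambda_2^2} + 4 = 0
  \quad\Longleftrightarrow\quad \lambda_2^2 = \frac{5}{8},
\]
and the constraint $\lambda_2 < 0$ selects the negative root.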
After calculating the values of $g$ and $f$, we can see that the primal and dual do have the same optimal value.
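Concretely (a check of the numbers, assuming the expressions above): at $\lambda_2 = -\sqrt{5/8} = -\sqrt{10}/4$,
\[
  g(\lambda) = \frac{10}{4\lambda_2} + 4\lambda_2 - 10 = -\sqrt{10} - \sqrt{10} - 10 = -10 - 2\sqrt{10},
\]
while on the primal side eliminating $x_3 = 5 - x_1 - x_2$ turns the objective into $3x_1 + x_2 - 10$, whose minimum over the circle $x_1^2 + x_2^2 = 4$ is $-2\sqrt{10} - 10$, so the two optimal values indeed agree.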
Right now, what we've got isn't very helpful, because we don't know whether our problem satisfies strong duality!