Minimum Principle of Pontryagin
TABLE OF CONTENTS

1 Preliminaries
1.1 Linear maps
1.1.1 A basic result concerning linear maps
1.1.2 Bounded Linear Maps
1.2 Banach spaces
1.3 Hilbert Spaces
1.4 Differential Calculus in Banach spaces
1.5 Convex sets and convex functions
1.5.1 Notation and further definitions
1.6 Lower Semi-Continuous Functions
1.7 Existence Result
1.8 Optimality conditions
1.9 Optimization with equality constraints
2 Pontryagin minimum principle
2.0.1 Towards the principle of Pontryagin
3 Minimum Principle of Pontryagin: Linear Quadratic Case
3.1 Existence and uniqueness of the optimal control
3.2 Characterization of the optimal control
3.2.1 Riccati equation
3.3 Bibliography

CHAPTER ONE

Preliminaries

1.1 Linear maps
In this part we define linear maps and present some basic results concerning them.
Definition: Let X and Y be linear spaces over a scalar field K. A mapping T : X → Y is said to be a linear map if
T(αx + βy) = αT(x) + βT(y)   (1.1)
for arbitrary x, y ∈ X and arbitrary scalars α, β ∈ K. Some authors use the term linear operator or linear transformation instead of linear map. Condition (1.1) is equivalent to the following two conditions:
(i) T(x + y) = T(x) + T(y) for all x, y ∈ X;
(ii) T(λx) = λT(x) for all x ∈ X and for each scalar λ.
1.1.1 A basic result concerning linear maps
We remark first that since linear functionals are special forms of linear maps, any result proved for linear maps holds for linear functionals.
Proposition 1.1.1 Let X and Y be two linear spaces over a scalar field K, and let T : X → Y be a linear map. Then:
1. T(0) = 0.
2. The range of T, R(T) = {y ∈ Y : T(x) = y for some x ∈ X}, is a linear subspace of Y.
3. T is one-to-one if and only if T(x) = 0 implies that x = 0.
4. If T is one-to-one, then T⁻¹ exists on R(T) and T⁻¹ : R(T) → X is also a linear map.
Proof. (1) Since T is linear, we have T(λx) = λT(x) for each x ∈ X and each scalar λ. Take λ = 0 and (1) follows immediately.
(2) We need to show that for y1, y2 ∈ R(T) and scalars α, β, we have αy1 + βy2 ∈ R(T). Now, y1, y2 ∈ R(T) implies that there exist x1, x2 ∈ X such that T(x1) = y1 and T(x2) = y2. Moreover, αx1 + βx2 ∈ X (since X is a linear space). Furthermore, by the linearity of T, we have
T(αx1 + βx2) = αT(x1) + βT(x2).
Hence αy1 + βy2 ∈ R(T), and so R(T) is a linear subspace of Y.
(3) (⇒) Assume that T is one-to-one. Clearly T(x) = 0 ⇒ T(x) = T(0), since T is linear (and so T(0) = 0). But T is one-to-one, so x = 0.
(⇐) Assume that whenever T(u) = 0, then u must be 0. We want to prove that T is one-to-one. So let T(x) = T(y). Then T(x) − T(y) = 0 and, by the linearity of T, T(x − y) = 0. By hypothesis, x − y = 0, which implies x = y. Hence T is one-to-one.
(4) Assume that T is one-to-one. Since T is always onto its range R(T), T is a bijection from X onto R(T), so T⁻¹ exists on R(T). For the linearity, let y1, y2 ∈ R(T) and let α, β be scalars. Then there exist x1, x2 ∈ X such that y1 = T(x1) and y2 = T(x2). So
T⁻¹(αy1 + βy2) = T⁻¹(αT(x1) + βT(x2)),
which, by the linearity of T, is equivalent to
T⁻¹(αy1 + βy2) = T⁻¹(T(αx1 + βx2)),
which is also equivalent to
T⁻¹(αy1 + βy2) = αx1 + βx2 = αT⁻¹(y1) + βT⁻¹(y2).
Therefore T⁻¹ is linear.
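As a concrete illustration of Proposition 1.1.1 (this example is not part of the text; the matrix A below is an arbitrary choice), one can check its statements numerically for the matrix map T(x) = Ax on ℝ³ with the following minimal Python sketch:

import numpy as np

# Example matrix map T(x) = A x from R^3 to R^2 (arbitrary illustrative choice).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
T = lambda x: A @ x

# 1. T(0) = 0.
print(T(np.zeros(3)))                                          # -> [0. 0.]

# 2. R(T) is a linear subspace: a combination of images is again an image.
x1, x2 = np.random.randn(3), np.random.randn(3)
a, b = 2.0, -3.0
print(np.allclose(a * T(x1) + b * T(x2), T(a * x1 + b * x2)))  # -> True

# 3. T is one-to-one iff T(x) = 0 forces x = 0, i.e. iff the null space is trivial.
#    Here rank(A) = 2 < 3, so the kernel is nontrivial and T is not injective.
print(np.linalg.matrix_rank(A))                                # -> 2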
1.1.2 Bounded Linear Maps
Definition: Let X and Y be normed linear spaces over a scalar field K, and let T : X → Y be a linear map. Then T is said to be bounded if there exists some constant K ≥ 0 such that for each x ∈ X,
‖T(x)‖ ≤ K‖x‖;
the constant K is called a bound for T and, in this case, T is called a bounded linear map. We denote by B(X, Y) the family of all bounded linear maps from X into Y.
We now turn our attention to linear maps that are continuous. The notion of continuity can be stated, for linear maps, in several useful equivalent forms. We state these equivalent forms in the following theorem.
Theorem 1.1.1 Let X and Y be normed linear spaces and let T : X → Y be a linear map. Then the following are equivalent:
1. T is continuous;
2. T is continuous at the origin (in the sense that if {xn} is a sequence in X such that xn → 0 as n → +∞, then T(xn) → 0 as n → +∞);
3. T is Lipschitz, i.e., there exists a constant K ≥ 0 such that, for each x ∈ X, ‖T(x)‖ ≤ K‖x‖;
4. if D = {x ∈ X : ‖x‖ ≤ 1} is the closed unit disc in X, then T(D) is bounded (in the sense that there exists a constant M ≥ 0 such that ‖T(x)‖ ≤ M for all x ∈ D).
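For a matrix map T(x) = Ax the equivalences of Theorem 1.1.1 can be observed numerically: the smallest admissible constant K is the operator norm of A, and it also bounds T on the closed unit disc D. The sketch below is only an illustration on ℝⁿ with a randomly generated matrix (not an object from the text):

import numpy as np

# For T(x) = A x, the smallest K with ||T(x)|| <= K ||x|| is the operator
# (spectral) norm of A; the same K bounds T on the closed unit disc D.
A = np.random.randn(4, 3)                  # arbitrary illustrative matrix
K = np.linalg.norm(A, 2)                   # operator norm = largest singular value

x = np.random.randn(3)
print(np.linalg.norm(A @ x) <= K * np.linalg.norm(x) + 1e-12)      # -> True

# Boundedness of T(D): sample points of D = {x : ||x|| <= 1} and check ||T(x)|| <= K.
samples = np.random.randn(1000, 3)
samples /= np.maximum(np.linalg.norm(samples, axis=1, keepdims=True), 1.0)
print(np.max(np.linalg.norm(samples @ A.T, axis=1)) <= K + 1e-12)  # -> True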
1.2 Banach spaces
Definition: Let X be a real linear space, ‖·‖_X a norm on X, and d_X the corresponding metric defined by d_X(x, y) = ‖x − y‖_X for all x, y ∈ X. The normed linear space (X, ‖·‖_X) is a real Banach space if the metric space (X, d_X) is complete, i.e. if every Cauchy sequence of elements of (X, ‖·‖_X) converges in (X, ‖·‖_X); that is, every sequence satisfying the following Cauchy criterion converges:
∀ε > 0, ∃n0 ∈ ℕ : p, q > n0 ⇒ d_X(xp, xq) < ε.
Definition: Given any vector space V over a scalar field F (where F = ℝ or ℂ), the topological dual space (or simply dual space) of V is the linear space of all bounded linear functionals on V. We shall denote it by V*.
Remark 1.2.1
1. The dual space V* has a canonical norm defined by
‖f‖ = sup_{x ∈ V, x ≠ 0} |f(x)|/‖x‖, for all f ∈ V*.   (1.2)
2. The dual of every real normed linear space, endowed with its canonical norm, is a Banach space.
In order to define other useful topologies on dual spaces, we recall the following:
Definition (Initial topology): Let X be a nonempty set, {Y_i}_{i∈I} a family of topological spaces (where I is an arbitrary index set) and φ_i : X → Y_i, i ∈ I, a family of maps. The smallest topology on X for which the maps φ_i, i ∈ I, are continuous is called the initial topology.
Next, we define the weak topology of a normed vector space X and the weak star topology of its dual space X*, which are special initial topologies.
Definition (weak topology): Let X be a real normed linear space, and let us associate to each f ∈ X* the map φ_f : X → ℝ given by φ_f(x) = f(x) for all x ∈ X. The weak topology on X is the smallest topology on X for which all the φ_f are continuous. We write ω-topology for the weak topology.
Definition (weak star topology): Let X be a real normed linear space and X* its dual. Let us associate to each x ∈ X the map ψ_x : X* → ℝ given by ψ_x(f) = f(x) for all f ∈ X*. The weak star topology on X* is the smallest topology on X* for which all the ψ_x are continuous. We write ω*-topology for the weak star topology.
Proposition 1.2.1 Let X be a real normed linear space and X* its dual space. Then there exist on X* three standard topologies: the strong topology given by the canonical norm ‖·‖_{X*}, the weak topology (ω-topology) and the weak star topology (ω*-topology), and they compare as follows:
(X*, ω*) ↪ (X*, ω) ↪ (X*, ‖·‖_{X*}).
The following part of this section is devoted to reflexive spaces. For any real normed linear space X, the space X* of all bounded linear functionals on X is a Banach space and, as a linear space, it has its own corresponding dual X**. Let X be a Banach space; there exists a natural mapping J : X → X** of X into X** defined, for each x ∈ X, by J(x) = ψ_x, where ψ_x : X* → ℝ is given by ψ_x(f) = ⟨f, x⟩ for each f ∈ X*. Thus,
⟨J(x), f⟩ = ⟨f, x⟩ for each f ∈ X*.
We verify the following properties of J:
(i) J is linear (which is trivial);
(ii) ‖J(x)‖ = ‖x‖ for all x ∈ X, i.e. J is an isometry. In fact, for each x ∈ X, ‖J(x)‖ = sup_{f ∈ X*, ‖f‖=1} |⟨f, x⟩| = ‖x‖.
In general, the map J need not be onto. Consequently, we always identify X with a subspace of X**. Since an isometry is always injective, it follows that J is an isomorphism onto J(X) ⊂ X**. The mapping J defined above is called the canonical map (or canonical embedding) of X into X**, and the space X is said to be embedded in X**. This leads to the following definition.
Definition: Let X be a normed linear space and let J be the canonical embedding of X into X**. If J is onto, then X is called reflexive. Thus, a reflexive Banach space is one in which the canonical embedding is onto.
We now state the following important theorem.
Theorem 1.2.1 (Eberlein-Šmulian theorem)
A real Banach space X is reflexive if and only if every (norm) bounded sequence in X has a subsequence which converges weakly to an element of X.
1.3 Hilbert Spaces
Definition
1. A map φ : E × E → ℂ is sesquilinear if
(i) φ(x + y, z + w) = φ(x, z) + φ(x, w) + φ(y, z) + φ(y, w);
(ii) φ(ax, by) = a b̄ φ(x, y), where the "bar" indicates complex conjugation,
for all x, y, z, w ∈ E and for all a, b ∈ ℂ.
2. A Hermitian form is a sesquilinear form φ : E × E → ℂ such that φ(x, y) equals the complex conjugate of φ(y, x).
3. A positive Hermitian form is a Hermitian form such that φ(x, x) ≥ 0 for all x ∈ E.
4. A definite Hermitian form is a Hermitian form such that φ(x, x) = 0 ⇒ x = 0.
5. An inner product on E is a positive definite Hermitian form and will be denoted ⟨·, ·⟩ := φ(·, ·). The pair (E, ⟨·, ·⟩) is called an inner product space.
We shall simply write E for the inner product space (E, ⟨·, ·⟩) when the inner product ⟨·, ·⟩ is known. In the case where we are using more than one inner product space, specification will be made by writing ⟨·, ·⟩_E when talking about the inner product space (E, ⟨·, ·⟩).
Definition: Two vectors x and y in an inner product space E are said to be orthogonal if ⟨x, y⟩ = 0. For a subset F of E, we say x is orthogonal to F if x is orthogonal to y for all y in F.
Proposition 1.3.1 (Cauchy-Schwarz inequality) Let E be an inner product space and x, y ∈ E. Then
|⟨x, y⟩| ≤ (⟨x, x⟩)^(1/2) (⟨y, y⟩)^(1/2).
For an inner product space (E, ⟨·, ·⟩), the function ‖·‖ : E → ℝ defined by
‖x‖_E = √⟨x, x⟩_E
is a norm on E. Thus (E, ‖·‖_E) is a normed vector space, hence a metric space endowed with the distance d_E : E × E → ℝ defined by d_E(x, y) = ‖x − y‖_E.
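Proposition 1.3.1 and the fact that ‖x‖ = √⟨x, x⟩ defines a norm can be checked numerically for the Euclidean inner product ⟨x, y⟩ = Σ xᵢyᵢ on ℝⁿ; the sketch below is only a simple illustration, not part of the development:

import numpy as np

def inner(x, y):
    # Euclidean inner product on R^n.
    return float(np.dot(x, y))

x, y = np.random.randn(5), np.random.randn(5)

# Cauchy-Schwarz: |<x, y>| <= <x, x>^(1/2) <y, y>^(1/2).
lhs = abs(inner(x, y))
rhs = inner(x, x) ** 0.5 * inner(y, y) ** 0.5
print(lhs <= rhs + 1e-12)                          # -> True

# ||x|| = sqrt(<x, x>) satisfies the triangle inequality, as a norm should.
norm = lambda v: inner(v, v) ** 0.5
print(norm(x + y) <= norm(x) + norm(y) + 1e-12)    # -> True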
Definition (Hilbert Space): An inner product space E is called a Hilbert space if it is complete.
Remark 1.3.1
1. Hilbert spaces are thus a special class of Banach spaces.
2. Every finite dimensional inner product space is complete and is simply called a Euclidean space.
Proposition 1.3.2 Let H be a Hilbert space. Then, for all u ∈ H, T_u(v) := ⟨u, v⟩ defines a bounded linear functional, i.e. T_u ∈ H*. Furthermore ‖u‖_H = ‖T_u‖_{H*}.
Theorem 1.3.1 (Riesz Representation theorem) Let H be a Hilbert space and let f be a bounded linear functional on H. Then:
1. There exists a unique vector y0 ∈ H such that f(x) = ⟨x, y0⟩ for each x ∈ H.
2. Moreover, ‖f‖ = ‖y0‖.
Remark 1.3.2 The map T : H → H* defined by T(u) = T_u is linear (anti-linear in the complex case) and isometric. Therefore the canonical embedding is an isometry, showing that "any Hilbert space is reflexive".
At the end of this part, we state the following important proposition, which is just a corollary of the Eberlein-Šmulian theorem.
Proposition 1.3.3 Let H be a Hilbert space. Then any bounded sequence in H has a subsequence which converges weakly to an element of H.
1.4 Differential Calculus in Banach spaces
In this section we define the derivative of a map defined between real Banach spaces.
Definition (Directional Differentiability): Let f be a function defined from a real linear space X into a real normed linear space Y, and let x0 ∈ X and v ∈ X\{0}. The function f is said to be differentiable at x0 in the direction v if the function t ↦ f(x0 + tv) is differentiable at t = 0, i.e. if
(f(x0 + tv) − f(x0))/t,  t ≠ 0,
has a limit in Y when t tends to 0. This limit, when it exists, is denoted f′(x0; v) or ∂f/∂v(x0).
Definition (Gateaux Differentiability): A function f defined from a real linear space X into a real normed linear space Y is Gateaux differentiable at a point x0 ∈ X if:
1. f is differentiable at x0 in every direction v;
2. there exists a bounded linear map A : X → Y such that f′(x0; v) = A(v); in other words, the map v ↦ f′(x0; v) is a bounded linear map from X into Y.
In this case the map f′(x0; ·) is called the Gateaux differential of f at x0 and is denoted by D_G f(x0; ·) or f′_G(x0).
Definition (Frechet Differentiability): A map f : U ⊂ X → Y, whose domain U is an open set of a real Banach space X and whose range lies in a real Banach space Y, is (Frechet) differentiable at x ∈ U if there exists a bounded linear map A : X → Y such that
lim_{‖u‖→0} ‖f(x + u) − f(x) − Au‖/‖u‖ = 0,
or equivalently
f(x + u) − f(x) − Au = o(‖u‖).
Proposition 1.4.1 If f : U ⊂ X → Y is Frechet differentiable, then f is Gateaux differentiable.
Proof. Indeed, by taking u = tv in the definition of Frechet differentiability, we have f(x + tv) − f(x) = A(tv) + o(‖tv‖). Dividing by t and using the fact that A is linear, we obtain
(f(x + tv) − f(x))/t = A(v) + o(‖tv‖)/t,
by the Frechet differentiability of f. Since u → 0 as t → 0, we have
lim_{t→0} (f(x + tv) − f(x))/t = A(v).
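Proposition 1.4.1 can be illustrated numerically on ℝⁿ: for a Frechet differentiable f, the difference quotient (f(x0 + tv) − f(x0))/t approaches the Gateaux derivative ⟨∇f(x0), v⟩ as t → 0. The function f below is an arbitrary smooth example chosen only for illustration:

import numpy as np

# Smooth example f : R^2 -> R with its gradient known in closed form.
f     = lambda x: x[0] ** 2 + np.sin(x[1])
gradf = lambda x: np.array([2.0 * x[0], np.cos(x[1])])

x0 = np.array([1.0, 0.5])
v  = np.array([0.3, -2.0])

# Directional difference quotients converge to A(v) = <grad f(x0), v>.
for t in [1e-1, 1e-3, 1e-5]:
    quotient = (f(x0 + t * v) - f(x0)) / t
    print(t, quotient, gradf(x0) @ v)
# The quotient approaches gradf(x0) @ v as t -> 0, as in the proof above.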
Proposition 1.4.2 Let X be a real Banach space and Y be a real normed linear space. Then:
1. The set of Gateaux differentiable mappings from X into Y is a linear subspace of the linear space of all mappings defined from X into Y, and the Gateaux differential at any point belongs to B(X, Y);
2. The set of Frechet differentiable mappings from X into Y is also such a linear subspace.
1.5 Convex sets and convex functions
Let us first recall the following basic notions of optimization.
Definition: Let V be a normed linear space, let K ⊂ V, and let F : V → ℝ ∪ {+∞}.
1. A point u ∈ K is a minimizer of F on K if F(u) ≤ F(v) for all v ∈ K.
2. A point u ∈ K is a local minimizer of F on K if there exists r > 0 such that F(u) ≤ F(v) for all v ∈ K ∩ B(u, r).
3. A point u ∈ K is a strict minimizer of F on K if F(u) < F(v) for all v ∈ K, v ≠ u.
4. A point u ∈ K is a strict local minimizer of F on K if there exists r > 0 such that F(u) < F(v) for all v ∈ K ∩ B(u, r), v ≠ u.
Definition: Let X be a normed linear space. A set C ⊂ X is convex if and only if for all x, y ∈ C and all λ ∈ (0, 1) we have λx + (1 − λ)y ∈ C, i.e. [x, y] ⊂ C for all x, y ∈ C.
Definition: Let C be a convex subset of a normed linear space X. A function f : X → ℝ ∪ {+∞} is convex on C if for all x, y ∈ C and all λ ∈ (0, 1),
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).   (1.3)
If (1.3) is strict for x, y ∈ C with x ≠ y and f(x), f(y) finite, then f is strictly convex.
If f is a linear functional, then f is both convex and concave.
Theorem 1.5.1 (Slope Inequality) Let I be an interval of ℝ and let f : I → ℝ ∪ {+∞} be convex. Let r1, r2, r3 ∈ I with r1 < r2 < r3 and f(r1), f(r2) finite. Then:
(f(r2) − f(r1))/(r2 − r1) ≤ (f(r3) − f(r1))/(r3 − r1) ≤ (f(r3) − f(r2))/(r3 − r2).
Proof. Set λ = (r2 − r1)/(r3 − r1) ∈ (0, 1). We have 1 − λ = (r3 − r2)/(r3 − r1) ∈ (0, 1) and r2 = (1 − λ)r1 + λr3. Using the convexity of f, it follows that
f(r2) ≤ λf(r3) + (1 − λ)f(r1) = ((r3 − r2)/(r3 − r1))f(r1) + ((r2 − r1)/(r3 − r1))f(r3).
Adding −f(r1) to both sides, we get
f(r2) − f(r1) ≤ ((r1 − r2)/(r3 − r1))f(r1) + ((r2 − r1)/(r3 − r1))f(r3) = ((r2 − r1)/(r3 − r1))(f(r3) − f(r1)),
which implies that
(f(r2) − f(r1))/(r2 − r1) ≤ (f(r3) − f(r1))/(r3 − r1).
This proves the first inequality. Subtracting f(r3) instead of f(r1) in the first step and using the same argument, we get the second inequality.
Corollary 1.5.1 Let f : I → ℝ be differentiable on I. Then the following are equivalent:
1. f is convex;
2. f′ is increasing;
3. f(y) ≥ f(x) + f′(x)(y − x) for all x, y ∈ I.
If f is twice differentiable on I, then f is convex if and only if f″(x) ≥ 0 for all x ∈ I.
Proof.
(1) ⇒ (2). Let r < t. By the first inequality of the slope inequality we have
(f(s) − f(r))/(s − r) ≤ (f(t) − f(r))/(t − r) for r < s < t.
So we have
f′(r) = lim_{s↘r} (f(s) − f(r))/(s − r) ≤ (f(t) − f(r))/(t − r).
By taking s ↗ t and using the second inequality of the slope inequality, we have
(f(t) − f(r))/(t − r) ≤ lim_{s↗t} (f(s) − f(t))/(s − t) = f′(t).
Therefore f′(r) ≤ f′(t) for all r < t. Thus f′ is increasing.
(2) ⇒ (3). Let x ∈ I and define g(y) = f(y) − f(x) − f′(x)(y − x). Then g is differentiable and g′(y) = f′(y) − f′(x). This gives: g′(y) ≥ 0 if and only if f′(y) ≥ f′(x), if and only if y ≥ x, since f′ is increasing. Likewise, g′(y) ≤ 0 if and only if y ≤ x. So, by studying the variations of g, we see that x is a minimizer of g. Hence g(y) ≥ g(x) for all y. Since g(x) = 0, this implies that g(y) ≥ 0, which implies f(y) − f(x) − f′(x)(y − x) ≥ 0, i.e. f(y) ≥ f(x) + f′(x)(y − x), i.e. (3).
(3) ⇒ (1). We have by assumption that f(y) ≥ f(x) + f′(x)(y − x) for all x, y ∈ I, which implies that
f(y) ≥ sup_{x∈I} [f(x) + f′(x)(y − x)],
which is equivalent to f(y) ≥ sup_{x∈I} h_x(y), where h_x(y) = f(x) + f′(x)(y − x) = f′(x)·y + f(x) − f′(x)·x. Since for x = y we have equality in the previous inequality, we get f(y) = sup_{x∈I} h_x(y), which gives that f is convex, being a supremum of affine functions.
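The equivalences of Corollary 1.5.1, together with the slope inequality of Theorem 1.5.1, can be verified numerically on a concrete convex function; the choice f(x) = eˣ below is an arbitrary illustration, not part of the text:

import numpy as np

f   = np.exp          # convex on R
df  = np.exp          # f' (increasing)
d2f = np.exp          # f'' >= 0

xs = np.linspace(-2.0, 2.0, 201)

# (2) f' is increasing, and f'' >= 0 everywhere on the grid.
print(np.all(np.diff(df(xs)) >= 0), np.all(d2f(xs) >= 0))       # -> True True

# (3) Tangent-line inequality: f(y) >= f(x) + f'(x)(y - x) for sampled x, y.
x, y = np.random.uniform(-2, 2, 2)
print(f(y) >= f(x) + df(x) * (y - x) - 1e-12)                   # -> True

# Slope inequality of Theorem 1.5.1 for r1 < r2 < r3.
r1, r2, r3 = -1.0, 0.2, 1.5
s12 = (f(r2) - f(r1)) / (r2 - r1)
s13 = (f(r3) - f(r1)) / (r3 - r1)
s23 = (f(r3) - f(r2)) / (r3 - r2)
print(s12 <= s13 <= s23)                                        # -> True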
Theorem 1.5.2 Let X be a normed linear space and let U ⊂ X be open, convex and nonempty. Let f : U → ℝ be differentiable. Then the following are equivalent:
1. f is convex;
2. f′ : U → X* is monotone, i.e. ⟨f′(y) − f′(x), y − x⟩ ≥ 0 for all x, y ∈ U;
3. f(y) ≥ f(x) + ⟨f′(x), y − x⟩ for all x, y ∈ U.
If f is twice differentiable, then f is convex if and only if f″(x) is positive semi-definite, i.e.
⟨f″(x)·y, y⟩ ≥ 0 for all y ∈ X and all x ∈ U.
Proof.
Let x, y ∈ U and define I = {s ∈ ℝ : x + s(y − x) ∈ U}.
Claim: I is an interval of ℝ such that 0, 1 ∈ I.
Let s1, s2 ∈ I and t ∈ (0, 1); then x + s1(y − x) ∈ U and x + s2(y − x) ∈ U. Our aim is to show that x + (ts1 + (1 − t)s2)(y − x) ∈ U, so that ts1 + (1 − t)s2 ∈ I, which is equivalent to saying that I is an interval. For that, we have
x + (ts1 + (1 − t)s2)(y − x) = x + ts1(y − x) + (1 − t)s2(y − x)
= tx + (1 − t)x + ts1(y − x) + (1 − t)s2(y − x)
= t(x + s1(y − x)) + (1 − t)(x + s2(y − x)).
Since x + s1(y − x) ∈ U, x + s2(y − x) ∈ U, U is convex and t ∈ (0, 1), we have t(x + s1(y − x)) + (1 − t)(x + s2(y − x)) ∈ U, which is equivalent to saying that x + (ts1 + (1 − t)s2)(y − x) ∈ U, i.e. I is an interval.
Assume that (1) is true; we want to show (2), i.e. that f′ is monotone. Let us define h : I → ℝ by h(s) = f(x + s(y − x)). We have that:
(i) h is differentiable on I;
(ii) h is convex whenever f is convex on U.
So f convex implies that h is convex, and according to Corollary 1.5.1, h′ is increasing, which gives h′(1) ≥ h′(0). By differentiating h, we have h′(s) = ⟨f′(x + s(y − x)), y − x⟩. So h′(0) = ⟨f′(x), y − x⟩ and h′(1) = ⟨f′(y), y − x⟩. Using the inequality h′(1) ≥ h′(0), we get ⟨f′(y), y − x⟩ ≥ ⟨f′(x), y − x⟩, which is equivalent to saying that f′ is monotone.
Now assume (2), i.e. that f′ is monotone. We first prove that f′ monotone implies that h′ is increasing. For that, let us evaluate the difference (h′(s2) − h′(s1))(s2 − s1). We have that
(h′(s2) − h′(s1))(s2 − s1) = ⟨f′(x + s2(y − x)) − f′(x + s1(y − x)), y − x⟩(s2 − s1)
= ⟨f′(x + s2(y − x)) − f′(x + s1(y − x)), (s2 − s1)(y − x)⟩
= ⟨f′(z2) − f′(z1), z2 − z1⟩,
where z1 = x + s1(y − x) and z2 = x + s2(y − x). Using the monotonicity of f′, we have ⟨f′(z2) − f′(z1), z2 − z1⟩ ≥ 0, which implies that (h′(s2) − h′(s1))(s2 − s1) ≥ 0, which means that h′ is increasing. So, by Corollary 1.5.1, we have h(s) ≥ h(r) + h′(r)(s − r) for all s, r ∈ I. In particular h(1) ≥ h(0) + h′(0), which implies that f(y) ≥ f(x) + ⟨f′(x), y − x⟩ for all x, y ∈ U, i.e. (3).
Finally, assume (3); we have to prove (1), i.e. that f is convex. We have by hypothesis that f(y) ≥ f(x) + ⟨f′(x), y − x⟩, which implies that
f(y) ≥ sup_{x∈U} (f(x) + ⟨f′(x), y − x⟩) = sup_{x∈U} f_x(y),
where f_x(y) = f(x) + ⟨f′(x), y − x⟩. Since equality holds when x = y, we get f(y) = sup_{x∈U} f_x(y), and therefore f is convex, being a supremum of affine functions.
Definition (Domain of a Function): Let F : X → ℝ ∪ {+∞} be a map. The domain of F is the set defined by
dom(F) := {x ∈ X : F(x) < +∞}.
The domain of F is sometimes called the effective domain of F. The map F is called proper if dom(F) ≠ ∅. Recall that this means there exists at least one x0 ∈ dom(F) such that F(x0) ∈ ℝ, i.e. F is not identically +∞.
1.5.1 Notation and further definitions
Let X be a real normed space and let D ⊂ X be a convex subset of X. Let f : D → ℝ be a convex function. We define the convex extension of f, F : X → ℝ ∪ {+∞}, by
F(x) = f(x) if x ∈ D, and F(x) = +∞ if x ∈ X\D.   (1.4)
We observe that f is convex on D if and only if F is convex on X. Moreover dom(f) = dom(F).
Definition: The epigraph of F is the subset of X × ℝ denoted by epi(F) and defined by
epi(F) = {(x, λ) ∈ X × ℝ : x ∈ dom(F) and F(x) ≤ λ}.
Definition: Let λ ∈ ℝ. We define the sublevel (section) set
S_{F,λ} = {x ∈ X : F(x) ≤ λ} = {x ∈ dom(F) : F(x) ≤ λ}.
Proposition 1.5.1 F is convex if and only if epi(F) is convex.
Proof.
(⇒) Assume that F is convex. Let (x1, λ1) ∈ epi(F), (x2, λ2) ∈ epi(F) and t ∈ (0, 1). Now F(x1) ≤ λ1 and F(x2) ≤ λ2. Hence
F(tx1 + (1 − t)x2) ≤ tF(x1) + (1 − t)F(x2) ≤ tλ1 + (1 − t)λ2.
Thus (tx1 + (1 − t)x2, tλ1 + (1 − t)λ2) ∈ epi(F). But (tx1 + (1 − t)x2, tλ1 + (1 − t)λ2) = t(x1, λ1) + (1 − t)(x2, λ2). Hence epi(F) is convex.
(⇐) Assume that epi(F) is convex. We show that F is convex. Let x1, x2 ∈ dom(F) and t ∈ (0, 1). Then x1, x2 ∈ dom(F) implies that F(x1) ∈ ℝ and F(x2) ∈ ℝ. Thus (x1, F(x1)) ∈ epi(F) and (x2, F(x2)) ∈ epi(F). But epi(F) is convex, thus t(x1, F(x1)) + (1 − t)(x2, F(x2)) ∈ epi(F), which gives F(tx1 + (1 − t)x2) ≤ tF(x1) + (1 − t)F(x2); hence F is convex.
1.6 Lower Semi-Continuous Functions
Let V be a real normed linear space and let F : V → ℝ ∪ {+∞} be a map.
Definition: We say that F is lower semi-continuous (lsc) at x0 ∈ dom(F) if:
given ε > 0, there exists δ > 0 such that ‖x − x0‖ < δ ⇒ F(x0) − ε < F(x).
Definition: One says that F is lsc if and only if epi(F) is closed in V × ℝ.
Proposition 1.6.1 Let F : V → ℝ ∪ {+∞}. F is lower semi-continuous if and only if for every sequence (xn) of V converging to x ∈ V, we have F(x) ≤ lim inf F(xn).
Proof. Assume that F is lsc and xn → x in V. We take a subsequence (x_{n_k}) of (xn) such that lim_k F(x_{n_k}) = lim inf_n F(xn). It follows that (x_{n_k}, F(x_{n_k})) is a sequence of epi(F) converging to (x, lim inf_n F(xn)). Since epi(F) is closed, (x, lim inf_n F(xn)) ∈ epi(F). Thus
F(x) ≤ lim inf_n F(xn).
Conversely, suppose that (xn → x in V) ⇒ F(x) ≤ lim inf_n F(xn). Let (xn, an) be a sequence of epi(F) converging to (x, a) in V × ℝ. Then xn → x and an → a. So by hypothesis we have F(x) ≤ lim inf_n F(xn) ≤ lim inf_n an = lim_n an = a. Therefore (x, a) ∈ epi(F).
Proposition 1.6.2 A function F : V → ℝ ∪ {+∞} is lsc if and only if for all a ∈ ℝ, the sublevel set S_{a,F} = {x ∈ V : F(x) ≤ a} is closed.
Proof. Assume that F is lsc. Let a ∈ ℝ and let (xn) be a sequence of S_{a,F} converging to x ∈ V. Since F(xn) ≤ a for all n, we have lim inf_n F(xn) ≤ a. So, according to Proposition 1.6.1, F(x) ≤ lim inf_n F(xn) ≤ a. Thus x ∈ S_{a,F}, which means that S_{a,F} is closed.
Conversely, assume that S_{a,F} is closed for each a ∈ ℝ. Let (xn) be a sequence of V converging to x ∈ V. There exists a subsequence (x_{n_k}) of (xn) such that
lim_k F(x_{n_k}) = lim inf_n F(xn).
Assume that F(x) > lim inf_n F(xn). Then there exists a ∈ ℝ such that
lim inf_n F(xn) < a < F(x).   (1.5)
So there exists N ∈ ℕ such that F(x_{n_k}) < a for all k ≥ N, i.e. x_{n_k} ∈ S_{a,F} for all k ≥ N. Since x_{n_k} → x and S_{a,F} is closed, x ∈ S_{a,F}, or equivalently F(x) ≤ a, which contradicts (1.5). Thus F(x) ≤ lim inf_n F(xn).
Proposition 1.6.3 Let F : V → ℝ ∪ {+∞} be an l.s.c. function on a compact set K. Then there exists x̄ ∈ K such that
F(x̄) = inf_{x∈K} F(x).
Proof. Let {xn} be a minimizing sequence of F on K. Since K is compact, there exists a subsequence (x_{n_k}) of (xn) such that x_{n_k} → x̄ ∈ K. From Proposition 1.6.1 we have:
F(x̄) ≤ lim inf_k F(x_{n_k}) = lim_k F(x_{n_k}) = inf_{x∈K} F(x).
Therefore F(x̄) = inf_{x∈K} F(x).
1.7 Existence Result
A concept that will be needed in the proof of one of our fundamental theorems of optimization is that of coercivity. We introduce this concept now.
Definition: Let X be a topological space and let K be a subset of X. The set K is said to be sequentially compact if every sequence in K has a subsequence which converges to an element of K. The subset K is called weakly sequentially compact if every sequence in K has a subsequence which converges weakly to an element of K.
Definition: Let X be a topological space. A function F : X → ℝ ∪ {+∞} is called (weakly, respectively) sequentially coercive if the closure of every section S_{λ,F} is (weakly, respectively) sequentially compact in X.
Definition: Let X be a real reflexive space. A function F : X → ℝ ∪ {+∞} is called coercive if
lim_{‖x‖→+∞} F(x) = +∞.
In fact, we have the following proposition:
Proposition 1.7.1 Let X be a real reflexive space. Then F : X → ℝ ∪ {+∞} is weakly sequentially coercive if and only if lim_{‖x‖→+∞} F(x) = +∞.
Proof. (⇒) Let F be weakly sequentially coercive. We prove that lim_{‖x‖→+∞} F(x) = +∞, i.e. for all M ∈ ℝ there exists N0 ∈ ℝ such that ‖x‖ > N0 implies F(x) > M. Assume by contradiction that this is not the case. Then there exists a sequence {xn} such that lim_{n→+∞} ‖xn‖ = +∞ and {F(xn)} is bounded. Let λ ∈ ℝ be such that |F(xn)| ≤ λ for all n ≥ 1. Since F is weakly sequentially coercive, the section S_{λ,F} ⊂ X is weakly sequentially compact. Hence the sequence {xn}, which lies in the section S_{λ,F} = {x ∈ X : F(x) ≤ λ}, has a subsequence {x_{n_j}} which converges weakly to an element of S_{λ,F}, whereas lim ‖x_{n_j}‖ = +∞. This subsequence is therefore bounded (since every weakly convergent sequence is bounded), which is a contradiction.
(⇐) We assume that lim_{‖x‖→+∞} F(x) = +∞. We want to prove that F is weakly sequentially coercive, i.e. that for arbitrary λ ∈ ℝ, S_{λ,F} is weakly sequentially compact, or that any sequence in S_{λ,F} has a weakly convergent subsequence. So let {xn} be an arbitrary sequence in S_{λ,F} for arbitrary λ ∈ ℝ. Then {xn} is bounded. Since X is reflexive, {xn} has a subsequence {x_{n_k}} which converges weakly to an element of S_{λ,F}. Thus F is weakly sequentially coercive.
Remark 1.7.1 If X is reflexive and if lim_{‖x‖→+∞} F(x) = +∞, we simply say that F is coercive. This condition actually means that
∀A > 0, ∃B > 0 : ∀x ∈ X, ‖x‖ > B ⇒ F(x) > A.
Equivalently, F(x) ≤ A ⇒ ‖x‖ ≤ B, i.e. every sublevel set of F is bounded.
Theorem 1.7.1 (Existence of a minimum in finite dimension)
Let K be a nonempty closed subset of ℝⁿ and let F be a continuous real-valued function on K satisfying the following property:
for every sequence (un) ⊂ K, lim_{n→∞} ‖un‖ = +∞ ⇒ lim_{n→∞} F(un) = +∞.   (1.6)
Then there exists at least one minimum point of F over K. Moreover, from any minimizing sequence of F over K one can extract a subsequence converging to a minimum point.
Proof.
Let (un) be a minimizing sequence of F over K. Condition (1.6) implies that (un) is bounded, since (F(un)) is a bounded real sequence. So, by the Bolzano-Weierstrass theorem, there exists a subsequence (u_{n_k}) of (un) converging to a point u of ℝⁿ. But u is in K since K is closed, and F(u_{n_k}) converges to F(u) by the continuity of F; thus F(u) = inf_{v∈K} F(v).
Theorem 1.7.2 (Existence in infinite dimension)
Let K be a closed, convex and nonempty subset of a reflexive real Banach space V. Let F : V → ℝ ∪ {+∞} be convex and lsc. If K is bounded or if F is coercive, then there exists u ∈ K such that F(u) = min_{v∈K} F(v). Moreover, if F is strictly convex, then u is unique.
Proof. Assume that F is not identically +∞, i.e. there exists at least one x0 ∈ V such that F(x0) ≠ +∞. (In the case where F ≡ +∞, we obviously have inf_{v∈K} F(v) = +∞.)
Since K is nonempty, let (vn) be a minimizing sequence of F on K. K bounded or F coercive implies that (vn) is bounded. Indeed, if K is bounded, then (vn) is bounded as a sequence of K. If F is coercive, assume by contradiction that (vn) is not bounded; then (vn) has a subsequence (v_{n_k}) with ‖v_{n_k}‖ → +∞ as k → +∞. Since F is coercive, lim_{k→+∞} F(v_{n_k}) = +∞. But since (vn) is a minimizing sequence of F, we have lim_{k→+∞} F(v_{n_k}) = inf_{v∈K} F(v), which implies that inf_{v∈K} F(v) = +∞, so that F ≡ +∞ on K, which is a contradiction. Thus (vn) is bounded.
Since V is reflexive, there exists a subsequence (v_{n_k}) of (vn) such that v_{n_k} converges weakly to some u in V. K convex and closed implies that K is weakly closed, which implies that u ∈ K. So, on the one hand, we have lim_{k→+∞} F(v_{n_k}) = inf_{v∈K} F(v); on the other hand, F convex and lsc implies that epi(F) is convex and closed, which implies that F is weakly lsc. Now v_{n_k} converges weakly to u, so we have F(u) ≤ lim inf F(v_{n_k}) = inf_{v∈K} F(v). Thus F(u) = inf_{v∈K} F(v). But u ∈ K, which implies that F(u) = min_{v∈K} F(v).
For the uniqueness, assume that F is strictly convex. Let u and w be two minimizers of F over K, and assume by contradiction that u ≠ w. K convex implies that (1/2)u + (1/2)w ∈ K. So we have min_{v∈K} F(v) ≤ F((1/2)u + (1/2)w). Using the strict convexity of F, we have
min_{v∈K} F(v) < (1/2)F(u) + (1/2)F(w) = min_{v∈K} F(v),
which is a contradiction. Therefore u = w, i.e. the minimizer is unique.
Corollary 1.7.1 Let A be an (n × n) positive definite matrix. Then there exists α > 0 such that
⟨Ax, x⟩ ≥ α‖x‖² for all x ∈ ℝⁿ.
Proof. Consider the following problem:
min J(x) := ⟨Ax, x⟩ subject to ‖x‖ = 1.   (1.7)
- J is continuous;
- dim(ℝⁿ) < ∞;
- the set K = {x ∈ ℝⁿ : ‖x‖ = 1} is compact.
Then, by the Weierstrass theorem, there exists x̄ ∈ K such that x̄ = arg min_{x∈K} J(x), i.e. there exists x̄ ∈ K such that J(x̄) ≤ J(x/‖x‖) for all x ≠ 0. Taking α = J(x̄) > 0, we have α ≤ (1/‖x‖²)⟨Ax, x⟩, which implies that ⟨Ax, x⟩ ≥ α‖x‖² for all x ∈ ℝⁿ.
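Numerically, the best constant α in Corollary 1.7.1 for a symmetric positive definite A is its smallest eigenvalue. The sketch below builds an arbitrary such matrix and checks the inequality on random vectors; it is only an illustration of the corollary, not part of the text:

import numpy as np

# Build an arbitrary symmetric positive definite matrix A.
n = 4
M = np.random.randn(n, n)
A = M @ M.T + n * np.eye(n)

# alpha = min J(x) over ||x|| = 1 is the smallest eigenvalue of A.
alpha = np.min(np.linalg.eigvalsh(A))
print(alpha > 0)                                    # -> True

# Check <Ax, x> >= alpha ||x||^2 on random test vectors.
for _ in range(5):
    x = np.random.randn(n)
    print(x @ A @ x >= alpha * (x @ x) - 1e-9)      # -> True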
1.8 Optimality conditions
- Convex case
In this section we derive necessary conditions, and sometimes sufficient conditions, for a minimizer. Our aim here is more practical than in the previous section, since these conditions are more often useful for actually computing a minimizer. They will be obtained by using the first derivative (necessary conditions of first order) or the second derivative (necessary conditions of second order).
Theorem 1.8.1 Let K be a nonempty convex subset of a normed linear space V. Let F : V → ℝ be differentiable (in the sense of Gateaux). If u is a local minimum of F over K, then
F′(u)(v − u) ≥ 0 for all v ∈ K.   (1.8)
Moreover, if u ∈ Int(K), then (1.8) becomes
F′(u) = 0 (EULER EQUATION),
i.e. u is a critical point.
Proof.
Since u is a local minimum, there exists r > 0 such that F(u) ≤ F(v) for all v ∈ K ∩ B(u, r). Let v ∈ K, v ≠ u, let t ∈ (0, 1), and set w_t = u + t(v − u) = (1 − t)u + tv ∈ K. For w_t to be in B(u, r), it suffices to have ‖w_t − u‖ < r, i.e. 0 < t < r/‖v − u‖. Let δ = min{r/‖v − u‖, 1}; for all t ∈ (0, δ) we have w_t ∈ K ∩ B(u, r), which implies that F(u) ≤ F(u + t(v − u)), which gives
(F(u + t(v − u)) − F(u))/t ≥ 0 for all t ∈ (0, δ).
Letting t go to 0, we get F′(u)(v − u) ≥ 0.
Assume now that u ∈ int(K), so there exists r > 0 with B(u, r) ⊂ K. We already know that F′(u)(v − u) ≥ 0 for all v ∈ K. Let h ≠ 0 and consider, for t ∈ ℝ, the point u + th. For all t such that |t| < r/‖h‖ we have u + th ∈ K, which implies that tF′(u)·h ≥ 0 for all such t. Taking t of both signs gives F′(u)·h ≥ 0 and F′(u)·(−h) ≥ 0, hence F′(u)·h = 0 for all h ≠ 0, which means that F′(u) = 0.
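The variational inequality (1.8) and the Euler equation can be checked on a simple example: a quadratic F minimized over the convex box K = [0, 1]², whose minimizer is the projection of the unconstrained minimizer onto K. The data below are an arbitrary illustration, not the control problem studied in later chapters:

import numpy as np

# F(v) = ||v - c||^2 minimized over the box K = [0, 1]^2.
c     = np.array([1.5, 0.4])
F     = lambda v: np.sum((v - c) ** 2)
gradF = lambda v: 2.0 * (v - c)

# The minimizer over the box is the projection of c onto K.
u = np.clip(c, 0.0, 1.0)            # u = (1.0, 0.4); u lies on the boundary of K

# Check F'(u)(v - u) >= 0 for sampled v in K, i.e. the variational inequality (1.8).
V = np.random.uniform(0.0, 1.0, size=(1000, 2))
print(np.all(V @ gradF(u) - gradF(u) @ u >= -1e-9))             # -> True

# If instead c were an interior point of K, then u = c and F'(u) = 0 (Euler equation).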
- General case
Definition: Let K be a nonempty subset of a normed linear space V. Let u ∈ K and let d ∈ V, d ≠ 0. One says that d is an admissible direction of K at u if there exists η > 0 such that u + td ∈ K for all t ∈ (0, η). We denote by D_ad(u) the set of admissible directions at the point u.
Theorem 1.8.2 Consider the problem
min F(v), v ∈ K, K ≠ ∅.   (1.9)
If F is differentiable and if u is a local minimum of F over K, then F′(u)·d ≥ 0 for all d ∈ D_ad(u). In particular, if u ∈ int(K), then F′(u) = 0.
Proof.
Since u is a local minimum of F, there exists r > 0 such that F(u) ≤ F(v) for all v ∈ K ∩ B(u, r). Let d ∈ D_ad(u); it follows that u + td ∈ K for all t ∈ (0, η). For u + td to be in B(u, r), it suffices to have 0 < t < r/‖d‖. Now let δ = min{η, r/‖d‖}; then for all t ∈ (0, δ) we have u + td ∈ K ∩ B(u, r), therefore F(u + td) ≥ F(u) for all t ∈ (0, δ), which implies that
(F(u + td) − F(u))/t ≥ 0 for all t ∈ (0, δ).
Letting t go to 0, we have F′(u)·d ≥ 0.
If u ∈ int(K), then D_ad(u) = V\{0}, which implies that F′(u)·d ≥ 0 for all d ∈ V\{0}; applying this to both d and −d gives F′(u)·d = 0 for all d ∈ V\{0}, which implies that F′(u) = 0.
Theorem 1.8.3 (Necessary and sufficient conditions of second order)
Assume that K is nonempty and that F is of class C². If u ∈ K is a local minimum of F on K, then:
1) F′(u)·d ≥ 0 for all d ∈ D_ad(u);
2) if d ∈ D_ad(u) and F′(u)·d = 0, then F″(u)·d·d ≥ 0.
Conversely:
3) if F′(u)·d = 0 and F″(u)·d·d ≥ α‖d‖² for all d ≠ 0, with α > 0, then u is a strict local minimum of F over K.
Proof.
(Proof of 2.) Let d ∈ D_ad(u) with F′(u)·d = 0, and let δ > 0 be such that u + td ∈ K ∩ B(u, r) for all t ∈ (0, δ). By the Taylor expansion we have
F(u + td) = F(u) + tF′(u)·d + (t²/2)F″(u)·d·d + t²‖d‖²ε(t),
where ε(t) → 0 as t → 0. Since F′(u)·d = 0 and F(u + td) ≥ F(u), this implies that t²[(1/2)F″(u)·d·d + ‖d‖²ε(t)] ≥ 0. Dividing the last inequality by t² and letting t go to 0, we obtain F″(u)·d·d ≥ 0.
(Proof of 3.) Let v ∈ V, v ≠ u. We also have, by the Taylor expansion of F,
F(v) − F(u) = F′(u)(v − u) + (1/2)F″(u)·(v − u)·(v − u) + ‖v − u‖²ε(v),
where ε(v) → 0 as v → u. This implies that F(v) − F(u) ≥ ‖v − u‖²[α/2 + ε(v)]. But α/2 > 0 and ε(v) → 0 as v → u imply that there exists r > 0 such that ‖v − u‖ < r ⇒ |ε(v)| < α/2. Hence for ‖v − u‖ < r we have −α/2 < ε(v) < α/2, so for all v ∈ B(u, r) we have α/2 + ε(v) > 0, which gives: for all v ∈ B(u, r), v ≠ u, F(v) > F(u). Therefore u is a strict local minimum.
Finally, if F is convex, then the condition F′(u)·(v − u) ≥ 0 for all v ∈ K is both necessary and sufficient for u to be a minimizer of F over K.
1.9 Optimization with equality constraints
In this part we are interested in optimization problems with equality constraints in ℝⁿ. Given an open set Ω ⊂ ℝⁿ and functions F and g1, ..., gm defined on Ω and taking values in ℝ, one considers the following problem:
inf_{v∈K} F(v)   (1.10)
with
K = {v ∈ Ω : gi(v) = 0, 1 ≤ i ≤ m}.   (1.11)
The function F is called the objective (or cost) function. The functions gi define the equality constraints, and the elements of K are called the admissible elements. Obviously the first question to ask is that of the existence of solutions of (1.10); for that we use our previous results. Indeed, since they were obtained in more general spaces, they apply in particular in ℝⁿ. In what follows it is preferable to denote the constraints in a more compact form. For that, one sets
G(x) = (g1(x), ..., gm(x)) for all x ∈ Ω.
Hence the constraint set is written K = G⁻¹({0}).
Consider the following optimization problem:
min F(x) subject to gi(x) = 0, i = 1, ..., p, x ∈ ℝ^N (p ≤ N),   (1.12)
where F, gi : ℝ^N → ℝ. Set C = {x ∈ ℝ^N : gi(x) = 0 for all i = 1, ..., p}. So our problem (1.12) becomes:
min F(x), x ∈ C.   (1.13)
Definition: A point x satisfies the qualification condition (QC), or is said to be regular, if the gradients ∇gi(x), i = 1, ..., p, are linearly independent. If p = 1, x is regular if ∇g1(x) ≠ 0. Writing
G : ℝ^N → ℝ^p, x ↦ G(x) = (g1(x), ..., gp(x)),
and C = {x ∈ ℝ^N : G(x) = 0}, the point x is regular if and only if the Jacobian matrix J_G(x) has rank p.
Theorem 1.9.1 (Lagrange)
Assume that x ∈ C is regular. If x is a local extremum of F on C, then there exist λ1, ..., λp ∈ ℝ such that
∇F(x) + Σ_{i=1}^{p} λi ∇gi(x) = 0,
gi(x) = 0, i = 1, ..., p.   (1.14)
The reals λ1, ..., λp are called the Lagrange multipliers associated with the constraints of the problem and the local minimum point. The uniqueness of these reals is ensured by the qualification condition (QC).
Proposition 1.9.1 Let Ω be an open convex subset of ℝⁿ. Let F : Ω → ℝ be a convex differentiable function on Ω and let the gi : Ω → ℝ be affine. Let x̄ ∈ K be such that there exist λ1, λ2, ..., λp ∈ ℝ with
∇F(x̄) + Σ_{i=1}^{p} λi ∇gi(x̄) = 0.   (1.15)
Then x̄ is a minimizer of F on K.
Proof. Let x ∈ K. By convexity of F we have:
F(x) − F(x̄) ≥ ∇F(x̄)·(x − x̄).   (1.16)
Using (1.15) it follows that:
F(x) − F(x̄) ≥ −Σ_{i=1}^{p} λi ∇gi(x̄)·(x − x̄).   (1.17)
Since the gi are affine and gi(x) = gi(x̄) = 0, we also have the identities
gi(x) − gi(x̄) = ∇gi(x̄)·(x − x̄) = 0.
Substituting this in (1.17), we obtain F(x) ≥ F(x̄).
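For a quadratic objective with one affine equality constraint, the stationarity condition (1.15) together with the constraint forms a linear (KKT) system that can be solved directly, which illustrates both Theorem 1.9.1 and Proposition 1.9.1. The data below (F(x) = ½‖x‖², g(x) = a·x − b) are an arbitrary example, not taken from the text:

import numpy as np

# Minimize F(x) = 1/2 ||x||^2 subject to the affine constraint g(x) = a.x - b = 0.
a = np.array([1.0, 2.0])
b = 3.0

# Stationarity: grad F(x) + lam * grad g(x) = x + lam * a = 0, together with a.x = b.
# This is the linear KKT system [[I, a], [a^T, 0]] [x; lam] = [0; b].
KKT = np.block([[np.eye(2), a.reshape(2, 1)],
                [a.reshape(1, 2), np.zeros((1, 1))]])
rhs = np.array([0.0, 0.0, b])
x1, x2, lam = np.linalg.solve(KKT, rhs)
x = np.array([x1, x2])

print(x, lam)                                   # minimizer and its Lagrange multiplier
print(np.isclose(a @ x, b))                     # the constraint is satisfied
print(np.allclose(x + lam * a, 0.0))            # stationarity (1.15) holds
# Since F is convex and g is affine, Proposition 1.9.1 guarantees that x is a
# global minimizer of F on K = {x : a.x = b}.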
