ABSTRACT
Compact operators are linear operators on Banach spaces that map bounded sets to relatively
compact sets. In the case of a Hilbert space H, they extend the concept of a matrix acting on
a finite dimensional vector space. In a Hilbert space, the compact operators are the closure of the finite
rank operators in the topology induced by the operator norm. In general, operators on infinite
dimensional spaces feature properties that do not appear in the finite dimensional case, i.e., for matrices.
The compact operators are notable in that they share as much similarity with matrices as one can
expect from a general operator. The spectral decomposition of compact operators on Banach spaces
takes a form that is very similar to the Jordan canonical form of matrices. In the context of
Hilbert spaces, the spectral properties of compact operators resemble those of square matrices.
TABLE OF CONTENTS
Certification
1 Linear Operators and Boundedness
1.1 Definitions
1.2 Examples of Banach spaces
1.3 Linear operators
1.3.1 Examples of linear operators
1.4 Bounded linear operators
1.5 Examples of bounded operators on infinite dimensional spaces
1.6 Hilbert spaces
1.7 Some properties of Hilbert spaces
1.7.1 Examples of Hilbert spaces
2 Compact Linear Operators on Banach spaces
2.1 INTRODUCTION
2.2 Compact operators
3 Spectral Decomposition of Compact Operators on Hilbert spaces
3.1 INTRODUCTION
3.2 Spectral theory
3.3 Classification of $\lambda \in \sigma(T)$
3.3.1 Examples
3.4 Spectral decomposition
3.5 Applications
3.5.1 CONCLUSION
CHAPTER ONE
LINEAR OPERATORS AND
BOUNDEDNESS
In this chapter, some well-known results which will be needed in the sequel are provided.
1.1 Definitions
Definition 1.1. (Norm): A non-negative function $\|\cdot\|$ on a vector space X over $\mathbb{R}$ is called a norm
on X if and only if the following are satisfied.
(N1) $\|x\| \geq 0$ for all $x \in X$ (positivity).
(N2) $\|x\| = 0$ if and only if $x = 0$ (nondegeneracy).
(N3) $\|\lambda x\| = |\lambda|\,\|x\|$ for all $x \in X$ and all $\lambda \in \mathbb{R}$ (homogeneity).
(N4) $\|x + y\| \leq \|x\| + \|y\|$ for all $x, y \in X$ (sub-additivity).
A vector space X endowed with a norm $\|\cdot\|$, denoted by $(X, \|\cdot\|)$, is called a normed linear space
(or just a normed space).
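As a brief numerical illustration (not part of the original development), the following Python sketch checks the norm axioms (N1)-(N4) on random vectors, taking the Euclidean norm on $\mathbb{R}^5$ as an assumed example norm; all helper names are ours.

# Minimal sketch (assumption: Euclidean norm on R^n as the example norm).
import numpy as np

rng = np.random.default_rng(0)

def norm(x):
    return np.sqrt(np.dot(x, x))   # ||x|| = (sum x_i^2)^(1/2)

for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lam = rng.normal()
    assert norm(x) >= 0                                   # (N1) positivity
    assert np.isclose(norm(np.zeros(5)), 0.0)             # (N2) nondegeneracy at 0
    assert np.isclose(norm(lam * x), abs(lam) * norm(x))  # (N3) homogeneity
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12       # (N4) sub-additivity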
Definition 1.2. A sequence $(x_n)_{n\geq 1}$ is said to be Cauchy if given $\varepsilon > 0$ there exists $N_0 \in \mathbb{N}$ such
that $\|x_n - x_m\| < \varepsilon$ for all $m, n \geq N_0$.
Definition 1.3. A space $(X, d)$, where $d$ is a metric, is said to be complete if every Cauchy sequence
in X converges to a point in it.
Remark
Completeness is a metric space concept. In a normed space, the metric is $d(x, y) = \|x - y\|$, and
it satisfies the following special properties:
(a) The underlying space is a vector space.
(b) Homogeneity: $d(\lambda x, \lambda y) = |\lambda|\, d(x, y)$.
(c) Translation invariance: $d(x + z, y + z) = d(x, y)$.
Conversely, every metric satisfying these three conditions defines a norm: $\|x\| = d(x, 0)$.
Definition 1.4. A complete normed vector space is called a Banach space.
Definition 1.5. (The space $C([a, b]; \mathbb{R})$)
The space $C([a, b]; \mathbb{R})$ denotes the set of all continuous functions from $[a, b]$ into $\mathbb{R}$.
1.2 Examples of Banach spaces
1. The space $C([a, b]; \mathbb{R})$ endowed with the sup-norm is Banach.
Proof. Let $(f_n)_{n\geq 1}$ be a Cauchy sequence in $C[a, b]$. This means that for all $\varepsilon > 0$
there exists an $N \in \mathbb{N}$ such that
\[ \|f_n - f_m\|_{C[a,b]} = \sup_{x \in [a,b]} |f_n(x) - f_m(x)| \leq \varepsilon \]
for all $m, n \geq N$.
This implies $|f_n(x) - f_m(x)| \leq \varepsilon$ for all $x \in [a, b]$ and $m, n \geq N$;
thus $(f_n(x))_{n\geq 1}$ is a Cauchy sequence in $\mathbb{R}$, and since $\mathbb{R}$ is complete,
\[ f_n(x) \to f(x) \in \mathbb{R} \quad \text{as } n \to \infty. \]
Letting $m \to \infty$ above gives $|f_n(x) - f(x)| \leq \varepsilon$ for all $x \in [a, b]$ and all $n \geq N$, so $\sup_{x \in [a,b]} |f_n(x) - f(x)| \leq \varepsilon$
for all $n \geq N$; thus
\[ \|f_n - f\|_{C[a,b]} \leq \varepsilon \quad \text{for all } n \geq N. \]
Since the convergence is uniform and each $f_n$ is continuous, the limit $f$ is continuous, i.e. $f \in C[a, b]$.
Hence $C[a, b]$ endowed with the sup-norm is Banach.
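The argument can be mimicked numerically. The Python sketch below (our illustration, with a hypothetical example sequence) takes the partial sums $f_n(x) = \sum_{k \leq n} x^k/k!$, which form a Cauchy sequence in the sup-norm on $[0, 1]$ and converge uniformly to the exponential function.

# Minimal sketch: sup-norm Cauchy sequence f_n -> exp on [0, 1] (our example).
import numpy as np
from math import factorial

x = np.linspace(0.0, 1.0, 1001)          # sample points standing in for [a, b]
def f(n):                                 # partial sums of the exponential series
    return sum(x**k / factorial(k) for k in range(n + 1))

sup = lambda g: np.max(np.abs(g))         # discrete stand-in for the sup-norm
print(sup(f(10) - f(20)))                 # small: the sequence is Cauchy
print(sup(f(20) - np.exp(x)))             # small: uniform convergence to the limit exp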
2. The space $\mathbb{R}^n$ with $\|x\|_{\mathbb{R}^n} = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}$ is Banach.
1.3 Linear operators
Definition 1.6. Let T be an operator from a vector space X to a vector space Y; then the domain
$D(T)$ is given by $D(T) = \{ x \in X : Tx \text{ exists in } Y \}$
and the range $R(T)$ is given by $R(T) = \{ y \in Y : \exists\, x \in X \text{ such that } Tx = y \}$.
Definition 1.7. (Null space)
Let T be an operator from a vector space X to a vector space Y; then the null space $N(T)$ is given
by $N(T) = \{ x \in X : Tx = 0 \}$.
Definition 1.8. (Injectivity)
An operator T from X to a vector space Y is said to be injective if for all $x_1, x_2 \in D(T)$,
$Tx_1 = Tx_2$ implies $x_1 = x_2$.
Remark: If T is injective, then there exists an operator
$T^{-1} : R(T) \subseteq Y \to D(T) \subseteq X$ such that $T^{-1}(y_0) = x_0$ whenever $Tx_0 = y_0$.
Definition 1.9. (Continuity)
An operator T from a normed space X to a normed space Y is said to be continuous at a point $x_0 \in X$
if given any $\varepsilon > 0$ there exists $\delta > 0$ such that
\[ \|x - x_0\| \leq \delta \implies \|Tx - Tx_0\| \leq \varepsilon. \]
Definition 1.10. (Linear operators)
Let X and Y be vector spaces and let $T : X \to Y$. Then T is said to be linear if:
i. The domain $D(T)$ is a vector space and the range $R(T)$ lies in a vector space over the same
field.
ii. For all $x, y \in D(T)$ and scalars $\alpha$,
\[ T(x + y) = Tx + Ty \quad (1.1) \]
\[ T(\alpha x) = \alpha Tx \quad (1.2) \]
1.3.1 Examples of linear operators
1. Differential operator: Let X be the vector space of all polynomials on $[a, b]$. We define a linear
operator T on X by setting $Tx(t) = x'(t)$ for all $x \in X$, where the prime denotes differentiation
with respect to t. This operator maps X into itself.
2. Integral operator: A linear operator T from $C[a, b]$ into itself can be defined by
\[ Tx(t) = \int_a^t x(s)\, ds, \quad t \in [a, b]. \]
3. Multiplication by t: This is a linear operator from $C[a, b]$ into itself defined by $Tx(t) = t\,x(t)$.
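As a concrete illustration (ours, not part of the text), the Python sketch below applies the three example operators to polynomials on $[a, b] = [0, 1]$ using numpy's Polynomial class, and spot-checks linearity; the choice of test polynomials is arbitrary.

# Minimal sketch of the three example operators on polynomials over [a, b] = [0, 1].
import numpy as np
from numpy.polynomial import Polynomial as P

a = 0.0
x = P([1.0, 2.0, 3.0])                       # x(t) = 1 + 2t + 3t^2

Tdiff = lambda p: p.deriv()                  # (T x)(t) = x'(t)
Tint  = lambda p: p.integ(lbnd=a)            # (T x)(t) = integral of x from a to t
Tmult = lambda p: P([0.0, 1.0]) * p          # (T x)(t) = t x(t)

for T in (Tdiff, Tint, Tmult):               # spot-check linearity: T(2x + y) = 2Tx + Ty
    y = P([0.0, 1.0])
    assert np.allclose((T(2*x + y) - (2*T(x) + T(y))).coef, 0.0)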
Theorem 1.11. Let $T : X \to Y$ be a linear operator. Then:
(a) The range $R(T)$ is a vector space.
(b) If $\dim X = n < \infty$, then $\dim R(T) \leq n$.
(c) The null space $N(T)$ is a vector space.
Proof. (a) Let $y_1, y_2 \in R(T)$; we show that $\alpha y_1 + \beta y_2 \in R(T)$ for any scalars $\alpha, \beta$. Since
$y_1, y_2 \in R(T)$, we have $y_1 = Tx_1$, $y_2 = Tx_2$ for some $x_1, x_2 \in D(T)$, and $\alpha x_1 + \beta x_2 \in D(T)$ because
$D(T)$ is a vector space. The linearity of T yields
\[ T(\alpha x_1 + \beta x_2) = \alpha Tx_1 + \beta Tx_2 = \alpha y_1 + \beta y_2. \]
Hence $\alpha y_1 + \beta y_2 \in R(T)$. Since $y_1, y_2 \in R(T)$ were arbitrary and so were the scalars, this proves
that $R(T)$ is a vector space.
(b) We choose $n + 1$ elements $y_1, y_2, \ldots, y_{n+1}$ in $R(T)$ arbitrarily. Then we have
$y_1 = Tx_1, \ldots, y_{n+1} = Tx_{n+1}$ for some $x_1, x_2, \ldots, x_{n+1}$ in X. Since $\dim X = n$, the set $\{x_1, \ldots, x_{n+1}\}$
must be linearly dependent. Hence
\[ \alpha_1 x_1 + \cdots + \alpha_{n+1} x_{n+1} = 0 \quad (1.3) \]
for some scalars $\alpha_1, \ldots, \alpha_{n+1}$ not all zero. Since T is linear, $T(0) = 0$. Applying T to both
sides of (1.3) gives $T(\alpha_1 x_1 + \cdots + \alpha_{n+1} x_{n+1}) = \alpha_1 y_1 + \cdots + \alpha_{n+1} y_{n+1} = 0$. This shows that
$\{y_1, \ldots, y_{n+1}\}$ is a linearly dependent set, because the $\alpha$'s are not all zero.
Remembering that this subset of $R(T)$ was chosen arbitrarily, we conclude that $R(T)$ has no linearly
independent subsets of $n + 1$ or more elements; this implies $\dim R(T) \leq n$.
(c) Let $x_1, x_2 \in N(T)$; then $Tx_1 = Tx_2 = 0$. Since T is linear, for any scalars $\alpha, \beta$ we have
\[ T(\alpha x_1 + \beta x_2) = \alpha Tx_1 + \beta Tx_2 = 0. \]
It implies $\alpha x_1 + \beta x_2 \in N(T)$. Hence $N(T)$ is a vector space.
Theorem 1.12. (Inverse of a linear operator)
Let X and Y be vector spaces over $\mathbb{R}$ and let $T : X \to Y$ be a linear operator. Then:
(a) The inverse $T^{-1} : R(T) \to X$ exists if and only if $Tx = 0 \implies x = 0$ (T is injective).
(b) If $T^{-1}$ exists, then it is a linear operator.
(c) If $\dim X = n < \infty$ and $T^{-1}$ exists, then $\dim R(T) = \dim X$.
Proof. (a) Suppose $Tx = 0 \implies x = 0$. Let $Tx_1 = Tx_2$. Since T is linear,
\[ T(x_1 - x_2) = Tx_1 - Tx_2 = 0, \]
so that $x_1 - x_2 = 0$ by hypothesis. Hence $Tx_1 = Tx_2 \implies x_1 = x_2$, and $T^{-1}$ exists by the remark
following Definition 1.8. Conversely, if $T^{-1}$ exists, then T is injective as in that remark;
taking $x_2 = 0$ there, we obtain $Tx_1 = T0 = 0 \implies x_1 = 0$.
(b) We assume $T^{-1}$ exists and show that it is linear. The domain of $T^{-1}$ is $R(T)$, which is a
vector space by Theorem 1.11(a). We consider any $x_1, x_2 \in D(T)$ and their images
$y_1 = Tx_1$ and $y_2 = Tx_2$; then $x_1 = T^{-1}y_1$ and $x_2 = T^{-1}y_2$. T is linear, so that for any scalars $\alpha$
and $\beta$ we have
\[ \alpha y_1 + \beta y_2 = \alpha Tx_1 + \beta Tx_2 = T(\alpha x_1 + \beta x_2). \]
It implies $T^{-1}(\alpha y_1 + \beta y_2) = \alpha x_1 + \beta x_2 = \alpha T^{-1}y_1 + \beta T^{-1}y_2$. Hence $T^{-1}$ is linear.
(c) We have $\dim R(T) \leq \dim X$ by Theorem 1.11(b) and $\dim X \leq \dim R(T)$ by the same theorem
applied to $T^{-1}$. Hence $\dim X = \dim R(T)$.
Lemma 1.13. (Inverse of product)
Let $T : X \to Y$ and $S : Y \to Z$ be bijective linear operators, where X, Y, Z are vector spaces. Then
the inverse $(ST)^{-1} : Z \to X$ of the product (the composite) ST exists and $(ST)^{-1} = T^{-1}S^{-1}$.
Proof. The operator $ST : X \to Z$ is bijective, so $(ST)^{-1}$ exists. We have
\[ (ST)(ST)^{-1} = I_Z, \]
where $I_Z$ is the identity operator on Z. Applying $S^{-1}$ and using $S^{-1}S = I_Y$ (the identity operator
on Y), we obtain
\[ S^{-1}(ST)(ST)^{-1} = T(ST)^{-1} = S^{-1}I_Z = S^{-1}. \]
Applying $T^{-1}$ and using $T^{-1}T = I_X$, we obtain the desired result
\[ T^{-1}T(ST)^{-1} = (ST)^{-1} = T^{-1}S^{-1}. \]
Hence $(ST)^{-1} = T^{-1}S^{-1}$.
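A quick numerical check of Lemma 1.13 (our sketch) uses matrices, which represent linear operators on finite dimensional spaces; the random matrices below are almost surely invertible.

# Minimal sketch: (ST)^{-1} = T^{-1} S^{-1} for two (almost surely) invertible matrices.
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(4, 4)) + 4 * np.eye(4)
T = rng.normal(size=(4, 4)) + 4 * np.eye(4)

lhs = np.linalg.inv(S @ T)                    # (ST)^{-1}
rhs = np.linalg.inv(T) @ np.linalg.inv(S)     # T^{-1} S^{-1}
assert np.allclose(lhs, rhs)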
Theorem 1.14. Every linear operator on a finite dimensional vector space can be represented by
means of a matrix.
Proof. Let X and Y be finite dimensional vector spaces over the same field. Let $T : X \to Y$
be a linear operator, let $\dim X = n$ and $\dim Y = r$; then there exist a basis $\{e_1, e_2, \ldots, e_n\}$ for X and
a basis $\{b_1, b_2, \ldots, b_r\}$ for Y.
Let $x \in X$; then $x = \sum_{i=1}^{n} \xi_i e_i$, where the $\xi_i$ are scalars. Since T is linear,
\[ y = T(x) = \sum_{i=1}^{n} \xi_i T(e_i). \]
So T is uniquely determined if the images $Te_i$, $1 \leq i \leq n$, are prescribed. Since y and the $Te_i$ are in
Y, we have $y = \sum_{j=1}^{r} \eta_j b_j$ and $Te_i = \sum_{j=1}^{r} \tau_{ji} b_j$, where the $\eta_j$ and $\tau_{ji}$ are scalars. Thus
\[ y = \sum_{j=1}^{r} \eta_j b_j = \sum_{i=1}^{n} \xi_i T(e_i) = \sum_{i=1}^{n} \xi_i \sum_{j=1}^{r} \tau_{ji} b_j = \sum_{j=1}^{r} \Big( \sum_{i=1}^{n} \tau_{ji} \xi_i \Big) b_j. \]
Hence
\[ \eta_j = \sum_{i=1}^{n} \tau_{ji} \xi_i, \quad 1 \leq j \leq r. \]
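As an illustration of this construction (our sketch, with a hypothetical example), the code below builds the matrix $(\tau_{ji})$ of the differentiation operator on polynomials of degree at most 3, with monomial bases $\{1, t, t^2, t^3\}$ for X and $\{1, t, t^2\}$ for Y, and checks that the coordinates satisfy $\eta = \tau\,\xi$.

# Minimal sketch: matrix of T = d/dt with respect to monomial bases (our example).
import numpy as np
from numpy.polynomial import Polynomial as P

n, r = 4, 3                                   # dim X = 4 (deg <= 3), dim Y = 3 (deg <= 2)
basis_X = [P([0.0]*i + [1.0]) for i in range(n)]   # e_i = t^i
tau = np.zeros((r, n))
for i, e in enumerate(basis_X):
    c = e.deriv().coef                        # T e_i expanded in the basis {1, t, t^2}
    tau[:len(c), i] = c

xi = np.array([5.0, -1.0, 2.0, 3.0])          # x(t) = 5 - t + 2t^2 + 3t^3
eta = tau @ xi                                # coordinates of Tx = x'
assert np.allclose(eta, np.array([-1.0, 4.0, 9.0]))   # x'(t) = -1 + 4t + 9t^2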
1.4 Bounded linear operators
Definition 1.15. (Bounded linear operator): Let X and Y be normed spaces and $T : X \to Y$ be a
linear operator. The operator T is said to be bounded if there exists a real number $c > 0$ such that
$\|Tx\| \leq c\|x\|$ for all $x \in D(T)$.
Theorem 1.16. Let $T : X \to Y$ be a bounded linear operator. Then
\[ \|T\| := \sup_{x \in X,\, \|x\| = 1} \|Tx\| = \sup_{x \in X,\, x \neq 0} \frac{\|Tx\|}{\|x\|}. \]
Proof. Let $\|x\| = a$ and set $y = \left(\tfrac{1}{a}\right)x$, where $x \neq 0$. Then $\|y\| = \tfrac{\|x\|}{a} = 1$. Since T is linear,
\[ \sup_{x \in X,\, x \neq 0} \frac{\|Tx\|}{\|x\|} = \sup \frac{\|Tx\|}{a} = \sup \Big\| T\Big(\frac{1}{a}x\Big) \Big\| = \sup_{y \in X,\, \|y\| = 1} \|Ty\| =: \|T\|. \]
Remark: $\|\cdot\|$ as defined above is a norm on the space of bounded linear operators from X to Y.
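In finite dimensions the two suprema in Theorem 1.16 can be explored numerically. The sketch below (ours) estimates $\|T\|$ for a matrix by sampling unit vectors and compares the estimate with the exact spectral norm.

# Minimal sketch: ||T|| = sup_{||x||=1} ||Tx|| for a matrix, estimated by sampling.
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3))

xs = rng.normal(size=(100000, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)        # random unit vectors
estimate = np.max(np.linalg.norm(xs @ T.T, axis=1))    # sup over sampled ||x|| = 1
exact = np.linalg.norm(T, 2)                           # spectral norm = operator norm

print(estimate, exact)        # estimate <= exact, and close for many samples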
Theorem 1.17. (Finite dimension): If a normed space X is finite dimensional, then every linear
operator on X is bounded.
Proof. Let $\dim X = n$ and let $\{e_1, e_2, \ldots, e_n\}$ be a basis for X; then every $x \in X$ can be written as
\[ x = \sum_{i=1}^{n} \xi_i e_i, \]
with the $\xi_i$ scalars. Since T is linear,
\[ \|Tx\| = \Big\| \sum_{i=1}^{n} \xi_i Te_i \Big\| \leq \sum_{i=1}^{n} |\xi_i| \, \|Te_i\| \leq \gamma \sum_{i=1}^{n} |\xi_i| = \gamma \|x\|_1, \quad \text{where } \gamma = \max_i \|Te_i\|, \]
\[ \leq c\|x\|, \]
where $c = \gamma k$ and $k > 0$ satisfies $\|x\|_1 \leq k\|x\|$ by the equivalence of norms on a finite dimensional vector space. Hence T is bounded.
Theorem 1.18. (Continuity and boundedness): Let $T : X \to Y$ be a linear operator, where X
and Y are normed spaces. Then:
(a) T is continuous if and only if T is bounded.
(b) If T is continuous at the origin, then T is continuous.
Proof. (a) For $T = 0$ the statement is trivial. Let $T \neq 0$; then $\|T\| \neq 0$. We assume T is
bounded and consider any $x_0 \in X$. Given $\varepsilon > 0$, let $\delta = \varepsilon/\|T\|$. Then for $x \in X$ with $\|x - x_0\| < \delta$ we obtain
\[ \|Tx - Tx_0\| = \|T(x - x_0)\| \leq \|T\| \|x - x_0\| < \|T\|\delta = \varepsilon. \]
Since $x_0 \in X$ was arbitrary, this shows that T is continuous.
Conversely, assume that T is continuous at an arbitrary $x_0 \in X$. Then given any $\varepsilon > 0$, there exists
$\delta > 0$ such that
\[ \|Tx - Tx_0\| \leq \varepsilon \]
for all $x \in X$ satisfying
\[ \|x - x_0\| \leq \delta. \]
We now take $y \in X$ with $y \neq 0$ and set $x = x_0 + \frac{\delta}{\|y\|}y$. Then $x - x_0 = \frac{\delta}{\|y\|}y$, hence $\|x - x_0\| = \delta$. Since
T is linear, we have
\[ \|Tx - Tx_0\| = \|T(x - x_0)\| = \Big\| T\Big(\frac{\delta}{\|y\|}y\Big) \Big\| = \frac{\delta}{\|y\|}\|Ty\| \leq \varepsilon. \]
Thus $\|Ty\| \leq \frac{\varepsilon}{\delta}\|y\|$, i.e. $\|Ty\| \leq c\|y\|$ with $c = \frac{\varepsilon}{\delta}$, so T is bounded.
(b) Suppose T is continuous at the point $x_0 = 0$; it suffices to show that T is bounded
(hence continuous, by part (a)). Since T is continuous at $x_0 = 0$, taking $\varepsilon = 1$ there exists $\delta > 0$ such that
\[ \|x\| \leq \delta \implies \|Tx\| \leq 1. \]
Let $z \in D(T)$, $z \neq 0$; then $\Big\| \frac{\delta z}{2\|z\|} \Big\| = \frac{\delta}{2} < \delta$, which implies $\Big\| T\Big(\frac{\delta z}{2\|z\|}\Big) \Big\| \leq 1$.
By linearity of T, this gives $\|Tz\| \leq \frac{2}{\delta}\|z\|$ for all $z \in D(T)$, so T is bounded and hence continuous.
Corollary 1.19. (Continuity and null space)
Let $T : X \to Y$ be a bounded linear operator. Then:
(a) $x_n \to x$ implies $Tx_n \to Tx$.
(b) The null space $N(T)$ is closed.
Proof. (a) Suppose $x_n \to x$ in X, i.e. $\|x_n - x\| \to 0$. Since T is linear and bounded,
\[ \|Tx_n - Tx\| = \|T(x_n - x)\| \leq \|T\| \|x_n - x\| \to 0. \]
It implies $Tx_n \to Tx$ as $n \to \infty$.
(b) Let $x \in \overline{N(T)}$; then there exists a sequence $(x_n)_{n\geq 1} \subseteq N(T)$ such that $x_n \to x$. Since T is bounded,
by Corollary 1.19(a) $Tx_n \to Tx$; but $x_n \in N(T)$, so $Tx_n = 0$ for all $n \geq 1$, thus $T(x) = 0$.
It implies $x \in N(T)$. Hence $N(T)$ is closed.
1.5 Examples of bounded operators on infinite dimensional spaces
1. Let $K : [0, 1] \times [0, 1] \to \mathbb{R}$ be continuous. Let $T : C([0, 1]; \mathbb{R}) \to C([0, 1]; \mathbb{R})$ be defined by
\[ T(f)(x) = \int_0^1 K(x, y) f(y)\, dy. \]
Then T is a linear operator from $C[0, 1]$ into itself and is bounded.
Proof. Clearly T is linear. We next show boundedness. For every $x \in [0, 1]$,
\[ |T(f)(x)| \leq \int_0^1 |K(x, y)| |f(y)|\, dy \leq \sup_{y \in [0,1]} |f(y)| \int_0^1 |K(x, y)|\, dy \leq \|f\|_\infty \int_0^1 |K(x, y)|\, dy. \]
It implies
\[ \|T(f)\|_\infty \leq c\|f\|_\infty, \]
where $\int_0^1 |K(x, y)|\, dy \leq c$ for all x, since K is continuous on the compact set $[0, 1] \times [0, 1]$ and hence bounded. It implies T is bounded.
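A discretized version of this kernel operator (our sketch; the kernel $K(x, y) = e^{-|x - y|}$ is a hypothetical example) illustrates the bound $\|Tf\|_\infty \leq c\|f\|_\infty$ with $c = \max_x \int_0^1 |K(x, y)|\, dy$.

# Minimal sketch: T f(x) = integral of K(x, y) f(y) dy over [0, 1], on a grid.
import numpy as np

m = 400
x = y = np.linspace(0.0, 1.0, m)
K = np.exp(-np.abs(x[:, None] - y[None, :]))     # continuous example kernel on [0,1]^2
dy = y[1] - y[0]

def T(f_vals):                                   # Riemann-sum approximation of the integral
    return (K * f_vals[None, :]).sum(axis=1) * dy

f = np.sin(7 * np.pi * y)                        # any continuous f
c = np.max(np.abs(K).sum(axis=1) * dy)           # c >= integral of |K(x, .)| for every x
assert np.max(np.abs(T(f))) <= c * np.max(np.abs(f)) + 1e-9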
2. Let $p \geq 1$; we define
\[ l^p = \Big\{ (x_n)_{n\geq 1} \subseteq \mathbb{R} : \sum_{n=1}^{\infty} |x_n|^p < \infty \Big\}. \]
Let $T : l^p \to l^p$ be defined by
\[ T((x_n)_{n\geq 1}) = (x_{n+1})_{n\geq 1} \quad \text{(the left shift operator),} \]
where $(x_n)_{n\geq 1} = (x_1, x_2, x_3, \ldots)$ and $T((x_n)_{n\geq 1}) = (x_2, x_3, \ldots)$. Then T is bounded.
Proof.
\[ \|T((x_n)_{n\geq 1})\| = \Big( \sum_{n=2}^{\infty} |x_n|^p \Big)^{1/p} \leq \Big( \sum_{n=1}^{\infty} |x_n|^p \Big)^{1/p} = \|(x_n)_{n\geq 1}\|. \]
Thus T is bounded with $\|T\| \leq 1$.
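As a quick check (our sketch, taking $p = 2$ and a truncated sequence as a stand-in for an element of $l^p$):

# Minimal sketch: left shift T(x_1, x_2, x_3, ...) = (x_2, x_3, ...) drops the first term.
import numpy as np

x = 1.0 / np.arange(1, 1001)**2           # truncation of a sequence in l^2
Tx = x[1:]                                # left shift of the truncated sequence
p = 2
assert np.sum(np.abs(Tx)**p)**(1/p) <= np.sum(np.abs(x)**p)**(1/p)   # ||Tx|| <= ||x||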
3. Let $T : L^2([0, 1]; \mathbb{R}) \to L^2([0, 1]; \mathbb{R})$ be defined by
\[ (Tf)(t) = t f(t) \quad \text{for a.e. } t \in [0, 1]. \]
Then T is bounded.
Proof.
\[ \|Tf\|^2_{L^2[0,1]} = \int_0^1 |(Tf)(t)|^2\, dt = \int_0^1 |t|^2 |f(t)|^2\, dt \leq \int_0^1 |f(t)|^2\, dt = \|f\|^2_{L^2[0,1]}. \]
It implies $\|Tf\|_{L^2[0,1]} \leq \|f\|_{L^2[0,1]}$. Hence T is bounded with $\|T\| \leq 1$.
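Similarly, for the multiplication operator the bound can be observed on a grid (our sketch; the test function is an arbitrary example).

# Minimal sketch: (Tf)(t) = t f(t) on [0, 1], L^2 norms approximated on a grid.
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
f = np.cos(3 * np.pi * t)                           # any f in L^2[0, 1]
dt = t[1] - t[0]
l2 = lambda g: np.sqrt(np.sum(np.abs(g)**2) * dt)   # Riemann-sum L^2 norm
assert l2(t * f) <= l2(f)                           # ||Tf|| <= ||f||, so ||T|| <= 1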
1.6 Hilbert spaces
Definition 1.20. Let E be a real vector space. An inner product on E is a function
$\langle \cdot, \cdot \rangle : E \times E \to \mathbb{R}$ such that
(a) $\langle x, x \rangle \geq 0$, with $\langle x, x \rangle = 0$ if and only if $x = 0$; we write $\|x\|^2 := \langle x, x \rangle$.
(b) $\langle x, y \rangle = \langle y, x \rangle$.
(c) $\langle ax + by, z \rangle = a\langle x, z \rangle + b\langle y, z \rangle$, i.e. $x \mapsto \langle x, z \rangle$ is linear.
A real vector space E endowed with an inner product, i.e. $(E, \langle \cdot, \cdot \rangle)$, is called an inner product space.
Lemma 1.21. (Cauchy-Schwarz inequality) Let E be an inner product space. Then for arbitrary
$x, y \in E$,
\[ |\langle x, y \rangle| \leq \|x\| \|y\|. \]
Lemma 1.22. (The parallelogram law) Let E be a real inner product space. Then for arbitrary
vectors $x, y \in E$,
\[ \|x + y\|^2 + \|x - y\|^2 = 2(\|x\|^2 + \|y\|^2). \]
Proof. Expanding the left-hand side,
\[ \|x + y\|^2 + \|x - y\|^2 = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle + \langle x, x \rangle - 2\langle x, y \rangle + \langle y, y \rangle = 2(\langle x, x \rangle + \langle y, y \rangle) = 2(\|x\|^2 + \|y\|^2). \]
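A quick numerical check of the parallelogram law (our sketch, using the Euclidean inner product on $\mathbb{R}^6$ as the assumed example):

# Minimal sketch: ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2) in R^n.
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.normal(size=6), rng.normal(size=6)
sq = lambda v: float(np.dot(v, v))                 # ||v||^2 = <v, v>
assert np.isclose(sq(x + y) + sq(x - y), 2 * (sq(x) + sq(y)))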
Definition 1.23. A complete inner product space is called a Hilbert space.
Definition 1.24. Let x, y be vectors in a Hilbert space H; then we say that x and y are orthogonal,
written $x \perp y$, if $\langle x, y \rangle = 0$. We say that subsets A and B are orthogonal, written $A \perp B$, if
$x \perp y$ for every $x \in A$ and $y \in B$. The orthogonal complement $A^{\perp}$ of a subset A is the set of
vectors orthogonal to A,
\[ A^{\perp} = \{ x \in H : x \perp y \text{ for all } y \in A \}. \]
Definition 1.25. Let M and N be closed linear subspaces of a Hilbert space H. We define the
orthogonal direct sum, or simply the direct sum, $M \oplus N$ of M and N by
\[ M \oplus N = \{ y + z : y \in M \text{ and } z \in N \}. \]
Definition 1.26. A subset U of nonzero vectors in a Hilbert space H is orthogonal if any two
distinct elements in U are orthogonal. A set of vectors U is orthonormal if it is orthogonal and
$\|u\| = 1$ for all $u \in U$.
1.7 Some properties of Hilbert spaces
Theorem 1.27. The orthogonal complement of a subset of a Hilbert space is a closed linear
subspace.
Proof. Let H be a Hilbert space and A a subset of H. If $y, z \in A^{\perp}$ and $\lambda, \mu \in \mathbb{R}$, then the
linearity of the inner product implies that
\[ \langle x, \lambda y + \mu z \rangle = \lambda \langle x, y \rangle + \mu \langle x, z \rangle = 0 \]
for all $x \in A$.
Therefore $\lambda y + \mu z \in A^{\perp}$, so $A^{\perp}$ is a linear subspace.
To show that $A^{\perp}$ is closed, we show that if $(y_n)_{n\geq 1}$ is a convergent sequence in $A^{\perp}$, then the limit
y also belongs to $A^{\perp}$. Let $x \in A$; then by continuity of the inner product we have
\[ \langle x, y \rangle = \Big\langle x, \lim_{n\to\infty} y_n \Big\rangle = \lim_{n\to\infty} \langle x, y_n \rangle = 0, \]
since $\langle x, y_n \rangle = 0$ for every $x \in A$ and $y_n \in A^{\perp}$. Hence $y \in A^{\perp}$.
Theorem 1.28. Let M be a closed linear subspace of a Hilbert space H.
(a) For every $x \in H$ there is a unique closest point $y \in M$, i.e. one such that
\[ \|x - y\| = \min_{z \in M} \|x - z\|. \]
(b) The point $y \in M$ closest to $x \in H$ is the unique element of M with the property that $(x - y) \perp M$.
Proof. (a) Let d be the distance of x from M, i.e.
\[ d = \inf\{ \|x - z\| : z \in M \}. \]
First, we prove that there is a closest point $y \in M$ at which this infimum is attained, meaning
that $\|x - y\| = d$. From the definition of d, there is a sequence of elements $y_n \in M$ such that
\[ \lim_{n\to\infty} \|x - y_n\| = d. \]
Thus, for any $\varepsilon > 0$, there is an N such that
\[ \|x - y_n\| \leq d + \varepsilon \quad \text{when } n \geq N. \]
We show that the sequence $(y_n)_{n\geq 1}$ is Cauchy. From the parallelogram law, we have
\[ \|y_m - y_n\|^2 + \|2x - y_m - y_n\|^2 = 2\|x - y_m\|^2 + 2\|x - y_n\|^2. \]
Since $(y_m + y_n)/2 \in M$, it follows that $\|x - (y_m + y_n)/2\| \geq d$, i.e. $\|2x - y_m - y_n\|^2 \geq 4d^2$. Thus for all $m, n \geq N$,
\[ \|y_m - y_n\|^2 = 2\|x - y_m\|^2 + 2\|x - y_n\|^2 - \|2x - y_m - y_n\|^2 \leq 4(d + \varepsilon)^2 - 4d^2 = 4\varepsilon(2d + \varepsilon). \]
Therefore $(y_n)_{n\geq 1}$ is Cauchy. Since a Hilbert space is complete, there is a y such that $y_n \to y$, and
since M is closed, we have $y \in M$. By continuity of the norm we have
\[ \|x - y\| = \lim_{n\to\infty} \|x - y_n\| = d. \]
We now prove the uniqueness of the vector $y \in M$ that minimizes $\|x - y\|$. Suppose that y and y' both
minimize the distance to x, meaning that $\|x - y\| = d$ and $\|x - y'\| = d$.
Then the parallelogram law implies that
\[ 2\|x - y\|^2 + 2\|x - y'\|^2 = \|2x - y - y'\|^2 + \|y - y'\|^2. \]
Since $(y + y')/2 \in M$,
\[ \|y - y'\|^2 = 4d^2 - 4\|x - (y + y')/2\|^2 \leq 0. \]
Therefore $\|y - y'\| = 0$, so that $y = y'$.
(b) We show that the unique $y \in M$ found above satisfies
the condition that the vector $x - y$ is orthogonal to M. Since y minimizes the distance to x, and $y - \lambda z \in M$, we
have for every $\lambda \in \mathbb{C}$ and $z \in M$ that
\[ \|x - y\|^2 \leq \|x - y + \lambda z\|^2. \]
Expanding the right-hand side of this equation, we obtain
\[ -2\,\mathrm{Re}\,\langle x - y, \lambda z \rangle \leq |\lambda|^2 \|z\|^2. \]
Suppose that $\langle x - y, z \rangle = |\langle x - y, z \rangle| e^{i\varphi}$. Choosing $\lambda = -\varepsilon e^{i\varphi}$, where $\varepsilon > 0$, and dividing by $\varepsilon$, we
get
\[ 2|\langle x - y, z \rangle| \leq \varepsilon \|z\|^2. \]
Taking the limit as $\varepsilon \to 0^+$, we get $\langle x - y, z \rangle = 0$, so $(x - y) \perp M$.
Finally, we show that y is the only element in M such that $(x - y) \perp M$. Suppose that y' is
another such element in M. Then $y - y' \in M$, and for any $z \in M$ we have
\[ \langle z, y - y' \rangle = \langle z, x - y' \rangle - \langle z, x - y \rangle = 0. \]
In particular, we may take $z = y - y'$, and therefore $y = y'$.
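In finite dimensions the closest point $y \in M$ and the orthogonality $(x - y) \perp M$ can be computed by least squares. The sketch below (ours, with an arbitrarily chosen subspace) takes M to be the span of two vectors in $\mathbb{R}^5$.

# Minimal sketch: closest point in M = span{m1, m2} to x, and x - y orthogonal to M.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 2))                       # columns m1, m2 span the closed subspace M
x = rng.normal(size=5)

coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)    # minimizes ||x - A c|| over c
y = A @ coeffs                                    # the closest point y in M
assert np.allclose(A.T @ (x - y), 0.0)            # (x - y) is orthogonal to M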
Definition 1.29. Let $\Omega$ be an open set in $\mathbb{R}^n$; then
\[ L^p(\Omega) = \Big\{ f : \Omega \to \mathbb{R} \text{ measurable} : \int_\Omega |f|^p\, dx < \infty \Big\}, \quad 1 \leq p < \infty. \]
Definition 1.30. Let $\Omega$ be an open set in $\mathbb{R}^n$ and $m \in \mathbb{N}$. The Sobolev space $H^m(\Omega)$ is defined by
\[ H^m(\Omega) = \{ f \in L^2(\Omega) : D^\alpha f \in L^2(\Omega),\ \alpha \in \mathbb{N}^n,\ |\alpha| \leq m \}, \]
where $D^\alpha f = \dfrac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}$, $|\alpha| = \alpha_1 + \cdots + \alpha_n$, and
\[ \|u\|_{H^m(\Omega)} = \|u\|_{L^2} + \sum_{\alpha : |\alpha| \leq m} \|D^\alpha u\|_{L^2}, \quad u \in H^m(\Omega). \]
Definition 1.31. Let $\varphi : \Omega \to \mathbb{R}$ be continuous; then the support of $\varphi$ is defined by
\[ \mathrm{Supp}(\varphi) = \overline{\{ x \in \Omega : \varphi(x) \neq 0 \}}. \]
Definition 1.32. $D(\Omega)$, the space of test functions, is defined by
\[ D(\Omega) = \{ f \in C^\infty(\Omega) : \mathrm{Supp}(f) \text{ is compact in } \Omega \}. \]
Definition 1.33. A distribution is a continuous linear map $T : D(\Omega) \to \mathbb{R}$ such that
\[ \lim_{n\to\infty} T(\varphi_n) = T(\varphi) \]
for any sequence $\varphi_n \xrightarrow{D(\Omega)} \varphi$.
The space of distributions on $\Omega$ is denoted by $D'(\Omega)$.
Definition 1.34. A sequence of distributions $T_n \in D'(\Omega)$ is said to converge in the sense of
distributions to $T \in D'(\Omega)$ if for every test function $\varphi \in D(\Omega)$ one has
\[ \lim_{n\to\infty} \langle T_n, \varphi \rangle = \langle T, \varphi \rangle. \]
1.7.1 Examples of Hilbert spaces
(a) $L^2(\Omega)$ equipped with the norm $\|f\|_{L^2(\Omega)} = \big( \int_\Omega |f|^2\, dx \big)^{1/2}$ is Hilbert.
(b) $H^1(\Omega)$ equipped with the norm $\|u\|^2_{H^1(\Omega)} = \int_\Omega u^2\, dx + \int_\Omega |\nabla u|^2\, dx$ is Hilbert, where
\[ H^1(\Omega) = \Big\{ f \in L^2(\Omega) : \frac{\partial f}{\partial x_i} \in L^2(\Omega) \Big\}. \]
Proof. (a) Let $(f_n)_{n\geq 1}$ be a Cauchy sequence in $L^2(\Omega)$; then we can find a subsequence $(f_{n_k})_{k\geq 1}$
such that
\[ \|f_{n_k} - f_{n_{k+1}}\| < \frac{1}{2^k}, \quad k = 1, 2, 3, \ldots \]
Choose a function $g \in L^2(\Omega)$. By the Schwarz inequality,
\[ \int_\Omega |g(f_{n_k} - f_{n_{k+1}})|\, dx \leq \frac{\|g\|}{2^k}. \]
Hence
\[ \sum_{k=1}^{\infty} \int_\Omega |g(f_{n_k} - f_{n_{k+1}})|\, dx \leq \|g\|. \]
Thus
\[ |g(x)| \sum_{k=1}^{\infty} |f_{n_k}(x) - f_{n_{k+1}}(x)| < +\infty \]
almost everywhere on $\Omega$, and therefore
\[ \sum_{k=1}^{\infty} |f_{n_k}(x) - f_{n_{k+1}}(x)| < +\infty \]
almost everywhere on $\Omega$.
Since the k-th partial sum of the series $\sum_{k=1}^{\infty} (f_{n_k}(x) - f_{n_{k+1}}(x))$, which converges almost everywhere
on $\Omega$, is $f_{n_1}(x) - f_{n_{k+1}}(x)$, the limit
\[ f(x) = \lim_{k\to\infty} f_{n_k}(x) \]
exists almost everywhere on $\Omega$.
Let $\varepsilon > 0$ be given; there exists $N_0 \in \mathbb{N}$ such that $\|f_{n_j} - f_{n_k}\| \leq \varepsilon$ for all $j, k \geq N_0$. By Fatou's lemma,
\[ \|f - f_{n_k}\| \leq \liminf_{j\to\infty} \|f_{n_j} - f_{n_k}\| \leq \varepsilon \quad \text{for } k \geq N_0. \]
Thus $f - f_{n_k} \in L^2(\Omega)$, and since $f = (f - f_{n_k}) + f_{n_k}$, we see that $f \in L^2(\Omega)$.
Also, since $\varepsilon$ is arbitrary,
\[ \lim_{k\to\infty} \|f - f_{n_k}\| = 0. \]
Finally, the inequality
\[ \|f - f_n\| \leq \|f - f_{n_k}\| + \|f_{n_k} - f_n\| \]
shows that $(f_n)_{n\geq 1}$ converges to f in $L^2(\Omega)$. Hence $L^2(\Omega)$ is Hilbert.
(b) Let $(u_n)_{n\geq 1}$ be a Cauchy sequence in $H^1(\Omega)$; then given $\varepsilon > 0$ there exists $n_0 \in \mathbb{N}$ such that
for all $m, n > n_0$,
\[ \|u_n - u_m\|_{H^1(\Omega)} < \varepsilon, \]
which implies that
\[ \Big( \int_\Omega |u_n - u_m|^2\, dx + \int_\Omega |\nabla u_n - \nabla u_m|^2\, dx \Big)^{1/2} < \varepsilon. \quad (**) \]
Thus $(u_n)_{n\geq 1}$ is a Cauchy sequence in $L^2(\Omega)$ and $(\nabla u_n)_{n\geq 1}$ is also a Cauchy sequence in $L^2(\Omega)$.
Since $L^2(\Omega)$ is complete,
\[ u_n \to u \in L^2(\Omega) \quad \text{and} \quad \nabla u_n \to w \in L^2(\Omega). \]
We need to show that $\nabla u = w$.
But
\[ u_n \to u \text{ in } L^2(\Omega) \implies u_n \to u \text{ in } D'(\Omega), \]
thus
\[ \nabla u_n \to \nabla u \text{ in } D'(\Omega). \]
By uniqueness of limits in $D'(\Omega)$, we have $\nabla u = w$.
From (**), fix n and let $m \to \infty$; we obtain
\[ \Big( \int_\Omega |u_n - u|^2\, dx + \int_\Omega |\nabla u_n - \nabla u|^2\, dx \Big)^{1/2} \leq \varepsilon, \]
thus $u_n \to u$ in $H^1(\Omega)$. Hence $H^1(\Omega)$ is Hilbert.
Theorem 1.35. (Riesz theorem) Let H be a Hilbert space over $\mathbb{R}$ or $\mathbb{C}$. If T is a bounded linear
functional on H, i.e. T is a bounded operator from H to the field $\mathbb{R}$ or $\mathbb{C}$, then there exists some
$g \in H$ such that for every $f \in H$ we have
\[ T(f) = \langle f, g \rangle. \quad \text{Moreover, } \|T\| = \|g\|. \]
Proof. We can choose an orthonormal basis $\varphi_j$, $j \geq 1$, for H (here we assume H is separable, so that a countable orthonormal basis exists). Let T be a bounded linear
functional and set $a_j = T(\varphi_j)$. Choose $f \in H$, let $c_j = \langle f, \varphi_j \rangle$ and define
\[ f_n = \sum_{j=1}^{n} c_j \varphi_j. \]
Since the $\varphi_j$ form a basis, we know that $\|f_n - f\| \to 0$ as $n \to \infty$.
Since T is linear we have
\[ T(f_n) = \sum_{j=1}^{n} a_j c_j. \quad (1) \]
Since T is bounded, say with norm $\|T\| < \infty$, we have
\[ |T(f_n) - T(f)| \leq \|T\| \|f_n - f\|. \quad (2) \]
Because $\|f_n - f\| \to 0$ as $n \to \infty$, we conclude from equations (1) and (2) that
\[ T(f) = \lim_{n\to\infty} T(f_n) = \sum_{j=1}^{\infty} a_j c_j. \quad (3) \]
In fact, the sequence $(a_j)$ must itself be square-summable. To see this, first note that since $|T(f)| \leq \|T\| \|f\|$, we have
\[ \Big| \sum_{j=1}^{\infty} a_j c_j \Big| \leq \|T\| \Big( \sum_{j=1}^{\infty} c_j^2 \Big)^{1/2}. \quad (4) \]
Equation (4) must hold for any square-summable sequence $(c_j)$ (since any such $(c_j)$ corresponds to some
element of H). Fix a positive integer N and define a sequence $c_j = a_j$ for $j \leq N$, $c_j = 0$ for $j > N$.
Clearly such a sequence is square-summable, and equation (4) then yields
\[ \Big| \sum_{j=1}^{N} a_j^2 \Big| \leq \|T\| \Big( \sum_{j=1}^{N} a_j^2 \Big)^{1/2}, \]
or
\[ \Big( \sum_{j=1}^{N} a_j^2 \Big)^{1/2} \leq \|T\|. \quad (5) \]
Thus $(a_j)$ is square-summable, the function $g = \sum_j a_j \varphi_j$ is well defined as an element of H, and
\[ T(f) = \sum_j a_j c_j = \langle f, g \rangle. \]
Finally, equation (5) makes it clear that $\|g\| \leq \|T\|$. But from the Cauchy-Schwarz inequality we also have
$|T(f)| = |\langle f, g \rangle| \leq \|f\| \|g\|$, implying $\|T\| \leq \|g\|$, so $\|T\| = \|g\|$.
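A finite dimensional analogue of the construction in the proof (our sketch, with an arbitrary example functional): given a bounded linear functional T on $\mathbb{R}^n$ and the standard orthonormal basis $(\varphi_j)$, the representer is $g = \sum_j T(\varphi_j)\varphi_j$, and $\|T\| = \|g\|$.

# Minimal sketch: Riesz representation in R^n with the standard orthonormal basis.
import numpy as np

rng = np.random.default_rng(5)
n = 6
w = rng.normal(size=n)
T = lambda f: float(np.dot(w, f))            # a bounded linear functional on R^n

phi = np.eye(n)                              # orthonormal basis phi_j = e_j
a = np.array([T(phi[j]) for j in range(n)])  # a_j = T(phi_j)
g = phi.T @ a                                # g = sum_j a_j phi_j

f = rng.normal(size=n)
assert np.isclose(T(f), np.dot(f, g))        # T(f) = <f, g>

fhat = w / np.linalg.norm(w)                 # unit vector achieving the supremum
assert np.isclose(abs(T(fhat)), np.linalg.norm(g))   # ||T|| = ||g||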