Applied linear operators and spectral methods/Lecture 3

Review

In the last lecture we talked about norms in inner product spaces. The induced norm was defined as

<math>\lVert\mathbf{x}\rVert = \sqrt{\langle\mathbf{x},\mathbf{x}\rangle}\,, \qquad \text{i.e.,} \qquad \langle\mathbf{x},\mathbf{x}\rangle = \lVert\mathbf{x}\rVert^2.</math>

We also talked about orthonormal bases and biorthonormal bases. The biorthonormal bases may be thought of as dual bases in the sense that covariant and contravariant vector bases are dual.

The last thing we talked about was the idea of a linear operator. Recall that

<math>\boldsymbol{A}\varphi_j = \sum_i A_{ij}\,\varphi_i \equiv A_{ij}\,\varphi_i</math>

where the summation is on the first index.
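
For instance, in the standard basis of <math>\mathbb{R}^3</math> the coefficients <math>A_{ij}</math> are just the matrix entries, and applying the operator to the <math>j</math>-th basis vector returns the <math>j</math>-th column. A minimal NumPy sketch with an arbitrary example matrix:

<syntaxhighlight lang="python">
import numpy as np

# A small concrete matrix; its (i, j) entry is the coefficient A_ij
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Applying A to the j-th standard basis vector e_j picks out the j-th column,
# i.e., A e_j = sum_i A_ij e_i (summation on the first index)
j = 1
e_j = np.zeros(3)
e_j[j] = 1.0
print(np.allclose(A @ e_j, A[:, j]))  # True
</syntaxhighlight>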

In this lecture we will learn about adjoint operators, Jacobi tridiagonalization, and a bit about the spectral theory of matrices.

Adjoint operator

Assume that we have a vector space with an orthonormal basis. Then

<math>\langle\boldsymbol{A}\varphi_j, \varphi_i\rangle = \Big\langle\sum_k A_{kj}\varphi_k, \varphi_i\Big\rangle = \sum_k A_{kj}\,\delta_{ki} = A_{ij}</math>

One specific matrix connected with 𝑨 is the Hermitian conjugate matrix. This matrix is defined as

<math>A^*_{ij} = \overline{A}_{ji}</math>

The linear operator <math>\boldsymbol{A}^*</math> associated with the Hermitian conjugate matrix is called the adjoint operator and is defined by

<math>\boldsymbol{A}^* = \overline{\boldsymbol{A}}^{\,T}</math>

Therefore,

<math>\langle\boldsymbol{A}^*\varphi_j, \varphi_i\rangle = \overline{A}_{ji}</math>

and

<math>\langle\varphi_i, \boldsymbol{A}^*\varphi_j\rangle = A_{ji} = \langle\boldsymbol{A}\varphi_i, \varphi_j\rangle</math>

More generally, if

<math>\mathbf{f} = \sum_i \alpha_i\,\varphi_i \qquad \text{and} \qquad \mathbf{g} = \sum_j \beta_j\,\varphi_j</math>

then

<math>\langle\mathbf{f}, \boldsymbol{A}^*\mathbf{g}\rangle = \sum_{i,j} \alpha_i\,\overline{\beta}_j\,\langle\varphi_i, \boldsymbol{A}^*\varphi_j\rangle = \sum_{i,j} \alpha_i\,\overline{\beta}_j\,\langle\boldsymbol{A}\varphi_i, \varphi_j\rangle = \langle\boldsymbol{A}\mathbf{f}, \mathbf{g}\rangle</math>

Since the above relation does not involve the basis, we see that the adjoint operator is also basis independent.
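
A minimal numerical sketch of this relation, taking the adjoint to be the conjugate transpose in an orthonormal basis and using arbitrary random data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Arbitrary complex matrix and vectors (illustrative data only)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# In an orthonormal basis the adjoint is the conjugate transpose
A_star = A.conj().T


def inner(u, v):
    """Inner product <u, v>, linear in the first slot: sum_i u_i * conj(v_i)."""
    return np.vdot(v, u)  # np.vdot conjugates its first argument


lhs = inner(f, A_star @ g)   # <f, A* g>
rhs = inner(A @ f, g)        # <A f, g>
print(np.isclose(lhs, rhs))  # True
</syntaxhighlight>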

Self-adjoint/Hermitian matrices

If <math>\boldsymbol{A}^* = \boldsymbol{A}</math> we say that <math>\boldsymbol{A}</math> is self-adjoint, i.e., <math>A_{ij} = \overline{A}_{ji}</math> in any orthonormal basis, and the matrix <math>A_{ij}</math> is said to be Hermitian.

Anti-Hermitian matrices

A matrix 𝐁 is anti-Hermitian if

<math>B_{ij} = -\overline{B}_{ji}</math>

There is a close connection between Hermitian and anti-Hermitian matrices. Consider the matrix <math>\boldsymbol{A} = i\boldsymbol{B}</math>. Then

<math>A_{ij} = i B_{ij} = -i\,\overline{B}_{ji} = \overline{i B_{ji}} = \overline{A}_{ji},</math>

so <math>\boldsymbol{A}</math> is Hermitian.
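
A short NumPy check of this connection, using an arbitrary random matrix to build an anti-Hermitian <math>\boldsymbol{B}</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# B = (M - M^H)/2 is anti-Hermitian: B = -B^H
B = (M - M.conj().T) / 2
A = 1j * B  # multiplying by i should give a Hermitian matrix

print(np.allclose(B, -B.conj().T))  # True: B is anti-Hermitian
print(np.allclose(A, A.conj().T))   # True: A = iB is Hermitian
</syntaxhighlight>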

Jacobi Tridiagonalization

Let 𝑨 be self-adjoint and suppose that we want to solve

<math>(\boldsymbol{I} - \lambda\boldsymbol{A})\,\mathbf{y} = \mathbf{b}</math>

where λ is constant. We expect that

<math>\mathbf{y} = (\boldsymbol{I} - \lambda\boldsymbol{A})^{-1}\,\mathbf{b}</math>

If <math>\lambda\,\lVert\boldsymbol{A}\rVert</math> is "sufficiently" small, then

<math>\mathbf{y} = (\boldsymbol{I} + \lambda\boldsymbol{A} + \lambda^2\boldsymbol{A}^2 + \lambda^3\boldsymbol{A}^3 + \cdots)\,\mathbf{b}</math>

This suggests that the solution should lie in the subspace spanned by <math>\mathbf{b}, \boldsymbol{A}\mathbf{b}, \boldsymbol{A}^2\mathbf{b}, \ldots</math>.
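
A minimal sketch of this expansion, assuming a small random self-adjoint <math>\boldsymbol{A}</math> and a <math>\lambda</math> chosen so that <math>\lambda\,\lVert\boldsymbol{A}\rVert < 1</math>, comparing the truncated series with a direct solve:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Small random self-adjoint A and right-hand side b (illustrative data)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
b = rng.standard_normal(n)

# Choose lambda small enough that the series converges
lam = 0.1 / np.linalg.norm(A, 2)

# Direct solution of (I - lambda A) y = b
y_exact = np.linalg.solve(np.eye(n) - lam * A, b)

# Truncated series y ~ (I + lam A + lam^2 A^2 + ...) b
y_series = np.zeros(n)
term = b.copy()
for _ in range(20):
    y_series += term
    term = lam * (A @ term)

print(np.allclose(y_exact, y_series))  # True to high accuracy
</syntaxhighlight>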

Let us apply the Gram-Schmidt orthogonalization procedure where

<math>\mathbf{x}_1 = \mathbf{b}, \quad \mathbf{x}_2 = \boldsymbol{A}\mathbf{b}, \quad \mathbf{x}_3 = \boldsymbol{A}^2\mathbf{b}, \quad \ldots</math>

Then we have

<math>\varphi_n = \boldsymbol{A}^{n-1}\mathbf{b} - \sum_{j=1}^{n-1} \frac{\langle\boldsymbol{A}^{n-1}\mathbf{b}, \varphi_j\rangle}{\lVert\varphi_j\rVert^2}\,\varphi_j</math>

This is clearly a linear combination of <math>(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)</math>. Therefore, <math>\boldsymbol{A}\varphi_n</math> is a linear combination of <math>\boldsymbol{A}(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n) = (\mathbf{x}_2, \mathbf{x}_3, \ldots, \mathbf{x}_{n+1})</math>. This is the same as saying that <math>\boldsymbol{A}\varphi_n</math> is a linear combination of <math>\varphi_1, \varphi_2, \ldots, \varphi_{n+1}</math>.

Therefore,

<math>\langle\boldsymbol{A}\varphi_n, \varphi_k\rangle = 0 \quad \text{if} \quad k > n+1</math>

Now,

<math>A_{kn} = \langle\boldsymbol{A}\varphi_n, \varphi_k\rangle</math>

But the self-adjointness of <math>\boldsymbol{A}</math> implies that

<math>A_{nk} = \overline{A}_{kn}</math>

So <math>A_{kn} = 0</math> if <math>k > n+1</math> or <math>n > k+1</math>. This is equivalent to expressing the operator <math>\boldsymbol{A}</math> as a tridiagonal matrix <math>\mathbf{A}</math> which has the form

<math>\mathbf{A} = \begin{bmatrix} \times & \times & 0 & \cdots & 0 \\ \times & \times & \times & \ddots & \vdots \\ 0 & \times & \times & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \times \\ 0 & \cdots & 0 & \times & \times \end{bmatrix}</math>

In general, the matrix can be represented in block tridiagonal form.
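
A minimal numerical sketch of the construction, orthonormalizing the Krylov vectors for an arbitrary random self-adjoint matrix and checking that the resulting representation of <math>\boldsymbol{A}</math> is tridiagonal. For stability the sketch generates each new direction by applying <math>\boldsymbol{A}</math> to the last orthonormal vector, which spans the same subspace as <math>\boldsymbol{A}^k\mathbf{b}</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Random self-adjoint matrix and starting vector (illustrative data)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
b = rng.standard_normal(n)

# Orthonormalize the Krylov subspace span{b, Ab, A^2 b, ...}
Q = [b / np.linalg.norm(b)]
for _ in range(n - 1):
    v = A @ Q[-1]
    for q in Q:                      # Gram-Schmidt projections
        v -= np.dot(v, q) * q
    Q.append(v / np.linalg.norm(v))

Q = np.array(Q).T                    # columns are phi_1, ..., phi_n

# Matrix of A in the new basis: T_kn = <A phi_n, phi_k>
T = Q.T @ A @ Q
off_band = np.triu(np.abs(T), 2) + np.tril(np.abs(T), -2)
print(np.allclose(off_band, 0))      # True: T is tridiagonal
</syntaxhighlight>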

Another consequence of the Gram-Schmidt orthogonalization is that

Lemma:

Every finite dimensional inner-product space has an orthonormal basis.

Proof:

The proof is trivial. Just use Gram-Schmidt on any basis for that space and normalize.

A corollary of this is the following theorem.

Theorem:

Every finite dimensional inner product space is complete.

Recall that a space is complete if the limit of any Cauchy sequence from that space must lie within the space.

Proof:

Let <math>\{\mathbf{u}_k\}</math>, <math>k = 1, 2, \ldots</math>, be a Cauchy sequence of elements in the subspace <math>\mathcal{S}_n</math>. Also let <math>\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}</math> be an orthonormal basis for the subspace <math>\mathcal{S}_n</math>.

Then

<math>\mathbf{u}_k = \sum_{j=1}^{n} \alpha_{kj}\,\mathbf{e}_j</math>

where

<math>\alpha_{ki} = \langle\mathbf{u}_k, \mathbf{e}_i\rangle</math>

By the Schwarz inequality

<math>|\alpha_{ki} - \alpha_{pi}| = |\langle\mathbf{u}_k, \mathbf{e}_i\rangle - \langle\mathbf{u}_p, \mathbf{e}_i\rangle| = |\langle\mathbf{u}_k - \mathbf{u}_p, \mathbf{e}_i\rangle| \le \lVert\mathbf{u}_k - \mathbf{u}_p\rVert \to 0</math>

Therefore,

<math>\alpha_{ki} - \alpha_{pi} \to 0</math>

But the <math>\alpha</math>'s are just numbers. So, for fixed <math>i</math>, <math>\{\alpha_{ki}\}</math> is a Cauchy sequence in <math>\mathbb{R}</math> (or <math>\mathbb{C}</math>) and so converges to a number <math>\alpha_i</math> as <math>k \to \infty</math>, i.e.,

<math>\lim_{k\to\infty} \mathbf{u}_k = \sum_{i=1}^{n} \alpha_i\,\mathbf{e}_i</math>

which is in the subspace <math>\mathcal{S}_n</math>.

Spectral theory for matrices

Suppose <math>\boldsymbol{A}\mathbf{x} = \mathbf{b}</math> is expressed in coordinates relative to some basis <math>\varphi_1, \varphi_2, \ldots, \varphi_n</math>, i.e.,

<math>\boldsymbol{A}\varphi_j = \sum_i A_{ij}\,\varphi_i\,; \qquad \mathbf{x} = \sum_i x_i\,\varphi_i\,; \qquad \mathbf{b} = \sum_i b_i\,\varphi_i</math>

Then

<math>\boldsymbol{A}\mathbf{x} = \boldsymbol{A}\sum_j x_j\,\varphi_j = \sum_j x_j\,(\boldsymbol{A}\varphi_j) = \sum_{i,j} x_j A_{ij}\,\varphi_i</math>

So 𝑨𝐱=𝐛 implies that

<math>\sum_j A_{ij}\,x_j = b_i</math>

Now let us see the effect of a change of basis to a new basis <math>\varphi'_1, \varphi'_2, \ldots, \varphi'_n</math> with

<math>\varphi'_i = \sum_{j=1}^{n} C_{ji}\,\varphi_j</math>

For the new basis to be linearly independent, 𝐂 should be invertible so that

<math>\varphi_j = \sum_{m=1}^{n} C^{-1}_{mj}\,\varphi'_m</math>

Now,

<math>\mathbf{x} = \sum_j x_j\,\varphi_j = \sum_{i,j} x_j\,C^{-1}_{ij}\,\varphi'_i = \sum_i x'_i\,\varphi'_i</math>

Hence

<math>x'_i = \sum_j C^{-1}_{ij}\,x_j</math>

Similarly,

<math>b'_i = \sum_j C^{-1}_{ij}\,b_j</math>

Therefore

<math>\boldsymbol{A}\varphi'_i = \sum_{j=1}^{n} C_{ji}\,\boldsymbol{A}\varphi_j = \sum_{j,k} C_{ji}\,A_{kj}\,\varphi_k = \sum_{j,k,m} C_{ji}\,A_{kj}\,C^{-1}_{mk}\,\varphi'_m = \sum_m A'_{mi}\,\varphi'_m</math>

So we have

<math>A'_{mi} = \sum_{j,k} C^{-1}_{mk}\,A_{kj}\,C_{ji}</math>

In matrix form,

<math>\mathbf{x}' = \mathbf{C}^{-1}\mathbf{x}\,; \qquad \mathbf{b}' = \mathbf{C}^{-1}\mathbf{b}\,; \qquad \mathbf{A}' = \mathbf{C}^{-1}\mathbf{A}\mathbf{C}</math>

where the objects here are not operators or vectors but rather the matrices and vectors representing them. They are therefore basis dependent.

In other words, the matrix equation <math>\mathbf{A}\mathbf{x} = \mathbf{b}</math> becomes <math>\mathbf{A}'\mathbf{x}' = \mathbf{b}'</math> in the new basis.
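
A quick numerical check of these transformation rules with arbitrary random matrices:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n = 4

# Arbitrary operator matrix, solution, and invertible change of basis
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x
C = rng.standard_normal((n, n))      # random C is invertible with probability one

Cinv = np.linalg.inv(C)
x_new = Cinv @ x                     # x' = C^{-1} x
b_new = Cinv @ b                     # b' = C^{-1} b
A_new = Cinv @ A @ C                 # A' = C^{-1} A C

# The matrix equation A x = b becomes A' x' = b' in the new basis
print(np.allclose(A_new @ x_new, b_new))  # True
</syntaxhighlight>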


Similarity transformation

The transformation

<math>\mathbf{A}' = \mathbf{C}^{-1}\mathbf{A}\mathbf{C}</math>

is called a similarity transformation. Two matrices are equivalent or similar if there is a similarity transformation between them.

Diagonalizing a matrix

Suppose we want to find a similarity transformation which makes 𝑨 diagonal, i.e.,

<math>\mathbf{A}' = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = \boldsymbol{\Lambda}</math>

Then,

<math>\mathbf{A}\mathbf{C} = \mathbf{C}\mathbf{A}' = \mathbf{C}\boldsymbol{\Lambda}</math>

Let us write <math>\mathbf{C}</math> (which is an <math>n \times n</math> matrix) in terms of its columns

<math>\mathbf{C} = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix}</math>

Then,

<math>\mathbf{A}\begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix} = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix}\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1\mathbf{x}_1 & \lambda_2\mathbf{x}_2 & \cdots & \lambda_n\mathbf{x}_n \end{bmatrix}</math>

i.e.,

<math>\mathbf{A}\mathbf{x}_i = \lambda_i\,\mathbf{x}_i</math>

The pair <math>(\lambda, \mathbf{x})</math> is said to be an eigenvalue pair if <math>\mathbf{A}\mathbf{x} = \lambda\mathbf{x}</math> with <math>\mathbf{x} \ne \mathbf{0}</math>, where <math>\mathbf{x}</math> is an eigenvector and <math>\lambda</math> is an eigenvalue.

Since <math>(\mathbf{A} - \lambda\mathbf{I})\,\mathbf{x} = \mathbf{0}</math> must have a nontrivial solution, <math>\lambda</math> is an eigenvalue if and only if

<math>\det(\mathbf{A} - \lambda\mathbf{I}) = 0</math>

The quantity on the left hand side is called the characteristic polynomial and has n roots (counting multiplicities).

In <math>\mathbb{C}</math> there is always one root. For that root <math>\mathbf{A} - \lambda\mathbf{I}</math> is singular, i.e., there always exists at least one eigenvector.
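
Numerically, this diagonalization is what an eigensolver produces. A minimal sketch using numpy.linalg.eig on a random matrix (which, with probability one, has distinct eigenvalues and is therefore diagonalizable):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n = 4

# Random matrix; with probability one its eigenvalues are distinct,
# so it has n linearly independent eigenvectors and is diagonalizable
A = rng.standard_normal((n, n))

eigvals, C = np.linalg.eig(A)        # columns of C are eigenvectors x_i
Lam = np.diag(eigvals)

# A x_i = lambda_i x_i, i.e., A C = C Lambda
print(np.allclose(A @ C, C @ Lam))                 # True

# The similarity transformation with C diagonalizes A
A_prime = np.linalg.inv(C) @ A @ C
print(np.allclose(A_prime, Lam))                   # True
</syntaxhighlight>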

We will delve a bit more into the spectral theory of matrices in the next lecture.