Applied linear operators and spectral methods/Lecture 4


More on spectral decompositions

In the course of the previous lecture we essentially proved the following theorem:

Theorem:

1) If an n×n matrix 𝐀 has n linearly independent real or complex eigenvectors, then 𝐀 can be diagonalized.
2) If 𝐓 is a matrix whose columns are the eigenvectors of 𝐀, then $\mathbf{T}^{-1}\mathbf{A}\mathbf{T} = \boldsymbol{\Lambda}$ is the diagonal matrix of eigenvalues.

The factorization $\mathbf{A} = \mathbf{T}\,\boldsymbol{\Lambda}\,\mathbf{T}^{-1}$ is called the spectral representation of 𝐀.
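As a quick illustration (a minimal sketch, not part of the original lecture: the example matrix and the use of NumPy are assumptions made here), the spectral representation can be checked numerically by stacking the computed eigenvectors as the columns of 𝐓:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative matrix (an assumption for this sketch): it has two distinct
# eigenvalues, 5 and 2, and hence two linearly independent eigenvectors.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvectors as the columns of T.
eigvals, T = np.linalg.eig(A)
Lam = np.diag(eigvals)

# T^{-1} A T is the diagonal matrix of eigenvalues ...
assert np.allclose(np.linalg.inv(T) @ A @ T, Lam)
# ... and A is recovered from its spectral representation A = T Lam T^{-1}.
assert np.allclose(T @ Lam @ np.linalg.inv(T), A)
</syntaxhighlight>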

Application

We can use the spectral representation to solve a system of linear homogeneous ordinary differential equations.

For example, we could wish to solve the system

\[ \frac{d\mathbf{u}}{dt} = \mathbf{A}\mathbf{u} = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \]

(More generally, 𝐀 could be an n×n matrix.)

Comment:

Higher-order ordinary differential equations can be reduced to this form. For example,

\[ \frac{d^2 u_1}{dt^2} + a\,\frac{du_1}{dt} = b\,u_1 \]

Introduce

\[ u_2 = \frac{du_1}{dt} \]

Then the system of equations is

\[ \frac{du_1}{dt} = u_2 ~;\qquad \frac{du_2}{dt} = b\,u_1 - a\,u_2 \]

or,

\[ \frac{d\mathbf{u}}{dt} = \begin{bmatrix} 0 & 1 \\ b & -a \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \mathbf{A}\mathbf{u} \]
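The reduction is easy to check numerically. The following sketch (illustrative only; the coefficients a = 3, b = −2, the initial conditions, and the use of SciPy are assumptions made for this example) integrates the first-order system and compares $u_1$ with the analytic solution of the original second-order equation:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coefficients and initial data (assumptions for this sketch).
a, b = 3.0, -2.0               # second-order ODE: u1'' + a u1' = b u1
u1_0, u2_0 = 1.0, 0.0          # u1(0) and u2(0) = u1'(0)

# First-order system du/dt = A u obtained from the reduction above.
A = np.array([[0.0, 1.0],
              [b,  -a]])

sol = solve_ivp(lambda t, u: A @ u, (0.0, 5.0), [u1_0, u2_0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Analytic solution of u1'' + 3 u1' + 2 u1 = 0 with u1(0) = 1, u1'(0) = 0:
# roots r = -1, -2, so u1(t) = 2 e^{-t} - e^{-2t}.
t = np.linspace(0.0, 5.0, 50)
u1_exact = 2.0 * np.exp(-t) - np.exp(-2.0 * t)

assert np.allclose(sol.sol(t)[0], u1_exact, atol=1e-6)
</syntaxhighlight>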

Returning to the original problem, let us find the eigenvalues and eigenvectors of 𝐀. The characteristic equation is

\[ \det(\mathbf{A} - \lambda\mathbf{I}) = 0 \]

so we can calculate the eigenvalues as

\[ (2+\lambda)(2+\lambda) - 1 = 0 \quad\Longrightarrow\quad \lambda^2 + 4\lambda + 3 = 0 \quad\Longrightarrow\quad \lambda_1 = -1,\quad \lambda_2 = -3 \]

The eigenvectors are given by

\[ (\mathbf{A} - \lambda_1\mathbf{I})\,\mathbf{n}_1 = \mathbf{0} ~;\qquad (\mathbf{A} - \lambda_2\mathbf{I})\,\mathbf{n}_2 = \mathbf{0} \]

or,

\[ -n_{11} + n_{21} = 0 ~;\qquad n_{11} - n_{21} = 0 ~;\qquad n_{12} + n_{22} = 0 ~;\qquad n_{12} + n_{22} = 0 \]

Possible choices of 𝐧1 and 𝐧2 are

\[ \mathbf{n}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} ~;\qquad \mathbf{n}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]

The matrix 𝐓 is the one whose columns are the eigenvectors of 𝐀, i.e.,

\[ \mathbf{T} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \]

and

\[ \boldsymbol{\Lambda} = \mathbf{T}^{-1}\mathbf{A}\mathbf{T} = \begin{bmatrix} -1 & 0 \\ 0 & -3 \end{bmatrix} \]
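A quick numerical check of this diagonalization (a sketch only; the matrices are exactly those of the example, while the use of NumPy is an assumption):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
T = np.array([[1.0,  1.0],
              [1.0, -1.0]])   # columns are the eigenvectors n1 and n2

# T^{-1} A T reproduces the diagonal matrix of eigenvalues.
Lam = np.linalg.inv(T) @ A @ T
assert np.allclose(Lam, np.diag([-1.0, -3.0]))
</syntaxhighlight>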

If 𝐮=𝐓𝐮' the system of equations becomes

\[ \frac{d\mathbf{u}'}{dt} = \mathbf{T}^{-1}\mathbf{A}\mathbf{T}\,\mathbf{u}' = \boldsymbol{\Lambda}\mathbf{u}' \]

Expanded out

\[ \frac{du_1'}{dt} = -u_1' ~;\qquad \frac{du_2'}{dt} = -3\,u_2' \]

The solutions of these equations are

\[ u_1' = C_1 e^{-t} ~;\qquad u_2' = C_2 e^{-3t} \]

Therefore,

\[ \mathbf{u} = \mathbf{T}\mathbf{u}' = \begin{bmatrix} C_1 e^{-t} + C_2 e^{-3t} \\ C_1 e^{-t} - C_2 e^{-3t} \end{bmatrix} \]

This is the solution of the system of ODEs that we seek.
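The solution formula can also be verified against the matrix exponential, which gives the exact solution of $d\mathbf{u}/dt = \mathbf{A}\mathbf{u}$ (a sketch only; the particular constants $C_1$, $C_2$ and the use of SciPy's expm are assumptions made for the check):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
C1, C2 = 2.0, -1.0             # arbitrary constants (an assumption for the check)

def u(t):
    # Solution from the lecture: u = T u' with u1' = C1 e^{-t}, u2' = C2 e^{-3t}.
    return np.array([C1 * np.exp(-t) + C2 * np.exp(-3.0 * t),
                     C1 * np.exp(-t) - C2 * np.exp(-3.0 * t)])

# expm(A t) u(0) is the exact solution with the same initial condition,
# so the two expressions must agree for every t.
for t in np.linspace(0.0, 2.0, 5):
    assert np.allclose(u(t), expm(A * t) @ u(0.0))
</syntaxhighlight>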

Most "generic" matrices have linearly independent eigenvectors. Generally a matrix will have n distinct eigenvalues unless there are symmetries that lead to repeated values.

Theorem

If 𝐀 has k distinct eigenvalues then it has k linearly independent eigenvectors.

Proof:

We prove this by induction.

Let $\mathbf{n}_j$ be an eigenvector corresponding to the eigenvalue $\lambda_j$. Suppose $\mathbf{n}_1, \mathbf{n}_2, \dots, \mathbf{n}_{k-1}$ are linearly independent (note that this is true for $k = 2$). Suppose also, for the sake of contradiction, that there exist $\alpha_1, \alpha_2, \dots, \alpha_k$, not all zero, such that

\[ \alpha_1\mathbf{n}_1 + \alpha_2\mathbf{n}_2 + \dots + \alpha_k\mathbf{n}_k = \mathbf{0} \]

Let us multiply the above by $(\mathbf{A} - \lambda_k\mathbf{I})$. Then, since $\mathbf{A}\mathbf{n}_i = \lambda_i\mathbf{n}_i$, we have

\[ \alpha_1(\lambda_1 - \lambda_k)\mathbf{n}_1 + \alpha_2(\lambda_2 - \lambda_k)\mathbf{n}_2 + \dots + \alpha_{k-1}(\lambda_{k-1} - \lambda_k)\mathbf{n}_{k-1} + \alpha_k(\lambda_k - \lambda_k)\mathbf{n}_k = \mathbf{0} \]

Since the eigenvalues are distinct, $\lambda_j - \lambda_k \neq 0$ for $j = 1, \dots, k-1$, and since $\mathbf{n}_1, \dots, \mathbf{n}_{k-1}$ are linearly independent, the above holds only when

\[ \alpha_1 = \alpha_2 = \dots = \alpha_{k-1} = 0 \]

In that case we must have

\[ \alpha_k\mathbf{n}_k = \mathbf{0} \quad\Longrightarrow\quad \alpha_k = 0 \]

since an eigenvector cannot be the zero vector. This contradicts the assumption that the $\alpha_i$ are not all zero.

Therefore $\mathbf{n}_1, \mathbf{n}_2, \dots, \mathbf{n}_k$ are linearly independent.
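The theorem is easy to illustrate numerically (a sketch only; the triangular example matrix and the use of NumPy are assumptions made here): a matrix with distinct eigenvalues has eigenvectors that form a full-rank matrix.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative matrix (an assumption): upper triangular, so its eigenvalues
# 1, 2 and 3 can be read off the diagonal and are clearly distinct.
A = np.array([[1.0, 5.0, -2.0],
              [0.0, 2.0,  4.0],
              [0.0, 0.0,  3.0]])

eigvals, V = np.linalg.eig(A)   # columns of V are eigenvectors

# Distinct eigenvalues imply linearly independent eigenvectors, i.e. the
# matrix whose columns are the eigenvectors has full rank.
assert len(set(np.round(eigvals, 8))) == 3
assert np.linalg.matrix_rank(V) == 3
</syntaxhighlight>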

Another important class of matrices that can be diagonalized is the class of self-adjoint matrices.

Theorem

If 𝑨 is self-adjoint, the following statements are true:


  1. $\langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle$ is real for all $\mathbf{x}$.
  2. All eigenvalues are real.
  3. Eigenvectors of distinct eigenvalues are orthogonal.
  4. There is an orthonormal basis formed by the eigenvectors.
  5. The matrix 𝑨 can be diagonalized (this is a consequence of the previous statement).


Proof

1) Because the matrix is self-adjoint we have

\[ \langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle = \langle \mathbf{x}, \boldsymbol{A}\mathbf{x} \rangle \]

From the conjugate-symmetry property of the inner product we have

\[ \langle \mathbf{x}, \boldsymbol{A}\mathbf{x} \rangle = \overline{\langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle} \]

Therefore,

\[ \langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle = \overline{\langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle} \]

which implies that $\langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle$ is real.

2) We know that $\langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle$ is real and that $\langle \mathbf{x}, \mathbf{x} \rangle$ is real. Also, from the eigenvalue problem, we have

\[ \langle \boldsymbol{A}\mathbf{x}, \mathbf{x} \rangle = \lambda \langle \mathbf{x}, \mathbf{x} \rangle \]

Therefore, λ is real.

3) If (λ,𝐱) and (μ,𝐲) are two eigenpairs then

\[ \lambda \langle \mathbf{x}, \mathbf{y} \rangle = \langle \boldsymbol{A}\mathbf{x}, \mathbf{y} \rangle \]

Since the matrix is self-adjoint, we have

\[ \lambda \langle \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, \boldsymbol{A}\mathbf{y} \rangle = \mu \langle \mathbf{x}, \mathbf{y} \rangle \]

Therefore, if $\lambda - \mu \neq 0$, we must have

\[ \langle \mathbf{x}, \mathbf{y} \rangle = 0 \]

Hence the eigenvectors are orthogonal.

4) This part is a bit more involved. We need to define a manifold first.

Linear manifold

A linear manifold (or vector subspace) $\mathcal{M}$ of a vector space $\mathcal{S}$ is a subset of $\mathcal{S}$ which is closed under scalar multiplication and vector addition.

Examples are a line through the origin of n-dimensional space, a plane through the origin, the whole space, the zero vector, etc.

Invariant manifold

An invariant manifold $\mathcal{M}$ for the matrix 𝑨 is a linear manifold for which $\mathbf{x} \in \mathcal{M}$ implies $\boldsymbol{A}\mathbf{x} \in \mathcal{M}$.

Examples are the null space and range of a matrix 𝑨. For the case of a rotation about an axis through the origin in three-dimensional space, invariant manifolds are the origin, the plane perpendicular to the axis, the whole space, and the axis itself.

Therefore, if $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_m$ are a basis for $\mathcal{M}$ and $\mathbf{x}_{m+1}, \dots, \mathbf{x}_n$ are a basis for $\mathcal{M}^{\perp}$ (the orthogonal complement of $\mathcal{M}$), then in this basis 𝑨 has the representation

\[ \boldsymbol{A} = \left[ \begin{array}{cc|cc} \times & \times & \times & \times \\ \times & \times & \times & \times \\ \hline 0 & 0 & \times & \times \\ 0 & 0 & \times & \times \end{array} \right] \]

The matrix must have this form (with a zero lower-left block) for $\mathcal{M}$ to be an invariant manifold of 𝑨.

Note that if $\mathcal{M}$ is an invariant manifold of 𝑨 it does not follow that $\mathcal{M}^{\perp}$ is also an invariant manifold.

Now, if 𝑨 is self-adjoint then the entries in the upper-right block must be zero too. In that case, 𝑨 is block diagonal in this basis.
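This structure can be illustrated with a small numerical sketch (the symmetric 3×3 matrix and the use of NumPy's eigh and qr routines are assumptions made for the example): choosing the first basis vector to span an invariant manifold of a self-adjoint matrix produces a block-diagonal representation.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative real symmetric (hence self-adjoint) matrix -- an assumption.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Take M = span of one eigenvector; it is an invariant manifold of A.
eigvals, V = np.linalg.eigh(A)
x1 = V[:, 0]

# Complete x1 to an orthonormal basis: the first column of Q spans M and the
# remaining columns span the perpendicular manifold.
Q, _ = np.linalg.qr(np.column_stack([x1, np.eye(3)[:, :2]]))

# In this basis the self-adjoint matrix is block diagonal: both off-diagonal
# blocks vanish.
B = Q.T @ A @ Q
assert np.allclose(B[0, 1:], 0.0) and np.allclose(B[1:, 0], 0.0)
</syntaxhighlight>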

Getting back to part (4), we know that there exists at least one eigenpair $(\lambda_1, \mathbf{x}_1)$ (this is true for any matrix). We now use induction. Suppose that we have found $k-1$ mutually orthogonal eigenvectors $\mathbf{x}_i$ with $\boldsymbol{A}\mathbf{x}_i = \lambda_i\mathbf{x}_i$ and the $\lambda_i$ real, $i = 1, \dots, k-1$. (Note that the span of each $\mathbf{x}_i$ is an invariant manifold of 𝑨, as is the space spanned by all of the $\mathbf{x}_i$, and so is the manifold perpendicular to these vectors.)

We form the linear manifold

\[ \mathcal{M}_k = \{ \mathbf{x} ~|~ \langle \mathbf{x}, \mathbf{x}_j \rangle = 0, \quad j = 1, 2, \dots, k-1 \} \]

This is the orthogonal complement of the $k-1$ eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{k-1}$. If $\mathbf{x} \in \mathcal{M}_k$ then

\[ \langle \mathbf{x}, \mathbf{x}_j \rangle = 0 \quad\text{and}\quad \langle \boldsymbol{A}\mathbf{x}, \mathbf{x}_j \rangle = \langle \mathbf{x}, \boldsymbol{A}\mathbf{x}_j \rangle = \lambda_j \langle \mathbf{x}, \mathbf{x}_j \rangle = 0 \]

Therefore $\boldsymbol{A}\mathbf{x} \in \mathcal{M}_k$, which means that $\mathcal{M}_k$ is invariant.

Hence $\mathcal{M}_k$ contains at least one eigenvector $\mathbf{x}_k$ with real eigenvalue $\lambda_k$. We can repeat the procedure to get a diagonal matrix in the lower block of the block diagonal representation of 𝑨. We then get $n$ mutually orthogonal eigenvectors, and so 𝑨 can be diagonalized. This implies that the eigenvectors form an orthonormal basis.

5) This follows from the previous result because each eigenvector can be normalized so that $\langle \mathbf{x}_i, \mathbf{x}_j \rangle = \delta_{ij}$.
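The statements of the theorem can be illustrated for a small Hermitian matrix (a sketch only; the 2×2 example matrix and the use of NumPy are assumptions made here):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative Hermitian (self-adjoint) matrix -- an assumption for this sketch.
A = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)        # self-adjoint: A equals its adjoint

eigvals, V = np.linalg.eig(A)            # columns of V are eigenvectors

# (2) the eigenvalues are real,
assert np.allclose(eigvals.imag, 0.0)
# (3)-(4) eigenvectors of the distinct eigenvalues are orthogonal and, being
# normalized, form an orthonormal basis,
assert np.allclose(V.conj().T @ V, np.eye(2))
# (5) and A is diagonal in that basis.
assert np.allclose(V.conj().T @ A @ V, np.diag(eigvals))
</syntaxhighlight>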

We will explore some more of these ideas in the next lecture.