Eigenvalues and Eigenvectors in Linear Algebra

Eigenvalues and eigenvectors are fundamental concepts in linear algebra, essential for solving problems related to matrices and transformations. This content delves into the geometric interpretation, verification of eigenvalues, eigenvectors, eigenspaces, and examples of eigenspaces on the xy-plane. Through explanations and examples, it elucidates the significance and applications of eigenvalues and eigenvectors in mathematical analysis.

  • Eigenvalues
  • Eigenvectors
  • Linear Algebra
  • Mathematics

Uploaded on Mar 01, 2025



Presentation Transcript


  1. Eigenvalues and Eigenvectors. Geetha Sivaraman, Assistant Professor, Department of Mathematics, St. Joseph's College, Tiruchirappalli. 7.1

  2. 7.1 Eigenvalues and Eigenvectors. The eigenvalue problem (one of the most important problems in linear algebra): if A is an n × n matrix, do there exist nonzero vectors x in Rⁿ such that Ax is a scalar multiple of x? Eigenvalue and eigenvector: for an n × n matrix A, a scalar λ (which could be zero) and a nonzero vector x in Rⁿ satisfying Ax = λx are called an eigenvalue of A and a corresponding eigenvector. Geometric interpretation: multiplying an eigenvector x by A does not change its direction (up to sign); Ax = λx lies on the line through x, merely scaled by λ. 7.2

  3. Ex 1: Verifying eigenvalues and eigenvectors. Let A = [2 0; 0 −1]. Then A[1 0]ᵀ = [2 0]ᵀ = 2[1 0]ᵀ, so λ = 2 is an eigenvalue with eigenvector [1 0]ᵀ; and A[0 1]ᵀ = [0 −1]ᵀ = (−1)[0 1]ᵀ, so λ = −1 is an eigenvalue with eigenvector [0 1]ᵀ. In fact, each eigenvalue has infinitely many eigenvectors: for λ = 2, [3 0]ᵀ and [5 0]ᵀ are both corresponding eigenvectors, and [3 0]ᵀ + [5 0]ᵀ is still an eigenvector. The proof is in Thm. 7.1. 7.3
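The verification in Ex 1 is easy to replay numerically. A minimal sketch (not part of the original slides), assuming the reconstructed matrix A = [2 0; 0 −1]:

```python
import numpy as np

# Check that [1, 0]^T and [0, 1]^T are eigenvectors of A for lambda = 2 and -1,
# and that nonzero scalar multiples (and sums) of eigenvectors for the same
# eigenvalue are again eigenvectors, as Ex 1 claims.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

assert np.allclose(A @ np.array([1.0, 0.0]), 2 * np.array([1.0, 0.0]))
assert np.allclose(A @ np.array([0.0, 1.0]), -1 * np.array([0.0, 1.0]))
# [3, 0]^T, [5, 0]^T, and their sum [8, 0]^T all work for lambda = 2
for v in (np.array([3.0, 0.0]), np.array([5.0, 0.0]), np.array([8.0, 0.0])):
    assert np.allclose(A @ v, 2 * v)
```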

  4. Thm. 7.1: The eigenspace corresponding to λ of matrix A. If A is an n × n matrix with an eigenvalue λ, then the set of all eigenvectors of λ together with the zero vector is a subspace of Rⁿ. This subspace is called the eigenspace of λ. Pf: suppose x₁ and x₂ are eigenvectors corresponding to λ (i.e., Ax₁ = λx₁ and Ax₂ = λx₂). (1) A(x₁ + x₂) = Ax₁ + Ax₂ = λx₁ + λx₂ = λ(x₁ + x₂), so x₁ + x₂ is also an eigenvector corresponding to λ. (2) A(cx₁) = c(Ax₁) = c(λx₁) = λ(cx₁), so cx₁ is also an eigenvector corresponding to λ. Since this set is closed under vector addition and scalar multiplication, it is a subspace of Rⁿ according to Theorem 4.5. 7.4

  5. Ex 3: Examples of eigenspaces on the xy-plane. For the matrix A = [1 0; 0 −1], the corresponding eigenvalues are λ₁ = 1 and λ₂ = −1. Sol: for λ₁ = 1, the corresponding eigenvectors are the nonzero vectors on the x-axis: A[x 0]ᵀ = [x 0]ᵀ = 1·[x 0]ᵀ. Thus the eigenspace corresponding to λ = 1 is the x-axis, which is a subspace of R². For λ₂ = −1, the corresponding eigenvectors are the nonzero vectors on the y-axis: A[0 y]ᵀ = [0 −y]ᵀ = (−1)[0 y]ᵀ. Thus the eigenspace corresponding to λ = −1 is the y-axis, also a subspace of R². 7.5

  6. Geometrically speaking, multiplying a vector (x, y) in R² by this matrix A corresponds to a reflection in the x-axis: Av = A([x 0]ᵀ + [0 y]ᵀ) = A[x 0]ᵀ + A[0 y]ᵀ = 1·[x 0]ᵀ + (−1)[0 y]ᵀ = [x −y]ᵀ. 7.6

  7. Thm. 7.2: Finding eigenvalues and eigenvectors of a matrix A ∈ Mₙₓₙ. Let A be an n × n matrix. (1) An eigenvalue of A is a scalar λ such that det(λI − A) = 0. (2) The eigenvectors of A corresponding to λ are the nonzero solutions of (λI − A)x = 0. Note: following the definition of the eigenvalue problem, Ax = λx ⟺ (λI − A)x = 0 (a homogeneous system), which has nonzero solutions for x iff det(λI − A) = 0 (this iff comes from the equivalent conditions on Slide 4.101). Characteristic equation of A: det(λI − A) = 0. Characteristic polynomial of A ∈ Mₙₓₙ: det(λI − A) = λⁿ + cₙ₋₁λⁿ⁻¹ + ⋯ + c₁λ + c₀. 7.7

  8. Ex 4: Finding eigenvalues and eigenvectors. A = [2 −12; 1 −5]. Sol: characteristic equation: det(λI − A) = |λ−2 12; −1 λ+5| = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0. Eigenvalues: λ₁ = −1, λ₂ = −2. 7.8

  9. (1) λ₁ = −1: (λ₁I − A)x = [−3 12; −1 4][x₁; x₂] = [0; 0]. Gauss–Jordan elimination gives [1 −4; 0 0], so x₁ = 4x₂ and the eigenvectors are [x₁; x₂] = t[4; 1], t ≠ 0. (2) λ₂ = −2: (λ₂I − A)x = [−4 12; −1 3][x₁; x₂] = [0; 0]. Gauss–Jordan elimination gives [1 −3; 0 0], so x₁ = 3x₂ and the eigenvectors are [x₁; x₂] = s[3; 1], s ≠ 0. 7.9
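The hand computation in Ex 4 can be cross-checked with numpy (a sketch, not part of the original slides, using the matrix as reconstructed above):

```python
import numpy as np

# numpy.linalg.eig should recover the eigenvalues -1 and -2 of
# A = [[2, -12], [1, -5]], and the basis eigenvectors (4, 1) and (3, 1)
# found by Gauss-Jordan elimination satisfy A v = lambda v directly.
A = np.array([[2.0, -12.0],
              [1.0, -5.0]])
vals, vecs = np.linalg.eig(A)

assert np.allclose(sorted(vals), [-2.0, -1.0])
assert np.allclose(A @ np.array([4.0, 1.0]), -1 * np.array([4.0, 1.0]))
assert np.allclose(A @ np.array([3.0, 1.0]), -2 * np.array([3.0, 1.0]))
```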

  10. Ex 5: Finding eigenvalues and eigenvectors. Find the eigenvalues and corresponding eigenvectors of the matrix A. What is the dimension of the eigenspace of each eigenvalue? A = [2 1 0; 0 2 0; 0 0 2]. Sol: characteristic equation: det(λI − A) = |λ−2 −1 0; 0 λ−2 0; 0 0 λ−2| = (λ − 2)³ = 0. Eigenvalue: λ = 2. 7.10

  11. The eigenspace of λ = 2: (λI − A)x = [0 −1 0; 0 0 0; 0 0 0][x₁; x₂; x₃] = [0; 0; 0], so x₂ = 0 and [x₁; x₂; x₃] = s[1; 0; 0] + t[0; 0; 1] with (s, t) ≠ (0, 0). The eigenspace of A corresponding to λ = 2 is {s[1; 0; 0] + t[0; 0; 1] : s, t ∈ R}. Thus, the dimension of this eigenspace is 2. 7.11
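The eigenspace dimension can be computed as n minus the rank of λI − A. A short numerical sketch (not from the original slides) for the matrix of Ex 5:

```python
import numpy as np

# For A = [[2,1,0],[0,2,0],[0,0,2]] and lambda = 2, the eigenspace dimension
# is n - rank(lambda*I - A); here it should be 3 - 1 = 2, and (1,0,0), (0,0,1)
# span the eigenspace.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
eigenspace_dim = 3 - np.linalg.matrix_rank(lam * np.eye(3) - A)

assert eigenspace_dim == 2
assert np.allclose(A @ np.array([1.0, 0.0, 0.0]), lam * np.array([1.0, 0.0, 0.0]))
assert np.allclose(A @ np.array([0.0, 0.0, 1.0]), lam * np.array([0.0, 0.0, 1.0]))
```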

  12. Notes: (1) If an eigenvalue λ₁ occurs as a multiple root (k times) of the characteristic polynomial, then λ₁ has multiplicity k. (2) The multiplicity of an eigenvalue is greater than or equal to the dimension of its eigenspace. (In Ex. 5, k is 3 and the dimension of the eigenspace is 2.) 7.12

  13. Ex 6: Find the eigenvalues of the matrix A and find a basis for each of the corresponding eigenspaces. A = [1 0 0 0; 0 1 5 −10; 1 0 2 0; 1 0 0 3]. Sol: characteristic equation: det(λI − A) = (λ − 1)²(λ − 2)(λ − 3) = 0. Eigenvalues: λ₁ = 1, λ₂ = 2, λ₃ = 3. According to the note on the previous slide, the dimension of the eigenspace of λ₁ = 1 is at most 2; for λ₂ = 2 and λ₃ = 3, the dimensions of their eigenspaces are at most 1. 7.13

  14. (1) λ₁ = 1: (λ₁I − A)x = [0 0 0 0; 0 0 −5 10; −1 0 −1 0; −1 0 0 −2][x₁; x₂; x₃; x₄] = 0. Gauss–Jordan elimination gives x₁ = −2x₄ and x₃ = 2x₄, with x₂ and x₄ free, so [x₁; x₂; x₃; x₄] = s[0; 1; 0; 0] + t[−2; 0; 2; 1], (s, t) ≠ (0, 0). Thus {(0, 1, 0, 0), (−2, 0, 2, 1)} is a basis for the eigenspace corresponding to λ₁ = 1, and the dimension of this eigenspace is 2. 7.14

  15. (2) λ₂ = 2: (λ₂I − A)x = [1 0 0 0; 0 1 −5 10; −1 0 0 0; −1 0 0 −1][x₁; x₂; x₃; x₄] = 0. Gauss–Jordan elimination gives x₁ = 0, x₄ = 0, and x₂ = 5x₃, so [x₁; x₂; x₃; x₄] = t[0; 5; 1; 0], t ≠ 0. Thus {(0, 5, 1, 0)} is a basis for the eigenspace corresponding to λ₂ = 2, and the dimension of this eigenspace is 1. 7.15

  16. (3) λ₃ = 3: (λ₃I − A)x = [2 0 0 0; 0 2 −5 10; −1 0 1 0; −1 0 0 0][x₁; x₂; x₃; x₄] = 0. Gauss–Jordan elimination gives x₁ = 0, x₃ = 0, and x₂ = −5x₄, so [x₁; x₂; x₃; x₄] = t[0; −5; 0; 1], t ≠ 0. Thus {(0, −5, 0, 1)} is a basis for the eigenspace corresponding to λ₃ = 3, and the dimension of this eigenspace is 1. 7.16
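A compact check of all three eigenspace dimensions in Ex 6 (a sketch, not from the original slides, assuming the 4 × 4 matrix as reconstructed above):

```python
import numpy as np

# For each eigenvalue lambda, dim(eigenspace) = 4 - rank(lambda*I - A).
# Ex 6 claims dimensions 2, 1, 1 for lambda = 1, 2, 3 respectively.
A = np.array([[1.0, 0.0, 0.0,   0.0],
              [0.0, 1.0, 5.0, -10.0],
              [1.0, 0.0, 2.0,   0.0],
              [1.0, 0.0, 0.0,   3.0]])
dims = {lam: 4 - np.linalg.matrix_rank(lam * np.eye(4) - A)
        for lam in (1.0, 2.0, 3.0)}

assert dims[1.0] == 2 and dims[2.0] == 1 and dims[3.0] == 1
```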

  17. Thm. 7.3: Eigenvalues of triangular matrices. If A is an n × n triangular matrix, then its eigenvalues are the entries on its main diagonal. Ex 7: Finding eigenvalues for triangular and diagonal matrices. (a) A = [2 0 0; −1 1 0; 5 3 −3]; (b) A = diag(−1, 2, 0, −4, 3). Sol: (a) det(λI − A) = (λ − 2)(λ − 1)(λ + 3) = 0, so λ₁ = 2, λ₂ = 1, λ₃ = −3. (According to Thm. 3.2, the determinant of a triangular matrix is the product of the entries on the main diagonal.) (b) λ₁ = −1, λ₂ = 2, λ₃ = 0, λ₄ = −4, λ₅ = 3. 7.17
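Thm. 7.3 is easy to confirm numerically. A sketch (not from the original slides, assuming the Ex 7(a) matrix as reconstructed above):

```python
import numpy as np

# For a triangular matrix, the eigenvalues are exactly the main-diagonal
# entries, so np.linalg.eig and np.diag must agree up to ordering.
A = np.array([[ 2.0, 0.0,  0.0],
              [-1.0, 1.0,  0.0],
              [ 5.0, 3.0, -3.0]])
vals = np.linalg.eig(A)[0]

assert np.allclose(sorted(vals), sorted(np.diag(A)))
```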

  18. Eigenvalues and eigenvectors of linear transformations: a number λ is called an eigenvalue of a linear transformation T: V → V if there is a nonzero vector x such that T(x) = λx. The vector x is called an eigenvector of T corresponding to λ, and the set of all eigenvectors of λ (together with the zero vector) is called the eigenspace of λ. (The definition of linear transformations is introduced in Ch 6; here I briefly introduce linear transformations and some of their basic properties.) In a typical linear transformation, each component of the resulting vector is a linear combination of the components of the input vector x. An example of a linear transformation T: R³ → R³ is T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, −2x₃). 7.18

  19. Theorem: Standard matrix for a linear transformation. Let T: Rⁿ → Rⁿ be a linear transformation, and let {e₁, e₂, …, eₙ} be the standard basis for Rⁿ. Then the n × n matrix A whose i-th column is T(eᵢ), i.e., A = [T(e₁) T(e₂) ⋯ T(eₙ)], satisfies T(x) = Ax for every x in Rⁿ. A is called the standard matrix for T. 7.19

  20. Consider the same linear transformation T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, −2x₃). Then T(e₁) = T(1, 0, 0) = (1, 3, 0), T(e₂) = T(0, 1, 0) = (3, 1, 0), and T(e₃) = T(0, 0, 1) = (0, 0, −2). Thus the above linear transformation T has the corresponding standard matrix A = [1 3 0; 3 1 0; 0 0 −2], such that T(x) = Ax: Ax = [x₁ + 3x₂; 3x₁ + x₂; −2x₃]. The statement on Slide 7.18 is valid because for any linear transformation T: V → V there is a corresponding square matrix A such that T(x) = Ax. Consequently, the eigenvalues and eigenvectors of a linear transformation T are in essence the eigenvalues and eigenvectors of the corresponding square matrix A. 7.20
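The column-by-column construction of the standard matrix can be sketched directly (not part of the original slides; T is the transformation defined on this slide):

```python
import numpy as np

# The i-th column of the standard matrix A is T(e_i); afterwards A @ x
# reproduces T(x) for any vector x.
def T(x):
    x1, x2, x3 = x
    return np.array([x1 + 3*x2, 3*x1 + x2, -2*x3])

A = np.column_stack([T(e) for e in np.eye(3)])

assert np.allclose(A, [[1, 3, 0], [3, 1, 0], [0, 0, -2]])
x = np.array([1.0, 2.0, 2.0])   # arbitrary test vector
assert np.allclose(A @ x, T(x))
```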

  21. Ex 8: Finding eigenvalues and eigenvectors for standard matrices. Find the eigenvalues and corresponding eigenvectors for A = [1 3 0; 3 1 0; 0 0 −2]. (A is the standard matrix for T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, −2x₃); see Slides 7.19 and 7.20.) Sol: det(λI − A) = (λ + 2)²(λ − 4) = 0, so the eigenvalues are λ₁ = 4 and λ₂ = −2. For λ₁ = 4, the corresponding eigenvector is (1, 1, 0). For λ₂ = −2, the corresponding eigenvectors are (1, −1, 0) and (0, 0, 1). 7.21

  22. Transformation matrix A′ for nonstandard bases. Suppose B is the standard basis of Rⁿ. Since the coordinate matrix of a vector relative to the standard basis consists of the components of that vector, i.e., [x]_B = x for any x in Rⁿ, the theorem on Slide 7.19 can be restated as follows: [T(x)]_B = A[x]_B, where A = [[T(e₁)]_B [T(e₂)]_B ⋯ [T(eₙ)]_B] is the standard matrix for T, or the matrix of T relative to the standard basis B. The theorem can be extended to a nonstandard basis B′ consisting of {v₁, v₂, …, vₙ}: [T(x)]_B′ = A′[x]_B′, where A′ = [[T(v₁)]_B′ [T(v₂)]_B′ ⋯ [T(vₙ)]_B′] is the transformation matrix for T relative to the basis B′. On the next two slides, an example is provided to verify numerically that this extension is valid. 7.22

  23. Ex: Consider the nonstandard basis B′ = {v₁, v₂, v₃} = {(1, 1, 0), (1, −1, 0), (0, 0, 1)}, and find the transformation matrix A′ such that [T(x)]_B′ = A′[x]_B′ for the same linear transformation T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, −2x₃). Sol: [T(v₁)]_B′ = [T(1, 1, 0)]_B′ = [(4, 4, 0)]_B′ = [4; 0; 0], [T(v₂)]_B′ = [T(1, −1, 0)]_B′ = [(−2, 2, 0)]_B′ = [0; −2; 0], and [T(v₃)]_B′ = [T(0, 0, 1)]_B′ = [(0, 0, −2)]_B′ = [0; 0; −2]. Thus A′ = [4 0 0; 0 −2 0; 0 0 −2]. 7.23

  24. Consider x = (5, −1, 4), and check that [T(x)]_B′ = A′[x]_B′ for the linear transformation T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, −2x₃). Since x = 2v₁ + 3v₂ + 4v₃, we have [x]_B′ = [2; 3; 4]. Then T(x) = T(5, −1, 4) = (2, 14, −8) = 8v₁ − 6v₂ − 8v₃, so [T(x)]_B′ = [8; −6; −8]. On the other hand, A′[x]_B′ = [4 0 0; 0 −2 0; 0 0 −2][2; 3; 4] = [8; −6; −8] as well. 7.24
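The same check can be automated: coordinates relative to B′ are obtained by solving Pc = x, where the columns of P are the basis vectors. A sketch (not from the original slides):

```python
import numpy as np

# Verify [T(x)]_B' = A' [x]_B' for x = (5, -1, 4), with
# B' = {(1,1,0), (1,-1,0), (0,0,1)} and A' = diag(4, -2, -2).
P = np.array([[1.0,  1.0, 0.0],      # columns are the basis vectors of B'
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
A = np.array([[1.0, 3.0, 0.0],       # standard matrix of T
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
A_prime = np.diag([4.0, -2.0, -2.0])

x = np.array([5.0, -1.0, 4.0])
x_B = np.linalg.solve(P, x)          # [x]_B'
Tx_B = np.linalg.solve(P, A @ x)     # [T(x)]_B'

assert np.allclose(x_B, [2.0, 3.0, 4.0])
assert np.allclose(Tx_B, A_prime @ x_B)
```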

  25. For a special basis B′ = {v₁, v₂, …, vₙ} in which the vᵢ's are eigenvectors of the standard matrix A, A′ is obtained immediately and is diagonal, because T(vᵢ) = Avᵢ = λᵢvᵢ and λᵢvᵢ = 0v₁ + 0v₂ + ⋯ + λᵢvᵢ + ⋯ + 0vₙ, so [T(vᵢ)]_B′ = [0; …; 0; λᵢ; 0; …; 0]. Let B′ be a basis of R³ made up of three linearly independent eigenvectors of A, e.g., B′ = {v₁, v₂, v₃} = {(1, 1, 0), (1, −1, 0), (0, 0, 1)} in Ex 8. Then A′, the transformation matrix for T relative to the basis B′, defined as [[T(v₁)]_B′ [T(v₂)]_B′ [T(v₃)]_B′] (see Slide 7.22), is diagonal, and the main diagonal entries are the corresponding eigenvalues (see Slide 7.23): A′ = [4 0 0; 0 −2 0; 0 0 −2], with eigenvalue 4 for the eigenvector (1, 1, 0) and eigenvalue −2 for the eigenvectors (1, −1, 0) and (0, 0, 1). 7.25

  26. 7.2 Diagonalization. Diagonalization problem: for a square matrix A, does there exist an invertible matrix P such that P⁻¹AP is diagonal? Diagonalizable matrix. Definition 1: a square matrix A is called diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix (i.e., P diagonalizes A). Definition 2: a square matrix A is called diagonalizable if A is similar to a diagonal matrix. (Recall from Sec. 6.4 that two square matrices A and B are similar if there exists an invertible matrix P such that B = P⁻¹AP.) Note: in this section, I will show that the eigenvalue and eigenvector problem is closely related to the diagonalization problem. 7.26

  27. Thm. 7.4: Similar matrices have the same eigenvalues. If A and B are similar n × n matrices, then they have the same eigenvalues. Pf: since A and B are similar, B = P⁻¹AP for some invertible P. (Note also that for any matrix of the form D = λI, P⁻¹DP = D.) Consider the characteristic equation of B: |λI − B| = |λI − P⁻¹AP| = |P⁻¹λIP − P⁻¹AP| = |P⁻¹(λI − A)P| = |P⁻¹||λI − A||P| = |P⁻¹||P||λI − A| = |λI − A|. Since A and B have the same characteristic equation, they have the same eigenvalues. Note that the eigenvectors of A and B are not necessarily identical. 7.27

  28. Ex 1: Eigenvalue problems and diagonalization. A = [1 3 0; 3 1 0; 0 0 −2]. Sol: characteristic equation: det(λI − A) = (λ − 4)(λ + 2)² = 0. The eigenvalues: λ₁ = 4, λ₂ = −2, λ₃ = −2. (1) λ = 4: the eigenvector p₁ = [1; 1; 0]. 7.28

  29. (2) λ = −2: the eigenvectors p₂ = [1; −1; 0] and p₃ = [0; 0; 1]. Let P = [p₁ p₂ p₃] = [1 1 0; 1 −1 0; 0 0 1]; then P⁻¹AP = [4 0 0; 0 −2 0; 0 0 −2]. Note: if instead P = [p₂ p₁ p₃], then P⁻¹AP = [−2 0 0; 0 4 0; 0 0 −2]. The above example also verifies Thm. 7.4, since the eigenvalues of both A and P⁻¹AP are 4, −2, and −2. The reason why the matrix P is constructed with the eigenvectors of A is demonstrated in Thm. 7.5 on the next slide. 7.29
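Ex 1 can be replayed as one matrix computation (a sketch, not part of the original slides, using the matrices given above):

```python
import numpy as np

# With eigenvector columns p1=(1,1,0), p2=(1,-1,0), p3=(0,0,1),
# P^{-1} A P should be diagonal with the eigenvalues 4, -2, -2
# in the order of the columns of P.
A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
P = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag([4.0, -2.0, -2.0]))
```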

  30. Thm. 7.5: Condition for diagonalization. An n × n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. (Having n linearly independent eigenvectors does not imply that there are n distinct eigenvalues; in an extreme case, there may be only one eigenvalue, with multiplicity n, that still has n linearly independent eigenvectors. On the other hand, if there are n distinct eigenvalues, then there are n linearly independent eigenvectors (see Thm. 7.6), and thus A must be diagonalizable.) Pf: (⇒) Since A is diagonalizable, there exists an invertible P such that D = P⁻¹AP is diagonal. Let P = [p₁ p₂ ⋯ pₙ] and D = diag(λ₁, λ₂, …, λₙ). Then PD = [p₁ p₂ ⋯ pₙ] diag(λ₁, λ₂, …, λₙ) = [λ₁p₁ λ₂p₂ ⋯ λₙpₙ]. 7.30

  31. Since D = P⁻¹AP, we have AP = PD, i.e., [Ap₁ Ap₂ ⋯ Apₙ] = [λ₁p₁ λ₂p₂ ⋯ λₙpₙ], so Apᵢ = λᵢpᵢ for i = 1, 2, …, n. (These equations imply that the column vectors pᵢ of P are eigenvectors of A, and the diagonal entries λᵢ of D are eigenvalues of A.) Because A is diagonalizable, P is invertible, so its columns p₁, p₂, …, pₙ are linearly independent (see Slide 4.101 in the lecture notes). Thus A has n linearly independent eigenvectors. (⇐) Since A has n linearly independent eigenvectors p₁, p₂, …, pₙ with corresponding eigenvalues λ₁, λ₂, …, λₙ (which could be repeated), i.e., Apᵢ = λᵢpᵢ for i = 1, 2, …, n, let P = [p₁ p₂ ⋯ pₙ]. 7.31

  32. Then AP = A[p₁ p₂ ⋯ pₙ] = [Ap₁ Ap₂ ⋯ Apₙ] = [λ₁p₁ λ₂p₂ ⋯ λₙpₙ] = [p₁ p₂ ⋯ pₙ] diag(λ₁, λ₂, …, λₙ) = PD. Since p₁, p₂, …, pₙ are linearly independent, P is invertible (see Slide 4.101 in the lecture notes), so AP = PD gives P⁻¹AP = D, and A is diagonalizable (according to the definition of a diagonalizable matrix on Slide 7.26). Note that the pᵢ's are n linearly independent eigenvectors, and the diagonal entries of the resulting diagonal matrix D are eigenvalues of A. 7.32

  33. Ex 4: A matrix that is not diagonalizable. Show that the following matrix is not diagonalizable: A = [1 2; 0 1]. Sol: characteristic equation: det(λI − A) = (λ − 1)² = 0, so the only eigenvalue is λ = 1. Solving (λI − A)x = 0 for the eigenvectors: λI − A = I − A = [0 −2; 0 0], which gives the eigenvector p = [1; 0]. Since A does not have two linearly independent eigenvectors, A is not diagonalizable. 7.33
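The eigenvector deficiency in Ex 4 shows up as a rank computation. A sketch (not part of the original slides, assuming the reconstructed matrix A = [1 2; 0 1]):

```python
import numpy as np

# A = [[1, 2], [0, 1]] has the single eigenvalue 1 with multiplicity 2,
# but its eigenspace has dimension 2 - rank(I - A) = 1 < 2, so A cannot
# have two linearly independent eigenvectors and is not diagonalizable.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
lam = 1.0
eigenspace_dim = 2 - np.linalg.matrix_rank(lam * np.eye(2) - A)

assert eigenspace_dim == 1
```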

  34. Steps for diagonalizing an n × n square matrix. Step 1: find n linearly independent eigenvectors p₁, p₂, …, pₙ for A with corresponding eigenvalues λ₁, λ₂, …, λₙ. Step 2: let P = [p₁ p₂ ⋯ pₙ]. Step 3: P⁻¹AP = D = diag(λ₁, λ₂, …, λₙ), where Apᵢ = λᵢpᵢ, i = 1, 2, …, n. 7.34

  35. Ex 5: Diagonalizing a matrix. A = [1 −1 −1; 1 3 1; −3 1 −1]. Find a matrix P such that P⁻¹AP is diagonal. Sol: characteristic equation: det(λI − A) = (λ − 2)(λ + 2)(λ − 3) = 0. The eigenvalues: λ₁ = 2, λ₂ = −2, λ₃ = 3. 7.35

  36. (1) λ₁ = 2: λ₁I − A = [1 1 1; −1 −1 −1; 3 −1 3]. Gauss–Jordan elimination gives x₁ = −x₃ and x₂ = 0, so the eigenvectors are t[−1; 0; 1], t ≠ 0; take p₁ = [−1; 0; 1]. (2) λ₂ = −2: λ₂I − A = [−3 1 1; −1 −5 −1; 3 −1 −1]. Gauss–Jordan elimination gives x₂ = −x₁ and x₃ = 4x₁, so the eigenvectors are t[1; −1; 4], t ≠ 0; take p₂ = [1; −1; 4]. 7.36

  37. (3) λ₃ = 3: λ₃I − A = [2 1 1; −1 0 −1; 3 −1 4]. Gauss–Jordan elimination gives x₁ = −x₃ and x₂ = x₃, so the eigenvectors are t[−1; 1; 1], t ≠ 0; take p₃ = [−1; 1; 1]. Let P = [p₁ p₂ p₃] = [−1 1 −1; 0 −1 1; 1 4 1]; then it follows that P⁻¹AP = [2 0 0; 0 −2 0; 0 0 3]. 7.37

  38. Note: a quick way to calculate Aᵏ based on the diagonalization technique. (1) If D = diag(λ₁, λ₂, …, λₙ), then Dᵏ = diag(λ₁ᵏ, λ₂ᵏ, …, λₙᵏ). (2) If D = P⁻¹AP, then Dᵏ = (P⁻¹AP)(P⁻¹AP)⋯(P⁻¹AP) (repeated k times) = P⁻¹AᵏP, so Aᵏ = PDᵏP⁻¹ = P diag(λ₁ᵏ, λ₂ᵏ, …, λₙᵏ) P⁻¹. 7.38
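The Aᵏ shortcut can be verified against direct matrix powering. A sketch (not part of the original slides), reusing the diagonalization of the matrix from Ex 1 of Sec. 7.2:

```python
import numpy as np

# A^k = P D^k P^{-1}, where D^k just raises each diagonal entry to the
# k-th power; the result must match np.linalg.matrix_power(A, k).
A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
P = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
D = np.diag([4.0, -2.0, -2.0])

k = 5
A_k_fast = P @ np.diag(np.diag(D) ** k) @ np.linalg.inv(P)

assert np.allclose(A_k_fast, np.linalg.matrix_power(A, k))
```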

  39. Thm. 7.6: Sufficient conditions for diagonalization. If an n × n matrix A has n distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and thus A is diagonalizable according to Thm. 7.5. Pf: let λ₁, λ₂, …, λₙ be distinct eigenvalues with corresponding eigenvectors x₁, x₂, …, xₙ. Suppose, for contradiction, that the first m eigenvectors are linearly independent but the first m + 1 are linearly dependent, i.e., x_{m+1} = c₁x₁ + c₂x₂ + ⋯ + c_m x_m, (1) where the cᵢ's are not all zero. Multiplying both sides of Eq. (1) by A yields Ax_{m+1} = c₁Ax₁ + c₂Ax₂ + ⋯ + c_m Ax_m, i.e., λ_{m+1}x_{m+1} = c₁λ₁x₁ + c₂λ₂x₂ + ⋯ + c_mλ_m x_m. (2) 7.39

  40. On the other hand, multiplying both sides of Eq. (1) by λ_{m+1} yields λ_{m+1}x_{m+1} = c₁λ_{m+1}x₁ + c₂λ_{m+1}x₂ + ⋯ + c_mλ_{m+1}x_m. (3) Now, subtracting Eq. (2) from Eq. (3) produces c₁(λ_{m+1} − λ₁)x₁ + c₂(λ_{m+1} − λ₂)x₂ + ⋯ + c_m(λ_{m+1} − λ_m)x_m = 0. Since the first m eigenvectors are linearly independent, all coefficients of this equation must be zero: c₁(λ_{m+1} − λ₁) = c₂(λ_{m+1} − λ₂) = ⋯ = c_m(λ_{m+1} − λ_m) = 0. Because all the eigenvalues are distinct, it follows that all cᵢ's equal 0, which contradicts the assumption that x_{m+1} can be expressed as a linear combination of the first m eigenvectors. So the set of n eigenvectors is linearly independent given n distinct eigenvalues, and according to Thm. 7.5, A is diagonalizable. 7.40

  41. Ex 7: Determining whether a matrix is diagonalizable. A = [1 −2 1; 0 0 1; 0 0 −3]. Sol: because A is a triangular matrix, its eigenvalues are the main diagonal entries: λ₁ = 1, λ₂ = 0, λ₃ = −3. According to Thm. 7.6, because these three values are distinct, A is diagonalizable. 7.41

  42. Ex 8: Finding a diagonalized matrix for a linear transformation. Let T: R³ → R³ be the linear transformation given by T(x₁, x₂, x₃) = (x₁ − x₂ − x₃, x₁ + 3x₂ + x₃, −3x₁ + x₂ − x₃). Find a basis B′ for R³ such that the matrix for T relative to B′ is diagonal. Sol: the standard matrix for T is given by A = [1 −1 −1; 1 3 1; −3 1 −1]. From Ex. 5 you know that λ₁ = 2, λ₂ = −2, λ₃ = 3, and thus A is diagonalizable. So, similar to the result on Slide 7.25, the three linearly independent eigenvectors found in Ex. 5 can be used to form the basis B′. 7.42

  43. That is, B′ = {v₁, v₂, v₃} = {(−1, 0, 1), (1, −1, 4), (−1, 1, 1)}. The matrix for T relative to this basis is A′ = [[T(v₁)]_B′ [T(v₂)]_B′ [T(v₃)]_B′] = [2 0 0; 0 −2 0; 0 0 3]. Note that it is not necessary to calculate A′ through the above equation; according to the result on Slide 7.25, we already know that A′ is a diagonal matrix and its main diagonal entries are the corresponding eigenvalues of A. 7.43

  44. 7.3 Symmetric Matrices and Orthogonal Diagonalization. Symmetric matrix: a square matrix A is symmetric if it is equal to its transpose: A = Aᵀ. Ex 1: Symmetric and nonsymmetric matrices. A = [0 1 −2; 1 3 0; −2 0 5] (symmetric), B = [4 3; 3 1] (symmetric), C = [3 2 1; 1 −4 0; 1 0 5] (nonsymmetric). 7.44

  45. Thm. 7.7: Eigenvalues of symmetric matrices. If A is an n × n symmetric matrix, then the following properties are true. (1) A is diagonalizable (a symmetric matrix is guaranteed to have n linearly independent eigenvectors and thus to be diagonalizable). (2) All eigenvalues of A are real numbers. (3) If λ is an eigenvalue of A with multiplicity k, then λ has k linearly independent eigenvectors; that is, the eigenspace of λ has dimension k. The above theorem is called the Real Spectral Theorem, and the set of eigenvalues of A is called the spectrum of A. 7.45

  46. Ex 2: Prove that a 2 × 2 symmetric matrix is diagonalizable. A = [a c; c b]. Pf: characteristic equation: det(λI − A) = |λ−a −c; −c λ−b| = λ² − (a + b)λ + (ab − c²) = 0. As a quadratic polynomial function in λ, this has a nonnegative discriminant: (a + b)² − 4(ab − c²) = a² + 2ab + b² − 4ab + 4c² = a² − 2ab + b² + 4c² = (a − b)² + 4c² ≥ 0, so the solutions (eigenvalues) are real numbers. 7.46

  47. (1) If (a − b)² + 4c² = 0, then a = b and c = 0, so A = [a 0; 0 a] is itself a diagonal matrix. (Note that in this case A has one eigenvalue, a, whose multiplicity is 2.) (2) If (a − b)² + 4c² > 0, the characteristic polynomial of A has two distinct real roots, which implies that A has two distinct real eigenvalues. According to Thm. 7.6, A is diagonalizable. 7.47
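The discriminant identity in Ex 2 can be spot-checked numerically. A sketch (not part of the original slides) for one hypothetical choice of a, b, c:

```python
import numpy as np

# For A = [[a, c], [c, b]], the discriminant of lambda^2 - (a+b)lambda + (ab - c^2)
# equals (a - b)^2 + 4c^2 >= 0, so the eigenvalues are real.
a, b, c = 1.0, -3.0, 2.0   # arbitrary illustrative values
disc = (a + b) ** 2 - 4 * (a * b - c ** 2)

assert np.isclose(disc, (a - b) ** 2 + 4 * c ** 2)
assert disc >= 0
vals = np.linalg.eig(np.array([[a, c], [c, b]]))[0]
assert np.allclose(np.imag(vals), 0.0)   # eigenvalues are real
```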

  48. Orthogonal matrix: a square matrix P is called orthogonal if it is invertible and P⁻¹ = Pᵀ (equivalently, PPᵀ = PᵀP = I). Thm. 7.8: Properties of orthogonal matrices. An n × n matrix P is orthogonal if and only if its column vectors form an orthonormal set. Pf (sufficiency): suppose the column vectors of P form an orthonormal set, i.e., P = [p₁ p₂ ⋯ pₙ] with pᵢ · pⱼ = 0 for i ≠ j and pᵢ · pᵢ = 1. Then PᵀP is the matrix whose (i, j) entry is pᵢᵀpⱼ, so PᵀP = I. It implies that P⁻¹ = Pᵀ, and thus P is orthogonal. 7.48

  49. Ex 5: Show that P is an orthogonal matrix. P = [1/3 2/3 2/3; −2/√5 1/√5 0; 2/(3√5) 4/(3√5) −5/(3√5)]. Sol: if P is an orthogonal matrix, then P⁻¹ = Pᵀ, i.e., PPᵀ = I. Direct computation gives PPᵀ = [1 0 0; 0 1 0; 0 0 1] = I. 7.49

  50. Moreover, let p₁ = [1/3; −2/√5; 2/(3√5)], p₂ = [2/3; 1/√5; 4/(3√5)], and p₃ = [2/3; 0; −5/(3√5)]. We can verify that p₁ · p₂ = p₁ · p₃ = p₂ · p₃ = 0 and p₁ · p₁ = p₂ · p₂ = p₃ · p₃ = 1. So {p₁, p₂, p₃} is an orthonormal set (these results are consistent with Thm. 7.8). 7.50
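The orthogonality checks of Ex 5 can be done in one shot. A sketch (not part of the original slides, assuming the matrix entries as reconstructed above):

```python
import numpy as np

# For an orthogonal matrix, P^T P = P P^T = I and P^{-1} = P^T;
# this simultaneously verifies that both the rows and the columns
# of P form orthonormal sets.
s5 = np.sqrt(5.0)
P = np.array([[ 1/3,       2/3,       2/3      ],
              [-2/s5,      1/s5,      0.0      ],
              [ 2/(3*s5),  4/(3*s5), -5/(3*s5) ]])

assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(P @ P.T, np.eye(3))
assert np.allclose(np.linalg.inv(P), P.T)
```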
