Linear Algebraic Equations - Solving Methods


In this chapter, we delve into solving systems of linear algebraic equations using methods such as graphical solution and Cramer's Rule. We examine how to determine the values that simultaneously satisfy a set of equations, explore non-computer techniques for solving systems by hand, and introduce the coefficient matrix and determinant calculations in matrix form.


Uploaded on Mar 09, 2025



Presentation Transcript


  1. In this chapter, we will deal with the case of determining the values x1, x2, ..., xn that simultaneously satisfy the set of equations:

     f1(x1, x2, ..., xn) = 0
     f2(x1, x2, ..., xn) = 0
     ...                            (3.1)
     fn(x1, x2, ..., xn) = 0

  2. In particular, we will consider linear algebraic equations, which are of the form:

     a11 x1 + a12 x2 + ... + a1n xn = b1
     a21 x1 + a22 x2 + ... + a2n xn = b2
     ...                                      (3.2)
     an1 x1 + an2 x2 + ... + ann xn = bn

     where the a's are constant coefficients, the b's are constants, and n is the number of equations. The above system of linear equations may also be written in matrix form as:

     [A]{X} = {B}        (3.3)

  3. Where [A] is an n by n square matrix of coefficients (called the coefficient matrix):

           | a11  a12  ...  a1n |
     [A] = | a21  a22  ...  a2n |
           | ...  ...  ...  ... |
           | an1  an2  ...  ann |

     {B} is an n by 1 column vector of constants, {B}^T = [b1  b2  ...  bn], and {X} is an n by 1 column vector of unknowns, {X}^T = [x1  x2  ...  xn].
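The matrix form of Eq. (3.3) maps directly onto code. A minimal Python sketch (the 3x3 system below is a hypothetical example, not taken from the slides) that checks a candidate solution {X} by forming the product [A]{X} and comparing it with {B}:

```python
# Hypothetical 3x3 system written in the matrix form [A]{X} = {B}.
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
B = [7.85, -19.3, 71.4]

def matvec(A, x):
    """Compute the product [A]{x} one row at a time."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# A candidate solution {X} satisfies all equations simultaneously
# exactly when the residual {B} - [A]{X} is (numerically) zero.
X = [3.0, -2.5, 7.0]
residual = [b_i - ax_i for b_i, ax_i in zip(B, matvec(A, X))]
```

Checking the residual like this is also the standard sanity test after any of the solution methods covered below.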

  4. There are some non-computer methods which are used to solve small (n <= 3) sets of simultaneous equations without a computer.

     1. The Graphical Method: plot the equations as straight lines and find the point of intersection of the graphs.

     E.g. Use the graphical method to solve
     3x1 + 2x2 = 18
     -x1 + 2x2 = 2

  5. 2. Cramer's Rule: Cramer's rule states that the solution of the set of linear equations given in Eq. (3.3) can be written as:

     xi = Di / D        (3.4)

     where D is the determinant of the coefficient matrix [A], and Di is the determinant of the matrix obtained by replacing the column of coefficients of the unknown xi in the coefficient matrix by the constants b1, b2, ..., bn.

  6. For example, x1 can be obtained as:

          | b1  a12  a13 |
     x1 = | b2  a22  a23 |  /  D
          | b3  a32  a33 |

     For more than three equations, Cramer's rule becomes impractical because, as the number of equations increases, the determinants become time-consuming to evaluate by hand. Consequently, more efficient alternatives are used.
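Cramer's rule applied to the 2x2 graphical example from slide 4 can be sketched as follows (`det2` is a helper name chosen here for the 2x2 determinant):

```python
def det2(m):
    """Determinant of a 2x2 matrix: ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[3.0, 2.0],
     [-1.0, 2.0]]
b = [18.0, 2.0]

D = det2(A)
# Di is formed by replacing column i of [A] with the constants {b}.
D1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]])
D2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]])
x1, x2 = D1 / D, D2 / D   # x1 = 4.0, x2 = 3.0
```

This reproduces the intersection point that the graphical method finds for the same system.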

  7. 3. Elimination of Unknowns: The basic strategy is to multiply the equations by constants so that one of the unknowns is eliminated when the equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other variable. The elimination of unknowns can be extended to systems with more than two or three equations.

  8. However, the numerous calculations required for larger systems make the method extremely tedious to implement by hand. The technique can, however, be formalized and readily programmed for the computer.

  9. Description of the method: The approach is designed to solve a general set of equations:

     a11 x1 + a12 x2 + ... + a1n xn = b1        (3.5a)
     a21 x1 + a22 x2 + ... + a2n xn = b2        (3.5b)
     ...
     an1 x1 + an2 x2 + ... + ann xn = bn        (3.5c)

  10. The naive Gauss elimination method consists of two phases:

     1. Forward Elimination: designed to reduce the set of equations to an upper triangular system. The initial step is to eliminate the first unknown, x1, from the second through the nth equations. To do this, multiply Eq. (3.5a) by a21/a11 to give:

     a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1        (3.6)

  11. Now this equation can be subtracted from Eq. (3.5b) to give:

     (a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1

     or

     a'22 x2 + ... + a'2n xn = b'2

     where the prime indicates that the elements have been changed from their original values. The procedure is then repeated for the remaining equations.

  12. For instance, Eq. (3.5a) can be multiplied by a31/a11 and the result subtracted from the third equation. Repeating the procedure for the remaining equations results in the following modified system:

     a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1        (3.7a)
              a'22 x2 + a'23 x3 + ... + a'2n xn = b'2    (3.7b)
              a'32 x2 + a'33 x3 + ... + a'3n xn = b'3    (3.7c)
              ...
              a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n    (3.7d)

  13. For the foregoing steps, Eq. (3.5a) is called the pivot equation, and a11 is called the pivot coefficient or element. Now repeat the above to eliminate the second unknown from Eq. (3.7c) through Eq. (3.7d):

     a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
              a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                        a''33 x3 + ... + a''3n xn = b''3
                        ...
                        a''n3 x3 + ... + a''nn xn = b''n

  14. The double prime indicates that the elements have been modified twice. The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is to use the (n-1)th equation to eliminate the xn-1 term from the nth equation. At this point, the system will have been transformed into an upper triangular system:

  15.
     a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1           (3.8a)
              a'22 x2 + a'23 x3 + ... + a'2n xn = b'2       (3.8b)
                        a''33 x3 + ... + a''3n xn = b''3    (3.8c)
                        ...
                                 ann^(n-1) xn = bn^(n-1)    (3.8d)

     2. Back Substitution: Eq. (3.8d) can now be solved for xn:

     xn = bn^(n-1) / ann^(n-1)

  16. This result can be back-substituted into the (n-1)th equation to solve for xn-1. The procedure, which is repeated to evaluate the remaining x's, can be represented by the following formula:

     xi = ( bi^(i-1) - sum over j = i+1 to n of aij^(i-1) xj ) / aii^(i-1),    for i = n-1, n-2, ..., 1
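The forward-elimination and back-substitution phases above can be sketched as one routine. A minimal naive Gauss elimination in Python (no pivoting, so a zero pivot fails by design; the test system is a hypothetical example):

```python
def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination to an upper
    triangular system (Eqs. 3.5-3.8), then back substitution.
    No pivoting, so a zero pivot raises ZeroDivisionError."""
    n = len(b)
    A = [row[:] for row in A]        # work on copies
    b = b[:]
    # Forward elimination: zero out column k below the pivot row k.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution, from the last equation upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_eliminate([[3.0, -0.1, -0.2],
                     [0.1,  7.0, -0.3],
                     [0.3, -0.2, 10.0]], [7.85, -19.3, 71.4])
# x is approximately [3.0, -2.5, 7.0]
```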

  17. There are some pitfalls that must be explored before writing a general computer program to implement the method.

     i. Division by Zero: During both the elimination and back-substitution phases, a division by zero can occur. Problems also arise when a pivot coefficient is very close to zero.

  18. The technique of pivoting (to be discussed later) has been developed to partially avoid these problems.

     ii. Round-off Errors: The problem of round-off errors can become particularly important when large numbers of equations are to be solved. In any event, one should always substitute the answers back into the original equations to check whether a substantial error has occurred.

  19. iii. Ill-conditioned Systems: Ill-conditioned systems are those where a small change in the coefficients results in large changes in the solution. An alternative interpretation of ill-conditioning is that a wide range of answers can approximately satisfy the equations. An ill-conditioned system is one whose coefficient matrix has a determinant close to zero.

  20. iv. Singular Systems: The system is singular when at least two of the equations are identical. In such cases, we would lose one degree of freedom and would be dealing with the impossible case of n-1 equations in n unknowns. Such cases might not be obvious, particularly when dealing with large equation sets. Consequently, it would be nice to have some way of automatically detecting singularity.

  21. The answer to this problem is neatly offered by the fact that the determinant of a singular system is zero. This idea can, in turn, be connected to Gauss elimination by recognizing that after the elimination step, the determinant can be evaluated as the product of the diagonal elements. Thus, a computer algorithm can test to detect whether a zero diagonal element is created during the elimination stage.
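The test described above can be sketched as follows: run the forward-elimination pass, accumulate the product of the diagonal elements as the determinant, and report singularity when a (near-)zero diagonal element appears. The tolerance value here is an illustrative assumption:

```python
def det_by_elimination(A, tol=1e-12):
    """Evaluate det[A] as the product of the diagonal elements left
    after forward elimination; a (near-)zero diagonal element flags
    a (near-)singular system."""
    n = len(A)
    A = [row[:] for row in A]      # work on a copy
    det = 1.0
    for k in range(n):
        if abs(A[k][k]) < tol:
            return 0.0             # singular to within tol
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
        det *= A[k][k]
    return det
```

For example, a matrix with two proportional rows eliminates to a zero diagonal element and is reported as singular.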

  22. Several techniques help avoid these pitfalls:
     1. Use of more significant figures.
     2. Pivoting: can be partial or complete.
     3. Scaling: the process by which the maximum element in a row is made equal to 1 by dividing the equation by its largest coefficient.

  23. The Gauss-Jordan method is a variation of Gauss elimination. The major differences are:
     I. When an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather than just the subsequent ones.
     II. In addition, all rows are normalized by dividing them by their pivot elements.

  24. Thus, the elimination step results in an identity matrix rather than a triangular matrix, and back-substitution is not necessary. The method is attributed to Johann Carl Friedrich Gauss (1777-1855) and Wilhelm Jordan (1842-1899).
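A minimal sketch of the Gauss-Jordan reduction described above: each pivot row is normalized by its pivot element, and the unknown is eliminated from all other rows, so [A] is reduced to the identity and the solution is read off directly with no back substitution. The example system is hypothetical:

```python
def gauss_jordan(A, b):
    """Gauss-Jordan elimination on the augmented matrix [A|b].
    Reduces [A] to the identity; the last column is then {x}."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix
    for k in range(n):
        pivot = A[k][k]
        A[k] = [a / pivot for a in A[k]]            # normalize pivot row
        for i in range(n):                          # eliminate from ALL
            if i != k:                              # other rows
                factor = A[i][k]
                A[i] = [aij - factor * akj for aij, akj in zip(A[i], A[k])]
    return [row[-1] for row in A]

x = gauss_jordan([[3.0, -0.1, -0.2],
                  [0.1,  7.0, -0.3],
                  [0.3, -0.2, 10.0]], [7.85, -19.3, 71.4])
# x is approximately [3.0, -2.5, 7.0]
```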

  25. Example: Use the Gauss-Jordan elimination method to solve the linear system. First form the augmented matrix M = [A , B], then perform Gauss-Jordan elimination.

  26. Hence, the solution is

  27. Gauss elimination is a sound way to solve systems of algebraic equations of the form:

     [A]{X} = {B}        (1)

     However, it becomes inefficient when solving equations with the same coefficients [A] but different right-hand-side constants {B}. LU decomposition methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {B}.

  28. Thus, once [A] has been "decomposed", multiple right-hand-side vectors can be evaluated in an efficient manner.

     Derivation of the LU Decomposition Method: Eq. (1) can be rearranged to give:

     [A]{X} - {B} = 0        (2)

     Suppose that Eq. (2) could be expressed as an upper triangular system:

  29.
     | u11  u12  u13 | | x1 |   | d1 |
     |  0   u22  u23 | | x2 | = | d2 |        (3)
     |  0    0   u33 | | x3 |   | d3 |

     Eq. (3) can also be expressed in matrix notation and rearranged to give:

     [U]{X} - {D} = 0        (4)

     Now, assume that there is a lower triangular matrix with 1's on the diagonal:

           |  1    0   0 |
     [L] = | l21   1   0 |
           | l31  l32  1 |

  30. that has the property that when Eq. (4) is premultiplied by it, Eq. (2) is the result. That is:

     [L]{[U]{X} - {D}} = [A]{X} - {B}        (5)

     If this equation holds, it follows that:

     [L][U] = [A]        (6)
     and
     [L]{D} = {B}        (7)

  31. A two-step strategy (see Figure 3.1) for obtaining solutions can be based on Eqs. (4), (6) and (7):

     1. LU decomposition step: [A] is factored or "decomposed" into the lower [L] and upper [U] triangular matrices.
     2. Substitution step: [L] and [U] are used to determine a solution {X} for a right-hand side {B}. This step itself consists of two steps.

  32. First, Eq. (7) is used to generate an intermediate vector {D} by forward substitution. Then, the result is substituted into Eq. (4), which can be solved by back substitution for {X}.
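The two-step strategy can be sketched as a pair of routines: a Doolittle-style factorization that stores the Gauss elimination factors in [L] (1's on its diagonal, matching Eq. 6), and a solver that performs the forward then back substitution of Eqs. (7) and (4). The example system is a hypothetical one:

```python
def lu_decompose(A):
    """Doolittle LU factorization: [L] records the elimination
    factors (unit diagonal), [U] is the upper triangular result."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            L[i][k] = factor
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution [L]{D} = {B}, then back
    substitution [U]{X} = {D}."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):                      # forward substitution
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[3.0, -0.1, -0.2],
                     [0.1,  7.0, -0.3],
                     [0.3, -0.2, 10.0]])
x = lu_solve(L, U, [7.85, -19.3, 71.4])   # approximately [3.0, -2.5, 7.0]
```

Once [L] and [U] are stored, `lu_solve` can be called repeatedly for new right-hand sides without repeating the elimination.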

  33. Figure 3.1

  34. Example: Given [A], find matrices [L] and [U] so that [L][U] = [A]. Then find the solution {X} for the given right-hand-side vector {B}.

  35. Hence,

  36. From [L]{D} = {B} we get d1 = -4, d2 = 1 and d3 = 2.8. From [U]{X} = {D} we get x1 = -0.5, x2 = 2.55 and x3 = -0.7.

     Gauss-Seidel Method: Iteration is a popular technique for finding roots of equations. A generalization of fixed-point iteration can be applied to systems of linear equations to produce accurate results.

  37. The Gauss-Seidel method is the most common iterative method and is attributed to Philipp Ludwig von Seidel (1821-1896). For the purpose of hand calculation, consider a set of 3 linear equations containing 3 unknowns:

     a11 x1 + a12 x2 + a13 x3 = b1
     a21 x1 + a22 x2 + a23 x3 = b2
     a31 x1 + a32 x2 + a33 x3 = b3

  38. If the diagonal elements are all nonzero, the first equation can be solved for x1, the second for x2 and the third for x3 to yield:

     x1 = (b1 - a12 x2 - a13 x3) / a11        (a)
     x2 = (b2 - a21 x1 - a23 x3) / a22        (b)
     x3 = (b3 - a31 x1 - a32 x2) / a33        (c)

  39. Steps to be followed:
     i. Using the initial guesses x2 = x3 = 0.0, solve for x1 from (a).
     ii. Using the value of x1 from step i and x3 = 0.0, solve for x2 from (b).
     iii. Using the value of x1 from step i and that of x2 from step ii, solve for x3 from (c).
     iv. Using the value of x2 from step ii and that of x3 from step iii, solve for x1 from (a).
     v. Using the value of x1 from step iv and that of x3 from step iii, solve for x2 from (b).
     vi. Using the value of x1 from step iv and that of x2 from step v, solve for x3 from (c).
     vii. Repeat the process until the required accuracy is achieved.
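Steps i-vii can be sketched as a loop that sweeps equations (a)-(c) in order, always reusing the newest values. The stopping tolerance and the diagonally dominant 3x3 sample system below are illustrative assumptions:

```python
def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration: solve each equation for its diagonal
    unknown and immediately reuse the newest values (steps i-vii).
    Stops when no component changes by more than tol in a sweep."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(xi - xo) for xi, xo in zip(x, x_old)) < tol:
            break
    return x

# Diagonally dominant system; the iteration converges from the zero guess.
x = gauss_seidel([[ 5.0, -1.0, 1.0],
                  [ 1.0,  3.0, 1.0],
                  [-1.0,  1.0, 4.0]], [4.0, 2.0, 3.0])
```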

  40. Example 2: Use the Gauss-Seidel method to obtain the solution of the following system of linear equations:

     5x1 - x2 + x3 = 4
     x1 + 3x2 + x3 = 2
     -x1 + x2 + 4x3 = 3

     Solving for x1 from eq. 1, x2 from eq. 2, and x3 from eq. 3:

     x1 = (4 + x2 - x3) / 5
     x2 = (2 - x1 - x3) / 3
     x3 = (3 + x1 - x2) / 4

  41. Executing the above steps, we will have the following result.

  42. As we can see, the values start to repeat after the 8th iteration; hence we can stop the calculation and take the final values as the solution of the linear system of equations. Hence, x1 = 0.65625, x2 = 0.15625 and x3 = 0.875.

  43. The Gauss-Seidel method is applicable to predominantly diagonal systems. A predominantly diagonal system has large diagonal elements: the absolute value of the diagonal element in each row is larger than the sum of the absolute values of the other elements in that row of the matrix [A]. For such predominantly diagonal systems, the Gauss-Seidel method always converges to the correct solution.
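The diagonal-dominance condition stated above is easy to check programmatically before applying Gauss-Seidel; a minimal sketch:

```python
def is_diagonally_dominant(A):
    """Check that in every row, |diagonal element| exceeds the sum
    of the absolute values of the other elements in that row."""
    return all(
        abs(row[i]) > sum(abs(a) for j, a in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```

A system that fails this test may still converge, but the guarantee stated above no longer applies.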
