Exploring Inexact Theories in Solving Inverse Problems

Lecture 9

Learn how inexact theories are represented and solved in inverse problems, using tools such as relative entropy maximization and the F-test for comparing solutions. Understand the transition from exact to inexact theories and the role of probabilistic or fuzzy models in the process.

  • Inexact Theories
  • Inverse Problems
  • Probabilistic Models
  • Fuzzy Models
  • Relative Entropy


Presentation Transcript


  1. Lecture 9 Inexact Theories

  2. Syllabus
     Lecture 01  Describing Inverse Problems
     Lecture 02  Probability and Measurement Error, Part 1
     Lecture 03  Probability and Measurement Error, Part 2
     Lecture 04  The L2 Norm and Simple Least Squares
     Lecture 05  A Priori Information and Weighted Least Squares
     Lecture 06  Resolution and Generalized Inverses
     Lecture 07  Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
     Lecture 08  The Principle of Maximum Likelihood
     Lecture 09  Inexact Theories
     Lecture 10  Nonuniqueness and Localized Averages
     Lecture 11  Vector Spaces and Singular Value Decomposition
     Lecture 12  Equality and Inequality Constraints
     Lecture 13  L1, L∞ Norm Problems and Linear Programming
     Lecture 14  Nonlinear Problems: Grid and Monte Carlo Searches
     Lecture 15  Nonlinear Problems: Newton's Method
     Lecture 16  Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
     Lecture 17  Factor Analysis
     Lecture 18  Varimax Factors, Empirical Orthogonal Functions
     Lecture 19  Backus-Gilbert Theory for Continuous Problems; Radon's Problem
     Lecture 20  Linear Operators and Their Adjoints
     Lecture 21  Fréchet Derivatives
     Lecture 22  Exemplary Inverse Problems, incl. Filter Design
     Lecture 23  Exemplary Inverse Problems, incl. Earthquake Location
     Lecture 24  Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture
     Discuss how an inexact theory can be represented
     Solve the inexact, linear Gaussian inverse problem
     Use maximization of relative entropy as a guiding principle for solving inverse problems
     Introduce the F-test as a way to determine whether one solution is better than another

  4. Part 1 How Inexact Theories can be Represented

  5. How do we generalize the case of an exact theory to one that is inexact?

  6. exact theory case — [figure: p.d.f. in the (model m, datum d) plane; the exact theory d = g(m) is a sharp curve; the prior model map, the observed datum dobs, and the solution (mest, dpre) on the curve are marked]

  7. to make the theory inexact, we must make the theory probabilistic or fuzzy — [figure: the same (m, d) plane, with the sharp curve d = g(m) replaced by a blurred band]

  8. a priori p.d.f., theory, combination — [figure: three panels in the (m, d) plane showing the a priori p.d.f., the fuzzy theory, and their combination, with map, mest, and dobs marked]

  9. how do you combine two probability density functions?

  10. how do you combine two probability density functions, so that the information in them is combined?

  11. desirable properties: order shouldn't matter; combining something with the null distribution should leave it unchanged; combination should be invariant under a change of variables

  12. Answer: multiply them (and divide by the null distribution): pT(m, d) ∝ pA(m, d) pg(m, d) / pN(m, d)

  13. a priori pA, theory pg, total pT — [figure: panels (A)-(F) in the (m, d) plane showing pA, pg, and their product pT for two cases, with map, mest, dobs, and dpre marked]

  14. solution to inverse problem: the maximum likelihood point of pT(m, d) (with pN constant) simultaneously gives mest and dpre
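As a rough numerical sketch of this maximum likelihood point (with made-up Gaussian widths and an assumed linear theory d = m + 1, not the lecture's exact figure), the total p.d.f. pT ∝ pA pg can be evaluated on a grid and its peak located:

```python
import numpy as np

# Grid over model parameter m and datum d
m = np.linspace(0.0, 5.0, 501)
d = np.linspace(0.0, 5.0, 501)
M, D = np.meshgrid(m, d, indexing="ij")

# A priori p.d.f. pA(m) pA(d): Gaussians about a prior model map and
# an observed datum dobs (centers and widths are assumed for illustration)
map_, dobs = 2.0, 3.5
sig_m, sig_d = 0.5, 0.5
pA = np.exp(-0.5 * ((M - map_) ** 2 / sig_m**2 + (D - dobs) ** 2 / sig_d**2))

# Inexact theory p.d.f. pg(m, d): a fuzzy band about d = g(m) = m + 1
sig_g = 0.25
pg = np.exp(-0.5 * (D - (M + 1.0)) ** 2 / sig_g**2)

# Total p.d.f. (null distribution pN taken constant): pT ∝ pA pg;
# its peak simultaneously gives mest and dpre
pT = pA * pg
i, j = np.unravel_index(np.argmax(pT), pT.shape)
mest, dpre = m[i], d[j]
```

The peak falls near m ≈ 2.22, d ≈ 3.28: a compromise between what the prior (m = 2, d = 3.5) and the theory (d = m + 1) each prefer.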

  15. pT(m, d): the probability that the estimated model parameters are near m and the predicted data are near d. pT(m): the probability that the estimated model parameters are near m, irrespective of the value of the predicted data.

  16. conceptual problem: pT(m, d) and pT(m) do not necessarily have maximum likelihood points at the same value of m

  17. [figure: the total p.d.f. pT(m, d) in the (m, d) plane with its maximum at (mest, dpre); below, two curves of p(m) — the slice of pT through the joint maximum and the marginal of pT — whose maxima fall at different values of m]

  18. illustrates the problem in defining a definitive solution to an inverse problem

  19. illustrates the problem in defining a definitive solution to an inverse problem; fortunately, if all distributions are Gaussian, the two points are the same
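This coincidence can be checked numerically; a sketch with an assumed correlated 2-D Gaussian (not the lecture's figure), comparing the m at the joint peak of pT(m, d) against the peak of the marginal pT(m):

```python
import numpy as np

# A correlated 2-D Gaussian pT(m, d) on a grid (coefficients assumed)
m = np.linspace(-5.0, 5.0, 401)
d = np.linspace(-5.0, 5.0, 401)
M, D = np.meshgrid(m, d, indexing="ij")

# Quadratic form with a cross term, centered at (m, d) = (1, 2);
# 4*2.0*1.5 > 1.8**2, so the form is positive definite
Q = 2.0 * (M - 1.0) ** 2 + 1.5 * (D - 2.0) ** 2 + 1.8 * (M - 1.0) * (D - 2.0)
pT = np.exp(-0.5 * Q)

# m at the joint maximum likelihood point of pT(m, d)
i, _ = np.unravel_index(np.argmax(pT), pT.shape)
m_joint = m[i]

# m at the maximum of the marginal pT(m) = integral of pT(m, d) over d
m_marginal = m[np.argmax(pT.sum(axis=1))]
```

For a Gaussian, both maxima land at the mean (here m = 1), so `m_joint` and `m_marginal` agree to within the grid spacing; for a non-Gaussian pT they generally would not.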

  20. Part 2 Solution of the inexact linear Gaussian inverse problem

  21. Gaussian a priori information

  22. Gaussian a priori information: a priori values of the model parameters <m> and their uncertainty [cov m]A, i.e. pA(m) ∝ exp( -(1/2) (m - <m>)T [cov m]A-1 (m - <m>) )

  23. Gaussian observations

  24. Gaussian observations: observed data dobs and measurement error [cov d], i.e. pA(d) ∝ exp( -(1/2) (d - dobs)T [cov d]-1 (d - dobs) )

  25. Gaussian theory

  26. Gaussian theory: linear theory d = Gm with uncertainty [cov g] in the theory, i.e. pg(m, d) ∝ exp( -(1/2) (d - Gm)T [cov g]-1 (d - Gm) )

  27. mathematical statement of problem find (m,d) that maximizes pT(m,d) = pA(m) pA(d) pg(m,d) and, along the way, work out the form of pT(m,d)

  28. notational simplification: group m and d into a single vector x = [dT, mT]T; group [cov m]A and [cov d]A into a single covariance matrix; write d - Gm = 0 as Fx = 0 with F = [I, -G]
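As a sanity check of this notation (a tiny made-up example; note the sign convention F = [I, -G], which is what makes Fx reproduce d - Gm):

```python
import numpy as np

# A tiny linear theory d = G m (values made up for illustration)
G = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
m = np.array([1.0, -1.0])
d = G @ m                          # data that satisfy the theory exactly

# Group d and m into the single vector x = [d^T, m^T]^T
x = np.concatenate([d, m])

# With F = [I, -G], the theory d - Gm = 0 becomes Fx = 0
F = np.hstack([np.eye(3), -G])
residual = F @ x                   # zero whenever the theory is satisfied
```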

  29. after much algebra, we find pT(x) is a Gaussian distribution with mean and variance

  30. after much algebra, we find pT(x) is a Gaussian distribution with mean solution to inverse problem and variance

  31. after pulling mest out of x*

  32. after pulling mest out of x*: reminiscent of GT(GGT)-1, the minimum length solution

  33. after pulling mest out of x*: error in theory adds to error in data

  34. after pulling mest out of x*: the solution depends on the values of the prior information only to the extent that the model resolution matrix is different from an identity matrix

  35. and after algebraic manipulation, which also equals: reminiscent of (GTG)-1 GT, the least squares solution

  36. interesting aside: the weighted least squares solution is equal to the weighted minimum length solution
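This equality can be checked numerically. A minimal sketch with a random G and assumed diagonal covariances (here Cd can be read as the combined data-plus-theory covariance of slide 33, and Cm as the prior model covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4                       # N data, M model parameters
G = rng.standard_normal((N, M))   # linear theory d = G m

# Covariances: combined data/theory error Cd and prior model covariance Cm
Cd = 0.1 * np.eye(N)
Cm = 2.0 * np.eye(M)

# Weighted (damped) least squares form: [GT Cd^-1 G + Cm^-1]^-1 GT Cd^-1
Gls = np.linalg.solve(G.T @ np.linalg.inv(Cd) @ G + np.linalg.inv(Cm),
                      G.T @ np.linalg.inv(Cd))

# Weighted minimum length form: Cm GT [G Cm GT + Cd]^-1
Gml = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)

diff = np.abs(Gls - Gml).max()    # zero up to floating-point round-off
```

The agreement is a matrix identity (a "push-through" identity), not an accident of these particular values.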

  37. what did we learn? for the linear Gaussian inverse problem, inexactness of the theory just adds to the inexactness of the data

  38. Part 3 Use maximization of relative entropy as a guiding principle for solving inverse problems

  39. from last lecture

  40. assessing the information content in pA(m): do we know a little about m or a lot about m?

  41. Information Gain: S = ∫ pA(m) ln[ pA(m) / pN(m) ] dm; -S is called the Relative Entropy

  42. [figure: (A) the p.d.f.s pA(m) and pN(m); (B) the information gain S as a function of the prior width σmA — S shrinks toward zero as pA broadens toward the null distribution pN]
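The information gain for a pair of Gaussian p.d.f.s like those in the figure can be computed directly; a sketch with assumed widths (a prior with σA = 1 against a broad null distribution with σN = 5):

```python
import numpy as np

# Null p.d.f. pN: broad Gaussian; prior pA: narrower Gaussian, same mean
mu, sig_N, sig_A = 0.0, 5.0, 1.0
m, dm = np.linspace(-40.0, 40.0, 160001, retstep=True)

# Work with log-densities so the far tails give 0 * (finite), not 0 * inf
log_pA = -0.5 * ((m - mu) / sig_A) ** 2 - np.log(sig_A * np.sqrt(2.0 * np.pi))
log_pN = -0.5 * ((m - mu) / sig_N) ** 2 - np.log(sig_N * np.sqrt(2.0 * np.pi))

# Information gain S = integral of pA ln(pA / pN) dm  (-S is the relative entropy)
S_numeric = np.sum(np.exp(log_pA) * (log_pA - log_pN)) * dm

# Closed form for two Gaussians with a common mean
S_exact = np.log(sig_N / sig_A) + sig_A**2 / (2.0 * sig_N**2) - 0.5
```

Widening σA toward σN drives S to zero, reproducing the trend in panel (B): a prior as broad as the null distribution carries no information gain.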

  43. Principle of Maximum Relative Entropy or if you prefer Principle of Minimum Information Gain

  44. find the solution p.d.f. pT(m) that has the largest relative entropy as compared to the a priori p.d.f. pA(m); or, if you prefer, find the solution p.d.f. pT(m) that has the smallest possible new information as compared to the a priori p.d.f. pA(m)

  45. constraints: the p.d.f. is properly normalized; the data are satisfied in the mean, i.e. the expected value of the error is zero

  46. After minimization using the Lagrange multipliers process, pT(m) is Gaussian with maximum likelihood point mest satisfying

  47. After minimization using the Lagrange multipliers process, pT(m) is Gaussian with maximum likelihood point mest satisfying ... just the weighted minimum length solution

  48. What did we learn? Only that the Principle of Maximum Entropy is yet another way of deriving the inverse problem solutions we are already familiar with

  49. Part 4 The F-test as a way to determine whether one solution is better than another
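The remaining slides of Part 4 are not reproduced in this transcript. As a sketch of the idea: the F-test compares the estimated variances of two fits, E1/ν1 and E2/ν2, where E is the total squared error and ν the degrees of freedom, and judges their ratio against the F(ν1, ν2) distribution. A minimal numerical sketch with made-up data (constant fit versus straight-line fit):

```python
import numpy as np

# Synthetic data: a straight-line trend plus fixed "noise" (values made up)
x = np.arange(10.0)
d = 1.0 + 0.5 * x + np.array([0.1, -0.2, 0.05, 0.15, -0.1,
                              0.2, -0.05, -0.15, 0.1, -0.1])

# Solution A: fit a constant (1 model parameter)
GA = np.ones((10, 1))
mA, *_ = np.linalg.lstsq(GA, d, rcond=None)
EA = np.sum((d - GA @ mA) ** 2)
nuA = 10 - 1                      # degrees of freedom

# Solution B: fit a straight line (2 model parameters)
GB = np.column_stack([np.ones(10), x])
mB, *_ = np.linalg.lstsq(GB, d, rcond=None)
EB = np.sum((d - GB @ mB) ** 2)
nuB = 10 - 2

# F-ratio of the two estimated variances; in practice this is compared
# against a critical value of the F(nuA, nuB) distribution
F = (EA / nuA) / (EB / nuB)
```

Here the ratio is far above 1, so the straight-line solution is significantly better; a ratio near 1 would mean the extra model parameter is not justified by the data.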
