Binary Hypothesis Testing in ECE 313: Insights and Announcements

The lecture covers insights from homework assignments and group activities in ECE 313, highlighting common mistakes and areas of confusion among students regarding probability distributions. Topics include likelihood ratio tests, binary hypothesis testing, and examples illustrating the challenges faced by students. Announcements regarding upcoming progress meetings and deadlines for assignments are also discussed.



Presentation Transcript


  1. Binary Hypothesis Testing (Continued)
ECE 313: Probability with Engineering Applications, Lecture 22
Ravi K. Iyer
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

  2. Today's Topics and Announcements
Topics:
- Likelihood Ratio Test
- Examples on binary hypothesis testing
- Group Activity 6
Announcements:
- Progress meeting with individual groups: Mon, Apr 24, 1pm-5pm at 249 CSL, 10 minutes per group. Make 3 slides presenting Task 0-2 results: 1 slide on Tasks 0-1, 1 slide on Task 2, and 1 slide on task division and problems encountered.
- Fri, Apr 21, 5pm is the deadline to sign up for a progress meeting. Must keep to this deadline.
- HW 10 is released today and is due on Wed, Apr 26 in class.

  3. Insights from HWs and GAs
HW 6:
- Some students don't know the difference between continuous and discrete distributions; they evaluate a PDF instead of a PMF.
- Many students made a mistake in integrating the PDF to find the CDF.
- Only a few students were able to identify the memoryless property; many were unable to write the correct expression for the conditional probability (a numerical check of the property appears after this list).
- The reliability of a single block was often wrong; wrong answers included $e^{\lambda t}$ and $1 - e^{-\lambda t}$ (the reliability of a single exponential block is $e^{-\lambda t}$).
Quiz 2:
- Differences between the Erlang, hypoexponential, and hyperexponential distributions are not clear.
- The expectation of a Bernoulli distribution was confused with that of a binomial (> 50% of the class).
Group Activity 5:
- The instantaneous failure rate of the exponential distribution, and the instantaneous failure rates for the hypoexponential and hyperexponential, caused trouble.
- The PDF for the hyperexponential is often wrong; students omit the branch weight $\alpha_i$ for each path $i$.
HW 8:
- Confused the CDF with the PDF and used the CDF to evaluate expectation, variance, etc.
- Confusion between the role of the arrival rate $\lambda$ of a Poisson process and its relation to the exponential distribution.
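As an aside on the memoryless-property confusion above, here is a minimal Python sketch (not from the slides; the rate and times are arbitrary choices) verifying that for an exponential random variable $P(X > s + t \mid X > s) = P(X > t)$:

```python
# Numerical check of the memoryless property of the exponential
# distribution: P(X > s + t | X > s) = P(X > t).
import math

lam, s, t = 0.5, 2.0, 3.0   # arbitrary rate and times

def survival(x):
    """P(X > x) for an Exponential(lam) random variable."""
    return math.exp(-lam * x)

lhs = survival(s + t) / survival(s)   # P(X > s + t | X > s)
rhs = survival(t)                     # P(X > t)
print(lhs, rhs)                       # both equal exp(-1.5) ~ 0.2231
```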

  4.-9. [Slides 4-9 are figures; no transcript text is available.]

  10. Likelihood Ratio Test (LRT)
The likelihood ratio test (LRT) generalizes the ML and MAP decision rules into a single framework. Define the likelihood ratio for each possible observation k as the ratio of the two conditional probabilities:
$$\Lambda(k) = \frac{p_1(k)}{p_0(k)} = \frac{P(X = k \mid H_1)}{P(X = k \mid H_0)}$$
A decision rule can then be expressed as an LRT with threshold $\tau$:
declare $H_1$ is true if $\Lambda(X) > \tau$; declare $H_0$ is true if $\Lambda(X) < \tau$.
If the threshold $\tau$ is increased, there are fewer observations that lead to deciding $H_1$ is true. As $\tau$ increases, $p_{\text{false alarm}}$ decreases and $p_{\text{miss}}$ increases.
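To make the threshold's effect concrete, here is a minimal Python sketch (not from the lecture; the two pmfs are hypothetical) showing that raising $\tau$ shrinks the declare-$H_1$ region, so $p_{\text{false alarm}}$ falls while $p_{\text{miss}}$ rises:

```python
# Minimal LRT sketch on a 4-point observation space with hypothetical pmfs.
p0 = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}   # P(X = k | H0)
p1 = {0: 0.1, 1: 0.2, 2: 0.3,  3: 0.4}    # P(X = k | H1)

for tau in (0.5, 1.0, 4.0):
    region = [k for k in p0 if p1[k] / p0[k] > tau]   # declare-H1 region
    p_fa   = sum(p0[k] for k in region)               # P(declare H1 | H0)
    p_miss = 1 - sum(p1[k] for k in region)           # P(declare H0 | H1)
    print(f"tau={tau}: declare H1 on {region}, "
          f"p_fa={p_fa:.2f}, p_miss={p_miss:.2f}")
```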

  11. Likelihood Ratio Test (LRT) (Cont'd)
Suppose the observation is X = k.
The ML rule declares hypothesis $H_1$ is true if $p_1(k) \geq p_0(k)$, and otherwise declares $H_0$ is true. So the ML rule can be specified as an LRT with $\tau = 1$:
$$\Lambda(X) = \frac{p_1(k)}{p_0(k)} \geq 1 \;\Rightarrow\; \text{declare } H_1 \text{ is true}; \qquad \Lambda(X) < 1 \;\Rightarrow\; \text{declare } H_0 \text{ is true}$$
The MAP rule declares hypothesis $H_1$ is true if $\pi_1 p_1(k) \geq \pi_0 p_0(k)$, and otherwise declares $H_0$ is true. So the MAP rule can be specified as an LRT with $\tau = \pi_0 / \pi_1$:
$$\Lambda(X) \geq \frac{\pi_0}{\pi_1} \;\Rightarrow\; \text{declare } H_1 \text{ is true}; \qquad \Lambda(X) < \frac{\pi_0}{\pi_1} \;\Rightarrow\; \text{declare } H_0 \text{ is true}$$
For uniform priors ($\pi_0 = \pi_1$), the MAP and ML decision rules are the same.
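A small sketch (reusing the hypothetical pmfs above, with hypothetical priors) checking that the direct MAP comparison $\pi_1 p_1(k) \geq \pi_0 p_0(k)$ matches the LRT form with $\tau = \pi_0/\pi_1$:

```python
# The MAP rule written two ways must agree for every observation k.
p0 = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}
p1 = {0: 0.1, 1: 0.2, 2: 0.3,  3: 0.4}
pi0, pi1 = 0.6, 0.4                        # hypothetical priors

for k in p0:
    map_direct = pi1 * p1[k] >= pi0 * p0[k]    # posterior comparison
    map_lrt    = p1[k] / p0[k] >= pi0 / pi1    # LRT form, tau = pi0/pi1
    assert map_direct == map_lrt
    print(k, "H1" if map_lrt else "H0")
```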

  12. Example 2: Discrete Case
Suppose you have a coin and you know that either:
$H_1$: the coin is biased, showing heads on each flip with probability 2/3; or
$H_0$: the coin is fair, showing heads and tails each with probability 1/2.
Suppose you flip the coin five times. Let X be the number of times heads shows.
Describe the ML and MAP decision rules using an LRT. Find $p_{\text{false alarm}}$, $p_{\text{miss}}$, and $p_e$ for both decision rules. Use the prior probabilities $(\pi_0, \pi_1) = (0.2, 0.8)$.

  13. Example 2 (Cont'd)
X (the number of times heads shows) has a binomial distribution with n = 5, and:
p = 2/3 under $H_1$ (coin is biased); p = 1/2 under $H_0$ (coin is fair).
Remember that for a binomial distribution $P(X = k) = \binom{5}{k} p^k (1-p)^{5-k}$, so we have:
$$P(X = k \mid H_1) = \binom{5}{k} \left(\tfrac{2}{3}\right)^k \left(\tfrac{1}{3}\right)^{5-k}, \qquad P(X = k \mid H_0) = \binom{5}{k} \left(\tfrac{1}{2}\right)^5$$
The rows of the likelihood matrix consist of the pmf of X (columns X = 0 through X = 5):
$H_1$: $(\tfrac{1}{3})^5$, $5 \cdot \tfrac{2}{3} (\tfrac{1}{3})^4$, $10 (\tfrac{2}{3})^2 (\tfrac{1}{3})^3$, $10 (\tfrac{2}{3})^3 (\tfrac{1}{3})^2$, $5 (\tfrac{2}{3})^4 \tfrac{1}{3}$, $(\tfrac{2}{3})^5$
$H_0$: $(\tfrac{1}{2})^5$, $5 (\tfrac{1}{2})^5$, $10 (\tfrac{1}{2})^5$, $10 (\tfrac{1}{2})^5$, $5 (\tfrac{1}{2})^5$, $(\tfrac{1}{2})^5$
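The likelihood matrix can be reproduced with scipy.stats.binom (a sketch, not part of the original slides):

```python
# Rows of the likelihood matrix: the binomial pmf of X under each hypothesis.
from scipy.stats import binom

n = 5
for label, p in (("H1", 2/3), ("H0", 1/2)):
    row = [binom.pmf(k, n, p) for k in range(n + 1)]
    print(label, [round(v, 4) for v in row])
```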

  14. Example 2 (Cont'd)
In computing the likelihood ratio, the binomial coefficients cancel, so:
$$\Lambda(k) = \frac{(\tfrac{2}{3})^k (\tfrac{1}{3})^{5-k}}{(\tfrac{1}{2})^5} = \frac{2^{k+5}}{3^5} \approx \frac{2^k}{7.6}$$
The ML decision rule is: declare $H_1$ whenever $\Lambda(X) \geq 1$, or equivalently $X \geq 3$.
The MAP decision rule is: declare $H_1$ whenever $\Lambda(X) \geq \frac{0.2}{0.8} = 0.25$, or equivalently $X \geq 1$.
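A short sketch (not from the slides) verifying the ratio and both decision rules:

```python
# Lambda(k) = 2^(k+5) / 3^5 after the binomial coefficients cancel.
def Lambda(k, n=5):
    return 2 ** (k + n) / 3 ** n

for k in range(6):
    ml  = "H1" if Lambda(k) >= 1    else "H0"   # ML threshold tau = 1
    map_ = "H1" if Lambda(k) >= 0.25 else "H0"  # MAP threshold tau = 0.2/0.8
    print(k, round(Lambda(k), 3), ml, map_)
# ML declares H1 exactly when k >= 3; MAP declares H1 when k >= 1.
```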

  15. Example 2 (Cont'd)
For the ML rule (declare $H_1$ when $X \geq 3$):
$$p_{\text{false alarm}} = \left(\tfrac{1}{2}\right)^5 (10 + 5 + 1) = 0.5$$
$$p_{\text{miss}} = \left(\tfrac{1}{3}\right)^5 + 5 \cdot \tfrac{2}{3}\left(\tfrac{1}{3}\right)^4 + 10 \left(\tfrac{2}{3}\right)^2 \left(\tfrac{1}{3}\right)^3 = \frac{51}{243} \approx 0.21$$
$$p_e = (0.2)\, p_{\text{false alarm}} + (0.8)\, p_{\text{miss}} \approx 0.27$$
For the MAP rule (declare $H_1$ when $X \geq 1$):
$$p_{\text{false alarm}} = 1 - \left(\tfrac{1}{2}\right)^5 \approx 0.97$$
$$p_{\text{miss}} = \left(\tfrac{1}{3}\right)^5 = \frac{1}{243} \approx 0.0041$$
$$p_e = (0.2)(0.97) + (0.8)(0.0041) \approx 0.20$$
As expected, the probability of error $p_e$ for the MAP rule is smaller than for the ML rule.
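These numbers can be checked with scipy.stats.binom (a sketch, not from the slides):

```python
# Error probabilities for a rule of the form "declare H1 iff X >= threshold".
from scipy.stats import binom

n, pi0, pi1 = 5, 0.2, 0.8

def errors(threshold):
    p_fa   = 1 - binom.cdf(threshold - 1, n, 1/2)   # P(X >= t | H0)
    p_miss = binom.cdf(threshold - 1, n, 2/3)       # P(X <  t | H1)
    return p_fa, p_miss, pi0 * p_fa + pi1 * p_miss

print(errors(3))   # ML:  (0.5,    0.2099, 0.2679)
print(errors(1))   # MAP: (0.9688, 0.0041, 0.1970)
```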

  16. Example 3
An observation X is drawn from a standard normal distribution (i.e., N(0, 1)) if hypothesis $H_1$ is true, and from a uniform distribution with support $[-a, a]$ if hypothesis $H_0$ is true. As shown in the figure, the pdfs of the two distributions are equal when $|u| = b$.
a) Describe the maximum likelihood (ML) decision rule in terms of the observation X and the constants a and b.
b) Shade and label the regions in the figure such that the area of one region is $p_{\text{false alarm}}$ and the area of the other region is $p_{\text{miss}}$.
c) Express $p_{\text{false alarm}}$ and $p_{\text{miss}}$ for the ML decision rule in terms of a, b, and $\Phi(u)$, the CDF of the standard normal distribution.
d) Determine the maximum a posteriori probability (MAP) decision rule for a = 3/2, b = 0.6, and a given prior probability that hypothesis $H_1$ is true.
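Parts (a) and (c) can be checked numerically. A Python sketch, assuming the part (d) constants a = 3/2 and b = 0.6: since $f_0(u) = 1/(2a)$ on $[-a, a]$ and 0 outside, the ML rule declares $H_1$ when $|X| \leq b$ (where the normal density is at least $1/(2a)$) or when $|X| > a$ (where $f_0 = 0$):

```python
# ML rule for Example 3: X ~ N(0,1) under H1, X ~ Uniform[-a, a] under H0.
from scipy.stats import norm

a, b = 3/2, 0.6   # constants assumed from part (d)

def ml_declares_h1(x):
    return abs(x) <= b or abs(x) > a

p_fa   = b / a                            # P(|X| <= b | H0) = 2b / (2a)
p_miss = 2 * (norm.cdf(a) - norm.cdf(b))  # P(b < |X| <= a | H1)
print(round(p_fa, 3), round(p_miss, 3))   # 0.4, ~0.415
```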

  17. Example 3 (Cont'd) [figure slide; no transcript text is available]

  18. Example 3 (Cont'd)
$$f_1(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}, \qquad f_0(u) = \frac{1}{2a} = \frac{1}{3}$$
[The remainder of the worked MAP solution, comparing $\pi_1 f_1(u)$ against $\pi_0 f_0(u)$, appears only as a figure.]

  19. Conditional Probability Mass Function
Recall that for any two events E and F, the conditional probability of E given F is defined, as long as P(F) > 0, by:
$$P(E \mid F) = \frac{P(E \cap F)}{P(F)}$$
Hence, if X and Y are discrete random variables, then the conditional probability mass function of X given that Y = y is defined by:
$$p_{X \mid Y}(x \mid y) = P(X = x \mid Y = y) = \frac{p(x, y)}{p_Y(y)}$$
for all values of y such that $P\{Y = y\} > 0$.
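A small worked example (the joint pmf values are hypothetical) illustrating the definition:

```python
# Conditional pmf from a joint pmf table: p_{X|Y}(x|y) = p(x, y) / p_Y(y).
joint = {(0, 0): 0.1, (0, 1): 0.2,
         (1, 0): 0.3, (1, 1): 0.4}        # p(x, y), hypothetical

def p_y(y):
    return sum(p for (x, yy), p in joint.items() if yy == y)

def cond_pmf_x_given_y(x, y):
    return joint[(x, y)] / p_y(y)         # defined only when P(Y = y) > 0

print(cond_pmf_x_given_y(0, 1))  # 0.2 / 0.6 = 1/3
print(cond_pmf_x_given_y(1, 1))  # 0.4 / 0.6 = 2/3
```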

  20. Conditional CDF and Expectation
The conditional probability distribution function of X given Y = y is defined, for all y such that P{Y = y} > 0, by:
$$F_{X \mid Y}(x \mid y) = P(X \leq x \mid Y = y) = \sum_{a \leq x} p_{X \mid Y}(a \mid y)$$
Finally, the conditional expectation of X given that Y = y is defined by:
$$E[X \mid Y = y] = \sum_x x \, p_{X \mid Y}(x \mid y)$$
All the definitions are exactly as before, except that everything is now conditional on the event that Y = y. If X and Y are independent, then the conditional mass function, distribution, and expectation are the same as the unconditional ones:
$$p_{X \mid Y}(x \mid y) = p_X(x), \qquad F_{X \mid Y}(x \mid y) = F_X(x), \qquad E[X \mid Y = y] = E[X]$$
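Continuing the same hypothetical joint pmf, a sketch of the conditional CDF and conditional expectation:

```python
# Conditional CDF and conditional expectation from the joint pmf table.
joint = {(0, 0): 0.1, (0, 1): 0.2,
         (1, 0): 0.3, (1, 1): 0.4}        # hypothetical p(x, y)

def p_y(y):
    return sum(p for (x, yy), p in joint.items() if yy == y)

def cond_cdf(x, y):
    """F_{X|Y}(x|y) = P(X <= x | Y = y)."""
    return sum(p for (xx, yy), p in joint.items()
               if yy == y and xx <= x) / p_y(y)

def cond_expectation(y):
    """E[X | Y = y] = sum_x x * p_{X|Y}(x|y)."""
    return sum(xx * p for (xx, yy), p in joint.items() if yy == y) / p_y(y)

print(cond_cdf(0, 1))        # 1/3
print(cond_expectation(1))   # 2/3
```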

  21. Conditional Probability Density Function
If X and Y have a joint probability density function f(x, y), then the conditional probability density function of X, given that Y = y, is defined for all values of y such that $f_Y(y) > 0$ by:
$$f_{X \mid Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)}$$
To motivate this definition, multiply the left side by dx and the right side by (dx dy)/dy to get:
$$f_{X \mid Y}(x \mid y)\, dx = \frac{f(x, y)\, dx\, dy}{f_Y(y)\, dy} \approx \frac{P(x \leq X \leq x + dx,\; y \leq Y \leq y + dy)}{P(y \leq Y \leq y + dy)} = P(x \leq X \leq x + dx \mid y \leq Y \leq y + dy)$$
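A numerical check of the definition, using the common textbook joint density f(x, y) = x + y on the unit square (an assumed example, not from the slides): the conditional density should integrate to 1 over x.

```python
# f_{X|Y}(x|y) = f(x, y) / f_Y(y) is a genuine density in x.
from scipy.integrate import quad

def f(x, y):
    return x + y                              # joint pdf on [0, 1]^2

def f_Y(y):
    return quad(lambda x: f(x, y), 0, 1)[0]   # marginal: y + 1/2

y = 0.3
total, _ = quad(lambda x: f(x, y) / f_Y(y), 0, 1)
print(round(f_Y(y), 4), round(total, 4))      # 0.8, 1.0
```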
