Understanding Hypothesis Testing in Statistics


Hypothesis testing is essential in scientific inquiry, involving the formulation of null and alternative hypotheses at a chosen level of significance. Statistical hypotheses focus on population characteristics and are tested on samples using probability concepts. The null hypothesis assumes no effect, while the alternative hypothesis considers possible treatment effects or differences. These hypotheses must be mutually exclusive and exhaustive, with the alternative hypothesis containing the research predictions.


Uploaded on Jul 30, 2024



Presentation Transcript


  1. Hypothesis Testing: Minding your Ps and Qs and Error Rates Bonnie Halpern-Felsher, PhD

  2. Hypothesis Testing Hypothesis testing is the key to our scientific inquiry. In addition to research hypotheses, we need statistical hypotheses. This involves the statement of a null hypothesis, an alternative hypothesis, and the selection of a level of significance.

  3. Statistical Hypotheses Statements about circumstances in the population whose likely truth or validity the statistical process will examine and decide. Statistical hypotheses are discussed in terms of the population, not the sample, yet are tested on samples. They are based on the mathematical concept of probability. Two kinds: the Null Hypothesis and the Alternative Hypothesis.

  4. Null Hypothesis What is the Null Hypothesis?

  5. Null Hypothesis The case when the two groups are equal; the population means are the same. Null Hypothesis = H0. This is the hypothesis actually being tested. H0 is assumed to be true.

  6. Alternative Hypothesis What is the Alternative Hypothesis?

  7. Alternative Hypothesis The case when the two groups are not equal; when there is some treatment difference; when other possibilities exist. Alternative Hypothesis = H1 or Ha. H1 is assumed to be true when H0 is false.

  8. Statistical Hypotheses The H0 and H1 must be mutually exclusive. The H0 and H1 must be exhaustive; that is, no other possibilities can exist. The H1 contains our research hypotheses.

  9. Statistical Hypotheses Can you give an example of a Null and Alternative Hypothesis?

  10. Null Hypothesis H0: There is no treatment effect; the drug has no effect. H1: There is a treatment effect; the drug had an (some, any) effect.

  11. Evaluation of the Null In order to gain support for our research hypothesis, we must reject the Null Hypothesis, thereby concluding that the alternative hypothesis (likely) reflects what is going on in the population. You can never prove the Alternative Hypothesis!

  12. Significance Level Need to decide on a Significance Level: the probability that the test statistic will reject the null hypothesis when the null hypothesis is true. Significance is a property of the distribution of a test statistic, not of any particular draw of the statistic. It determines the Region of Rejection. Generally 5% or 1%.

  13. Alpha Level The value of alpha (α) is associated with the confidence level of our test; it is the significance level. For results with a 90% level of confidence, the value of α is 1 − 0.90 = 0.10. For results with a 95% level of confidence, the value of α is 1 − 0.95 = 0.05. Typically set at 5% (.05) or 1% (.01).
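The alpha arithmetic above can be checked with a couple of lines of Python (a trivial sketch; the variable names are ours):

```python
# Alpha is the complement of the confidence level.
confidence_levels = [0.90, 0.95, 0.99]
alphas = [round(1 - c, 2) for c in confidence_levels]
print(alphas)  # [0.1, 0.05, 0.01]
```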

  14. p-value The p-value, or calculated probability, is the probability of obtaining a result at least as extreme as the one observed in the study, assuming the null hypothesis (H0) is true. Loosely, the probability that the observed statistic occurred by chance alone.
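As a concrete illustration of this definition, the exact p-value for a simple coin-flip experiment can be computed from the binomial distribution (a standard-library sketch; `binom_p_value` is our own name, not a standard function):

```python
from math import comb

def binom_p_value(k, n, p=0.5):
    """Two-sided exact p-value: the probability of an outcome at
    least as extreme (as improbable) as k heads in n flips, under
    the null hypothesis that the coin is fair."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probability of every outcome no more likely than ours.
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# 8 heads in 10 flips: p ≈ 0.109, so not significant at alpha = .05
print(round(binom_p_value(8, 10), 3))
```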

  15. [Figure slide]

  16. Obtaining Significance Compare the values of alpha and the p-value. There are two possibilities that emerge: The p-value is less than or equal to alpha (e.g., p ≤ .05). In this case we reject the null hypothesis. When this happens we say that the result is statistically significant. In other words, we are reasonably sure that there is something besides chance alone that gave us the observed sample. The p-value is greater than alpha (e.g., p > .05). In this case we fail to reject the H0; the result is not statistically significant, and the observed data are likely due to chance alone.
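The decision rule above is mechanical, which a short sketch makes plain (the function name is ours):

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject H0 when p <= alpha."""
    if p_value <= alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(decide(0.03))  # reject H0 (statistically significant)
print(decide(0.20))  # fail to reject H0 (not statistically significant)
```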

  17. Error In an ideal world we would always reject the null hypothesis when it is false, and we would not reject the null hypothesis when it is indeed true. But there are two other scenarios that are possible, each of which will result in an error: Type I and Type II Errors.

  18. Type I Error Rejection of a null hypothesis that is actually true. Same as a false positive. The alpha value gives us the probability of a Type I error. For example, α = .05 means a 5% chance of rejecting a true null hypothesis.
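The claim that alpha equals the false-positive rate can be checked by simulation: draw many samples with H0 actually true, test each one, and count rejections (a Monte Carlo sketch; the sample size, trial count, and seed are arbitrary choices of ours):

```python
import random
from math import sqrt

random.seed(0)

def z_test_rejects(sample, z_crit=1.96):
    """Two-sided z-test of H0: mean = 0, with known sigma = 1."""
    n = len(sample)
    z = (sum(sample) / n) / (1 / sqrt(n))
    return abs(z) > z_crit

trials = 2000
rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(30)])  # H0 is true
    for _ in range(trials)
)
print(rejections / trials)  # close to 0.05
```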

  19. Type I Error Example: H0 = drug has no effect; H1 = drug has an effect. We reject H0 and instead claim H1 is correct, so we claim that the drug has an effect when indeed it does not. Therefore the drug is falsely claimed to have an effect.

  20. Controlling Type I Error Alpha is the maximum probability of having a Type I error. E.g., with a 95% confidence level, the chance of having a Type I error is 5%. Therefore, a 5% chance of rejecting H0 when H0 is true. That is, 1 out of 20 hypotheses tested will result in a Type I error. We can control Type I error by setting a different α level.

  21. Controlling Type I Error It is particularly important to change the α level to be more conservative if calculating several statistical tests and comparisons. We have a 5% chance of getting a significant result just by chance. So, if running 10 comparisons, we should set a more conservative α level to control for Type I error. Bonferroni correction: .05/10 = .005.
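The inflation of the family-wise error rate, and how the Bonferroni correction reins it in, can be seen directly (a sketch; independence of the 10 tests is assumed for the arithmetic):

```python
alpha = 0.05
m = 10  # number of comparisons

# Chance of at least one false positive across m independent true-null tests
fwer_uncorrected = 1 - (1 - alpha) ** m           # about 0.40, not 0.05
per_test_alpha = alpha / m                        # Bonferroni: 0.005
fwer_corrected = 1 - (1 - per_test_alpha) ** m    # back under 0.05

print(round(fwer_uncorrected, 3), per_test_alpha, round(fwer_corrected, 3))
```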

  22. Type II Error We do not reject a null hypothesis that is false. Like a false negative. E.g., we thought the drug had no effect, when it actually did. The probability of a Type II error is given by the Greek letter beta (β). This number is related to the power or sensitivity of the hypothesis test, denoted by 1 − β.

  23. Controlling Type II Error: Power Power: the probability of rejecting the null hypothesis when it is false. Power = P(reject H0 | H1 is true) = P(accept H1 | H1 is true). As the power increases, the chance of a Type II error (false negative; β) decreases. Power = 1 − β. Power increases with sample size.
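The relationship between sample size and power can be sketched for a one-sided z-test using only the standard library (the effect size, sigma, and the fixed α = .05 critical value 1.645 are our illustrative assumptions):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_one_sided_z(effect, n, sigma=1.0, z_crit=1.645):
    """Power = 1 - beta for a one-sided z-test of a mean shift of
    `effect`, with known sigma and alpha = .05 (z_crit = 1.645)."""
    se = sigma / sqrt(n)
    return 1 - norm_cdf(z_crit - effect / se)

# Power grows with n for the same effect size
print(round(power_one_sided_z(0.5, 10), 2))  # roughly 0.47
print(round(power_one_sided_z(0.5, 30), 2))  # roughly 0.86
```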

  24. Type I and II Errors

                     Reality: H0 true       Reality: H1 true
     Decide H0       1 − α (correct)        β (Type II error)
     Decide H1       α (Type I error)       1 − β (power)

  25. Type I and II Errors

                     Reality: H0 true       Reality: H1 true
     Decide H0       1 − α (correct)        β (Type II error)
     Decide H1       α (Type I error)       1 − β (power)

  The probability of making an error where you decide H1 but H0 is true is α; then the probability of being correct, given that H0 is true, is 1 − α. Similarly, the probability of making an error where you decide H0 yet H1 is true is β; therefore the probability of making a correct decision given that H1 is true is 1 − β.

  26. Questions?
