Cost Analysis for Evaluation: Strategies and Methods

This document covers cost-effectiveness analysis for evaluation, emphasizing the importance of defining measures of effectiveness, distinguishing intermediate from final outcomes, establishing effectiveness through causal analysis, and selecting among types of research designs. It also highlights the role of internal validity in causal analysis and outlines three primary types of research design: randomized experiments, quasi-experiments, and correlational studies.



Presentation Transcript


  1. EVAL 6970: Cost Analysis for Evaluation. Dr. Chris L. S. Coryn and Nick Saxton. Fall 2014.

  2. Agenda: Cost-effectiveness analysis; Activity

  3. Cost-effectiveness analysis

  4. Defining measures of effectiveness. All cost-effectiveness analyses require comparison of the costs and effects of two or more alternatives. For each alternative, measures of effectiveness should be equivalent. Measures of effectiveness should have a reliability coefficient of at least .70; otherwise, there is more noise (i.e., error) than signal.

  5. Intermediate versus final outcomes. Very often, only intermediate outcomes related to effectiveness can be measured. With multiple outcomes, either conduct a separate analysis for each outcome or conduct a cost-utility analysis (to weight the importance of each outcome).
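
A minimal sketch of the weighting idea behind cost-utility analysis, assuming three hypothetical outcomes with illustrative importance weights and standardized effects (none of these names or values come from the course materials):

```python
# Hypothetical importance weights (summing to 1) and standardized effects per outcome
weights = {"test_scores": 0.6, "attendance": 0.3, "engagement": 0.1}
effects = {"test_scores": 0.40, "attendance": 0.25, "engagement": 0.10}

# Weighted composite effect, usable as the single effectiveness measure (E)
composite = sum(weights[k] * effects[k] for k in weights)
print(composite)  # 0.325
```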

  6. Methods of establishing effectiveness. Effectiveness, by definition, implies that any observed outcomes were caused by the intervention. A cause is that which precedes an effect; an effect is that which follows a presumed cause. NOTE: This is an intentional oversimplification of causation.

  7. In causal analysis, the primary concern is internal validity. Internal validity is the validity of inferences about whether observed covariation between A (treatment/cause) and B (outcome/effect) reflects a causal relationship from A to B as those variables were manipulated or measured. Validity is the approximate truthfulness or correctness of an inference.

  8. The authors describe three primary types of design: (1) randomized experiments, (2) quasi-experiments, and (3) correlational. NOTE: Correlational designs are not designs in the true sense of research design; they are a class of analyses of relationships.

  9. Research designs are the strongest source/procedure for making causal claims. Consider the basic randomized experimental design, where R denotes random assignment, X a treatment, and O an observation:
       R   X   O
       R       O

  10. Threats to validity
      1. Ambiguous temporal precedence. Lack of clarity about which variable occurred first may yield confusion about which variable is the cause and which is the effect
      2. Selection. Systematic differences over conditions in respondent characteristics that could also cause the observed effect
      3. History. Events occurring concurrently with treatment that could cause the observed effect
      4. Maturation. Naturally occurring changes over time that could be confused with a treatment effect
      5. Regression. When units are selected for their extreme scores, they will often have less extreme scores on other variables, an occurrence that can be confused with a treatment effect
      6. Attrition. Loss of respondents to treatment or measurement can produce artifactual effects if that loss is systematically correlated with conditions
      7. Testing. Exposure to a test can affect test scores on subsequent exposures to that test, an occurrence that can be confused with a treatment effect
      8. Instrumentation. The nature of a measure may change over time or conditions in a way that could be confused with a treatment effect
      9. Additive and interactive threats. The impact of a threat can be added to that of another threat or may depend on the level of another threat

  11. Discounting effects. Many interventions are only one year in length; if so, discounting may be ignored. Others are several years in duration; this requires discounting (i.e., adjusting) costs and effects over time. Essentially, discounting addresses the question of when effects occur rather than if they occur.

  12. Drop-outs prevented by three hypothetical programs
      Year                                   A     B     C
      1                                      100   20    0
      2                                      0     20    0
      3                                      0     20    0
      4                                      0     20    0
      5                                      0     20    100
      Total (undiscounted)                   100   100   100
      Present value (discount rate of 5%)*   100   91    82
      NOTE: *To estimate present value (PV), use the formula presented in Chapter 5; discount rates typically range from 3% to 5%.
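
As a minimal sketch of the discounting behind the present-value row, assuming effects in year 1 are undiscounted and each later year t is divided by (1 + r)^(t-1), the table's values can be reproduced as follows:

```python
def present_value(effects_by_year, discount_rate=0.05):
    """Discount a stream of yearly effects to present value (year 1 undiscounted)."""
    return sum(e / (1 + discount_rate) ** t for t, e in enumerate(effects_by_year))

programs = {
    "A": [100, 0, 0, 0, 0],     # all drop-outs prevented in year 1
    "B": [20, 20, 20, 20, 20],  # spread evenly over five years
    "C": [0, 0, 0, 0, 100],     # all drop-outs prevented in year 5
}

for name, effects in programs.items():
    print(name, round(present_value(effects)))  # A = 100, B = 91, C = 82
```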

  13. Analyzing the distribution of effects. If relevant, conduct cost-effectiveness analysis for subgroups (e.g., gender, race/ethnicity). Effects may vary over levels of subgroups. Subgroup analysis should be planned in advance so that sufficient samples of each subgroup are acquired.

  14. Combining costs and effectiveness. The cost-effectiveness ratio (CER) is simply the cost (C) of an alternative divided by its effectiveness (E): CER = C / E. The lower the ratio of costs per unit of effectiveness, the more cost-effective the alternative. NOTE: Do not use effectiveness-cost ratios, even though they are essentially the same!
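
A minimal sketch of the CER computation, using the cost and effectiveness figures that appear in the decision-tree example later in the presentation:

```python
def cer(cost, effectiveness):
    """Cost-effectiveness ratio: cost per unit of effectiveness (lower is better)."""
    return cost / effectiveness

print(cer(100_000, 95))     # ≈ 1052.6 cost per drop-out prevented
print(cer(100_000, 71.75))  # ≈ 1393.7 cost per drop-out prevented
```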

  15. Accounting for uncertainty. Three potential sources:
      1. Uncertainty due to (potential) errors in data (imperfect or missing data)
      2. Uncertainty arising from estimates derived from samples rather than populations
      3. Uncertainty brought about by sometimes arbitrary choices in parameters used (e.g., discount rate)

  16. Sensitivity analysis. Requires comparing high, medium (average), and low estimates. I suggest using the mean (average), lower limit (LL), and upper limit (UL), that is, the 95% confidence interval (CI), for such comparisons. This method does not rely on human judgment, but rather on the observed distribution of effects.
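
A sketch of this suggestion, assuming effects have been observed across several samples (the values below are hypothetical) and using a normal-approximation 95% confidence interval:

```python
import math
import statistics

# Hypothetical observed effects (e.g., drop-outs prevented across sites)
effects = [88, 95, 102, 91, 99, 97, 93]
cost = 100_000

mean = statistics.mean(effects)
se = statistics.stdev(effects) / math.sqrt(len(effects))  # standard error of the mean
ll, ul = mean - 1.96 * se, mean + 1.96 * se               # normal-approximation 95% CI

# CER at the lower limit, mean, and upper limit of effectiveness
for label, e in (("LL", ll), ("mean", mean), ("UL", ul)):
    print(f"{label}: effect = {e:.1f}, CER = {cost / e:,.0f}")
```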

  17. Decision tree and expected value analysis (similar to classical game theory)
      Decision node:
        Certain alternative: Probability = 1.00, 95 drop-outs prevented
        Chance node:
          Probability = 0.15, 170 drop-outs prevented
          Probability = 0.60, 75 drop-outs prevented
          Probability = 0.25, 5 drop-outs prevented

  18. Expected value = 0.15 × 170 + 0.60 × 75 + 0.25 × 5 = 71.75 ≈ 71.8. Cost-effectiveness ratios: Program A = $1,053 ($100,000 / 95); Program B = $1,394 ($100,000 / 71.75).
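
A minimal sketch reproducing the expected-value calculation and the two cost-effectiveness ratios:

```python
# Branches of the chance node: (probability, drop-outs prevented)
branches = [(0.15, 170), (0.60, 75), (0.25, 5)]
expected = sum(p * e for p, e in branches)  # 71.75 (reported as 71.8)

cost = 100_000
print(f"Program A CER: ${cost / 95:,.0f}")        # $1,053 (certain 95 drop-outs prevented)
print(f"Program B CER: ${cost / expected:,.0f}")  # $1,394
```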

  19. Activity

  20. Context: informal science education, that is, science education that takes place outside formal educational structures (e.g., museums, media, etc.). The outcome of interest is students' performance on standardized science tests for children in grades 6-12. NOTE: Estimates of ingredient effects were derived from a regression analysis.

  21. Download the Excel file Cost-Effectiveness Data Set from the course website. Estimate the cost-effectiveness ratios at the ingredient level for each intervention. Estimate the overall cost-effectiveness of each intervention (grand mean). Determine the best allocation of resources within each type of intervention. Determine the best choice of intervention given the cost-effectiveness of each.
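
As a hypothetical starting point for the activity (the actual file name, sheet layout, and column names in the Excel file may differ; intervention, ingredient, cost, and effect are assumed column names):

```python
import pandas as pd

# Assumed structure: one row per intervention/ingredient with its cost and effect
df = pd.read_excel("Cost-Effectiveness Data Set.xlsx")  # adjust path to the downloaded file

# Ingredient-level cost-effectiveness ratios within each intervention
df["cer"] = df["cost"] / df["effect"]
print(df[["intervention", "ingredient", "cer"]])

# Overall (grand mean) cost-effectiveness of each intervention; lower is better
print(df.groupby("intervention")["cer"].mean().sort_values())
```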
