NCHRP Research Report 934: Traffic Forecasting Accuracy Assessment Research

NCHRP Research Report 934 examines the accuracy, reliability, and utility of project-level traffic forecasts in the U.S. The study addresses a significant knowledge gap in travel demand modeling: the unknown accuracy of urban road traffic forecasts. Key objectives include assessing forecast accuracy through statistical analysis of a large sample of projects, identifying sources of forecast error through project-level deep dives, and recommending ways to improve forecasting practice. Challenges such as limited data availability are also highlighted. The research findings are intended to strengthen decision-making processes that rely on traffic forecasts.



Presentation Transcript


  1. May 2019. NCHRP Research Report 934: Traffic Forecasting Accuracy Assessment Research. The findings of NCHRP Project 08-110 have been published as NCHRP Research Report 934: Traffic Forecasting Accuracy Assessment Research.

  2. Prepared by: Greg Erhardt, Jawad Hoque, Mei Chen, Reginald Souleyrette, UNIVERSITY OF KENTUCKY, Lexington, KY; Steve Weller, JACOBS, Alexandria, VA; Elizabeth Sall, URBANLABS, Seattle, WA; David Schmitt, Ankita Chaudhary, Sujith Rapolu, Kyeongsu Kim, CONNETICS TRANSPORTATION GROUP, Orlando, FL; Martin Wachs, UNIVERSITY OF CALIFORNIA, LOS ANGELES, Los Angeles, CA

  3. "The greatest knowledge gap in U.S. travel demand modeling is the unknown accuracy of U.S. urban road traffic forecasts." Hartgen, David T. "Hubris or Humility? Accuracy Issues for the Next 50 Years of Travel Demand Modeling." Transportation 40, no. 6 (2013): 1133-1157.

  4. Project Objectives. "The objective of this study is to develop a process to analyze and improve the accuracy, reliability, and utility of project-level traffic forecasts." -- NCHRP 08-110 RFP. Accuracy is how well the forecast estimates project outcomes. Reliability is the likelihood that someone repeating the forecast will get the same result. Utility is the degree to which the forecast informs a decision.

  5. 1. Research Approach

  6. Research Question and Approach. Question: How accurate are traffic forecasts? Method: Statistical analysis of actual vs. forecast traffic for a large sample of projects after they open. Output: Distribution of expected traffic volume as a function of forecast volume. Question: What are the sources of forecast error? Method: "Deep dives" into forecasts of six substantial projects after they open. Output: Estimated effect of known errors, and remaining unknown error. Question: How can we improve forecasting practice? Method: Derive lessons from this research and review with practitioners. Output: Recommendations for how to learn from past traffic forecasts.

  7. Challenges. Forecast Accuracy Database: project, forecast, and actual traffic information for 2,611 unique projects, used for the Large-N Analysis. Case Studies: 5 projects with archived model runs, used for the Deep Dives. "The lack of availability for necessary data items is a general problem and probably the biggest limitation to advances in the field." Nicolaisen and Driscoll, 2014.

  8. 2. Large-N Analysis

  9. How Accurate Are Traffic Forecasts? Percent difference from forecast = 100 x (Actual - Forecast) / Forecast. On average, the actual traffic volume is about 6% lower than forecast. On average, the actual traffic is about 17% different from forecast.
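To make these metrics concrete, here is a minimal sketch (not taken from the report) of how percent difference from forecast and mean absolute percent error could be computed from a table of forecast and counted volumes; the column names and values are illustrative.

```python
import pandas as pd

# Illustrative data: one row per project, with the forecast ADT and the
# ADT counted after opening (values and column names are hypothetical).
projects = pd.DataFrame({
    "forecast_adt": [12000, 45000, 8000, 23000],
    "actual_adt":   [11000, 43500, 9200, 20000],
})

# Percent difference from forecast: 100 * (actual - forecast) / forecast
pdff = 100 * (projects["actual_adt"] - projects["forecast_adt"]) / projects["forecast_adt"]

print(f"Mean percent difference from forecast: {pdff.mean():.1f}%")  # negative -> actual below forecast
print(f"Mean absolute percent error: {pdff.abs().mean():.1f}%")
```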

  10. How Accurate Are Traffic Forecasts? Percent difference from forecast = 100 x (Actual - Forecast) / Forecast. Traffic forecasts are more accurate, in percentage terms, for higher volume roads.

  11. Estimating Uncertainty. The quantile regression models presented in this research provide a means of estimating the range of uncertainty around a forecast.
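As an illustration of the approach (not the models estimated for the report), the sketch below fits quantile regressions of actual on forecast volume with statsmodels and uses them to bracket a new forecast; the synthetic data, chosen quantiles, and error spread are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical archive of past forecasts and observed outcomes.
rng = np.random.default_rng(0)
forecast = rng.uniform(5_000, 60_000, size=500)
actual = forecast * rng.normal(0.94, 0.17, size=500)  # roughly 6% low on average, wide spread
df = pd.DataFrame({"forecast": forecast, "actual": actual})

# Fit conditional quantiles of actual traffic given the forecast volume.
fits = {q: smf.quantreg("actual ~ forecast", df).fit(q=q) for q in (0.05, 0.50, 0.95)}

# Uncertainty window around a hypothetical new forecast of 30,000 ADT.
new = pd.DataFrame({"forecast": [30_000]})
low, mid, high = (fits[q].predict(new)[0] for q in (0.05, 0.50, 0.95))
print(f"Expected range: {low:,.0f} to {high:,.0f} (median {mid:,.0f})")
```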

  12. Large-N Results. Some 95% of forecasts reviewed are accurate to within half of a lane. Traffic forecasts show a modest bias, with actual ADT about 6% lower than forecast ADT. Traffic forecasts had a mean absolute percent error of 25% at the segment level and 17% at the project level.

  13. Large-N Results. Traffic forecasts are more accurate for: higher volume roads; higher functional classes; shorter time horizons; travel models over traffic count trends; opening years with unemployment rates close to those in the forecast year; and more recent opening and forecast years.

  14. 3. Deep Dive Results

  15. Deep Dives. Projects selected for deep dives: Eastown Road Extension Project, Lima, Ohio; Indian River Street Bridge Project, Palm City, Florida; Central Artery/Tunnel Project, Boston, Massachusetts; Cynthiana Bypass Project, Cynthiana, Kentucky; South Bay Expressway Project, San Diego, California; US-41 (later renamed I-41) Project, Brown County, Wisconsin.

  16. Deep Dive Methodology. Collect data: public documents, project-specific documents, and model runs. Investigate sources of error cited in previous research: employment, population projections, etc. Adjust forecasts by elasticity analysis (a rough sketch follows below); run the model with updated information.
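As a rough illustration of the elasticity-adjustment step, the sketch below rescales a forecast for the gap between an assumed and an observed input using a constant-elasticity form; the elasticity value, input variable, and numbers are assumptions for illustration, not figures from the report.

```python
def elasticity_adjusted_forecast(forecast_adt: float,
                                 forecast_input: float,
                                 actual_input: float,
                                 elasticity: float) -> float:
    """Rescale a traffic forecast for an input variable that turned out
    differently than assumed, using a constant-elasticity relationship:
    adjusted = forecast * (actual_input / forecast_input) ** elasticity
    """
    return forecast_adt * (actual_input / forecast_input) ** elasticity

# Hypothetical example: employment came in 10% below the projection used
# in the forecast, with an assumed traffic-to-employment elasticity of 0.8.
adjusted = elasticity_adjusted_forecast(
    forecast_adt=25_000,
    forecast_input=110_000,   # employment assumed when the forecast was made
    actual_input=99_000,      # employment observed in the opening year
    elasticity=0.8,
)
print(f"Adjusted forecast: {adjusted:,.0f} ADT")
```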

  17. Deep Dives: General Conclusions. The reasons for forecast inaccuracy are diverse. Employment, population, and fuel price forecasts often contribute to forecast inaccuracy. External traffic and travel speed assumptions also affect traffic forecasts. Better archiving of models, better forecast documentation, and better validation are needed.

  18. 4. Recommendations

  19. 1. Use a range of forecasts to communicate uncertainty. Report a range of forecasts. Use quantile regression. If the project were at the low/high end of the forecast range, would it change the decision?

  20. 2. Archive your forecasts. 1. Bronze Level: Record basic forecast and actual traffic information in a database. 2. Silver Level: Bronze Level + document the forecast in a semi-standardized report. 3. Gold Level: Silver Level + make the forecast reproducible.
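As one possible starting point for a Bronze Level archive, the sketch below creates a simple SQLite table for forecast and actual traffic records; the field names are illustrative and not a schema specified by the report.

```python
import sqlite3

# Bronze Level: one record per forecast, with the actual count filled in
# once the project has opened and counts are available.
conn = sqlite3.connect("forecast_archive.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS forecasts (
        project_id     TEXT,
        segment_id     TEXT,
        forecast_year  INTEGER,
        opening_year   INTEGER,
        forecast_adt   REAL,
        actual_adt     REAL,   -- filled in after the project opens
        method         TEXT,   -- e.g. travel model vs. traffic count trend
        agency         TEXT
    )
""")
conn.execute(
    "INSERT INTO forecasts VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("P-001", "S-12", 2015, 2024, 18_500, None, "travel model", "Example DOT"),
)
conn.commit()
conn.close()
```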

  21. 3. Periodically Report the Accuracy. Reporting provides empirical information on uncertainty. Reporting ensures a degree of accountability and transparency.

  22. 4. Use Past Results to Improve Forecasting Methods. Evaluate past forecasts to learn about weaknesses of the existing model and identify needed improvements. Test the ability of the new model to predict those project-level changes: do the improvements help? (A sketch of this comparison follows below.) Estimate local quantile regression models: is my range narrower than my peers'? We build models to predict change. We should evaluate them on their ability to do so.
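One way this comparison could look in practice, assuming an archive that records which model version produced each forecast (the column names and values below are hypothetical): compute the error distribution for old and new methods and see whether the improvements tighten it.

```python
import pandas as pd

# Hypothetical archive with a column recording which model version
# produced each forecast.
archive = pd.DataFrame({
    "model_version": ["old", "old", "old", "new", "new", "new"],
    "forecast_adt":  [12000, 30000, 8000, 15000, 27000, 9000],
    "actual_adt":    [10000, 33000, 9500, 14200, 26100, 9300],
})

# Absolute percent error for each archived forecast.
archive["abs_pct_error"] = (
    100 * (archive["actual_adt"] - archive["forecast_adt"]).abs() / archive["forecast_adt"]
)

# Did the new model tighten the error distribution?
print(archive.groupby("model_version")["abs_pct_error"].agg(["mean", "median"]))
```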

  23. Why? 1. Giving a range makes the forecast more likely to be right. 2. Archiving forecasts and data provides evidence for the effectiveness of the tools used. 3. Data to improve models: testing predictions is the foundation of science. Together, the goal is not only to improve forecasts, but to build credibility.

  24. Questions & Discussion
