Role of Impact Evaluations in the Growth of STIP Programs at USAID

Heather Huntington, PhD
Cloudburst Group
 
Background
 
Democracy, Human Rights and Governance
Democracy Fellow (2012) – USAID’s DCHA/DRG Center
 
 
Land Tenure and Natural Resource Management
Task lead for evaluation portfolio (2014–present) – USAID’s E3/Land Office
 
Evaluations – USAID
 
Ethiopia: Pastoral rights certification programs (2)
Ethiopia: Farmland rights certification program
Ghana: Local governance and service delivery
Guinea: Community land and artisanal diamond rights
Liberia: Community land rights protection program
Zambia: Agroforestry and land certification pilot
Zambia: REDD+ pilot, global climate change
 
 
 
Role of IEs in growth of STIP
 
Learning from IEs is still at a nascent stage at USAID, with wide variation across sectors, Offices, and Missions.
Current evaluation practices and results still do not provide compelling evidence of the impacts of many programs.
There is still much work to be done to improve IEs and the learning drawn from research and evaluation.
 
Early stage of building a learning agenda
 
Justification for programming funds
Policy requirement (USAID Evaluation Policy)
Often an afterthought, a separate box to check
Example: mobile application technologies
Offices/Missions vary in how much they are willing to invest in IEs
Pressure/support from leadership
USAID internal technical capacity and interest
Dearth of evidence on the potential impact of many interventions
Too early for results to drive programming
IEs required for pilots (USAID Evaluation Policy)
 
Need to improve R&E and data-driven approaches
 
Many programs (as designed) are not amenable to rigorous evaluation
Evaluation not integrated into program design
‘Christmas tree’ programs, overloaded with too many components to evaluate cleanly
Low internal technical capacity
Recruitment is difficult; talent turnover is high
Coordination is critical; over-reliance on contractors
Rigorous evaluation is not yet a norm
Lack of institutions and processes for effective R&E and for building a sustainable learning agenda
 
IEs are difficult, time-consuming, and expensive (the sample-size sketch below illustrates why)
Resistance is both internal and external (implementing partners)
IEs can make programming more difficult and expensive
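
To make the cost point concrete, here is a minimal sketch of the sample-size arithmetic behind a two-arm IE. The effect size, intra-cluster correlation, and cluster size are illustrative assumptions, not figures from any USAID evaluation, and it assumes statsmodels is available.

    # Illustrative power calculation for a two-arm impact evaluation.
    # Every parameter below is a hypothetical placeholder.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.2  # assumed standardized effect (Cohen's d)
    n_per_arm = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=0.05, power=0.80)

    # Field IEs usually randomize clusters (e.g., villages), which inflates
    # the sample by the design effect: DEFF = 1 + (m - 1) * ICC.
    m, icc = 20, 0.05  # assumed households per village, intra-cluster corr.
    deff = 1 + (m - 1) * icc
    print(round(n_per_arm))         # ~393 respondents per arm, unclustered
    print(round(n_per_arm * deff))  # ~767 per arm once clustering is priced in

Even under these modest assumptions, a two-round panel quickly runs into thousands of interviews, which is where much of the cost comes from.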
 
 
What has been accomplished? (8 years)
 
IEs have become a more important part of USAID’s M&E portfolio
USAID policies: Evaluation (2011), Research, Development Data Library
Independent, third-party evaluations
Significant improvement in rigor and methods (a sketch of one common design follows below)
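
As an illustration of the kind of design behind that improvement in rigor, below is a minimal difference-in-differences sketch. The data file, column names, and clustering variable are hypothetical, not drawn from any of the evaluations listed earlier; it assumes pandas and statsmodels are available.

    # Minimal difference-in-differences sketch on a hypothetical two-round panel.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("panel.csv")  # placeholder: outcome, treated, post, village

    # The treated:post interaction is the DiD impact estimate; clustering
    # standard errors at the unit of randomization (here, village) is standard.
    model = smf.ols("outcome ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["village"]})
    print(model.summary())  # read off the 'treated:post' coefficient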
 
Growing R&E and data-driven culture
Greater leadership support
Missions and Offices applying best practices and innovative, standardized methods
Mobile data collection
Geospatial integration (see the sketch after this list)
Improved survey instruments
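
One concrete form of that geospatial integration is tagging mobile-collected survey points with the program area they fall inside. The sketch below is hypothetical in its file names and columns and assumes geopandas is installed.

    # Hypothetical geospatial integration: attach each mobile survey record
    # to the treatment-area polygon that contains it.
    import geopandas as gpd
    import pandas as pd

    surveys = pd.read_csv("mobile_surveys.csv")  # placeholder: lon, lat, household_id
    points = gpd.GeoDataFrame(
        surveys,
        geometry=gpd.points_from_xy(surveys["lon"], surveys["lat"]),
        crs="EPSG:4326")

    areas = gpd.read_file("treatment_areas.geojson")  # placeholder polygons
    tagged = gpd.sjoin(points, areas[["area_id", "geometry"]],
                       how="left", predicate="within")
    print(tagged[["household_id", "area_id"]].head())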
 
 
Continued…
 
Attempts to build internal capacity
Improved knowledge, awareness, and training
Focused recruitment (AAAS, Democracy Fellows) and hiring staff with research/evaluation backgrounds
Learning from baseline data collection (see the balance-check sketch after this list)
Adapt interventions based on pre-treatment data
Sponsor research on baseline data
Improve developing-country capacity for R&E
Emphasis on engaging local data collection partners
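
A minimal sketch of that kind of pre-treatment check follows; the file and variable names are hypothetical, and it assumes pandas and scipy are available.

    # Hypothetical baseline balance check: compare treatment and control means
    # on pre-treatment covariates before the intervention is fielded.
    import pandas as pd
    from scipy import stats

    base = pd.read_csv("baseline.csv")               # placeholder file
    covariates = ["land_area", "hh_size", "income"]  # assumed columns

    treat = base[base["treated"] == 1]
    control = base[base["treated"] == 0]
    for var in covariates:
        t, p = stats.ttest_ind(treat[var], control[var], nan_policy="omit")
        diff = treat[var].mean() - control[var].mean()
        print(f"{var}: diff={diff:.2f}, p={p:.3f}")
    # Large imbalances flag covariates to adjust for, or a need to re-randomize.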
 
What further steps could be taken?
 
(1) USAID does not yet have the internal capacity to assess the impact and effectiveness of its programs.
Building USAID internal capacity is critical to developing both high-quality evaluations and a sustainable learning agenda regarding the impact of STIP programs.
Managing rigorous impact evaluations and promoting an associated learning agenda is a significant time commitment. It requires:
(a) very close coordination with technical and program staff, plus evaluation and implementation teams
(b) a deep understanding of both programming and evaluation components
Need to take steps to recruit and retain staff with the necessary background and skills, and to prioritize this role
 
Continued…
 
(2) Evaluation designs must be embedded in program designs at an early stage of program development
(3) Rebuild institutional learning capacity
Lack of organizational mechanisms to integrate findings and learn from research
Low capacity to absorb and disseminate the results of evaluations
(4) Need continued support and incentives from senior leadership
Continued skepticism of IE methods; they are seen as cumbersome, expensive, and inflexible for programming
 
Resources
 
https://www.usaid.gov/evaluation
USAID Evaluation Policy (2011)
Examples of learning integration (education, food security, policy updates, program design)
USAID Evaluation 5-Year Report (2016)
National Academies report (2008), Improving Democracy Assistance: Building Knowledge Through Evaluations and Research (http://pdf.usaid.gov/pdf_docs/Pnadl231.pdf)
www.usaidlandtenure.net, the E3/Land Office evaluation and impact site