Enhancing Validity and Clarity in Assessment Reports through Reader Interpretation


Discussant comments by Prof. Gavin T. L. Brown focus on ensuring that assessment reports lead to correct interpretations by their intended users. The validity of reports hinges on readers making appropriate inferences, and taking appropriate actions, about the test taker's performance based on the test scores. By checking how readers answer two key questions, "What do you see?" and "What would you do next?", developers can verify that reader interpretations align with the report's intentions. The discussion also welcomes the growing place of the psychology of assessment within educational measurement research.



Presentation Transcript


  1. April 7, NCME coordinated session: Discussant comments. Prof. dr. Gavin T. L. Brown, University of Auckland (gt.brown@auckland.ac.nz); Umeå universitet (gavin.brown@umu.se)

  2. Ensuring report designs lead to correct interpretation by intended users. Follows up the challenge enunciated by Hattie & Brown (2010, pp. 110-111): "We have argued that the validity of reports is a function of the reader's correct and appropriate inferences and/or actions about the test taker's performance based on the scores from the test. This claim places much reliance on the developer of reports to provide compelling evidence that readers make correct and appropriate inferences and actions based on the reports provided. To address this claim about validity, there is a minimum requirement to provide evidence that readers correctly answer two major questions: What do you see? What would you do next? These two questions focus on the two most critical aspects of validity: that appropriate interpretations and actions from reports are undertaken by readers and that how readers answer these questions is aligned with the intentions of the report developer." Hattie, J. A., & Brown, G. T. L. (2010). Assessment and evaluation. In C. Rubie-Davies (Ed.), Educational psychology: Concepts, research and challenges (pp. 102-117). Abingdon, UK: Routledge.

  3. It's good to see the psychology of assessment research being fully part of NCME, so the whole group of studies is to be applauded for bringing users into focus:
  - Clarity of curriculum framing as learning progressions or maps
  - Interactive efforts to use end-user input as part of the design and validation of report designs
  - Detailed and rich work with multiple methods
  - Comparison of teachers with parents across jurisdictions: context changes things, but what makes NC different?
  - Publishable studies

  4. Frontiers in Education: a high-visibility journal publishing exceptional research across the Education field. Chief Editor: Gavin T L Brown, The University of Auckland, New Zealand. Why publish with us: an outstanding Editorial Board of internationally recognized experts; Gold Open Access (all content is free to read); rigorous, transparent peer review (high-quality and fast, 90 days from submission to final decision); worldwide visibility (indexed in Google Scholar, DOAJ, CrossRef, CLOCKSS, ERIH PLUS); advanced article metrics; extensive promotion of impactful research. Contact us: education@frontiersin.org. Follow us: @FrontEducation

  5. Bonner, S. M. (2016). Teachers' perceptions about assessment: Competing narratives. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 21-39). New York: Routledge. Reports must deal with these constraints and requirements whether we like it or not; multiple reports are the solution. Furthermore, those functions interact with teachers' and schools' teaching and learning theories, beliefs, priorities, policies, and practices:
  a. What are the social norms and policy constraints upon teacher action (Ajzen's Theory of Planned Behaviour)? These contexts really matter, as seen in the contrast between states and between teachers & parents in the Zenisky paper.
  b. What is teaching, learning, curriculum (Fives & Buehl: teacher beliefs as filters, guides)?
  c. Actual pre-service training and socialisation into teacher practice.
  d. Soft vs hard policy pressures on assessment and teaching practice.

  6. Uses of test reports, ranging from formative to summative: planning curriculum/instruction; diagnosing group & individual needs; giving feedback to students; strategic planning with colleagues; trend analysis; targeting resources & teacher support; parent reporting; reporting to Department and Senior Managers; reporting to school governance; teacher appraisal/evaluation; school evaluation. Good to see the emphasis on the formative, improvement orientation in these studies. In NZ, Ministry of Education officials struggled with the idea that tests could be formative.

  7. Assessment position relative to instruction:
  - Before instruction: potentially formative. Pre-test (planned); based on previous interaction or data?
  - During instruction: potentially formative. Progress test (planned); interaction on-the-fly, in-the-moment (esp. Wylie & Lyon).
  - After instruction: summative, potentially formative at a future time. End-of-unit/course test (planned); parent-student-teacher dialogue/reporting?
  Test reports need to be different according to when teachers want to use them, so there never can be one report to rule them all.

  8. Strong tendency to cite one's own organizational research: is that a good habit? General lack of reference (except Wylie & Lyon) to earlier yet relevant research. This is an old problem with much good advice that thankfully is being put into practice now:
  - Jaeger, R. M., Gorney, B., Johnson, R. L., Putnam, S. E., & Williamson, G. (1993). Designing and developing effective school report cards: A research synthesis. Kalamazoo, MI: Center for Research on Educational Accountability and Teacher Evaluation, Western Michigan University.
  - Jaeger, R. M., & Putnam, S. E. (1994). Communicating to parents and school board members through school report cards: Effective strategies. Paper presented at the Annual Meeting of the North Carolina Association for Research in Education, Greensboro, NC.
  - Hambleton, R. K., & Slater, S. C. (1997). Are NAEP executive summary reports understandable to policy makers and educators? Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing, UCLA Graduate School of Education & Information Studies.
  - Baker, E. L. (1999). Technology: Something's coming-Something good (CRESST Policy Brief No. 2). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing, UCLA Graduate School of Education & Information Studies.
  Narrative leads to correct interpretation; visual leads to preference. The balance is difficult.

  9. Strong use of cyber meetings/webinars: understandable for efficiencies, but equivalent to face-to-face? There is little reflection on the potential methodological effects of this approach. Just a concern: are we sure that we are getting good data? Sample size: the authors are aware of the problems; a lack of central tendencies may simply reflect too few participants. Without large-scale deployment and evaluation, how can we be sure that the informants are sufficiently representative? Only when reports are in the wild will we really know whether teachers interpret and act on reports appropriately (hopefully not too late to make further changes?).
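The sample-size point above can be illustrated with a short simulation: even when a population of raters has a clear central tendency, a handful of informants may show none, purely by chance. This is a minimal sketch, not part of the original slides; the 1-5 rating scale and the population distribution are invented for illustration.

```python
# Minimal sketch (illustrative only): a hypothetical population of teacher
# ratings on a 1-5 scale, strongly centred on 4. Small samples drawn from it
# often show no clear central tendency; large samples recover it.
import random
import statistics

random.seed(1)  # reproducible draws

# Invented population: 45% of teachers rate the report a 4.
population = [1] * 5 + [2] * 10 + [3] * 20 + [4] * 45 + [5] * 20

for n in (6, 30, 300):  # focus-group-sized vs. deployment-sized samples
    sample = random.choices(population, k=n)
    counts = {v: sample.count(v) for v in range(1, 6)}
    print(f"n={n:>3}  counts={counts}  median={statistics.median(sample)}")
```

At n=6 the counts tend to be scattered across the scale, while at n=300 the mode at 4 is unmistakable: the same logic applies to the small informant panels the studies relied on.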

  10. Problems of what terminology is best:
  - Wylie & Lyon report about "average", so it is easy to be normative instead of standards- or criterion-oriented.
  - O'Donnell et al. give us a good start, but the clearly needed top-level analysis is missing.
  - Remember the efforts NAEP invested in developing its terminology (Below Basic, Basic, Proficient, Advanced). Why re-invent the wheel?
  Progressions and maps: lack of a vertically integrated curriculum in the USA vs UK, Aus, NZ. Linearity vs complexity: teachers like simple, but complex is truer.
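To make the normative-vs-criterion distinction on this slide concrete, here is a small sketch contrasting a NAEP-style achievement-level report with an "above/below average" report of the same score. It is not from the slides: the NAEP labels are real, but the cut scores, cohort, and function names are invented for illustration.

```python
# Minimal sketch: criterion-referenced vs. normative wording for one score.
# NAEP-style labels are real; the cut scores below are hypothetical.
from statistics import mean

CUTS = [(0, "Below Basic"), (40, "Basic"), (60, "Proficient"), (80, "Advanced")]

def criterion_label(score):
    """Return the highest achievement level whose (hypothetical) cut is met."""
    label = CUTS[0][1]
    for cut, name in CUTS:
        if score >= cut:
            label = name
    return label

def normative_label(score, cohort):
    """Describe the score only relative to the cohort average."""
    return "above average" if score > mean(cohort) else "at or below average"

cohort = [35, 48, 52, 61, 67, 73, 88]
print(criterion_label(67))           # -> Proficient (what the student can do)
print(normative_label(67, cohort))   # -> above average (only rank vs. peers)
```

The criterion label says what the performance means against standards, while the normative label only locates it relative to peers, which is the trap the slide warns about with reporting around "average".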
