Reliability
Reliability in testing is essential for obtaining consistent and dependable results. This overview covers threats to reliability, two-administration procedures and their issues, one-administration (split-half) methods, internal consistency, inter-item consistency, reliability coefficients, and inter-rater reliability.
Reliability
- Consistent
- Dependable
- Replicable
- Stable
Threats to reliability
- Construction
- Administration (tester, testee, environment)
- Scoring
- Interpretation
Two administrations
- Test/retest: same test, same group
- Parallel/alternate forms: different versions of the test, same group
Two-administration procedure:
1. Administer the tests in two sessions
2. Convert to z scores (if necessary)
3. Correlate (Pearson or Spearman), as in the sketch below
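A minimal sketch of this procedure in Python, assuming two hypothetical score arrays (session_1, session_2) from the same group of examinees; SciPy's zscore, pearsonr, and spearmanr handle the standardization and correlation:

```python
import numpy as np
from scipy import stats

# Hypothetical scores: same examinees, first and second administration.
session_1 = np.array([12, 15, 9, 20, 17, 14, 11, 18])
session_2 = np.array([13, 14, 10, 19, 18, 15, 10, 17])

# Convert to z scores (optional; Pearson r is unchanged by standardization).
z1 = stats.zscore(session_1)
z2 = stats.zscore(session_2)

# Pearson r for interval data; Spearman rho for ordinal data.
pearson_r, _ = stats.pearsonr(z1, z2)
spearman_rho, _ = stats.spearmanr(session_1, session_2)

print(f"Test/retest Pearson r:    {pearson_r:.3f}")
print(f"Test/retest Spearman rho: {spearman_rho:.3f}")
```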
Two-administration issues:
- Potential problems with repeated testing
- Duration between administrations
- Type of variable being measured
One administration
- One test
- One group
- One administration
One-administration (split-half) procedure:
1. Administer the test to one group
2. Divide the questions into two halves for scoring (split half: first/second or odd/even?)
3. Correlate the scores from the two halves
4. Apply the Spearman-Brown formula to estimate the reliability of the full-length test (see the sketch below)
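A minimal split-half sketch, assuming a hypothetical matrix of dichotomously scored items (rows are examinees, columns are items); it correlates odd/even half-test scores and applies the standard Spearman-Brown correction, r_full = 2 * r_half / (1 + r_half):

```python
import numpy as np
from scipy import stats

# Hypothetical item responses (rows = examinees, columns = 0/1 scored items).
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 0, 1, 0],
])

# Odd/even split (columns 0, 2, 4, ... vs. 1, 3, 5, ...).
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the two half-test scores.
r_half, _ = stats.pearsonr(odd_half, even_half)

# Spearman-Brown: estimated reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test r: {r_half:.3f}, Spearman-Brown full-test estimate: {r_full:.3f}")
```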
Uses of one administration:
- Estimates the internal consistency of items
- Appropriate for homogeneous tests
- Not appropriate for heterogeneous tests (which may contain several distinct measures)
Inter-item consistency (statistical estimation):
- Kuder-Richardson Formula 20 (KR-20): dichotomous questions
- Cronbach's alpha (alpha coefficient): all question types (factor analysis)
A sketch of both estimates follows.
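A minimal sketch, assuming the same kind of hypothetical 0/1 item matrix as above; the formulas are the standard textbook ones, and for purely dichotomous items KR-20 and Cronbach's alpha coincide:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: suitable for dichotomous or polytomous items."""
    k = items.shape[1]
    item_vars = items.var(axis=0)          # variance of each item
    total_var = items.sum(axis=1).var()    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(items: np.ndarray) -> float:
    """KR-20: restricted to dichotomous (0/1) items."""
    k = items.shape[1]
    p = items.mean(axis=0)                 # proportion answering each item correctly
    q = 1 - p
    total_var = items.sum(axis=1).var()    # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical usage with a small 0/1 item matrix:
items = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")
print(f"KR-20:            {kr20(items):.3f}")
```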
Reliability coefficient
- Ranges from 0.00 to 1.00; higher is better
- The score is relative, not absolute
Inter-rater reliability
- Consensus between raters
- Percentage of agreement
- Kappa statistic (Cohen's kappa for 2 raters; Fleiss' kappa for many); see the sketch below
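A short sketch for two raters, assuming hypothetical categorical ratings; it computes the raw percentage of agreement and Cohen's kappa (via scikit-learn's cohen_kappa_score), which corrects for chance agreement:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of the same eight responses by two raters.
rater_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

# Simple percentage of agreement (does not correct for chance).
agreement = np.mean([a == b for a, b in zip(rater_a, rater_b)])

# Cohen's kappa corrects for chance agreement between two raters.
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percentage agreement: {agreement:.2%}")
print(f"Cohen's kappa:        {kappa:.3f}")
```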
Project homework
- Which approach(es) would you use to determine the reliability of your measure?
- Why did you select those approach(es)?