AEA Public Library

To Agree or Disagree: The Measure and Use of Interrater Reliability 

11-13-2009 17:55

Demonstration Session 544: In this demonstration we will provide a conceptual overview of current, commonly used interrater reliability procedures and discuss ways to apply them in practice. We will also address pertinent concerns surrounding rater training, such as diagnostic methods and obtaining reliability without sacrificing validity. Observational measurement plays an integral role in program evaluation. Because each observer draws their own inferences, it is essential that all raters observe and interpret the same events similarly; a lack of such consistency can greatly distort the interpretation of the effectiveness of the intervention or performance being measured. We aim to help evaluators who have a basic understanding of measurement theory and statistics make competent decisions about assessing interrater reliability. Understanding the various methods for assessing agreement and disagreement, at both the conceptual and practical levels, helps reduce the influence of subjective judgment. Assessing the interrater reliability of observational measures is vital for drawing accurate inferences.
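To make the distinction between raw agreement and chance-corrected agreement concrete, here is a minimal sketch of two common statistics the abstract alludes to: percent agreement and Cohen's kappa. The rating data are hypothetical, invented purely for illustration, and this sketch is not drawn from the session materials themselves, which cover a broader range of procedures.

```python
# Hypothetical data: two raters' categorical codes for ten observations.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "yes"]
rater_b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]

def percent_agreement(a, b):
    """Proportion of items on which the two raters assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    categories = set(a) | set(b)
    p_o = percent_agreement(a, b)
    # Expected chance agreement from each rater's marginal proportions.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

print(round(percent_agreement(rater_a, rater_b), 2))  # → 0.8
print(round(cohens_kappa(rater_a, rater_b), 2))       # → 0.6
```

Note how the two raters agree on 80% of items, yet kappa is only 0.6: with balanced marginals, half of that agreement would be expected by chance alone, which is why chance-corrected measures are generally preferred over raw percent agreement.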

#2009Conference #QuantitativeMethods-TheoryandDesign


Attachment(s)
aea_irr_2009.pdf (PDF, 827 KB), uploaded 11-13-2009