
Wednesday, August 9, 2017

Evaluating Random Error in Clinician-Administered Surveys: Theoretical Considerations and Clinical Applications of Interobserver Reliability and Agreement

Purpose
The purpose of this study is to raise awareness of interobserver concordance and of the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or intraclass correlation).
Methods
In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test–Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices.
Results
The importance of selecting appropriate statistical measures of concordance was demonstrated with a worked example, in which the level of interobserver concordance achieved ranged from "no agreement" to "almost perfect agreement" depending on the data type and statistical index selected.
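
The study's worked example uses the PHAST-R scores themselves; as a rough illustration of the same phenomenon, the hypothetical two-rater sketch below (Python, not the paper's data or code) computes raw percent agreement and Cohen's kappa with the scores treated as nominal categories, and consistency- and agreement-type intraclass correlations with the same scores treated as interval data. With one rater scoring systematically one point higher, the indices span essentially no agreement to near-perfect reliability.

```python
# Minimal, self-contained sketch with hypothetical ratings (not the study's data)
# showing how one pair of rater score sets can look like "no agreement" under a
# nominal agreement index yet "almost perfect" under an interval reliability index.
import numpy as np

# Hypothetical item scores from two raters; rater_b tends to score one point
# higher than rater_a, so the raters rank items identically but rarely match exactly.
rater_a = np.array([2, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9])
rater_b = np.array([3, 4, 5, 6, 6, 7, 7, 8, 9, 9, 10, 9])

# --- Agreement indices: scores treated as nominal categories ---
exact = np.mean(rater_a == rater_b)  # raw percent (exact) agreement

def cohens_kappa(x, y):
    """Chance-corrected exact agreement between two raters on nominal data."""
    cats = np.union1d(x, y)
    po = np.mean(x == y)                                        # observed agreement
    pe = sum(np.mean(x == c) * np.mean(y == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# --- Reliability indices: scores treated as interval data ---
def icc_twoway(x, y):
    """ICC from a two-way ANOVA: 'consistency' ICC(3,1) and 'absolute agreement' ICC(2,1)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)                               # per subject
    col_means = data.mean(axis=0)                               # per rater
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)        # subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)        # raters mean square
    resid = data - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # error mean square
    icc_consistency = (msr - mse) / (msr + (k - 1) * mse)
    icc_agreement = (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))
    return icc_consistency, icc_agreement

icc_c, icc_a = icc_twoway(rater_a, rater_b)
print(f"exact agreement : {exact:.2f}")                          # low: scores rarely identical
print(f"Cohen's kappa   : {cohens_kappa(rater_a, rater_b):.2f}") # near zero: "no agreement"
print(f"ICC consistency : {icc_c:.2f}")                          # high: raters rank items the same way
print(f"ICC agreement   : {icc_a:.2f}")                          # lower than consistency: penalises the offset
```

The sketch omits Krippendorff's alpha and weighted (ordinal) kappa, which the paper also discusses, but the pattern is the point: the same ratings can support opposite clinical conclusions depending on which index, and which assumed data type, is chosen.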
Conclusions
This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.

from #ORL-AlexandrosSfakianakis via ola Kala on Inoreader http://article/doi/10.1044/2017_AJA-16-0100/2647806/Evaluating-Random-Error-in-ClinicianAdministered
