Introduction: The Emergency Severity Index (ESI v.4) is used by many emergency departments in the US to describe patient illness severity and direct resource utilization. ESI training materials developed by the Agency for Healthcare Research and Quality include competency cases that can be used to test participant knowledge following training. To our knowledge, the validity of these competency cases has not been established. We conducted an item analysis of 30 cases used as a post-test to examine the validity of the question set.
Methods: This analysis used the results of a 30-item post-test administered during a study examining use of the ESI by untrained EMS providers and RNs. A difficulty score and point-biserial coefficient were calculated for each question, and Cronbach’s coefficient alpha (internal consistency) was calculated for the question set. Items with a point-biserial coefficient of < 0.25 were considered poor-quality questions. Items with a difficulty score of 0-10% (high difficulty) or 90-100% (low difficulty) were also identified and reviewed. A Cronbach’s coefficient alpha value of < 0.50 was defined as unacceptable.
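For reference, the item statistics described above can be computed directly from a binary (correct/incorrect) scoring matrix. The sketch below is a minimal illustration in Python, not the analysis code used in the study; the response-matrix layout and the use of an item-rest (corrected) point-biserial coefficient are assumptions.

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """Item analysis for a dichotomously scored test.

    responses: (n_examinees, n_items) array of 0/1 scores.
    Returns per-item difficulty, per-item point-biserial
    coefficients, and Cronbach's coefficient alpha.
    """
    n, k = responses.shape
    total = responses.sum(axis=1)

    # Difficulty: proportion of examinees answering each item correctly
    # (90-100% = low difficulty, 0-10% = high difficulty).
    difficulty = responses.mean(axis=0)

    # Point-biserial: correlation between each item score and the total
    # score with that item removed (item-rest correlation); the study may
    # instead have used the uncorrected item-total correlation.
    point_biserial = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(k)
    ])

    # Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total-score variance).
    item_var = responses.var(axis=0, ddof=1)
    alpha = (k / (k - 1)) * (1 - item_var.sum() / total.var(ddof=1))

    return difficulty, point_biserial, alpha
```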
Results: All 94 providers (31 EMT-Bs, 34 EMT-Ps, and 29 RNs) completed the 30 questions included in the analysis. Fourteen of the 30 questions (47%) had a point-biserial coefficient of < 0.25. Five of the 30 questions (17%) had a difficulty score of 90% or greater (low difficulty), and no questions had a difficulty score of 10% or less (high difficulty). Cronbach’s coefficient alpha, measuring internal consistency, was 0.54.
Conclusions: The 30-item post-test included in the training material contained a larger-than-expected number of poor-quality questions. The low Cronbach’s coefficient alpha indicates that this question set has limited ability to measure participants’ application of the ESI. A review of the training material, as well as the inclusion of an emergency nurse cohort as a comparison group, is warranted to confirm the validity of our results.