By: Jean Ellen Zavertnik and Ann Holland, NLN Center for Innovation and Simulation in Technology
Is high-stakes assessment in simulation used in your program? By this we mean “an evaluation process associated with a simulation activity that has a major academic or educational consequence” (Meakim et al., 2013, p. S7).
As greater emphasis is placed on high-stakes assessment of simulation performance in nursing education, programs must ensure that assessment methods are fair and reliable (National League for Nursing [NLN], 2012). The NLN Project to Explore the Use of Simulation for High Stakes Assessment (Rizzolo, Kardong-Edgren, Oermann, & Jeffries, 2015) evaluated the process and feasibility of using manikin-based high-fidelity simulation for high-stakes assessment in pre-licensure RN programs. The study produced as many questions as answers. One such question was: What are the best methods to train raters?
This blog post reports on our unique perspectives, derived from participating in a study that tested the effectiveness of a training intervention for faculty evaluators in achieving intra- and interrater reliability of simulation performance. Interrater reliability is the extent to which raters assign the same score to the same variable (McHugh, 2012). Intrarater reliability is the extent to which a rater assigns the same score to separate observations of the same performance variables.
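The kappa statistic cited above (McHugh, 2012) is a common way to quantify interrater reliability because it corrects raw percent agreement for agreement expected by chance. As a minimal sketch, the function below computes Cohen's kappa for two raters; the pass/fail scores are hypothetical and purely illustrative, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over all categories.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail scores from two evaluators on ten videos.
rater_1 = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
rater_2 = ["pass", "pass", "fail", "fail", "fail",
           "pass", "pass", "pass", "pass", "pass"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # prints 0.52
```

Here the raters agree on 8 of 10 videos (80 percent raw agreement), but because chance alone would produce substantial agreement, kappa is only 0.52, which is why chance-corrected statistics are preferred when judging evaluator reliability.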
Describing the Study (Ann Holland)
I served as principal investigator of the study titled “The Effect of Evaluator Training on Intra/Inter Rater Reliability in High-Stakes Assessment in Simulation.”
My research team received permission from the NLN to use the Creighton Competency Evaluation Instrument (CCEI) and student performance videos produced for the NLN project. We launched the study in 2015 by creating a training intervention using best practices from the nursing literature. Since we recruited participants from across the country, the evaluator training was delivered online. We created training videos and documents, conducted training webinars, and provided feedback to participants based on their scoring of practice videos.
Preliminary results of the study were presented at the 2017 NLN Education Summit, and articles are now being prepared for publication. Here is a sneak peek at some of the study highlights and lessons learned by the research team.
The Participant Perspective (Jean Ellen Zavertnik)
I benefited from participating in the study. The evaluator training gave me insight into the importance of using a quality assessment tool, and I gained knowledge about evaluating high-stakes assessments and the significance of training evaluators to increase intra/interrater reliability.
Here are some takeaways from my point of view as a participant.
Summative evaluation of student performance through high-stakes testing can be a valuable method to assess clinical competency and progression in the program. We believe a quality assessment tool, sufficient evaluator training, and adequate video recording are key to improving intra/interrater reliability and fair appraisal of student performance.
Meakim, C., Boese, T., Decker, S., Franklin, A. E., Gloe, D., Lioce, L., . . . Borum, J. C. (2013). Standards of best practice: Simulation standard I: Terminology. Clinical Simulation in Nursing, 9(6S), S3-S11. doi:10.1016/j.ecns.2013.04.001
McHugh, M. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276-282.
National League for Nursing (NLN). (2012). Fair testing guidelines for nursing education.
Rizzolo, M.A., Kardong-Edgren, S., Oermann, M.H., & Jeffries, P.R. (2015). The National League for Nursing project to explore the use of simulation for high-stakes assessment: Process, outcomes, and recommendations. Nursing Education Perspectives, 36(5), 299-303. doi:10.5480/15-1639
This article has been republished from the NLNTEQ blog with permission.