Education Management Platform Enables Delivery and Comparison of Multiple Evaluation Types

Author Department

Surgery

Document Type

Article, Peer-reviewed

Publication Date

9-2019

Abstract

OBJECTIVE:

The purpose of this study was to determine whether an automated platform for evaluation selection and delivery would increase participation from surgical teaching faculty in submitting resident operative performance evaluations.

DESIGN:

We built a HIPAA-compliant, web-based platform to track resident operative assignments and to link embedded evaluation instruments to procedure type. The platform matched appropriate evaluations to surgeons' scheduled procedures and delivered multiple evaluation types, including the Ottawa Surgical Competency Operating Room Evaluation (O-Score) and the Operative Performance Rating System (OPRS). Evaluators were prompted to complete evaluations through automatic electronic notifications. We measured the time evaluators spent in the platform to complete each evaluation. As a metric for the platform's effect on faculty participation, we considered a task that would typically be infeasible without workflow optimization: the evaluator could choose to complete multiple, complementary evaluations for the same resident in the same case. For cases with multiple evaluations, correlation was analyzed by the Spearman rank test. Evaluation data were compared between PGY levels using repeated measures ANOVA.
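For readers who want to replicate the paired-evaluation analysis on their own data, a minimal Python sketch of the Spearman rank correlation step is shown below. This is not the authors' code, and the scores are hypothetical placeholders rather than study data; it assumes scipy is available.

```python
# Minimal sketch (hypothetical data, not from the study): correlate the
# two evaluation types completed for the same resident in the same case.
from scipy.stats import spearmanr

# Hypothetical overall ratings for cases that received both evaluations.
o_score = [3, 4, 2, 5, 4, 3, 5, 2]  # O-Score rating per paired case
oprs    = [3, 4, 3, 5, 4, 3, 4, 2]  # OPRS rating for the same cases

rho, p = spearmanr(o_score, oprs)   # Spearman rank correlation
print(f"Spearman coefficient = {rho:.2f}, p = {p:.5f}")
```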

SETTING:

The study took place at 4 general surgery residency programs: the University of Massachusetts Medical School-Baystate, the University of Connecticut School of Medicine, the University of Iowa Carver College of Medicine, and Maimonides Medical Center.

PARTICIPANTS:

From March 2017 to February 2019, the study included 70 surgical teaching faculty and 101 general surgery residents.

RESULTS:

Faculty completed 1230 O-Score evaluations and 106 OPRS evaluations. Evaluations were completed quickly, with a median time of 36 ± 18 seconds for O-Score evaluations and 53 ± 51 seconds for OPRS evaluations. Within one minute, 89% of O-Score and 55% of OPRS evaluations were completed without optional comments; within 2 minutes, 99% of O-Score and 82% of OPRS evaluations were completed. For cases eligible for both evaluation types, attendings completed both evaluations in 74 of 221 (33%) cases. These paired evaluations correlated strongly on resident performance (Spearman coefficient = 0.84, p < 0.00001). Both evaluation types stratified operative skill level by program year (p < 0.00001).

CONCLUSIONS:

Evaluation initiatives can be hampered by the challenge of making multiple surgical evaluation instruments available when needed for appropriate clinical situations, including specific case types. To test the optimized evaluation workflow, and to lay the groundwork for future data-driven design of evaluations, we assessed the impact of simultaneously delivering 2 evaluation instruments via a secure web-based education platform. We measured evaluation completion rates among faculty surgeon evaluators rating resident operative performance, and how effectively the resulting evaluations could be analyzed and compared, taking advantage of highly integrated management of the evaluative information.

PMID

31515199
