Evaluation of a multi-institution mastoidectomy performance instrument
Author and article information
Abstract
Objective: The objective of this work is to obtain validity evidence for an evaluation instrument used to assess mastoidectomy performance. The instrument was formulated by a multi-institutional consortium and has been previously described.
Design: Mastoidectomies were performed on a virtual temporal bone system and then rated by experts using a previously described 15-element task-based checklist. Based on the results, a second, similar checklist was created and a second round of rating was performed.
Setting: Twelve otolaryngology surgical training programs in the United States.
Participants: 66 individuals with a range of temporal bone dissection experience, from medical students to attending physicians. Raters were attending surgeons from 12 different institutions.
Results: Intraclass correlation (ICC) scores varied greatly between checklist items, ranging from low to high. Percentage agreement scores were similar to those of previous rating instruments. There is strong evidence that a high score on the task-based checklist is a necessary, but not sufficient, condition for a rater to consider a mastoidectomy performed at an expert level.
Conclusions: Rewording the instrument items to focus on safety did not increase the reliability of the instrument. The strong result of the Necessary Condition Analysis suggests that going beyond simple correlation measures can yield additional insight into grading results. Additionally, we suggest replacing the binary pass/fail question with a multi-point scale combined with descriptive mastery levels.
Cite this as
2017. Evaluation of a multi-institution mastoidectomy performance instrument. PeerJ Preprints 5:e2954v1 https://doi.org/10.7287/peerj.preprints.2954v1
Author comment
This is a preprint submission to PeerJ Preprints.
Additional Information
Competing Interests
The authors declare that they have no competing interests.
Author Contributions
Thomas Kerwin conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
Brad Hittle performed the experiments, wrote the paper, reviewed drafts of the paper.
Don Stredney conceived and designed the experiments, performed the experiments, wrote the paper, reviewed drafts of the paper.
Paul De Boeck contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
Gregory Wiet conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
Human Ethics
The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers):
This study was approved by The Ohio State University Office of Responsible Research biomedical institutional review board (IRB) and by the IRBs of each local institution involved in the study.
Data Deposition
The following information was supplied regarding data availability:
Funding
This work was supported by the National Institute on Deafness and Other Communication Disorders, National Institutes of Health, USA (R01DC011321). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.