OSCEs in the spotlight at Ottawa 2012
The reliability and consistency of OSCEs, and using simulation to enhance interprofessional learning were among topics examined by GMS presenters at the recent Ottawa Conference in Kuala Lumpur.
Professor Debra Nestel conducted two workshops in conjunction with Dr Cathy Smith and Dr Carol O’Byrne from the University of Toronto and the Pharmacy Examining Board of Canada: “Training standardized patients for high stakes examinations: strategies and tools to achieve ‘exam readiness’”. Standardization of Simulated/Standardized Patient performance in Objective Structured Clinical Examinations (OSCEs) is vital to the defensibility of the examination, yet little has been written on how to achieve this goal. The workshop shared approaches from Australia, Canada and the USA.
She also spoke about the progress of the AusSETT program, presenting preliminary evaluation results. (AusSETT is a simulation educator and technician/coordinator training program and part of Health Workforce Australia’s simulated learning environments programs.)
In 2011, Gippsland Medical School undertook a review of the wording of OSCE station materials using all available academic staff, both medical and non-medical. Dr Kathy Brotchie discussed how employing non-clinical academics is an efficient and effective way to improve the OSCE writing process. Instructions for both simulated patients and students benefited from the observations of non-medically trained reviewers, which aided consistency of delivery across circuits. The model also provided an efficient and collegial forum for debate about OSCE processes.
Finally, Adjunct Professor George Somers proposed an alternative to stations as the standard unit of measurement in OSCE assessments. The reliability of the examination ‘scale’ (Cronbach’s alpha) is conventionally estimated using the results from the eight stations as items, and he examined the validity of using stations as items to determine the internal consistency of the examination. While scenario-based stations do facilitate the blueprinting process by enabling valid representation of the body systems and clinical skill elements (e.g., history, examination and procedures), each station tests a variety of overlapping skills, so station scores tend to be interrelated. This contravenes a basic tenet of Classical Measurement Theory, which demands that the items of a scale should not be interrelated except through their relationship with the latent variable, in this case the clinical skill of the candidate. Instead, he and his co-authors proposed an alternative scale grounded in Generalisability Theory, which consists of skill subset items and allows for inter-rater error.
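To make the statistic at issue concrete, the following is a minimal sketch of how Cronbach’s alpha is computed from a candidates-by-stations score matrix. The function name and the simulated data are illustrative only (they are not drawn from the presentation); the point is that when every station shares a large component of the same latent skill, the item scores are interrelated and alpha is inflated.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    where k is the number of items (here, OSCE stations).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each station's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 30 candidates x 8 stations, where every station score is
# dominated by the same latent clinical skill plus station-specific noise.
rng = np.random.default_rng(0)
ability = rng.normal(60, 10, size=(30, 1))          # latent clinical skill
scores = ability + rng.normal(0, 5, size=(30, 8))   # overlapping station scores
print(f"alpha = {cronbach_alpha(scores):.2f}")       # high, since stations overlap
```

Because the simulated stations share most of their variance, alpha comes out close to 1; the argument above is that this overlap reflects interrelated items rather than a well-behaved scale, which motivates the Generalisability Theory alternative.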