Time | Title

11:45 – 12:00 | A mixed method study to evaluate fairness, acceptance, feasibility, and educational impact of the involvement of elementary school children as standardized patients in a summative OSCE

Rabea Krings1, Sabine Feller1, Kai Schnabel1, Sabine Kroiss3, Maja Steinlin2, Sören Huwendiek1
1University of Bern, Switzerland; 2University Children’s Hospital, Inselspital, University of Bern; 3University Children’s Hospital Zurich – Eleonorenstiftung; rabea.krings@iml.unibe.ch
Background: Assessment drives learning (Cilliers et al. 2010). To enhance the educational effect of learning how to handle children as patients, appropriate teaching is necessary, but so is a realistic assessment method. There is little research on children as standardized patients within this assessment process. The goal of the present study was therefore to evaluate the perception, acceptance, fairness, and feasibility of an OSCE including children. Furthermore, the educational impact on students was analyzed.
Methods: The regular summative OSCE for fifth-year medical students in Bern took place on six consecutive half days in April 2016. 191 students were tested at nine different OSCE stations; elementary school children were engaged as standardized patients for the pediatric station. Following Darling and Bardgett (2013), the children were asked afterwards in individual interviews which aspects of taking part in the pediatric OSCE they liked and disliked. Moreover, the usefulness of children in an OSCE was analyzed with respect to realism, fairness, feasibility, and acceptance. In addition, the educational impact on students' learning strategies was explored. Raters were questioned in focus groups or individual interviews to obtain a broader picture, especially with regard to fairness.
Results: In mini-interviews, the children were asked whether they were satisfied with acting as SPs. Most of them were very satisfied (5-point Likert scale; 1 = did not like it at all, 5 = liked it very much; mean = 4.6, SD = 0.70). In focus groups and interviews, raters' views were recorded. They all agreed that the pediatric station was perceived as fair, feasible, and realistic. Students accepted the involvement of child SPs well and perceived it as very realistic and as a good way to demonstrate their skills. 53% of the students expected a pediatric station involving child SPs; of these, 30.6% mentioned that this expectation had an effect on their learning.
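The reported summary statistics can be reproduced in a few lines; a minimal sketch with hypothetical example ratings (the study's raw data are not given here):

```python
# Minimal sketch: mean and SD of 5-point Likert satisfaction ratings.
# The ratings below are hypothetical; the study reports mean = 4.6, SD = 0.70.
import statistics

ratings = [5, 5, 4, 5, 3, 5, 4, 5, 5, 4]  # one rating per child, scale 1-5

print(f"Mean = {statistics.mean(ratings):.2f}, "
      f"SD = {statistics.stdev(ratings):.2f}")  # sample standard deviation
```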
Discussion: The goal of an OSCE is to provide comparable and reproducible clinical situations in which to judge not only knowledge but especially the clinical ability to handle the situation and, not least, the patient. Thus, including children when testing this setting in pediatrics seems mandatory and worth the considerable additional workload of running the examination.
Conclusions: The results show that the pediatric station was fair, feasible, and realistic. All groups (students, raters, children) were satisfied with the new approach of child SPs in pediatric OSCE stations: children enjoyed participating in this setting, raters accepted the process well and evaluated the pediatric OSCE station as fair and feasible, and students perceived the station as fair and acceptable.
Take Home Message: Children are special patients and need special treatment – this should also be reflected in OSCE examinations.
References:
Darling JC, Bardgett RJM. Primary school children in a large-scale OSCE: Recipe for disaster or formula for success? Medical Teacher. 2013;35:858-861.
Cilliers FJ, Schuwirth LW, Adendorff HJ, Herman N, van der Vleuten CP. The mechanism of impact of summative assessment on medical students’ learning. Adv Health Sci Educ Theory Pract. 2010;15(5):695-715.

12:00 – 12:15 | German Translation and Validation of the “Interprofessional Attitudes Scale”

Tina Pedersen1, Eva Cignacco2, Simon Fischer1, Jonas Meuli1, Robert Greif1
1Department of Anaesthesiology and Pain Medicine, Bern University Hospital, University of Bern, Switzerland; 2University of Applied Sciences, Health Department, Bern, Switzerland; tina.pedersen@insel.ch
Background: Interprofessional collaborative practice is an indispensable and significant factor in today’s health care provision. To assess interprofessional attitudes among health professions students, the Interprofessional Attitudes Scale (IPAS) was developed in the USA (1). The scale consists of 27 items in five subscales: teamwork, roles and responsibilities; patient-centeredness; interprofessional biases; diversity and ethics; and community centeredness. However, no such scale was available in German.
Research question: The aim of this study was to translate the IPAS into German and subsequently validate the German version.
Methods: The first step was to translate the IPAS from English into German according to the ISPOR guidelines, with forward and backward translations (2).
Secondly, cognitive interviews with midwifery students, anaesthesia nurses, and physicians were conducted according to the method of G. B. Willis (3). The goal of these interviews was to rephrase or delete items in the German version that did not make sense or were unclear to potential users.
The cognitive interviews were followed by the calculation of the Content Validity Index (CVI) for each item (item-CVI) and for the whole scale (scale-CVI) (4).
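The CVI computation itself is simple; below is a minimal sketch of the standard item-CVI / scale-CVI calculation described by Polit, Beck and Owen (4), with hypothetical expert ratings (the study's actual rating data are not shown here):

```python
# Minimal sketch of the standard CVI computation: each expert rates each
# item's relevance on a 4-point scale; ratings of 3 or 4 count as "relevant".
import numpy as np

# rows = experts, columns = items (hypothetical 4-point relevance ratings)
ratings = np.array([
    [4, 3, 2, 4],
    [3, 4, 1, 4],
    [4, 4, 2, 3],
])

relevant = ratings >= 3                  # dichotomize: 3 or 4 = relevant
item_cvi = relevant.mean(axis=0)         # I-CVI: proportion of experts per item
scale_cvi = item_cvi.mean()              # S-CVI/Ave: mean of the item CVIs

print("Item-CVIs:", item_cvi)            # here: [1.0, 1.0, 0.0, 1.0]
print("Scale-CVI (average):", round(scale_cvi, 2))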
To uncover the underlying structure of the items and create meaningful subscales, we performed an exploratory factor analysis following the recommendations of Costello and Osborne (5).
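The abstract does not name the software used for the factor analysis; as an illustration, a minimal sketch in Python using the factor_analyzer package, with a hypothetical data file and the three-factor solution reported below in the Results:

```python
# Minimal EFA sketch (illustrative only; the study's own tooling is unspecified).
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file: respondents x items, numeric German-IPAS responses only
df = pd.read_csv("ipas_german_responses.csv")

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")  # oblique rotation
fa.fit(df)

loadings = pd.DataFrame(fa.loadings_, index=df.columns)
# Items with low loadings or cross-loadings are candidates for removal
print(loadings.round(2))
```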
Finally, homogeneity testing was performed, calculating Cronbach’s α for single items, for the subscales, and for the whole scale.
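Cronbach's α follows directly from the item variances and the variance of the sum score; a minimal sketch with hypothetical scores:

```python
# Minimal sketch of Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one (sub)scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents, 3 items on a 5-point scale
scores = np.array([[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3], [4, 4, 5]])
print(round(cronbach_alpha(scores), 2))
```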
Results: After the forward and backward translation, the study group discussed the wording of all items until consensus was reached. The cognitive interviews resulted in minor rewording of the translated items to improve understanding. The initial item-CVIs ranged from 0.33 to 1.00, while the scale-CVI achieved a satisfactory result of 0.79. The exploratory factor analysis revealed that three items did not fit in the German version, and the German IPAS was rearranged into three subscales: 1) teamwork, roles and responsibilities; 2) patient-centeredness; and 3) health care provision. The subscale interprofessional biases, with three items, was deleted due to low factor loadings and cross-loadings, and the items of the subscale diversity and ethics were rearranged into other subscales.
After the factor analysis and rearrangement of the items, the scale-CVI reached an index of 0.82. Cronbach’s α was 0.88 for the subscale teamwork, roles and responsibilities; 0.78 for the subscale patient-centeredness; and 0.85 for the subscale health care provision. Cronbach’s α for the overall scale was 0.87.
Discussion: After proper translation of the original English version into German, we found a low Cronbach’s α for the subscale interprofessional biases, a finding comparable to the validation of the original English scale (1). Furthermore, this subscale achieved low CVI scores, and in the exploratory factor analysis its items loaded on a variety of other factors. For that reason, we decided to delete the entire subscale. This resulted in a final German version containing 24 items to assess interprofessional attitudes in health care.
Conclusion: Based on a rigorous validation process, the German scale “Haltung zur Interprofessionalität” (IPAS, German version) provides a tool to reliably assess attitudes towards interprofessionalism among different health care professions in German-speaking countries.
References:
1. Norris J, Carpenter JG, Eaton J, Guo JW, Lassche M, Pett MA, et al. The Development and Validation of the Interprofessional Attitudes Scale: Assessing the Interprofessional Attitudes of Students in the Health Professions. Academic medicine : journal of the Association of American Medical Colleges. 2015;90(10):1394-400.
2. Wild D, Grove A, Martin M, Eremenco S, McElroy S, Verjee-Lorenz A, et al. Principles of Good Practice for the Translation and Cultural Adaptation Process for Patient-Reported Outcomes (PRO) Measures: report of the ISPOR Task Force for Translation and Cultural Adaptation. Value in health : the journal of the International Society for Pharmacoeconomics and Outcomes Research. 2005;8(2):94-104.
3. Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, California: Sage Publications; 2005.
4. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in nursing & health. 2007;30(4):459-67.
5. Costello AB, Osborne JW. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation. 2005;10(7).

12:15 – 12:30 | Measures to reduce rating differences in (identical) OSCE stations

Noemi Schaffner
Bern University of Applied Sciences, Switzerland; noemi.schaffner@bfh.ch
Background: In the Bachelor of Science in Nursing program at the Bern University of Applied Sciences (BFH), clinical exams such as the Objective Structured Clinical Examination (OSCE) are core instruments to promote and ensure high-quality performance in clinical practice. Nursing students therefore pass three OSCEs during their studies. An OSCE at the BFH normally consists of nine stations, which are run in parallel on two floors so that more students can be assessed at the same time. Consequently, each station with the same scenario requires two examiners and two standardized patients (SPs), each responsible for an identical station on a different floor. Systematic measurement error can arise from this approach: variance can be introduced by the performance of the SPs or of the raters. For instance, one examiner may tend to rate all students’ performances more stringently while the other tends to be more lenient; as a result, students rated by the second examiner receive better grades than those rated by the first. It is important to take measures to control this rater effect, starting with monitoring rater differences.
Project Description: Several measures are taken to ensure the quality of the OSCE and to reduce possible rater variance. Firstly, the examiners and SPs responsible for the same OSCE station discuss and review each item on the checklist, thereby ensuring standardization. Secondly, every station is statistically analyzed to detect rating differences between raters; by this means, stations with statistically significant rater differences are identified. However, this analysis cannot identify the problematic items. We therefore, thirdly, analyze rating differences on each item of the problematic stations, as sketched below.
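A minimal sketch of this two-step analysis, assuming a long-format export of all ratings with hypothetical column names (the abstract does not specify the statistical test; Welch's t-test is used here as one plausible choice):

```python
import pandas as pd
from scipy import stats

# Hypothetical export: one row per student x station x item, with the rater
# (floor) who scored it. File and column names are assumptions.
df = pd.read_csv("osce_scores.csv")

# Step 1: flag stations where the two parallel raters' total scores differ.
for station, grp in df.groupby("station"):
    totals = grp.groupby(["rater", "student"])["points"].sum()
    r1, r2 = grp["rater"].unique()[:2]
    t, p = stats.ttest_ind(totals.xs(r1), totals.xs(r2), equal_var=False)
    if p < 0.05:
        print(f"Station {station}: possible rater effect (p = {p:.3f})")
        # Step 2: drill down to the individual items of the flagged station.
        for item, ig in grp.groupby("item"):
            means = ig.groupby("rater")["points"].mean()
            print(f"  item {item}: per-rater means {means.round(2).to_dict()}")
```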
Outcome: The results of the analysis are reported to the person responsible for the OSCE, who informs all raters and SPs involved in stations with rating differences. Possible causes of the differences are discussed (different understanding of an item, different SP performance, ambiguous item wording) and measures are taken to eliminate them.
Challenges: Differences in OSCE outcomes can never be explained solely by differences between raters or SPs; they can also simply reflect differences in students’ performances. There is always a chance that all low-performing students are, by coincidence, assigned to the same floor. Nevertheless, it is important to reduce the differences caused by this specific OSCE procedure.
Discussion: The analyses undertaken to ensure the quality of the OSCE and to reduce rating differences are constructive. Raters in particular are reassured by knowing the possible causes of the differences in their ratings and are motivated to decrease discrepancies. Moreover, this practice ensures that OSCE items are continually reviewed and improved.

12:30 – 12:45 | Quality Control of SP Performance in high-stakes OSCEs using online software on tablet computers

Beate Gabriele Brem, Markus Dahinden, Regina Christen, Sandra Wuest, Kai Philipp Schnabel
Institute for Medical Education, Switzerland; beate.brem@iml.unibe.ch
Introduction: Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Examination (OSCE) has a major impact on the reliability and validity of the exam, quality control should be established (1). At many sites, SP trainers check SP performance more or less systematically. However, the compilation and systematic analysis of the resulting data is challenging for most of them.
Project Description: In our program we constantly strive to improve and simplify our quality control of SP performance. In 2013 we developed a list of 22 items concerning the quality of SP performance. During high-stakes exams, on average 200 observations were made. The list was filled in on paper and digitized for analysis, which took approximately one full day of work per exam. Moreover, on one occasion lists from a different site got lost in the mail. We were therefore seeking a way to simplify this process.
Outcome: In summer 2016 we tested a commercially available software tool called SurveyGizmo (2) on tablet computers. In contrast to similar tools tested before, this software enables a rater to make multiple ratings of the same and of different SPs, and allows many raters to rate the same SP at the same time. Although SurveyGizmo is an online tool, an offline mode is available for rating SPs without a WiFi connection; the data can be uploaded later. The usability of the tool is high: when working with different sites, its use was easy to explain by mail or telephone, and acceptance was high even among inexperienced staff. After data collection, the day previously spent digitizing paper forms was no longer necessary, since all data were already digital. We simply had to decode the SP names, which had been encoded for data-protection reasons.
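The final decoding step can be a simple table join; a minimal sketch assuming a hypothetical key file that maps SP codes back to names (file and column names are assumptions):

```python
import pandas as pd

ratings = pd.read_csv("surveygizmo_export.csv")   # collected observations
key = pd.read_csv("sp_code_key.csv")              # columns: sp_code, sp_name

# Replace each SP code with the clear name via a left join on the key table
decoded = ratings.merge(key, on="sp_code", how="left")
decoded.to_csv("sp_quality_ratings_decoded.csv", index=False)
```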
Conclusion / Discussion: The high usability of the software and the time saved by not having to digitize the data after collection convinced us to keep working with this tool in the future.
References:
1. Adamo G. Simulated and standardized patients in OSCEs: achievements and challenges 1992-2003. Medical Teacher. 2003;25(3):262-70.
2. http://www.surveygizmo.com [accessed Oct. 28, 2016].

12:45 – 13:00 | Objective Structured Clinical Examination – OSCE in Physiotherapy: What is the difference between communication competence and therapeutic climate? – Canceled!

Beatrice Buss
Bern University of Applied Sciences, Switzerland; beatrice.buss@bfh.ch
Background: The Objective Structured Clinical Examination (OSCE) measures clinical skills in medicine, using checklists or global assessment criteria. Since 2006, the bachelor physiotherapy course at the Bern University of Applied Sciences has used the OSCE to measure clinical competence. The evaluation is based on global criteria: communication, therapeutic climate, technical skills, and clinical decision-making ability. The purpose of this study was to determine whether there is a difference in rating between therapeutic climate and communication.
Methods: This study investigated the ratings of the items communication and therapeutic climate. The ratings of 50 students in the 1st semester and 49 students in the 5th semester were analyzed. The item communication was rated by an examiner and the other item, therapeutic climate, by a standardized patient. Data were analyzed using SPSS (version 2015).
Results: No statistically significant differences were found between the ratings of therapeutic climate and communicative competence for physiotherapy students in the 1st semester at 8 stations of eight minutes each (p = 0.16 to p = 0.956). In the 5th semester there was likewise no statistically significant difference at 3 stations of twenty minutes (p = 0.34 to p = 0.485) and at 2 stations of eight minutes.
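The abstract does not state which test was run in SPSS; as an illustration, a minimal sketch of a per-station paired comparison of the two ratings, with hypothetical column names:

```python
import pandas as pd
from scipy import stats

# Hypothetical file: one row per student x station, with the examiner's
# communication rating and the SP's therapeutic-climate rating.
df = pd.read_csv("osce_ratings.csv")  # columns: station, student, communication, climate

for station, grp in df.groupby("station"):
    # Wilcoxon signed-rank test on the paired ratings of the two items
    stat, p = stats.wilcoxon(grp["communication"], grp["climate"])
    print(f"Station {station}: p = {p:.3f}")
```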
Conclusion: At short stations we could not find a difference between communication competence and therapeutic climate, and the results were scattered across the single stations. It is possible that at short stations the student is not able to establish a good relationship with the standardized patient. However, at long stations the ratings were not statistically significantly different from those at the short ones either. The ratings of these items are independent and do not affect each other; therefore, the ratings from standardized patients are important for students’ feedback.