Friday 8 November - Short Presentations
10:50-12:30, Room: 1.18, 2nd floor

Clinical Examination Theme

Chair: Professor Suzanne Chamberlain, General Medical Council

Marking the Objective Structured Clinical Examination: the objective vs subjective debate 

* Harish Thampy, Serena Tolhurst-Cleaver, Heidi Northover 
* Corresponding author: Harish Thampy, University of Manchester,
harish.thampy@manchester.ac.uk

Background 
Since its creation in the 1970s, the OSCE has become the mainstay of clinical competency assessment. As the name implies, it is seen to offer an objective perspective on a candidate's ability to perform a clinical task. Conventionally, this has been achieved through checklist-based marking rubrics that provide a numerical summation of the total score. More recently, domain-based rating scales have been increasingly implemented alongside global ratings of overall performance. Regardless of the rubric used, conventional OSCE standard-setting methods such as borderline regression adopt quantitative perspectives in which examiners' global rating judgements are analysed against checklist or domain-based scores to create a passing cut score.

Summary of Work  
Despite the well-established literature in this field, a recent trend questions the positivist assumption inherent in using numerical scales to judge clinical competency, i.e. that there is a 'true' level that can be measured using standardised tools and then analysed by reproducible, unambiguous statistical methods. Instead, it is argued that educators should shift from generating a minimum passing standard as a quantitative cut score to describing a passing standard arrived at through examiners' qualitative professional judgement. In this approach, score variance is no longer seen as "error" but rather as a product of socially constructed judgements.

Summary of Results 
In this presentation, the authors describe the implementation of qualitative passing-standard judgements within OSCE delivery at a large UK medical school, outline the challenges faced, and suggest strategies to address these.

Take-home Message  
Historic quantitative perspectives in assessment are increasingly being challenged. The literature reflects an ongoing debate between embracing subjectivity and pursuing pseudo-objectivity.

Optimising a standardised approach to OSCE Examiner scoring

* Heidi Northover, Serena Tolhurst-Cleaver, Harish Thampy
* Corresponding author: Heidi Northover, University of Manchester,
heidi.northover@manchester.ac.uk 

Background 
Although the objective structured clinical examination (OSCE) is considered one of the most robust methods for high-stakes, summative clinical assessment in medicine, OSCEs are, in practice, prone to high levels of variance. Ideally, any variation in candidate scores should reflect candidate performance rather than examiner variability. The hawk-dove effect and assessor biases such as the halo, anchoring and contrast effects can lead to variability in examiner behaviour, with subsequent loss of standardised scoring. Similarly, untrained assessors, those who have not undergone regular and recent training updates, and assessors with limited involvement in exam construction tend to award higher marks than trained assessors. A lack of familiarity with the purpose of rating criteria and a limited understanding of assessment pedagogy, including the need for valid and reliable scoring, may further reduce the reliability of examiner scoring.

Summary of Work 
This presentation outlines a programme of faculty development workshops for examiners involved in Years 3-5 OSCEs at Manchester Medical School, intended to improve examiner consistency and behaviours.

Summary of Results  
We delivered three types of workshop: initial examiner training (for new examiners), examiner update training, and OSCE writing and development. Furthermore, we monitored the global scores and free-text feedback comments provided by examiners in the Year 5 Exempting Exams in order to evaluate the standard and validity of assessor scoring. Finally, we analysed student concerns submitted after the Year 5 exams.

Discussion & Conclusion  
This presentation will explore the role of training initiatives and quality assurance mechanisms in improving examiner judgements. We will describe the challenges faced and the strategies implemented to overcome them.

Take-home Message  
Assessor training is an important component of a valid OSCE: it ensures that examiners have greater confidence in the marking scheme, understand their role in a valid assessment, and appreciate the appropriate student standard. Furthermore, training should be updated regularly, and assessors should be encouraged to participate in OSCE question design and development.

Variations on a theme: differences in the design, delivery and technical features of clinical skills assessment in UK medical schools

* Suzanne Chamberlain, William Curnow
* Corresponding author: Suzanne Chamberlain, General Medical Council, UK
suzanne.chamberlain@gmc-uk.org 

Background  
In December 2017 the General Medical Council (GMC) confirmed the introduction of the Medical Licensing Assessment (MLA) for all UK medical students and international medical graduates (IMGs) wishing to practise in the UK. The MLA will comprise two components: an Applied Knowledge Test (AKT) and a Clinical and Professional Skills Assessment (CPSA). The approach for each component is different: the AKT will be a standardised examination provided by the GMC, while the CPSA will be provided by each UK medical school (and by the GMC for IMGs) and quality assured by the GMC against a set of requirements.

Summary of work  
In March-June, 2019, GMC staff met with representatives of 36 medical schools to understand, among other matters, the detail of each school’s final assessment of clinical skills, which is delivered in the penultimate or final year of their programme. The meetings formed part of the early conversations with medical schools about their readiness to meet the requirements for the CPSA. Details about the design, delivery and technical features of each school’s CPSA were captured during the meetings. 

Summary of findings  
CPSAs encompassed a number of models including single OSCEs, sequential OSCEs, and OSLERs or MOSLERs. In some schools, this formed one component of the overall clinical skills assessment requirement. Significant variability was noted across schools’ CPSAs in terms of the design and delivery features, such as the number of stations/cases and testing time, and technical features such as approaches to scoring students’ performances and setting the passing standard. 

Discussion and conclusion  
The meetings have given the GMC insight into the breadth of clinical skills assessment design in UK medical schools and the factors that shape each school's approach to assessment. Much of the observed variability appears to be explained by differences in programme structure and organisation; local issues of scale, feasibility and resources; and, most importantly, each school's overarching assessment strategy, which specifies where, when and how clinical skills are assessed throughout the programme. This presentation will outline the breadth of clinical skills assessment design and practice across the UK and discuss the implications for quality assurance in this diverse landscape.

Take-home message
There is significant variability across UK medical schools in the design, delivery and technical features of clinical skills assessments.

Cross-institutional OSCE quality assurance to improve EU assessment strategies: the state of the art

* Thomas Kropmans, Eirik Søfteland, Anneke Wijnalda, Rosemary Geoghegan, Neil Harrison, Angela Marie Kubacki, Nick Stassens
* Corresponding author: Thomas Kropmans, National University of Ireland Galway, Ireland thomas.kropmans@nuigalway.ie 

Background 
Cross-institutional OSCE quality assurance is rarely reported. More and more OSCE providers retrieve, store and analyse their data electronically using an OSCE management information system. Methods of quality assurance for OSCEs are recommended in AMEE Guide 49 and are implemented as such in Qpercom's OSCE management information system, Qpercom Observe.

Summary of Work 
Data from 10 Northern, Central and Western EU universities, primarily from penultimate-year OSCEs, were analysed after obtaining written informed consent from all participants (i.e. the co-authors). The universities are willing to share, analyse, discuss and publish their data in a joint publication soon to be submitted. Descriptive statistics such as the number of stations failed/passed, global rating scores, means, standard deviations, Cronbach's alpha, the SEM with 68% and 95% confidence intervals, R² and inter-grade difference were compared between universities using classical psychometric analysis and generalisability theory.

Summary of Results 
There is wide variation in station names, station numbers and student numbers. A substantial imbalance is observed between students failing on their 'raw scores' and those failing on their 'global rating scores'. Cut-off scores vary between 50% and 75% across Western and Northern EU universities. The main outcomes analysed are Cronbach's alpha, the SEM with 68% and 95% CIs, the G-coefficient, and the relative and absolute SEM with 68% and 95% CIs.

Discussion & Conclusion 
The SEM calculated from Cronbach's alpha overestimates reliability compared with the G-coefficient and the relative and absolute SEM based on G-theory analysis. The biggest concern, however, is the substantial discrepancy between global rating scores and pass/fail scores within and between universities.
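For context, the classical SEM referred to here follows from Cronbach's alpha via SEM = SD·√(1 − α). A small sketch with made-up numbers (not the study's data) shows how the 68% and 95% bands are derived:

```python
import math

# Illustrative values only (not from this study): total-score standard
# deviation and Cronbach's alpha for one OSCE.
sd, alpha = 8.0, 0.75

# Classical test theory: SEM = SD * sqrt(1 - alpha).
sem = sd * math.sqrt(1 - alpha)

# 68% and 95% confidence bands around an observed score of, say, 60.
observed = 60.0
ci68 = (observed - sem, observed + sem)           # +/- 1 SEM
ci95 = (observed - 1.96 * sem, observed + 1.96 * sem)  # +/- 1.96 SEM
print(sem, ci68, ci95)  # sem = 4.0 for these toy values
```

Because this classical SEM pools all non-candidate variance into a single error term, it can paint a rosier picture than the absolute SEM from a G-study, which decomposes the error sources separately.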

Take-home Message 
Sharing quality assurance data between universities appears to be a sensitive issue, yet it is essential for developing EU-wide assessment strategies.

Fifth-year pharmacy students' opinions of the Objective Structured Practical Examination (OSPE)

Justyna Dymek, Anna Gołda, *Tomasz Kowalski, Wioletta Polak, Agnieszka Skowron
* Corresponding author: Tomasz Kowalski, Department of Social Pharmacy, Faculty of Pharmacy, Jagiellonian University Medical College, Poland
tomek.kowalski@doctoral.uj.edu.pl  

Background 
The education of pharmacy students based on the European Qualifications Framework (EQF) requires the implementation of modern teaching methods that ensure students acquire, during their education, the knowledge, skills and social competences necessary for a future pharmacist. This also entails the need to change methods of student assessment. Traditional examinations testing knowledge are being replaced by examinations that also assess practical and communication skills. Such possibilities are created by the Objective Structured Practical/Clinical Examination (OSPE/OSCE). In the academic year 2018/2019, pharmacy students at the Faculty of Pharmacy UJ CM took the OSPE exam for the first time in Poland. Aim: The aim of this study was to obtain the opinions of fifth-year pharmacy students about the OSPE exam.

Summary of Work 
The team of the Department of Social Pharmacy UJ CM developed and implemented the OSPE exam at the end of the Pharmaceutical Care course for fifth-year pharmacy students. Given that this type of examination was being carried out for the first time at a Faculty of Pharmacy in Poland, each student was asked, after completing the exam, to fill in an anonymous questionnaire collecting their opinion of the OSPE.

Summary of Results 
108 students took part in the survey. According to 70% of respondents, the OSPE allowed them to assess their skills and identify topics needing further study. In the students' opinion, stations with a simulated patient reflect real professional situations. Almost 75% of respondents considered this form of exam better than the test format used previously, while as many as 69% admitted that the exam was stressful for them. Over 60% of students indicated the station on identifying and resolving drug-related problems (DRPs) as the one on which they would get the worst grade. About 44% of respondents indicated the same station as the most stressful, and 66% felt that the Pharmaceutical Care course had not prepared them to identify and solve DRPs. The opposite was observed for the pharmaceutical-interview station with a simulated patient, which students usually named as the station where they expected to be assessed best.

Discussion & Conclusion 
Students were positive about the new form of exam. From the organisational side, the exam met the expectations and needs of the respondents. The OSPE allowed students to identify areas requiring further development of their knowledge and skills. The team of the Department of Social Pharmacy received information about the changes needed to improve the teaching process in the Pharmaceutical Care course.

Take-home Message 
Students' attitude to the new form of exam was positive. The exam enabled students' self-assessment and the identification of areas requiring further education. At the same time, it allowed the examiners to learn which areas should receive greater emphasis in Pharmaceutical Care classes.

© Copyright 2019 The European Board of Medical Assessors (EBMA)