Saturday 9 November - Academic Short Presentations (EBMA)
10:50-12:30, Room: 01.18, Ground Floor
Theme: Programmatic assessment
Chair: Dr Lubberta de Jong, Utrecht University
Integrated and programmatic assessment: a practice format
* Corresponding author: Birgitte Schoenmakers, KU Leuven, Belgium
birgitte.schoenmakers@kuleuven.be
Background
In a rapidly changing educational landscape, assessment and evaluation need to be revised. The training of general practitioners has long been grounded in performance and functioning in daily practice. This so-called ‘competency-based learning’ has been further developed into the more sophisticated concept of ‘complex learning’ or ‘real-life learning’. Education starts from a professional workplace and is built around realistic situations. Students progressively acquire competencies while integrated support and guidance gradually decrease. At the end of the curriculum, the student is able to perform independently in the targeted areas of competence. Assessment and evaluation must therefore be adjusted to the competencies to be acquired. Assessment following each individual course (learning activity) is abandoned, and the focus shifts to testing integrated competencies in a programmatic assessment context.
Summary of work
We translated the key features of these new educational insights to the assessment process. Since this teaching model focuses on the integration of complex skills, assessment is disconnected from the individual course. Knowledge, attitudes, and skills are assessed on the basis of a real-life situation, and test items address aspects of the various learning activities (courses). Assessment is not limited to one phase or one learning activity but is a continuous process in which the student's progress determines the test level (both formative and summative). First, seminars and classes were preceded and followed by an assignment. Assignments were reading tasks, clinical tasks, management and audit tasks, etc. The results of these assignments were discussed and fed back to the students. Second, formative tests were offered twice during each study phase. Third, an integrated summative exam was organized at the end of the study phase. All assessment and evaluation reports were labelled according to learning objectives and linked to the portfolio. Learning progression was therefore visible across the different domains and levels of assessment.
Summary of results
Students appreciated the efforts made to guide them intensively through the learning process. They found the assignments useful, the formative assessments efficient, and the integrated exam instructive. However, students struggled with the workload and the overlap between assignments. Teachers appreciated the link between seminars/classes and the workplace, and the efforts students made to complete the assignments. Teachers struggled with the workload and with integrating the assignments into their courses.
Discussion & Conclusion
Integrated and programmatic testing require thorough preparation of infrastructure, students, teachers and administration.
Take-home message
Integrated and programmatic testing: be well prepared and go slow!
Programmatic assessment of clinical skills - initial experience of frequent look, lower stakes, longitudinal mini-OSCEs
* Neil Rice, Paul Kerr, Sarah Bradley
* Corresponding author: Neil Rice, University of Exeter, United Kingdom
n.e.rice@exeter.ac.uk
Background
Clinical and communication skills are traditionally, and almost universally, assessed by OSCE-style assessments in UK medical schools. These exams are high-stakes and high-stress for students and faculty alike. As if the high cost and logistical difficulties of running OSCEs were not enough, university regulations increasingly require the provision of resit examinations for students who do not meet the standard at a first sitting. Yet OSCE results rarely throw up many surprises, either in terms of students who fall short of the required standard or those who are high flyers. Applied medical knowledge has long been assessed by progress testing at the University of Exeter Medical School. This frequent-look, rapid-remediation programmatic assessment philosophy enables students and faculty to take a longitudinal, feedback-driven view of continual development. We are in the process of expanding our programmatic assessment strategy to include the assessment of clinical skills. This study describes our experiences to date.
Summary of Work
Until the 2018/19 academic year, progression decisions in clinical skills were based on a series of independent, stand-alone “competency”-style pass/fail assessments, along with an end-of-year OSCE in years two and four of the five-year undergraduate programme. In 2018/19, to coincide with a curriculum review in year 1, we introduced termly four-station mini-OSCE-style assessments, which enabled students' performance to be measured cumulatively, promoting early remediation where required. Students will continue taking termly mini-OSCEs as they progress through the course, eliminating the need for high-stakes “hurdle” exams.
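To make the cumulative principle concrete, here is a minimal sketch of one possible aggregation rule. It is an invented illustration, not the Exeter model (which is presented separately); the station scores, threshold, and flagging rule are hypothetical.

```python
# Hypothetical sketch of cumulative aggregation across termly mini-OSCEs.
# Scores, cut score, and flagging rule are invented for illustration only.

# Each term yields four station scores on a 0-100 scale.
terms = [
    [62, 70, 55, 68],   # term 1
    [66, 72, 60, 71],   # term 2
    [70, 75, 64, 74],   # term 3
]

REMEDIATION_THRESHOLD = 65  # invented cut score on the running mean

running_scores = []
for term_number, stations in enumerate(terms, start=1):
    running_scores.extend(stations)
    cumulative_mean = sum(running_scores) / len(running_scores)
    status = ("flag for early remediation"
              if cumulative_mean < REMEDIATION_THRESHOLD else "on track")
    print(f"after term {term_number}: cumulative mean "
          f"{cumulative_mean:.1f} -> {status}")
```

Under this toy rule, a weak first term triggers a remediation flag early, while later terms lift the running mean back above the threshold, which is the frequent-look, rapid-remediation behaviour the abstract describes.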
Summary of Results
Data collected from the mini-OSCE style assessments, and the model used to aggregate data to inform progression decisions will be presented. Opinions of stakeholders will be discussed.
Discussion & Conclusion
The principles of programmatic assessment are relevant to the assessment of clinical skills. Whilst it is obviously crucial to assess the clinical competence of medical students reliably, the high-stakes end-of-course OSCE is not necessarily the only, or the best, tool for doing so. Assessing clinical skills longitudinally, in the context of continual professional development, may provide more reliable and valid measures of competence.
Competency oriented validity argumentation
* Mostafa Dehghani Poudeh, Aeen Mohammadi, Nikoo Yamani, Narges Saleh, Azadeh Rooholamini, Afagh Zarei
* Corresponding author: Mostafa Dehghani Poudeh, Tehran University of Medical Sciences, Iran
mft2084@gmail.com
Background
Programmatic assessment has been suggested as a recent approach to assessment in which accurate judgements about learners' competence are based mainly on multiple data points gathered throughout the course of study. So far, however, validity approaches have focused on individual methods and instruments rather than on the programme as a whole, perhaps because there is no agreed-upon basis for evaluating the validity of programmes. In workplace-based assessment, various competencies are evaluated simultaneously through different methods. For instance, a mini-CEX assesses interviewing, professionalism, and other competencies, some of which are also assessed with other methods. Assessment programmes therefore end in an aggregation of different scores, which has disadvantages. First, it is not very reasonable to sum the scores of diverse parts of an exam. Moreover, if an examinee fails in one competency, for example professionalism, while being judged competent in the others, should he fail overall? For this reason we can instead calculate a total score for each competency from the different instruments. In this way, the identified weak points of an examinee become more plausible and can be addressed with a remediation plan. With the foregoing, the focus of validity studies can shift from instruments to competencies. In other words, the strategies used for combining the similar parts of different assessments become the targets of validity enquiry, and particularly of Kane's framework. As a result, one should provide evidence at each level of Kane's framework for or against the accuracy of the outcomes concerning the competencies.
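As a rough illustration of this shift, the Python sketch below aggregates scores by competency across instruments instead of summing per-instrument totals. The instrument names, scores, and cut score are hypothetical, not taken from any real assessment programme.

```python
# Illustrative sketch: aggregate scores per competency across instruments,
# rather than summing per-instrument totals. All instruments, scores, and
# the cut score are hypothetical examples.

# scores[instrument][competency] = score on a 0-100 scale
scores = {
    "mini-CEX": {"communication": 78, "professionalism": 72, "clinical_reasoning": 70},
    "OSCE":     {"communication": 82, "clinical_reasoning": 75},
    "MSF":      {"communication": 90, "professionalism": 60},
}

def competency_scores(scores):
    """Average each competency over every instrument that assesses it."""
    totals, counts = {}, {}
    for per_instrument in scores.values():
        for competency, score in per_instrument.items():
            totals[competency] = totals.get(competency, 0) + score
            counts[competency] = counts.get(competency, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

PASS_MARK = 70  # hypothetical per-competency cut score

for competency, score in competency_scores(scores).items():
    verdict = "competent" if score >= PASS_MARK else "needs remediation"
    print(f"{competency}: {score:.1f} ({verdict})")
```

In this per-competency view, the hypothetical examinee passes communication and clinical reasoning comfortably yet is flagged in professionalism, precisely the profile that a single summed total would hide.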
Methods
At the scoring level, the assessments' domains are evaluated against the blueprints. We can also investigate the coverage of Miller's levels in the assessments: are all levels of a given competency assessed? The aggregation policies are subject to evaluation as well. For the generalization inference, the sources of variance (facets), including time, location, raters and even methods, are the targets of validation enquiry. The main question is whether differences between participants result from differences in their competency levels or from other facets. At the extrapolation level, we may compare the results yielded by different assessment methods for the assumed competencies; for example, the results of the communication parts of the mini-CEX, OSCE, and MSF evaluations could be compared. Last but not least, the implications level might be explored by assessing the entrustment judgements about examinees on the EPAs corresponding to each competency.
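To illustrate the generalization question, here is a minimal one-facet G-study sketch for a persons-crossed-with-raters design. The data are invented, and the single-facet design is a simplifying assumption; real programmes involve further facets such as occasions, locations, and methods.

```python
# Minimal G-study sketch for a persons x raters crossed design (one facet).
# Asks the generalization question: how much score variance reflects real
# differences in competency (persons) versus the rater facet? Data are made up.
import numpy as np

# rows = examinees, columns = raters; scores on one competency (hypothetical)
X = np.array([
    [72.0, 75.0, 70.0],
    [85.0, 88.0, 84.0],
    [60.0, 66.0, 58.0],
    [78.0, 80.0, 77.0],
])
n_p, n_r = X.shape
grand = X.mean()
person_means = X.mean(axis=1)
rater_means = X.mean(axis=0)
resid = X - person_means[:, None] - rater_means[None, :] + grand

# Mean squares for the crossed p x r ANOVA
ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                   # person x rater interaction + error
var_p = (ms_p - ms_pr) / n_r     # true person (competency) variance
var_r = (ms_r - ms_pr) / n_p     # rater leniency/severity variance

# Relative G coefficient for a mean over n_r raters: the share of variance
# in person rankings attributable to persons rather than the rater facet.
g_rel = var_p / (var_p + var_pr / n_r)
print(f"var(person)={var_p:.2f}  var(rater)={var_r:.2f}  var(residual)={var_pr:.2f}")
print(f"relative G coefficient over {n_r} raters: {g_rel:.3f}")
```

If the rater and residual components dominate the person component, differences between participants reflect the facets rather than competency, which is exactly the threat to the generalization inference described above.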
Take Home Message
There is a need to determine the applicability of the above-mentioned way of validating an assessment programme. In other words, the main research question will be: “How good is the competency as the basis for validating a programme of assessment?”
Do we have a full picture? The acquiring of information in high-stakes programmatic decision-making: a mixed-method intervention study
* Lubberta de Jong, Harold Bok, Lonneke Schellekens, Wim Kremer, Herman Jonker, Cees van der Vleuten
* Corresponding author: Lubberta de Jong, Utrecht University, The Netherlands
L.H.deJong@uu.nl
Background
In high-stakes programmatic decision-making, assessors aggregate a multitude of low-stakes assessments into a robust and holistic final decision about the student's performance (1). Through an iterative process of acquiring, organizing and integrating information, they select performance-relevant information (both quantitative and qualitative) from the portfolio on which to base their decision. To ensure the validity of the high-stakes procedure, assessors should have access to sufficient information to get a full picture (i.e. saturation of information) (2). However, the role of narrative information in saturation of information is not yet fully clear. Therefore, in this study we aim to further explore the concept of saturation of information by investigating the explanatory mechanism of the acquiring of information in relation to adaptive strategies and the role of varying quality of narrative information.
Summary of Work
In this study, members of the Clinical Competency Committee at the Faculty of Veterinary Medicine, Utrecht University, were asked to assess four portfolios in a quasi-experimental setting. These portfolios were authentic and were modified to vary the quality of narrative feedback and reflection. During the quasi-experiment, the assessors' on-screen actions were recorded, and after each portfolio the assessor was asked to fill in a Self-Completion Questionnaire (SCQ) containing both closed and open-ended questions. Based on the principles of stimulated recall, the results from the screen recordings and SCQs were used as input for a semi-structured interview after each session. The SCQs and semi-structured interviews were analyzed using template analysis. The quantitative and qualitative data were integrated in NVivo.
Summary of Results
The preliminary results from the initial template showed that assessors had an overall approach to acquiring information independent of the contents of a specific portfolio. This approach was influenced by several factors, e.g. the relevance of different sources. Based on the specific information in a portfolio, assessors adjusted their approach (i.e. a coping mechanism) to optimize the acquiring of information. Further results will be discussed during the presentation.
Discussion and conclusion
This study emphasizes the importance of the users in investigating the validity of high-stakes programmatic assessment. Here, validity is a complex phenomenon where both psychometric and qualitative approaches are relevant to completing the puzzle.
References
1. van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical teacher, 34(3), 205-214.
2. Schuwirth, L. W., & van der Vleuten, C. P. (2012). Programmatic assessment and Kane’s validity perspective. Medical education, 46(1), 38-48.
Developing the OSCE as part of programmatic assessment in a medical faculty - from low- to high-stakes assessment
* Katarzyna Naylor, Kamil Torres
* Corresponding author: Katarzyna Naylor, Department of Didactics and Medical Simulation, Medical University of Lublin, Poland
katarzyna.zielonka@umlub.pl
Background
The 2012 Polish Bill on learning objectives in medical studies aimed to introduce more practical aspects into the curricula and to focus on skills and competencies. Additionally, EU funding in 2016 enabled investment in simulation technology and teacher training in simulation techniques at 12 Polish medical universities. These new teaching approaches also demanded new types of assessment.
Summary of work
The research aims to present the process of introducing the OSCE examination into the medical curriculum of the Medical University of Lublin.
Summary of results
The first OSCE, in 2015, was a formative assessment included in the elective course Basic Clinical Skills (BCS) for second-year medical students. It was intended to provide an objective assessment of the skills acquired and a reliable tool for assessing technical skills. The content was developed according to Newble's approach and resulted in five OSCE stations. The first-attempt pass rate was 69%. In 2017, BCS became an obligatory module in the first year of medical studies, and the OSCE, developed over the previous two years, became a summative assessment, achieving a first-attempt pass rate of 78%.
In 2018, the Practical Clinical Training module was introduced, with an OSCE as its summative assessment at the start of the final (sixth) year of studies. This OSCE contained three stations: a technical procedure, a decision tree, and a communication station, with six task options at each station; 96% of participants passed at the first attempt. The examination was implemented by the CMS examiners. In 2019, the final-year OSCE was further developed into six stations: surgery, gynaecology, emergency medicine, paediatrics, internal medicine, and family medicine. Trained examiners/clinicians responsible for teaching a given module assessed the participants, and faculty of the Department of Didactics and Medical Simulation coordinated the examination. The tasks took the form of short scenarios; each station lasted 10 minutes, and participants rotated according to the provided schedule. Nearly 97% passed the OSCE at the first attempt.
Discussion and conclusion
Medical simulation, as a separate teaching methodology, requires distinct assessment methods. The goal is also to avoid the disadvantages of traditional test-oriented evaluation. Furthermore, an exam is a key element of the educational process. When choosing its form, all aspects of the educational activities leading to the preparation of competent specialists are fundamental: not only knowledge and technical skills, but also communication skills, the ability to think critically, and the ability to reflect on activities and decisions experienced. When constructing the OSCE, we adhered to the theory of constructive alignment; the assessment was to support learning and to reflect the implemented teaching methodology. Students' opinions and analysis of the results of the implemented OSCEs constituted the basis for its further improvement.
Take home message
When designing programmatic assessment, it is necessary to develop it continuously, according to students' opinions, the results obtained, and examiners' feedback.
Competences assessment for final year health professional students
* Anthony Serracino-Inglott, Lilian M. Azzopardi
* Corresponding author: Anthony Serracino-Inglott, University of Malta, Malta
anthony.serracino-inglott@um.edu.mt
Background
The course leading to a degree in pharmacy offered at the University of Malta covers 330 ECTS credits spread over 11 semesters. In the final year of the course, students cover two study units: 1) a synoptic, experiential-based study unit in Pharmacy Practice (40 ECTS credits) aimed at applying scientific knowledge to practice in a community pharmacy setting, and 2) a problem-based learning study unit in Clinical Pharmacy (20 ECTS credits) aimed at developing a comprehensive, holistic approach to patient care and medicines management.
Summary of work
The assessment model developed for the synoptic study unit includes continuous assessment (20%), accrued from an experiential portfolio, and two written multiple-choice examination papers (80%) of two hours each. The experiential portfolio assesses the competences developed by the students in reflecting on practice, identifying practice standards, appreciating ethical considerations and identifying legal and regulatory requirements. The first written examination (paper 1) is open-book and assesses the students' competence to apply knowledge appropriately and in a timely way and to interpret information available in primary references. The second examination paper (paper 2) assesses number sense and accuracy in calculating doses, and the ability to mobilise scientific knowledge for its practice implications. In the Clinical Pharmacy study unit, continuous assessment is based on case discussions (10%) and a two-hour written multiple-choice patient-case-based examination paper (90%). The paper assesses the students' ability to exercise professional judgement when participating in multidisciplinary healthcare teams in therapeutic decision-making, and to prepare patient monitoring plans and long-term follow-up.
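As a worked example of these weightings, the sketch below computes final unit marks. The equal split of the 80% written component between papers 1 and 2 is our assumption (the abstract does not state it), and the example marks are hypothetical.

```python
# Worked example of the stated weighting schemes. The 50/50 split of the
# 80% written component between papers 1 and 2 is an assumption, and the
# example marks are hypothetical.

def pharmacy_practice_mark(portfolio, paper1, paper2):
    # 20% continuous assessment (experiential portfolio) + 80% written
    # examinations, here assumed shared equally between the two papers.
    return 0.20 * portfolio + 0.40 * paper1 + 0.40 * paper2

def clinical_pharmacy_mark(case_discussions, exam_paper):
    # 10% continuous assessment (case discussions) + 90% written paper.
    return 0.10 * case_discussions + 0.90 * exam_paper

print(pharmacy_practice_mark(portfolio=75, paper1=80, paper2=62))  # 71.8
print(clinical_pharmacy_mark(case_discussions=70, exam_paper=68))  # 68.2
```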
Summary of results
In the June 2019 examination session, 25 students sat the final examinations. The average marks obtained in the two study units were 72% for Pharmacy Practice and 69% for Clinical Pharmacy. In Pharmacy Practice Paper 1, 8 questions were answered correctly by all students and 71 questions by 20 or more students; in Paper 2, 9 questions were answered correctly by all students and 40 questions by 20 or more students. In the Clinical Pharmacy examination paper, 9 questions were answered correctly by all students and 46 questions by 20 or more students.
Discussion and Conclusion
The performance of the 2019 student cohort was very similar across the two study units. There is a difference in performance between paper 1 and paper 2 of the synoptic study unit, which may indicate that students are better at searching for and interpreting information than at applying scientific knowledge to practice.
Take-home message
The assessment model of the two didactic study units in the final year of the pharmacy programme provides a structured approach to evaluating students' attainment of competencies in aspects of evidence-based, patient-focused pharmacy practice. It supports students in developing confidence in the application of scientific knowledge and in decision-making regarding pharmacotherapy optimisation and patient follow-up.