Saturday 9 November - Academic Short Presentations (EBMA)
10:50-12:30, Room 1.19, 2nd floor

Technology Theme

Chair: Dr Harish Thampy, University of Manchester

Electronic Testing of Real-Time Clinical Competencies

* Jamie Read, Roxanne Treloar
* Corresponding author: Jamie Read, Peninsula Medical School, University of Plymouth, United Kingdom
james.read@plymouth.ac.uk 

Background 
Electronic approaches to assessment are receiving increasing attention. At the University of Plymouth we have introduced electronic assessment of students' clinical competencies. This has created opportunities to review our assessment delivery model and staff involvement in assessment design. 

Summary of Work 
Through the introduction of iPad-based testing we have engaged with a large number of stakeholders, including students, patients, professional services, Technology Enhanced Learning teams and clinicians. We have collaboratively developed new assessment forms for students based on the feedback received from this group. These forms have been piloted alongside the current paper forms on a variety of programmes, such as Medicine and Dentistry. We set out to answer the following questions: 
1. Do students find electronic assessment acceptable? 
2. Do examiners find electronic assessment acceptable? 
3. Does the introduction of electronic assessment impact on the quality of data? 
4. Does the introduction of electronic assessment impact on the quality of feedback? 
5. Does the introduction of electronic assessment impact on the administration of the data?  
To answer these questions, students and staff members were invited to provide feedback to the clinical skills team. The quality of data, including the identification of missing data, was reviewed by the assessment team, and changes to process were highlighted. Feedback quality was reviewed by the module lead and through student feedback.

Summary of Results 
1. Students have been positive about the introduction of electronic assessment. Comments included the benefits of not having to retain multiple paper forms and the ability to have assessment forms readily available on one device. 
2. Examiners found the forms easier to interpret, with the benefit of being able to complete them on a variety of different devices. They also valued the receipt of copies of the assessments for their records. 
3. Missing data has been reduced to zero, as the forms cannot be submitted if they are incomplete. 
4. Feedback has been clearer owing to the elimination of handwriting-legibility issues, and the introduction of mandatory fields has resulted in fewer missed aspects of assessments, as forms cannot be submitted with incomplete data. 
5. The administrative load of checking and uploading the data for analysis following the assessment has reduced significantly.

Discussion and Conclusion 
The introduction of electronic assessment has been met positively. There were some initial concerns from staff members that the inability to draw diagrams within the feedback and the need to use an on-screen keyboard would reduce feedback quality. There were also concerns that the timings of assessments would be affected by typing on an on-screen keyboard rather than handwriting, particularly for high-stakes, time-pressured assessments such as the ISCEs. However, this has not proved to be the case. We would recommend early staff engagement to address such concerns, together with regular testing, to those considering introducing a similar assessment model in the future.  

Take-home Message 
Introduction of iPad-based testing has improved feedback quality, avoided missed assessment items and reduced administrative workload.

How we design and implement a feedback app to improve narrative feedback giving in undergraduate clerkships, using participatory design

* Johanna CG Jacobs, Hester EM Daelmans
* Corresponding author: Johanna Jacobs, Amsterdam UMC, VU University Faculty of Medicine, The Netherlands
a.jacobs@amsterdamumc.nl 

Background 
The complexity of the workplace learning environment in medical education is recognized worldwide, and it has several unique characteristics, both in undergraduate clerkships and in postgraduate training (1,2). Narrative feedback and longitudinal assessment are important in workplace learning. In our undergraduate curriculum a digital portfolio is used to collect feedback. From our feedback workshops for clinical teachers we know that the ‘how’ and ‘what’ questions remain challenging for feedback givers. Our aim is to design a feedback app that assists feedback givers in undergraduate clerkships, and to implement this tool in different clerkships, both in hospitals and in community-based education. 

Summary of Work 
We conducted this project at VU University Faculty of Medicine, Amsterdam University Medical Center, the Netherlands. We designed a feedback app to improve feedback giving in the clerkships of the undergraduate medical curriculum. The app is meant to assist supervisors in clinical clerkships, in general practice, and in community-based clerkships. Drawing on a needs assessment, faculty development experience and quality assurance evaluations, and following a ‘participatory design’ approach, we constructed a first version of the app in collaboration with a third party, informed by several meetings and individual interviews with potential users (residents, medical specialists, one GP, Skillslab teachers, interns, IT specialists and faculty developers) (3). We then started the implementation project in 2019. Two pilots in clinical clerkships and one in general practice are scheduled. Following a plan-do-check-act approach, we will collect feedback on the app and subsequently enter the next phase of the design process. 

Summary of Results 
In several meetings with different subgroups of stakeholders, we discussed the requirements for the first version of the feedback app. An important result of these discussions was consensus on the structure that would be most appropriate for the content and the users: logically organised and easy to use in daily practice. Participants advocated that the structure should be tailored to the ‘clinical process’, starting with history taking, physical examination, clinical reasoning and therapeutic interventions, and including professional behaviour, empathy, compassion and so on. Conceptually, the first version of our feedback app resembles a ‘congress guidebook app’ with eight tiles. In the current version these tiles are: 1. General aspects, 2. Professional behaviour, 3. History Taking, 4. Physical Examination, 5. Status Report, 6. Hand-over, 7. Technical Skills, and 8. Background information. 

Discussion & Conclusion 
We designed a feedback app in a project with several stakeholders, as an extra ‘tool’ to improve feedback-giving in workplace-based learning in undergraduate medical curricula. In 2019 the implementation project started with several pilots. Alongside the self-directed learning of medical students, as described in the R2C2 model (1), we think a feedback app provides a new direction for faculty development activities and contributes to our efforts to improve work-based assessment and narrative feedback. 

Take-home Message 
Using participatory design, we started a challenging project with potential users to design and implement a feedback app.

E-assessments: the use of fill-in-the-blank questions to assess biomedical understanding in undergraduate medical assessments

* Claire Stewart, Kenneth Langlands
* Corresponding author: Claire Stewart, University of Buckingham, United Kingdom
Claire.stewart@buckingham.ac.uk  

Background 
The creation of assessments of high reliability and validity that are also tractable to digital marking is highly desirable. To date, this has been achieved (to a certain extent) by the use of selected-response questions and, more recently, very short answer questions. There are, however, concerns that it is difficult to create such items for early-years testing without significant cueing (Sam et al., 2018). Additionally, very short answer questions do not necessarily evaluate higher levels of biomedical understanding. Our approach has been to employ short constructed-answer questions, although this is compromised by assessor variability, and assessment efficiency is reduced by an exhaustive marking process (Powell & Gillespie, 1990; Ventouras et al., 2010). There exists, therefore, a need for a question style that tests learning meaningfully and is adaptable to an automated marking platform. Fill-in-the-blank questions are known to be effective in testing higher levels of learning, necessitating analysis, synthesis and evaluation in medical students (Mooney et al., 1998). 

Method 
We replaced a subset of constructed-response questions with fill-in-the-blank questions in two diets of early-years undergraduate MBChB summative assessments. Items took the form of sentences describing, for example, a physiological process, with four missing words. In one case students were presented with a list of options to choose from. Items were quality-controlled by peer review, which evaluated cueing and the level of cognitive testing. Post-hoc psychometric analysis of item performance was undertaken using SPSS. We also determined the total marks scored per item, as difficulty indices tend not to take into account partial marks achieved. 
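
For illustration, a minimal sketch of the kind of post-hoc analysis described; the study itself used SPSS, so the Python below, with its hypothetical item names and marks, is indicative only:

```python
# Illustrative sketch (not the authors' SPSS analysis): a partial-credit
# difficulty index and corrected item-total correlations.
import pandas as pd

# Each row is a student; each column an item's marks (hypothetical values).
marks = pd.DataFrame({
    "item1": [4, 3, 2, 4, 1],
    "item2": [2, 2, 0, 3, 1],
    "item3": [4, 4, 3, 2, 2],
})
max_marks = {"item1": 4, "item2": 4, "item3": 4}

# Partial-credit difficulty: mean proportion of the maximum mark achieved,
# rather than the proportion of students scoring full marks.
difficulty = {item: marks[item].mean() / m for item, m in max_marks.items()}

# Corrected item-total correlation: each item against the total of the
# remaining items, so that an item is not correlated with itself.
total = marks.sum(axis=1)
itc = {item: marks[item].corr(total - marks[item]) for item in marks.columns}

print(difficulty)
print(itc)
```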

Results 
Twelve questions have been evaluated to date. The spread of marks achieved was comparable with other question styles, and eight of the items had item-total correlations (ITCs) greater than 0.2. Only one question had an ITC less than 0.17, as a consequence of poor question design. Other than issues with spelling, marking was considerably faster and more accurate than for constructed-response questions. 

Conclusion 
Fill-in-the-blank questions are a valid style for the assessment of early-years biomedical understanding in medical schools. Moreover, this style of question eliminates the element of guessing seen with selected-response questions. Importantly, these questions are easily digitally graded, with implications for both marking consistency and efficiency. We acknowledge that this represents a very small study, and further research is ongoing to evaluate this question style fully. 

Take-home Message 
Fill-in-the-blank questions offer a valid method for assessing biomedical understanding in early-years undergraduate medical education. They can be used effectively on a digital platform, improving marking consistency and examination efficiency.

References  
Mooney, G. A., Bligh, J. G. & Leinster, S. J. (1998) “Some techniques for computer-based assessment in medical education”, Medical Teacher, 20(6), pp. 560-566. 
Powell, J. L. & Gillespie, C. (1990) “Assessment: all tests are not created equally”, paper presented at the Annual Meeting of the American Reading Forum, Sarasota, FL.  
Sam, A. H., Field, S. M., Collares, C. F., Van der Vleuten, C. P. M., Wass, V. J. & Melville, C. (2018) “Very short answer questions: reliability, discrimination and acceptability”, Medical Education, 52(4), pp. 447-455. 
Ventouras, E., Triantis, D., Tsiakas, P. & Stergiopoulos, C. (2010) “Comparison of examination methods based on multiple-choice questions and constructed-response questions using personal computers”, Computers & Education, 54(2), pp. 455-461.

Experience using PeerWise in the early years of medical school

* Phil Smith, Liz Metcalf
* Corresponding author: Prof Phil Smith, Cardiff School of Medicine, Wales
SmithPE@cf.ac.uk 

Background 
PeerWise is a free online resource for authoring single-best-answer questions for peers to answer, score and comment upon. Writing questions promotes learning and generates a question bank highly relevant to that cohort. Virtual badges and league tables further incentivise engagement. 

Summary of Work 
Since 2013, we have introduced PeerWise to Years 1 and 2 medical students in Cardiff, correlated engagement with summative examination results and obtained feedback from student focus groups. 

Summary of Results 
A typical year group (n=300) writes 1,500–2,000 questions annually and provides 200,000–300,000 answers. About 5% of students write 90% of the questions. Summative assessment performance correlates well with PeerWise question writing but not with questions answered. Students report that the formative question bank is particularly valuable. 
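
For illustration, a minimal sketch of the kind of engagement-performance correlation described; the data below are hypothetical rather than the Cardiff cohort, and the use of Spearman's rank correlation is an assumption made here because authoring counts are heavily skewed:

```python
# Illustrative sketch (hypothetical data, not the Cardiff dataset):
# correlating per-student PeerWise activity with summative exam scores.
import pandas as pd
from scipy.stats import spearmanr

students = pd.DataFrame({
    "questions_written":  [0, 2, 15, 40, 1, 7],
    "questions_answered": [500, 1200, 900, 2500, 300, 800],
    "exam_score":         [55, 60, 68, 74, 52, 63],
})

# Spearman's rank correlation is robust to the skew in authoring counts
# (a small fraction of students writes most of the questions).
for activity in ("questions_written", "questions_answered"):
    rho, p = spearmanr(students[activity], students["exam_score"])
    print(f"{activity}: rho={rho:.2f}, p={p:.3f}")
```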

Discussion & Conclusion 
We have successfully introduced PeerWise into Years 1 and 2. In contrast, students beyond Year 2 have not engaged, as they can access large formative banks of clinical questions cheaply elsewhere. We invite content experts to scrutinise and comment upon questions within their specialist fields, and also promptly intervene if students leave careless or offensive (anonymous) comments. 

Take-home Message 
We strongly recommend PeerWise to other medical schools for their early years.

Low-fidelity simulation in a high-fidelity world 

* Alex Scott, Adelaida Gartner-Jaramillo
* Corresponding author: Dr Alex Scott, Frimley Park Hospital, United Kingdom
alexander.scott3@nhs.net 

Introduction 
Medical training has seen a rapid acceleration in the application of technology to learning. Research promotes the use of high-fidelity models and ever more complex training methods, with organisations keen to adopt and implement new technology. Models are used to minimise potential risks to patients from bedside learning and to refine established techniques. Simulation practice can also be used to develop non-technical skills pertinent to safe clinical practice. Simulation training can be employed from the early stages of undergraduate education through to professional postgraduate examinations, giving it broad scope across a multiplicity of environments. 

Methods 
Forty Foundation Year 1 doctors were taught clinical skills using low-fidelity part-task training models. Four clinical skills were selected from pre-determined postgraduate curricula. Self-assessments were recorded pre- and post-procedure, with qualitative feedback sought as a secondary measure. 

Results 
Global increases were seen across the four sampled clinical skills. Participants self-reported increased confidence and competence, and placed a high value on the training. 

Conclusion 
Fidelity has been shown to play an integral role in simulation. The authors conclude that simple part-task trainers (low-fidelity models) still have a valuable part to play in medical education. They remain cost-effective, adaptable and accessible training tools in an era of increasing complexity. Simulation provides a safe space to develop both technical and non-technical skills. Low-fidelity simulation can underpin trainees' learning objectives through effective real-time feedback and access to repetitive practice, and remains a feasible training tool for trainers and trainees alike. High-fidelity simulation should not be excluded completely; however, it appears best suited to defined roles in more complex moulage. 

Take-home Message 
Technology has the ability to improve and evolve medical education. With its potential for increased feedback, self- and peer assessment and pragmatic assessment, simulation has firmly entrenched itself in medical education. Care should be taken, however, not to disregard lower-fidelity models, as they still provide proven, effective learning, enable the teaching of non-technical skills and facilitate knowledge delivery.

Gamification in medical education - the use of an ‘Escape Room’ to assess and improve confidence of prescribing in medical students

* Joely Leeder, Priyanka Patel
* Corresponding author: Joely Leeder, Frimley Park NHS Foundation Trust, United Kingdom
joely.leeder@nhs.net 

Background 
An escape room is a team-based, time-limited game in which participants are required to solve problems and decipher clues in order to achieve a specific goal. The use of escape rooms in medical education is a relatively new concept that has yet to be integrated into UK medical school curricula. 

Summary of work 
A time-restricted escape room was created for medical students to assess confidence in prescribing and to serve as a learning tool. Seven tasks were created depicting a patient's journey through hospital, covering the following topics: antimicrobials, venous thromboembolism, fluids, analgesia, hyperkalaemia, antiemetics, laxatives and acute kidney injury. Two teams of medical students had one hour to complete the tasks, with the winners being the first team to finish and decipher the final code. Each task was located within either an envelope or a locked box. On successful completion of each task, as determined by the facilitators, students gained access to the next task by means of a code or key. Students could seek guidance from the facilitators throughout the game and were provided with access to local guidelines and the BNF. They were expected to prescribe new medications, carry out opiate-conversion calculations, and consider drug interactions and contraindications. Following completion of the game, each task was discussed, including the rationale behind the answers. Students were also given the opportunity to ask questions at this point to enhance their learning. 

Summary of results 
In this initial pilot, five students were asked to rate their confidence levels (1 = not confident, 5 = very confident) in prescribing within the eight areas assessed, pre- and post-game. The average confidence levels for each area were then calculated. 

Antimicrobials: 1.6 (pre-game) → 3.6 (post-game) 
VTE: 1.4 → 3.8 
Fluids: 1.6 → 3.8 
Pain: 2.4 → 3.8 
Renal disease: 1.4 → 3.8 
Management of hyperkalaemia: 3.2 → 4.6 
Antiemetics: 2.0 → 4.4 
Laxatives: 1.6 → 3.8 
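
For illustration, a minimal sketch of the averaging described above; the per-student ratings below are hypothetical values (n = 5) chosen only so that the first two topics reproduce the published means:

```python
# Illustrative sketch: computing pre/post mean confidence per topic.
# The individual ratings are hypothetical, not the pilot's raw data.
ratings = {
    # topic: ([pre-game ratings], [post-game ratings]), each on a 1-5 scale
    "Antimicrobials": ([1, 2, 1, 2, 2], [3, 4, 4, 3, 4]),
    "VTE":            ([1, 1, 2, 1, 2], [4, 4, 3, 4, 4]),
}

for topic, (pre, post) in ratings.items():
    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)
    print(f"{topic}: {pre_mean:.1f} -> {post_mean:.1f} "
          f"(change {post_mean - pre_mean:+.1f})")
```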

The students were also asked to provide feedback on the usefulness of the session and the teaching method used. Most students had only received prescribing teaching in lecture format and appreciated the ‘practical nature’, which simulated ‘being on the ward with real life problems’. They enjoyed how interactive the session was, and found it an ‘engaging’ and ‘non-intimidating’ way to learn. 

Discussion and Take-home Messages 
To our knowledge, this is the first escape room created for the sole purpose of assessing and improving prescribing skills; few have been utilised for educational purposes, most being used as team-building exercises with healthcare professionals. 

We have demonstrated that an escape room can be used as an interactive and engaging teaching or assessment tool, and can increase confidence levels in a particular topic, while also promoting the skills required for effective teamwork. Escape rooms have the potential to be used in a variety of settings in medical education, with the potential to incorporate different healthcare professionals.
