Poster presentations
Chair: Iain Robinson, Co-chair: Kevin Brandom | 9th March 2021, 12:30–15:30 (UK time)
The Role of Data Presentation in Supporting Student Learning when Using Programmatic Assessment
* Matthew Cripps, Kathryn Fox, Luke Dawson
* Corresponding Author: Matthew Cripps, University of Liverpool, School of Dentistry (United Kingdom), m.cripps@liverpool.ac.uk
Background
Providing feedback and coaching is an essential part of programmatic assessment (PA) (1). However, data suggest that student acceptance of PA can be challenging due to confusion over its formative and summative roles (2). The University of Liverpool, School of Dentistry uses a PA approach in its undergraduate curriculum and has developed a technology-supported tool called LiftUpp. Within LiftUpp, the scoring system focuses on establishing the independence of the learner's performance (3) across a 6-point developmental indicator (DI) scale. Individual judgements are triangulated longitudinally and displayed on a student interface. Data were displayed as a simple colour-coded matrix showing each observation and the total count of each DI, with DIs of 1 and 2 (no independence) coded red, 3 and 4 (support needed) coded blue, and 5 and 6 (independence) coded green.
Summary of Work
Recently it became clear that the existing feedback mechanisms led students to compare "scores", causing undue stress, with some students pleading with their tutors not to award them '2s'. In addition, this approach was driving a hidden curriculum, with students developing their own 'stories' about the DIs and their meaning.
To promote a more developmental style of feedback, a student partnership was established to inform interfaces that promoted reflection. The first stage was to develop a mechanism for standard setting by mapping threshold DI levels for every item and linking each one to the expected performance of the student in each year (4). The second stage was to refine the interface so that a 'blue' box indicated the requisite standard of independence had been met or exceeded, and a 'white' box indicated a DI below that threshold. Feedback was provided for every clinical session for all items that had been observed, along with written comments where appropriate. Crucially, the DI numbers were no longer displayed, so that learners focused on the verbal and written feedback, encouraging reflection and future development.
Summary of Results
Results from focus groups have been positive with statements such as:
"I think that is an improvement... you don't get hung up on specific numbers, I think everyone is pleased with that."
Furthermore, there has been an improvement in our National Student Survey (NSS) scores in the area of assessment and feedback, with an average 15% increase in satisfaction over 2018 results, suggesting students are more satisfied with the feedback being provided and seemingly less stressed.
Discussion and Conclusions
For PA, it is essential that the feedback provided has the right educational impact on students. Our experience and data suggest that moving from a situation that focused students on 'scores' to one that focused them on developmental need has been beneficial.
References
1. van der Vleuten CPM, et al. Medical Teacher. 2012;34(3):205–14.
2. Heeneman S, et al. Medical Education. 2015 Apr 28;49(5):487–98.
3. Crossley J, et al. Medical Education. 2011 Jun 1;45(6):560–9.
4. Govaerts M, van der Vleuten CPM. Medical Education. 2013 Dec;47(12):1164–74.