Friday 8 November - Short Presentations
10:50-12:30, Room: 1.27, 2nd floor

Knowledge Theme 

Chair: Mr Neil Rice, University of Exeter Medical School

The attitudes of academic teachers towards guidelines presenting good practices in writing multiple choice questions – an interview study

* Martyna Piszczek, Piotr Przymuszała, Katarzyna Piotrowska, Magdalena Cerbin-Koczorowska, Ryszard Marciniak
* Corresponding author: Martyna Piszczek, Students’ Scientific Club of Medical Education, Poznan University of Medical Sciences, Poland, martynapiszczek@onet.pl 

Background  
Although multiple choice questions (MCQs) seem to be an easy and efficient tool for assessing students, they are often poorly written, and academic teachers are commonly believed to receive too little training in writing them. Low-quality MCQs cannot be regarded as an objective measure of knowledge. This is a particular problem in medical education, where the knowledge students acquire will directly affect people's health and lives in their future practice.

Summary of Work  
The authors created and distributed a short guideline on writing good MCQs and then investigated academic teachers' attitudes towards it. The document briefly reviews the most important features of MCQs, takes the teacher step by step through the process of creating a good multiple choice question, and provides a checklist for quickly proofreading new and existing questions. A phenomenological approach was chosen to describe the experiences of 10 teachers who agreed to participate in semi-structured in-depth interviews.

Summary of Results
The results show that academic teachers recognize the need to improve the quality of MCQs. They find the tool helpful in writing new questions, emphasizing its brevity and appreciating the clear, readable structure of the guide. The respondents confirmed that the document helped them recognize their own mistakes and saw a continuing need for the standardization of MCQs. The results also show that academic teachers are open to guidelines that are useful in everyday work, and that they regard inadequate knowledge of writing good MCQs as a major challenge in medical education that requires an adequate response.

Discussion and Conclusion  
Academic teachers are aware that the low quality of MCQs is an important issue. Knowledge of MCQ writing needs to be continually refreshed and improved, even after years of experience in creating them, and participation in a single workshop or training course appears insufficient. Distributing short guideline documents is considered a good way to quickly acquire or systematize basic knowledge.

Take-home Message  
Inadequate knowledge of good MCQ-writing practices and item-writing flaws, and limited control over their implementation, remain a major problem in medical education. However, academic teachers are open to a guideline tool on MCQ writing and emphasize its usefulness both in preparing new questions and in recognizing flaws in existing ones.

Relationship between medical students’ year level and critical thinking score at Lambung Mangkurat University

* Pandji Winata Nurikhwan, Lena Rosida, Eka Yudha Rahman
* Corresponding author: Pandji Winata Nurikhwan, Medical Faculty of Lambung Mangkurat University, Banjarmasin, Indonesia, pandji.winata@gmail.com 

Background 
Critical thinking is considered an essential skill for doctors when making clinical decisions. Facilitating and supervising the development of medical students' critical thinking skills is therefore necessary. This study aimed to compare students' level of critical thinking across year levels.

Summary of Work 
This was a cross-sectional study. The population and sample were students of the Medical Faculty of Lambung Mangkurat University, divided into first-year, second-year, third-year, and fourth-year preclinical students and clinical students. Data were obtained through the Diagnostic Thinking Inventory (DTI) questionnaire, which measures the flexibility of thinking (FoT) and structure of memory (SoM) domains. Paired t-tests and ANOVA were conducted (95% confidence level).
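As an illustration of the group comparison described above, the sketch below runs a one-way ANOVA of SoM scores across year groups with standard Python libraries. It is a minimal sketch under assumed file and column names ("dti_scores.csv", "year_group", "som_score"), not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis): one-way ANOVA of DTI
# structure-of-memory scores across year groups, assuming a tidy CSV with
# hypothetical columns "year_group" and "som_score".
import pandas as pd
from scipy import stats

df = pd.read_csv("dti_scores.csv")  # hypothetical file of DTI responses

groups = [g["som_score"].to_numpy() for _, g in df.groupby("year_group")]
f_stat, p_value = stats.f_oneway(*groups)  # compares mean SoM scores across the five groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```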

Summary of Results 
A total of 654 students participated: 147 first-year, 119 second-year, 143 third-year, and 160 fourth-year students, and 85 clinical students. The t-test on SoM scores showed a significant difference between the academic phase and the clinical phase (p =

Factors Associated with First-Year Students’ High-Stakes Scores in the Medical Faculty of Lambung Mangkurat University

* Pandji Winata Nurikhwan, Alfi Yasmina, Didik Dwi Sanyoto
* Corresponding author: Pandji Winata Nurikhwan, Medical Faculty of Lambung Mangkurat University, Banjarmasin, Indonesia, pandji.winata@gmail.com 

Background 
In the rapidly evolving world of medicine, self-directed learning (SDL) is important for medical professionals because it promotes life-long learning. Medical students need good SDL skills and the ability to apply them successfully, and their performance in doing so can be assessed through learning achievement, such as written exam scores. The purpose of this study was to determine the influence of student entry track, self-directed learning readiness, motivation, and learning environment on the score achievement of first-year medical students in the Medical Faculty of Lambung Mangkurat University.

Summary of Work 
This cross-sectional study gathered data through questionnaires covering 10 independent variables (gender, living status, student admission path, Fisher's self-directed learning readiness scale, the Situational Motivation Scale, and the five subscales of the Dundee Ready Education Environment Measure (DREEM)) and one dependent variable (written test score). The population was the first-year students of the Medical Education Program at Lambung Mangkurat University. The data were analyzed by logistic regression using SPSS.
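For readers who want to reproduce this kind of model outside SPSS, the sketch below fits a logistic regression of a dichotomised written-test outcome on two DREEM subscales. It is a minimal sketch under assumed file and column names ("first_year_survey.csv", "passed", "academic", "atmosphere"), not the authors' analysis.

```python
# Minimal sketch (not the authors' SPSS analysis): logistic regression of a
# dichotomised written-test outcome on two DREEM subscales, assuming a CSV
# with hypothetical columns "passed", "academic" and "atmosphere".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("first_year_survey.csv")  # hypothetical questionnaire + score data

model = smf.logit("passed ~ academic + atmosphere", data=df).fit()
print(model.summary())        # coefficients, p-values, model fit
print(np.exp(model.params))   # exponentiated coefficients = odds ratios
```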

Summary of Results 
A total of 162 students filled in the questionnaires. Three DREEM learning environment subscales (Teacher, Academic, and Atmosphere) were advanced to the multivariate test. The resulting equation is Y = 0.392 − 0.502·Academic + 0.821·Atmosphere (Hosmer and Lemeshow test = 0.890), with the Academic (p = 0.24, OR 0.605 (0.391–0.937)) and Atmosphere (p = 0.006, OR 2.273 (1.259–4.104)) subscales from the DREEM.
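As a worked check (assuming the Academic coefficient reads −0.502), the reported odds ratios are simply the exponentiated regression coefficients: e^(−0.502) ≈ 0.605 and e^(0.821) ≈ 2.273, matching the ORs given for the Academic and Atmosphere subscales.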

Discussion and Conclusion 
The academic and atmosphere domains of the student learning environment are significantly associated with written test scores.

Take-home Message 
Lambung Mangkurat University needs to develop its learning environment in academic, learning, teaching, and social terms.

Formative Progress Testing Predicts Longitudinal Learning in Summative Serial Progress Testing in a Two-Year Healthcare Programme

* Balwinder Bajaj, Dr Steve Capry
* Corresponding author: Balwinder Bajaj, Swansea University Medical School, UK, B.P.S.Bajaj@swansea.ac.uk

Background 
Physician Associates (PAs) are a new addition to the UK healthcare workforce, trained to the medical model in diagnosis and management. PA courses in the UK admit mostly healthcare or science graduates to a two-year Diploma or Master's course, followed by a certifying national exit examination and clinical practice. Assessment validity in these courses requires testing of core knowledge and practical skills; single-best-answer multiple choice questions and objective structured clinical skills assessments are generally employed. Our institution has experience in the use of Progress testing in a four-year postgraduate Medicine programme. We now describe our experience of using it in a shorter two-year PA programme.

Summary of work 
The Progress test consists of a 200-item question paper with a standard format of stem, lead-in question and five options with a single best answer. Anchor topics are included to improve reliability. The paper is blueprinted, standard set (using the Angoff method at national examination outcome level) and amended according to the established assessment cycle (Fowell et al., 1999). Progress test data from the two years of the course (from two consecutive cohorts) are presented. Following an initial formative Progress test at the start of year 1, two summative Progress tests are taken per year during the PA programme. Longitudinal cohort performance has been analysed and compared with the formative Progress test undertaken at the start of year 1 of the course.
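As a point of reference for the Angoff standard setting mentioned above, the sketch below shows the basic arithmetic of deriving a cut score from judges' item-level probability estimates. It is an illustration under assumed file and variable names, not the programme's own standard-setting workflow.

```python
# Minimal sketch (not the programme's actual standard-setting code):
# an Angoff cut score is the sum over items of the judges' mean estimated
# probability that a minimally competent candidate answers correctly.
# "angoff_ratings.csv" is a hypothetical items-by-judges table of such estimates.
import numpy as np

ratings = np.loadtxt("angoff_ratings.csv", delimiter=",")  # shape: (n_items, n_judges)

item_expectations = ratings.mean(axis=1)   # mean judge estimate per item
cut_score = item_expectations.sum()        # expected marks for a borderline candidate
print(f"Angoff cut score: {cut_score:.1f} out of {ratings.shape[0]} items")
```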

Summary of results 
We have previously studied inter- and intra-cohort performance and demonstrated that progress testing in PA students shows longitudinal progression in performance through the two years of the course; this is again confirmed by the increase in the mean cohort score from progress test 1 to 4. The Gaussian distribution of test scores is confirmed. The data demonstrate the predictive nature of student performance in the initial formative Progress test and the longitudinal acquisition of core knowledge during serial testing. We observe a tailing off of the mean score at progress test 4 and an increase in standard deviation (indicative of scatter of scores) as the course becomes increasingly clinical in design. The correlation of the formative test score with progress tests 1-4 decreases on longitudinal comparison.

Conclusions 
The use of Progress testing allows assessment of the acquisition of core knowledge in individual learners, within a cohort and between cohorts. Whilst formative Progress testing at the outset of the course correlates with mean test scores across Progress tests 1-4 over the two years of sequential testing, it cannot predict individual students' performance with any accuracy at the end of the course, prior to sitting the national exit examination.

Take home Message 
Progress testing has been shown to be an effective technique for postgraduate healthcare student assessment. Its true content validity will emerge as longitudinal testing of our students continues through to the national exit examination, and its use alongside other assessment tools will allow its predictive validity to be demonstrated.

Distractor analysis: Qualitative and Quantitative analysis of Multiple Choice Question distractors

* Mennatallah Hassan Rizk, Hanaa Saeed El-hoshy, Menna Rizk
* Corresponding author: Mennatallah Hassan Rizk, Alexandria Faculty of Medicine, Egypt, m_hassan200430@alexmed.edu.eg

Background
Single-best-answer multiple-choice questions (MCQs) are among the most frequently used written assessment methods in medical education. This type of question consists of a stem and two or more options from which examinees must choose; a single correct or best response is present (the key), while the remaining, incorrect options are the distractors. Having effective distractors is a crucial aspect at which many MCQs fail. Teachers often spend a great deal of time constructing the stem and less time developing plausible distractors that examinees might choose instead of the correct answer. High-quality MCQs, however, also need well-written options. Non-functioning distractors (NFDs) are options selected infrequently (<5%) by examinees; such options should be removed from the item or replaced with more plausible ones.

Summary of work 
The aim of the study was to examine the efficiency of MCQ distractors in the 2018/2019 end-of-module exams of the basic medical sciences module at Alexandria Faculty of Medicine. We performed a quantitative analysis by calculating distractor efficiency and the number of NFDs. In addition, a qualitative analysis of the exam papers was done to identify the most frequent flaws related to distractor writing.
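To make the quantitative step concrete, the sketch below flags non-functioning distractors (chosen by fewer than 5% of examinees) and computes distractor efficiency for a single item. The response counts are invented for illustration; this is not the authors' code.

```python
# Minimal sketch (not the authors' analysis): flagging non-functioning
# distractors (chosen by <5% of examinees) and computing distractor
# efficiency for one item, using hypothetical option counts.
def distractor_stats(option_counts, key, threshold=0.05):
    """Return (number of NFDs, distractor efficiency in %) for one item."""
    total = sum(option_counts.values())
    distractors = {opt: n for opt, n in option_counts.items() if opt != key}
    nfds = [opt for opt, n in distractors.items() if n / total < threshold]
    efficiency = 100 * (len(distractors) - len(nfds)) / len(distractors)
    return len(nfds), efficiency

# Example: 200 examinees answering a four-option item whose key is "B"
nfd_count, de = distractor_stats({"A": 30, "B": 150, "C": 12, "D": 8}, key="B")
print(nfd_count, de)  # 1 NFD (option D, chosen by 4%); distractor efficiency ~66.7%
```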

Summary of results 
We found that the proportion of items with 100% distractor efficiency (all three distractors functioning effectively) was 35% (14/40 items), and the proportion of non-functioning distractors (as a percentage of all distractors) was 34.2% (41/120 distractors), suggesting that teachers have some difficulty developing plausible distractors. Regarding flaws related to distractor writing, the qualitative analysis revealed that the most frequent flaws were, in order: distractors that are non-homogeneous in content, and options containing irrelevant or trivial information.

Conclusions 
Both qualitative and quantitative distractor analyses are crucial to ensure the high-quality performance of multiple choice question exams.

Take home message
Distractor writing needs more time and skill, as NFDs affect item difficulty and discrimination levels.

Workplace assessment in different clerkship scenarios

* Elaine Assis, Silmar Gannam, Denise Ballester
* Corresponding author: Elaine Assis, University of City of São Paulo – UNICID, São Paulo, Brazil, elaine.assis@gmail.com

Assessing clinical skills is demanding and time-consuming for medical educators. What skills must be evaluated? How many times should a student be assessed? What determines that someone has acquired the skills to progress? What grading system should be used? These are some of the many questions that make workplace assessment, especially in the clerkship, a challenge. For many years at our school, each internship had its own way of evaluating students' performance: some had a written exam, others a global unstructured subjective evaluation, and a few did both. Students received only high grades, which did not discriminate good from poor students, so we were concerned about whether students were actually acquiring the skills they needed to become doctors.

To break this stalemate, we developed a standardized evaluation composed of two assessments: one for clinical knowledge and another for clinical skills. Knowledge was assessed by a written exam of either 50 to 80 multiple-choice questions or 10 short clinical cases. To assess clinical skills, a structured global rating tool with 11 items was developed and adapted with the help of the medical educators in each clerkship. This tool was created as a Google Forms questionnaire and was to be filled in once a week by two medical educators and the student being assessed, giving a total of eight evaluations. Three of the eleven items concerned professionalism and were rated as satisfactory or unsatisfactory; if a student was unsatisfactory on one of these, the grade for that assessment was zero and the rest of the evaluation was used only as feedback. The other items evaluated skills such as history taking, time management and physical examination; these were rated as above expected, satisfactory or unsatisfactory and, depending on the student's proficiency level, carried different weights in the final grade. Educators were trained to use the new tool and to give feedback.

After implementing this new assessment model, we noticed an increase in unsatisfactory grades and better discrimination between students who had and had not acquired the expected skills. The number of clinical skills assessments increased from none or two to a minimum of eight. Feedback, which was rarely given before, became routine across the clerkships, and both students and educators recognize its impact on the improvement of students' skills. The written exam also improved the quality of knowledge assessment. The participation of educators in constructing and adapting the new assessment promoted their engagement and belief in the assessment process.

We concluded that both the engagement of the educators and an online, short, easy-to-use assessment tool were essential to the successful implementation of the new evaluation process. Standardizing both the clinical and the knowledge assessments improved the discrimination between good and poor students and supported good-quality feedback. For good-quality evaluation in the clerkship, both clinical and knowledge assessments should be standardized and constructed with the participation of the medical educators who will perform them.
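The sketch below illustrates the grading rule described above, with the three professionalism items acting as pass/fail gates and the remaining items contributing weighted credit. The item names, weights and credit values per rating are illustrative assumptions, not the school's actual form logic.

```python
# Minimal sketch of the described grading rule: any "unsatisfactory" on a
# professionalism item sets the grade to zero; otherwise the other items
# contribute weighted credit. Weights and credit values are assumptions.
RATING_CREDIT = {"above expected": 1.0, "satisfactory": 0.7, "unsatisfactory": 0.0}

def clerkship_grade(professionalism, skills, weights):
    """professionalism: ratings for the three gate items;
    skills: item -> rating; weights: item -> weight (summing to 1)."""
    if any(r == "unsatisfactory" for r in professionalism):
        return 0.0  # grade is zero; the rest of the form is used as feedback only
    return sum(weights[item] * RATING_CREDIT[rating] for item, rating in skills.items())

grade = clerkship_grade(
    professionalism=["satisfactory"] * 3,
    skills={"history taking": "above expected", "physical exam": "satisfactory",
            "time management": "satisfactory"},
    weights={"history taking": 0.4, "physical exam": 0.4, "time management": 0.2},
)
print(grade)  # 0.4*1.0 + 0.4*0.7 + 0.2*0.7 = 0.82
```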
