Scenario-Based, Single Best, Multiple-Choice Questions (SB-SBA-MCQs) in Basic Medical Sciences: An Exploratory Study of Staff Awareness, Knowledge and Difficulties Encountered

Abstract

Background: Multiple-choice questions (MCQs) are a well-known and widely used assessment tool. They can be used to measure different levels of educational outcomes: knowledge, understanding, judgment and problem solving. Traditional (stand-alone) MCQs are often used as a tool for factual recall. Advantages of using scenario-based questions (SBQs) include a sharper focus on learning objectives and the ability to assess higher levels of learning. Shifting to scenario-based questions can increase the level of difficulty and measure higher levels of cognition. Purpose: This study explores the current knowledge and overall awareness of undergraduate teaching staff regarding the use and difficulties of scenario-based, single-best-answer multiple-choice questions (SB-SBA-MCQs) in assessments of the basic medical sciences. Method: We used an electronic Likert-scale questionnaire to explore this issue. The questionnaire covered the staff's current knowledge, their experience in writing SB-SBA-MCQs, courses or postgraduate degrees they had attended, and the difficulties they face or anticipate in writing SB-SBA-MCQs. Results: The majority (86%) are familiar with courses or workshops related to MCQ writing and assessment in general; only a small minority have not attended any. The majority (86%) had some experience in writing MCQs, and only a small percentage have not tried writing this type of MCQ. Nearly 60% think this format takes time to construct, yet the majority (96%) of those surveyed support shifting to scenario-based MCQs in basic medical sciences. Conclusion: The study has shown that most teachers of basic medical sciences are aware of, and have good knowledge of, SB-SBA-MCQs. It also highlighted the importance of, and need for, regular training courses and workshops on the topic.


1. Introduction

To deliver effective medical science education, the assessment system must be able to examine students’ knowledge, learning attitude and practical skills [1]. The assessment of competency changes students’ learning behavior and improves the information they gain from their teachers [2] [3]. Multiple-choice questions (MCQs) have long been known and used as an assessment tool in both undergraduate and postgraduate medical examinations [4] [5]. They are consistent, fair, unbiased, cost-effective and trustworthy, and they easily differentiate between high and low achievers [6]. A multiple-choice question consists of a problem or story, known as the stem, a lead-in question, and a number of suggested answers, known as alternatives or options. The options include one correct answer, known as the single best answer (SBA); the other suggested answers are known as distractors [7]. Scenario-based questions can be used as early as the first undergraduate year. This ensures that students apply the knowledge they have acquired so far, work to understand the scenario, and then choose the single best answer [8]. In this way, their practice simulates the real-life clinical practice they will face after graduation, where they must apply their knowledge whenever they encounter a clinical event. MCQs are used to assess different levels of Bloom’s taxonomy, from factual recall to more complex levels such as evaluation and reasoning [9]. Moreover, because they allow wider and more varied content to be sampled, they are considered a suitable assessment format, especially for certification and licensing examinations [9] [10]. On the other hand, they have a downside: they are time-consuming, not easy to construct, and only well-trained staff are capable of writing them [5] [9] [11]. Since student learning is driven by tests, careful test construction is an important skill for educators to develop [12]. Staff need continuous training courses to improve their MCQ-writing skills, and the training must be regular and delivered in repeated programs [13]. Assessment must be of high quality and valid; this requires establishing strict procedures to check test quality both before and after the test is conducted.
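To make the item anatomy described above concrete, the following minimal Python sketch models an SB-SBA-MCQ as a small data structure. It is purely illustrative and not part of the study; the class, its field names, and the sample anatomy item are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScenarioMCQ:
    """One scenario-based, single-best-answer MCQ item."""
    stem: str        # the clinical scenario (vignette)
    lead_in: str     # the focused question posed about the scenario
    options: list    # all suggested answers, in display order
    best_index: int  # index into `options` of the single best answer

    def distractors(self):
        """All options except the single best answer."""
        return [o for i, o in enumerate(self.options) if i != self.best_index]

# Hypothetical first-year anatomy item, written for illustration only.
item = ScenarioMCQ(
    stem=("A patient presents with weakness of elbow flexion "
          "and loss of the biceps reflex."),
    lead_in="Which nerve is most likely affected?",
    options=["Median nerve", "Musculocutaneous nerve", "Radial nerve",
             "Ulnar nerve", "Axillary nerve"],
    best_index=1,
)
print(item.options[item.best_index])  # -> Musculocutaneous nerve
print(item.distractors())             # the four distractors
```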

Assessment is deemed to promote the educational skills of teachers, as expressed by almost 75% of directors of medical education centers, deans and academic chairs [14]. Guidelines and examination committees encourage excellent practice and promote educators’ skills and knowledge [15].

This study explores the awareness of staff involved in teaching the basic medical sciences regarding their practice of scenario-based single-best-answer (SBA) MCQs and the difficulties they encounter. An electronic Likert-scale questionnaire was used for data collection; 101 staff members responded and completed it.

2. Methodology

This study explores the awareness of university basic medical sciences staff of scenario-based (SB), single-best-answer (SBA) MCQs as an assessment tool: their knowledge, and the difficulties they encounter, if any, in writing this type of question.

An electronic Likert-scale questionnaire was used as the data-collection tool, administered through the SurveyPlanet platform. The Likert scale was chosen because it is a universal method of collecting data that is easy to understand and from which conclusions and results can readily be drawn. The questionnaire consisted of twelve questions covering the background of the staff member (medical/non-medical), previous experience in dealing with scenario-based MCQs, courses or degrees attended in relation to MCQs, and personal views on shifting towards SB-SBA-MCQs.

The data were analyzed manually because of the small sample size.
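Although the tallying was done by hand, a short sketch makes the arithmetic behind the reported percentages explicit. The responses below are hypothetical placeholders, not the study's data.

```python
from collections import Counter

# Hypothetical 5-point Likert responses to a single questionnaire item
# (1 = strongly disagree ... 5 = strongly agree); the study's real data
# were tallied by hand.
responses = [5, 4, 4, 5, 3, 2, 5, 4, 1, 5, 4, 3]

counts = Counter(responses)
n = len(responses)
for level in range(1, 6):
    k = counts.get(level, 0)
    print(f"Level {level}: {k:2d} responses ({100 * k / n:.1f}%)")

# Share agreeing (levels 4 and 5), mirroring how the paper
# reports "majority" figures such as 86% and 96%.
agree = (counts.get(4, 0) + counts.get(5, 0)) / n
print(f"Agree or strongly agree: {agree:.0%}")
```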

3. Results

Two thirds of the research population are male university staff and the rest are female. More than half of those surveyed currently work in universities in the Kingdom of Saudi Arabia, a little more than one third are in Sudan, and the rest are based in other countries. Three quarters have a medical background, and the rest are non-medical.

Almost one third are anatomists, nearly one fifth (20%) are pathologists, more than one tenth (15%) are physiologists, and a few come from each of the other basic medical sciences.

More than half of the population are at the assistant professor level, one tenth (10%) hold full professorships in their specialties, less than one tenth (8%) are at the associate professor level, and the rest are at the lecturer and senior lecturer stages.

Two thirds of the staff surveyed have a wealth of teaching experience of nine or more years. Less than one fifth (18%) have 5 to 8 years, and 15% have less than 5 years.

More than two thirds of the population have a degree in medical education (or another related subject), ranging from a diploma to a PhD; less than one third do not have a degree in medical education or any related subject.

The majority (86%) are familiar with courses or workshops related to MCQ writing and assessment in general; a small minority have not attended any.

The majority (86%) had some experience in writing MCQs. Only a small percentage have not tried writing this type of MCQ.

Overall, two thirds think constructing this type of MCQ is easy and are currently doing so; another group (nearly 15%) also think it is easy but are not currently involved in writing them. Around 20% think it is difficult or not necessary.

Nearly 60% think it takes time to construct, one fifth attribute the difficulty to a language barrier, one third lack the knowledge to do so, and nearly one fifth simply declare it unnecessary.

The majority (96%) support shifting to scenario-based MCQs in basic medical sciences.

4. Discussion

Analysis of the responses of the staff involved in teaching the basic medical sciences revealed positive findings regarding the construction and use of scenario-based SBA MCQs in basic medical sciences, along with a few negative points. The majority (86%) are familiar with and have attended courses or workshops related to MCQ writing and assessment in general; a small minority have not attended any. Raza and Zainab (2019) found that faculty-to-faculty feedback can improve item writing considerably and that regular training refines faculty writing skills [16]. Rajaraman reported that question writing should be improved by training faculty in writing, with a re-analysis assessed after training [17]. The majority are accustomed to this type of question, in contrast to a small minority (<15%) who lack previous experience. These findings are broadly similar to those of Abdulghani et al. (2015), who pointed out that faculty need long-duration training courses to correct flaws in MCQ writing, and that the training must be continuous and repeated [13]. Regarding the difficulty of writing scenario-based questions, two thirds of the staff who responded think constructing scenario-based MCQs is easy and are accustomed to doing so, while nearly 20% face difficulties constructing this type of question. A similar finding was reported by Karthikeyan et al., who found that item writing can be affected by institutional and individual barriers, faculty development and quality-assurance processes; however, they did not point out specific challenges [18]. On the time factor, most respondents noted that constructing good questions is time-consuming; some related this to a lack of expertise or a language barrier. These findings are close to the results of Karthikeyan et al. (2019) [19], who suggest that allocating reasonable time for item writing, along with pairing experienced writers with new writers for mentorship, could enhance writer commitment. Others (Bligh and Brice, 2009) have indirectly highlighted the time needed to construct scenario-based questions, since medical educators have many duties, among them writing high-quality items [20]. Most of the participants (96%) support shifting to scenario-based MCQs in basic medical sciences; this agrees with Lal, who found that shifting to case scenarios is necessary and sound in an era of reasoning-based examination questions [21].

A renewal of assessment methods is needed to assess learners’ understanding of the different anatomy subjects. This will ensure that student learning takes place in a higher cognitive domain, far from routine factual recall, and rests on a suitable theoretical basis of assessment that meets the expectations of all those involved in the educational process.

Eventually, students will gain the necessary competencies in knowledge, skills and attitude to prepare them for practice in the different medical fields. Consequently, this will positively influence the whole educational process [22]. MCQs are the most commonly used tool for assessing learners in anatomy within medical and health-professions programs [23]. Compared with stand-alone MCQs, case-scenario MCQs were found to offer prospects for integrating sub-specialties into assessment, in line with problem-based learning (PBL).

They are consistent and practical in assessing students’ cognitive skills, and critical and logical thinking is encouraged by varying levels of item difficulty. Higher student scores on case-scenario MCQ (CS-MCQ) examinations suggest improved understanding of the subject and/or well-written questions.

Increasing the number of scenario questions will mean that wider course content is included in the examination [24].

Clinical-scenario questions present students with information on the clinical presentation, complications, and laboratory and radiological investigations. The students have to interpret and manage specific clinical conditions, which requires the test items to be written in a high-cognitive format [25].

5. Conclusions

The majority (86%) had some experience in writing MCQs, and many of them are currently involved in writing MCQs.

Nearly 60% think it takes time to construct, one fifth attribute the difficulty to a language barrier, one third lack the knowledge to do so, and nearly one fifth simply declare it unnecessary. Most respondents favor and support shifting to scenario-based MCQs in basic medical sciences.

Author Contributions

Both authors contributed equally to all steps of the research and to the preparation of the paper draft.

Limitation of Current Study

The main limitation is the small number of respondents (101).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Ghasemzadeh, I., Aghamolaei, T. and Hosseini-Parandar, F. (2015) Evaluation of Medical Students of Teacher-Based and Student-Based Teaching Methods in Infectious Diseases Course. Journal of Medicine and Life, 8, 113-117.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5348950/
[2] Khan, S., Ahmed, R.R., Streimikiene, D., Streimikis, J. and Jatoi, M.A. (2022) The Competency-Based Training & Assessment, and Improvement of Technical Competencies and Changes in Pedagogical Behavior. E&M Economics and Management, 25, 96-112.
https://doi.org/10.15240/tul/001/2022-1-006
https://www.researchgate.net/publication/359601380_The_Competency-based_Training_Assessment_and_Improvement_of_Technical_Competencies_and_Changes_in_Pedagogical_Behavior
[3] Cilliers, F.J., Schuwirth, L.W.T. and Van Der Vleuten, C.P.M. (2012) A Model of the Pre-Assessment Learning Effects of Assessment Is Operational in an Undergraduate Clinical Context. BMC Medical Education, 12, Article No. 9.
https://doi.org/10.1186/1472-6920-12-9
[4] Fuchs, A.H. and Trewin, S.A. (2007) History of Psychology: Robert Yerkes’ Multiple-Choice Apparatus, 1913-1939. The American Journal of Psychology, 120, 645-660.
https://www.jstor.org/stable/20445429
https://doi.org/10.2307/20445429
[5] Shumway, J.M. and Harden, R.M. (2003) AMEE Guide No. 25: The Assessment of Learning Outcomes for the Competent and Reflective Physician. Medical Teacher, 25, 569-584.
https://doi.org/10.1080/0142159032000151907
[6] Brown, G.T.L. and Abdulnabi, H.H.A. (2017) Evaluating the Quality of Higher Education Instructor-Constructed Multiple-Choice Tests: Impact on Student Grades. Frontiers in Education, 2, Article No. 24.
https://doi.org/10.3389/feduc.2017.00024
[7] Brame, C. (2013) Writing Good Multiple Choice Test Questions.
https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions
[8] Smith, P.E.M. and Mucklow, J.C. (2016) Writing Clinical Scenarios for Clinical Science Questions. Clinical Medicine Journal of the Royal College of Physicians of London, 16, 142-145.
https://doi.org/10.7861/clinmedicine.16-2-142
[9] Epstein, R.M. (2007) Assessment in Medical Education. The New England Journal of Medicine, 356, 387-396.
https://doi.org/10.1056/NEJMra054784
[10] Kara, O.A.M.Y.A.H. (2014) Toward a Media History of Documents (Vol. 7, pp. 107-115).
https://www.mendeley.com/catalogue/f7cd1cfc-f7e7-32b5-aaaf-f59366815139/
[11] Baig, M., Ali, S.K., Ali, S. and Huda, N. (2014) Evaluation of Multiple Choice and Short Essay Question items in Basic Medical Sciences. Pakistan Journal of Medical Sciences, 30, 3-6.
[12] Segers, M., Gijbels, D. and Thurlings, M. (2008) The Relationship between Students’ Perceptions of Portfolio Assessment Practice and Their Approaches to Learning. Educational Studies, 34, 35-44.
https://doi.org/10.1080/03055690701785269
[13] Abdulghani, H.M., et al. (2015) Faculty Development Programs Improve the Quality of Multiple Choice Questions Items’ Writing. Scientific Reports, 5, Article No. 9556.
https://doi.org/10.1038/srep09556
[14] Sherbino, J., Frank, J.R. and Snell, L. (2014) Defining the Key Roles and Competencies of the Clinician-Educator of the 21st Century: A National Mixed-Methods Study. Academic Medicine, 89, 783-789.
https://doi.org/10.1097/ACM.0000000000000217
[15] Gierl, M.J. and Lai, H. (2016) A Process for Reviewing and Evaluating Generated Test Items. Educational Measurement: Issues and Practice, 35, 6-20.
https://doi.org/10.1111/emip.12129
[16] Raza, A. and Zainab, H. (2019) The Good Teacher Attributes—A Cross Sectional Study on Teaching Evaluation at Rehman Medical College, Peshawar. The Professional Medical Journal, 26, 881.
https://doi.org/10.29309/TPMJ/2019.26.06.3445
http://www.theprofesional.com/index.php/tpmj/article/view/3445
[17] Rajaraman, V. (2016) Big Data Analytics. Resonance, 21, 695-716.
https://doi.org/10.1007/s12045-016-0376-7
[18] Karthikeyan, S., O’Connor, E. and Hu, W. (2019) Barriers and Facilitators to Writing Quality Items for Medical School Assessments—A Scoping Review. BMC Medical Education, 19, 14-17.
https://doi.org/10.1186/s12909-019-1544-8
[19] Karthikeyan, S., O’Connor, E. and Hu, W. (2019) Motivating Assessment Item Writers in Medical Programs: A Qualitative Study. BMC Medical Education, 20, Article No. 334.
https://doi.org/10.21203/rs.2.13984/v1
[20] Bligh, J. and Brice, J. (2009) Further Insights into the Roles of the Medical Educator: The Importance of Scholarly Management. Academic Medicine, 84, 1161-1165.
https://doi.org/10.1097/ACM.0b013e3181ace633
[21] Lal, P. (2016) From Rote to Reasoning: The Paradigm Shift Required in Medical Entrance Examination and Beyond! MAMC Journal of Medical Sciences, 2, 1-5.
https://doi.org/10.4103/2394-7438.174849
[22] Ghosh, S.K. (2016) Teaching Anatomy: It’s Time for a Reality Check. Academic Medicine, 91, 1331.
https://doi.org/10.1097/ACM.0000000000001339
[23] Craig, S., Tait, N., Boers, D. and McAndrew, D. (2010) Review of Anatomy Education in Australian and New Zealand Medical Schools. ANZ Journal of Surgery, 80, 212-216.
https://doi.org/10.1111/j.1445-2197.2010.05241.x
[24] Vuma, S. and Sa, B. (2017) A Comparison of Clinical-Scenario (Case Cluster) versus Stand-Alone Multiple Choice Questions in a Problem-Based Learning Environment in Undergraduate Medicine. Journal of Taibah University Medical Sciences, 12, 14-26.
https://doi.org/10.1016/j.jtumed.2016.08.014
[25] Salisbury, A. (2014) High Cognitive Test Item Development and Implementation. All Graduate Plan B and Other Reports. 368.
https://doi.org/10.1007/978-3-642-41714-6_41633
https://digitalcommons.usu.edu/gradreports/368
