CyberPatient™—An Innovative Approach to Medical Education

Abstract

Background: A variety of tools has been used to teach history-taking skills to novice learners. The standardized patient (SP) is considered the gold standard in medical education. We hypothesized that the online simulation platform CyberPatient™ (CP) is as effective as SP. Methods: In this prospective randomized controlled trial, the educational effectiveness of CP was compared to that of SP in improving history-taking skills. Twenty-two incoming students at the University of British Columbia (UBC) were randomly divided into two groups: the SP group (n = 11) practiced their history-taking skills with standardized patients, and the CP group (n = 11) practiced with CyberPatient™. The content for both groups included 3 cases of GI pathology, and the study time was 60 minutes. The assessment method was an Objective Structured Clinical Examination (OSCE) before and after the interventions. Data were analysed in a two-way between/within ANOVA, and a Wald test was used to deal with violations of the ANOVA assumptions. Economic benefits were assessed as cost-effectiveness (calculated as the cost/effect ratio) and the cost-value proposition (cost-value relationship). Results: Both groups showed significant improvement in the knowledge domain of history taking (SP group p = 0.006, CP group p = 0.0001). The history-taking knowledge variable in both groups manifested a significant main effect of time, indicating that students did better after the interventions, F (1, 15.1) = 10.5, p = 0.011. The groups performed at a similar level after the intervention. Moreover, the results show that the use of CP is more cost-effective and has a better cost/value proposition for medical education. Conclusion: We conclude that CyberPatient™ is as effective as standardized patients in the delivery of practical knowledge to novice medical students; however, CyberPatient™ is more economically rewarding.


1. Introduction

The traditional academic training of medical students, consisting of reading textbooks and attending lectures prior to clinical rotations, has been reshaped by a focus on clinical skills and early engagement with patients, as recommended in Flexner's report (Flexner, 2002). However, the integration of increased clinical exposure into medical schools has not been easy and requires additional investments and resources (Prince & Boshuizen, 2006).

One of the very first practical skills that medical students need to learn is how to perform a medical interview, which has been described as the most robust diagnostic tool available to a physician (Enelow et al., 1996). Acquisition of the history-taking ability that leads to deductive reasoning is considered one of the first challenges in the application of theory to the practice of medicine (Lichstein, 1990).

Different methods have been used for practicing interview skills, including traditional lectures and reading materials, online courses, role playing with feedback, and simulated and real patients (Keifenheim et al., 2015).

Among the variety of educational methods, the "simulated patient", first introduced by Barrows and Abrahamson (Barrows & Abrahamson, 1964), is now widely used for teaching and assessment purposes as a gold standard (Swartz et al., 1997). However, the use of this method comes with the costs of dedicated staff, SP training, dedicated space, and other resources, which increase the financial burden on medical education (Cleland et al., 2009), as described and confirmed by others (Kelly & Murphy, 2004). Another issue with the implementation of this method, as described by Colliver et al., is the limited time of interaction between students and SPs (Colliver & Williams, 1993).

The introduction of new technologies has been proposed to support medical education (Huang et al., 2007). In particular, virtual reality may provide a suitable virtual clinical environment in which students can learn and practice their history-taking skills with more reliable, standardized, and cost-effective online tools (Danforth et al., 2009). In a meta-analysis, Consorti et al. concluded that virtual patients are effective in improving students' clinical reasoning on specific topics (Consorti et al., 2012). Supplementing traditional methods with virtual cases has also been shown to improve the data collection and interpretation abilities of undergraduate students as they pertain to history-taking skills (Vash et al., 2007). Others have shown that virtual patients are a cost-effective tool for self-directed learning (Kandasamy & Fung, 2009).

Recently, a digitally enhanced online simulation platform called CyberPatient™ 2.0 was fully developed at the University of British Columbia (UBC). The platform is equipped with animated online history-taking and physical examination components (Medical Education/CyberPatient, n.d.). It is designed to enhance experiential learning and support competency-based education in a virtual clinical environment, where students can practice their history-taking, physical examination, and decision-making abilities across the continuum of care using virtual patients.

We hypothesize that CyberPatient™ 2.0 improves the history-taking abilities of novice learners who have not yet been exposed to the clinical learning environment.

The objective of this project was threefold:

· To determine whether the digitally enhanced online simulation platform (CyberPatient™ 2.0) improves the history-taking skills of novice medical students.

· To determine whether CyberPatient™ 2.0 is as effective as standardized patients in improving the history-taking skills of novice learners.

· To determine whether CyberPatient™ 2.0 is more cost-effective than standardized patients in improving the history-taking skills of novice learners.

2. Methodology

This study used a prospective randomized controlled trial to compare a digitally enhanced online simulation platform (CyberPatient™ 2.0) to live simulation (standardized patients) for the training of history-taking skills. The dependent variables were history-taking knowledge and communication skills. Group and time of testing served as the independent variables.

For the sample size calculation, estimates of effect size were guided by a previous study by Qayumi et al. (Qayumi et al., 2004), and the calculations were carried out using an online calculator (Power/Sample Size Calculator, n.d.). The calculations indicated that about 10 students per group are needed to detect a 1.0 effect-size difference, with an alpha of 0.05 for the between-group comparisons and a desired power of 0.70.
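As an aside, the figure of about 10 students per group is consistent with a standard two-sample power calculation at these design values if the comparison is one-sided (the paper does not state sidedness, so this is an assumption). A minimal sketch, using Python's statsmodels rather than the online calculator cited above:

```python
# Sketch: sample size per group for effect size d = 1.0, alpha = 0.05,
# power = 0.70. The one-sided alternative is an assumption made here to
# match the paper's "about 10 students"; the study used an online calculator.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0, alpha=0.05, power=0.70, alternative="larger"
)
print(f"required n per group: {n_per_group:.1f}")  # roughly 10
```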

2.1. Participants

Following approval of the project by the UBC Behavioural Research Ethics Board, participants (incoming students in the summer of 2019) were invited into the study by UBC Undergraduate Medical Education. The participation criteria included all novice learners who had been accepted to UBC medical school but had not yet attended any classes; accordingly, the experiment was conducted one week before the start of classes in 2019. The exclusion criteria included students with any previous clinical experience or training in healthcare-related areas. In consideration of the inclusion and exclusion criteria, twenty-two novice (freshman) medical students were recruited, and all gave written consent to participate in this study.

2.2. Instruments

All students were evaluated at an Objective Structured Clinical Examination (OSCE) station. The standardized patients in the OSCE station had been trained on history-taking in gastrointestinal pathologies before the event. The pre- and post-test cases were identical.

Performance of the students was recorded by examiners using a checklist created on the basis of Medical Council of Canada objectives and approved by the study committee of UBC faculty members (Appendix 1: OSCE Checklist). The same checklist was used for the pre- and post-test. Data gathered from the OSCE examiners' checklists in both the CyberPatient™ and standardized-patient groups were categorized into two main sections: history-taking knowledge, with 53 items (each scored 0 or 1, for a maximum sum of 53); and soft communication skills, with 9 items (each scored 1-4, for a maximum score of 36).
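Purely as an illustration, the two checklist scores could be aggregated as below. Only the item counts and score ranges come from the study; the example responses and variable names are hypothetical.

```python
# Sketch: aggregate one student's OSCE checklist into the two study scores.
# 53 binary knowledge items (0/1, max 53) and 9 soft-skill items (1-4, max 36),
# per Section 2.2. The example responses below are made up.
knowledge_items = [1, 0, 1] + [1] * 50          # 53 items scored 0 or 1
soft_skill_items = [4, 3, 4, 2, 3, 4, 3, 4, 3]  # 9 items, each scored 1-4

assert len(knowledge_items) == 53 and set(knowledge_items) <= {0, 1}
assert len(soft_skill_items) == 9 and all(1 <= s <= 4 for s in soft_skill_items)

knowledge_score = sum(knowledge_items)    # out of 53
soft_skill_score = sum(soft_skill_items)  # out of 36
print(knowledge_score, soft_skill_score)
```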

In addition to the checklist filled by the OSCE examiners, participants completed a survey, soliciting their opinions and comments on the technology, methods of learning, and assessment (Appendix 2: Satisfaction survey). For each of the 5 questions, the participants had the choice of selecting a rating of 1-5, with 1 being the lowest and 5 being the highest score.

2.3. Procedure

On the day of the study, the students were scheduled to go through a 2.5-hour process in 2 different groups, with the second group starting 75 minutes after the first. After arrival and registration, all students watched a 45-minute video lecture on the art of history-taking as an introduction to the content of the task and the logistics of the event. Subsequently, all students were evaluated at a first Objective Structured Clinical Examination (Pre-OSCE) station designed only for history-taking. The time for the OSCE was 15 minutes, and the evaluators (UBC faculty members) were blinded to the students' group assignment.

After completion of the Pre-OSCE, students were randomly divided into two groups. The first group (n = 11) practiced live simulation with standardized patients (the SP group), and the second group (n = 11) used CyberPatient™ 2.0 (the CP group). The online platform used in the CP group was designed with animated avatars that simulate patients, and students interact with those avatars as they would with real patients. Immediately after the exam, the student groups were isolated and escorted to designated locations for the study time and access to the study content. The content for both groups included 3 cases of GI pathology (1, 2 & 3), and the study time was 60 minutes.

During the study time, students in the SP group practiced their history-taking skills with trained standardized patients, while students in the CP group used CP 2.0 on computers in a computer lab to access the same information about the same three cases, chosen by the committee as the content of this study.

In the study sessions, the first five minutes were allocated for the groups to get oriented to the session. After orientation, all students in the SP group had the opportunity to interview the SPs one on one, rotating from one SP to another and spending 20 minutes with each SP (a total of 60 minutes). In the CP group, every student had a computer with the same three cases and the opportunity to practice their interviewing skills for the same amount of time (20 minutes per case, a total of 60 minutes). Study sessions were designed as independent, student-centered learning sessions to provide equal conditions for both groups. Therefore, to avoid variability during the study session, students were supervised but did not receive any help from faculty and staff, and they were isolated from external information sources such as phones, the Internet, and intergroup communication.

After the completion of the study sessions, students were escorted to the same assessment rooms for the Post-OSCE assessment. In the Post-OSCE, the same checklist was used to assess the students, and the judges were blinded to the students' group. All pre- and post-test coded checklists were collected by the volunteers, and the data were entered into an MS Excel sheet for decoding and statistical analysis.

2.4. Analyses

2.4.1. Statistical Analysis Plan

Data analyses were performed in the R version 3.5.3 environment for statistical computing (Bunn & Korpela, 2008). Descriptive statistics included means and standard errors. Data were analysed in a two-way between/within ANOVA for each dependent variable. Significant effects were followed up with paired contrasts. A Wald test was used to deal with violations of the ANOVA assumptions.
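The analysis itself was run in R; purely as an illustration of the design, an equivalent two-way mixed (between/within) ANOVA with follow-up contrasts could be set up in Python with the pingouin package as below. The long-format data layout and column names are assumptions, and the Wald-based handling of assumption violations is not reproduced here.

```python
# Sketch of the 2 (group: SP/CP, between) x 2 (time: pre/post, within)
# mixed ANOVA on the history-taking knowledge score. The study used R;
# pingouin and the column names below are assumptions for illustration.
import pandas as pd
import pingouin as pg

# Expected long format: one row per student per testing occasion.
df = pd.read_csv("osce_scores_long.csv")  # columns: student, group, time, knowledge

aov = pg.mixed_anova(data=df, dv="knowledge", within="time",
                     subject="student", between="group")
print(aov)  # main effects of group and time, plus their interaction

# Follow-up paired pre-vs-post contrasts within each group,
# analogous to the contrasts reported in Table 3.
contrasts = pg.pairwise_tests(data=df, dv="knowledge", within="time",
                              subject="student", between="group")
print(contrasts)
```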

In this study, for the calculation of cost-effectiveness, we calculated the cost as per current UBC expenditure figures and rules (Indirect Costs, Budgeting + Finance, n.d.; UBC 2018/2019 Budget, 2019). For the effect, we used the data obtained in our experiment for both the CP and SP groups as the percent change between pre- and post-test (CP = 28% and SP = 16%).

2.4.2. Cost Value Analysis

Cost estimates: For analysing the SP expenses, the direct costs of training and participation of faculty instructors and SPs (the sum titled the subtotal) were added to the indirect costs (25% of the subtotal, according to UBC) (Indirect Costs, Budgeting + Finance, n.d.).

As per UBC rules, a minimum of two hours of training and a minimum of four hours of role playing are required for a session with an SP. The remuneration for training and role-play time is $20 to $25/hour per SP (Standardized Patients-UBC Faculty of Medicine, n.d.), depending on the complexity of the case, making the cost of each SP $132 (an average of $22/hour × 6 hours). For the faculty members, the total was $540 (2 hours of training + 4 hours of supervision at a rate of $90/hour). Therefore, with a direct (subtotal) cost of $672 ($132 + $540) and an indirect cost of $168 (25% of $672), the total cost of one SP per session (subtotal + indirect cost) was estimated to be $840. Since each session is usually organized for a group of 10 students, the cost of one SP per student per session was calculated to be $84.
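The derivation above amounts to a few lines of arithmetic; a minimal sketch using the dollar figures quoted in the text:

```python
# Sketch: per-student SP cost per session, using the UBC rates quoted above.
sp_cost = 22 * (2 + 4)             # avg $22/h: 2 h training + 4 h role play = 132
faculty_cost = 90 * (2 + 4)        # $90/h: 2 h training + 4 h supervision = 540
subtotal = sp_cost + faculty_cost  # direct cost = 672
total = subtotal * 1.25            # plus 25% indirect cost = 840
per_student = total / 10           # 10 students per session = 84
print(per_student)                 # 84.0
```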

In this study, cost-effectiveness is calculated as the cost obtained in our calculation divided by the outcome obtained in our experiment (SP = 16%, CP = 28%). This cost-effectiveness analysis (CEA) ratio = Cost/Effect has been used by others (McEwan, 2012).
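Written out with the study's inputs (the dollar figures are derived in Section 3.2), the reported ratios follow directly:

CEA (SP) = $201,600 / 16 = 12,600;  CEA (CP) = $56,400 / 28 ≈ 2,014.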

Value estimates: Students as well as faculty members were interviewed to rate the specific values of CP versus SP, and their opinions were registered as scores, presented in Table 1.


3. Results

3.1. Results of the Experiment

Descriptive statistics for the two dependent variables are listed in Table 2.

The two-way between/within analysis of variance of the history-taking soft-skills results failed to detect significant differences between the groups or between the pre- and post-test occasions.

By contrast, the same analysis of the history-taking knowledge variable manifested a significant main effect of time of testing, indicating that students did better after the educational session, F (1, 15.1) = 10.5, p = 0.011.

Follow-up paired contrasts indicated that both groups had significant improvement in the knowledge domain of history-taking (SP group, t (20) = 3.10, p = 0.006; CP group, t (20) = 5.04, p = 0.0001). However, the comparison between the groups was not statistically significant (see Table 3 and Figure 1).

Table 1. Description and ranking of values expressed by students and faculty for the SP and CP groups.

Table 2. Means and standard errors for the history-taking knowledge and soft-skill scores by group and assessment time.

Abbreviations: OSCE, Objective Structured Clinical Examination; N, number; SE, standard error; SP, standardized patient; CP, CyberPatient; Hx, history taking.

Table 3. Contrasts of pre- and post-test performance by study group for the history-taking knowledge variable.

Abbreviations: SE, standard error; df, degrees of freedom; SP, standardized patient; CP, CyberPatient.

Figure 1. Total and by-group change of OSCE scores for the history-taking soft-skills and knowledge variables.

Students' opinions about the experiment and their satisfaction with the delivery methods revealed that the majority of students were satisfied with both methods. They rated the quality of the introductory video as very high and the content materials as high; the delivery method and the time of the study session were rated high by 74% of the CP group and 60% of the SP group.

3.2. Cost-Effectiveness

The cost of one SP trained to perform one pathology/case in one session (as indicated in Section 2.4.2) has been estimated at $84 per student. At UBC there are at least 8 SP sessions per student per year and classes of 300 students per year. The cost of SP stations for a medical school can therefore be calculated as 84 × 300 × 8 = $201,600 (for 8 cases). Since the CP platform currently has 120 cases, under the hypothetical assumption that the university could run 120 SP cases per student per year, the cost would be 84 × 300 × 120 = $3,024,000. In comparison, the cost of CP for 120 cases and unlimited sessions would be $188/student/year, which equals $56,400 per year for an entire class in a medical school, a savings of over 98%.

The calculations revealed a CEA ratio of 12,600 for SP and a CEA ratio of 2014 for CP. If the hypothetical assumption of 120 cases is used, the CEA ratio for CP would not change (2014); however, the CEA ratio for SP would increase from 12,600 to 189,000.
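The cost and CEA figures in this section can be reproduced end to end; the sketch below simply re-runs the arithmetic with the inputs quoted above.

```python
# Sketch: reproduce the Section 3.2 cost and CEA figures.
cost_per_student_session = 84   # $ per student per SP session (Section 2.4.2)
class_size = 300                # students per year
sp_sessions = 8                 # SP sessions per student per year at UBC
cp_cost_per_student = 188       # $ per student per year (120 cases, unlimited use)

sp_cost_8 = cost_per_student_session * class_size * sp_sessions  # 201,600
sp_cost_120 = cost_per_student_session * class_size * 120        # 3,024,000 (hypothetical)
cp_cost = cp_cost_per_student * class_size                       # 56,400

# CEA ratio = cost / effect, with effect as percent pre-to-post improvement.
print(sp_cost_8 / 16)             # 12,600.0  (SP, 8 cases)
print(sp_cost_120 / 16)           # 189,000.0 (SP, hypothetical 120 cases)
print(cp_cost / 28)               # ~2,014    (CP, unchanged at 120 cases)
print(1 - cp_cost / sp_cost_120)  # ~0.981, i.e. savings of over 98%
```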

3.3. Cost Value Proposition

Table 1 depicts the list of values with estimated rank and score by students and faculty members for both methods.

The cost of the SP method for 8 sessions per student per year is estimated at about $201,600 for an entire class, while the cost of CP, covering 120 cases, is $188 per student per year. The scored value (Table 1) is 32 for SP and 42 for CP. The cost-value relationship in the cost/value proposition graph (Figure 2) clearly demonstrates that CP is much cheaper, just as effective, and useful for students, health professionals, and others.

Figure 2. The cost/value proposition shows that CP has the lowest cost and the highest value as a teaching technique for medical schools, in comparison to SP, which has a high value but a high cost (Abbreviations: SP, standardized patient; CP, CyberPatient).


4. Discussion

The objective of this study was to determine if a digitally enhanced online simulation platform (CyberPatient™ 2.0) improves history-taking skills of novice medical students and if it is as effective as standardized patients in improving history-taking skills.

Standardized patients are a valuable addition to medical education (Cleland et al., 2009). SPs have been used to train students at different levels of history-taking skill, with better outcomes than traditional methods (Haist et al., 2004), and they have also been successfully implemented into curricula for improving soft skills in medical students (Halbach & Sullivan, 2005). Therefore, SP is at this time considered the best method for learning clinical skills. In addition, we assumed that talking with a live person would be better than talking to a computer. Based on this assumption, we hypothesized that SP would outperform CP. The results of this study showed that both groups had significant improvement in the knowledge domain of history-taking (SP group, p = 0.006; CP group, p = 0.0001). As is evident in Figure 1, the mean post-test value is higher for the CP group than for the SP group; however, this difference was not statistically significant.

Our interviews with the students revealed that novice students in the SP group had difficulties such as shyness, fear of asking the wrong question, and discomfort communicating with a live person. In the CP group, all students indicated that they had no problem communicating with the avatars on the computer. In general, the personal interviews showed that the opinion of the students using CyberPatient™ was very high, in fact as high as that of the group working with the standardized patients. In addition, the SP-group students indicated that they were intimidated by a live person owing to their lack of knowledge and experience, whereas the participants in the CP group felt more comfortable dealing with a patient in cyberspace.

It was also the opinion of the students that they could ask the cyber patient questions repeatedly and without hesitation, which improved memorization. Other CP values identified by students and faculty are listed in Table 1.

In recent years, virtual patients have been utilized to achieve numerous educational goals (Berman et al., 2016). In one of the broadest uses of virtual patients, they were included in comprehensive coverage of a nationally accepted curriculum for the Pediatrics Clerkship (Fall et al., 2005). Further detailed studies have shown appreciable gains for students when paper-based cases were replaced by interactive virtual patients (Courteille et al., 2018). The results of this study are also in line with those of a previous study using CP 1.0, in which the efficacy of CP 1.0 over traditional textbook learning was demonstrated (Qayumi et al., 2004). These studies further confirm the value and role of virtual technology in support of medical education, particularly as it pertains to the delivery of clinical skills.

At this time of financial constraints on medical education organizations around the globe, it is important to consider the cost of newly proposed medical education modalities. Given that the outcomes for CP and SP appear to be similar, we deemed it important to analyze the cost-effectiveness and value proposition of CP relative to SP.

Cost-effectiveness analysis and the cost-value proposition are two indicators that provide information on the value and effectiveness of educational methods in relation to cost. The cost-effectiveness analysis (CEA) ratio (McEwan, 2012) is calculated as the total cost divided by the outcome. The cost-value proposition is the relationship of the overall cost to the identified value.

The cost of SPs has been reviewed by many investigators and varies between US$32 and $50 per SP per student (Grand'maison et al., 1992; Reznick et al., 1992). Reznick et al. (Reznick et al., 1993) described guidelines for estimating the real cost of an OSCE in which SPs are used. In their manuscript they emphasize the role of the direct and indirect costs of training and using SPs (Reznick et al., 1993). This type of comparative analysis has been done by others (Fletcher & Wind, 2013; Kelly & Murphy, 2004). Most investigations compared the SP group to peer role playing or case-study education (Bosse et al., 2015; Gillette et al., n.d.).

5. Conclusion

The results of this study demonstrated that CyberPatient™, as a method for the delivery of practical knowledge, is as effective as using standardized patients. However, it is important to note that the use of CP is more cost-effective and has a better cost/value proposition for medical education and for healthcare organizations involved in the training of students and medical professionals.

Although we were not able to test the impact of repeat performance, given the much lower costs involved, it is reasonable to assume that students will be able to repeat their experience with CP perhaps in an independent setting, leading to further gains in education. In a time of pressure on the scheduling and use of face-to-face education, an online and independently scheduled educational resource is even more important.

It can be concluded that CP is an effective and less costly tool for students and practitioners to vigorously exercise their clinical skills before encountering a clinical assessment such as OSCE or entering medical practice. It is also evident that CP will open new avenues for educational research in the future. As an example, further research is required to analyze CP in relation to the following factors:

· Schedule, flexibility and accessibility for online distance education of practical skills;

· Potential for network effect of students and faculty for improvement of clinical skills;

· Potential for delivery of a hybrid model with other methods of education;

· Potential effect of a greater engagement level on efficacy and cost-effectiveness;

· Potential for the use of CP as a continuous and low-cost formative assessment tool;

· Potential for the use of CP as part of the summative assessment.

Acknowledgements

We would like to thank the University of British Columbia (UBC) medical students, faculty members, and volunteers who participated in this project. We also extend special appreciation to the UBC Faculty of Medicine for its great support throughout the implementation of this study.

Declaration of Interest Statement

CyberPatient™ is a University of British Columbia/Vancouver Coastal Health (UBC/VCH) educational product developed by a UBC/VCH spin-off company. Because of this potential conflict, the research was organized and completed under the supervision of a research committee without any conflicts of interest.

Farahmand, S., Meneghetti, A., Shi, K., Pachev, G., Ramezani, J., Zeinoddini, S., Mehrnoush, V., Hosseinzadeh, S., Kapur, H., & Qayumi, A. K. (2020). CyberPatient™—An Innovative Approach to Medical Education. Creative Education, 11, 926-941. https://doi.org/10.4236/ce.2020.116067


Appendix

Appendix 1. Examiner’s OSCE Checklist

OSCE exam patient information and checklist – Appendicitis CP 2.0 project – Aug 17, 2019

Appendix 2. Students’ Satisfaction Survey

Study session group: ……………………………….

*Please note that for rating the questions 1 has the lowest score and 5 has the highest score.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Barrows, H. S., & Abrahamson, S. (1964). The Programmed Patient: A Technique for Appraising Student Performance in Clinical Neurology. Journal of Medical Education, 39, 802-805.
[2] Berman, N. B., Durning, S. J., Fischer, M. R., Huwendiek, S., & Triola, M. M. (2016). The Role for Virtual Patients in the Future of Medical Education. Academic Medicine, 91, 1217-1222.
https://doi.org/10.1097/ACM.0000000000001146
[3] Bosse, H. M., Nickel, M., Huwendiek, S., Schultz, J. H., & Nikendei, C. (2015). Cost-Effectiveness of Peer Role Play and Standardized Patients in Undergraduate Communication Training Approaches to Teaching and Learning. BMC Medical Education, 15, 183.
https://doi.org/10.1186/s12909-015-0468-1
[4] Bunn, A., & Korpela, M. (2008). An Introduction to dplR. Industrial and Commercial Training, 10, 11-18.
https://doi.org/10.1108/eb003648
[5] Cleland, J. A., Abe, K., & Rethans, J. J. (2009). The Use of Simulated Patients in Medical Education: AMEE Guide No. 42. Medical Teacher, 31, 477-486.
https://doi.org/10.1080/01421590903002821
[6] Colliver, J. A., & Williams, R. G. (1993). Technical Issues: Test Application. Academic Medicine: Journal of the Association of American Medical Colleges, 68, 454-460.
https://doi.org/10.1097/00001888-199306000-00003
[7] Consorti, F., Mancuso, R., Nocioni, M., & Piccolo, A. (2012). Efficacy of Virtual Patients in Medical Education: A Meta-Analysis of Randomized Studies. Computers and Education, 59, 1001-1008.
https://doi.org/10.1016/j.compedu.2012.04.017
[8] Courteille, O., Fahlstedt, M., Ho, J., Hedman, L., Fors, U., Von Holst, H., Fellander-Tsai, L., & Moller, H. (2018). Learning through a Virtual Patient vs. Recorded Lecture: A Comparison of Knowledge Retention in a Trauma Case. International Journal of Medical Education, 9, 86-92.
https://doi.org/10.5116/ijme.5aa3.ccf2
[9] Danforth, D. R., Procter, M., Chen, R., Johnson, M., & Heller, R. (2009). Development of Virtual Patient Simulations for Medical Education. Journal for Virtual Worlds Research, 2, 4-11.
https://doi.org/10.4101/jvwr.v2i2.707
[10] Enelow, A. J., Forde, D. L., & Brummel-Smith, K. (1996). The Interview in Clinical Medicine. In Interviewing and Patient Care (4th ed., pp. 3-11). Oxford: Oxford University Press, Inc.
[11] Fall, L. H., Berman, N. B., Smith, S., White, C. B., Woodhead, J. C., & Olson, A. L. (2005). Multi-Institutional Development and Utilization of a Computer-Assisted Learning Program for the Pediatrics Clerkship: The CLIPP Project. Academic Medicine, 80, 847-855.
https://doi.org/10.1097/00001888-200509000-00012
[12] Fletcher, J. D., & Wind, A. P. (2013). Cost Considerations in Using Simulations for Medical Training. Military Medicine, 178, 37-46.
https://doi.org/10.7205/MILMED-D-13-00258
[13] Flexner, A. (2002). Extracted from: The Carnegie Foundation for the Advancement of Teaching, Bulletin Number Four, 1910. Bulletin of the World Health Organization, 80, 594-602.
[14] Gillette, C., Stanton, R. B., Rockich-Winston, N., Rudolph, M., & Anderson, H. G. (n.d.). Cost-Effectiveness of Using Standardized Patients to Assess Student-Pharmacist Communication Skills.
[15] Grand’maison, P., Lescop, J., Rainsberry, P., & Brailovsky, C. A. (1992). Large-Scale Use of an Objective, Structured Clinical Examination for Licensing Family Physicians. Canadian Medical Association Journal, 146, 1735-1740.
[16] Haist, S. A., Griffith, C. H., Hoellein, A. R., Talente, G., Montgomery, T., & Wilson, J. F. (2004). Improving Students’ Sexual History Inquiry and HIV Counseling with an Interactive Workshop Using Standardized Patients. Journal of General Internal Medicine, 19, 549-553.
https://doi.org/10.1111/j.1525-1497.2004.30204.x
[17] Halbach, J. L., & Sullivan, L. L. (2005). Teaching Medical Students about Medical Errors and Patient Safety: Evaluation of a Required Curriculum. Academic Medicine Journal of the Association of American Medical Colleges, 80, 600-606.
https://doi.org/10.1097/00001888-200506000-00016
[18] Huang, G., Reynolds, R., & Candler, C. (2007). Virtual Patient Simulation at U.S. and Canadian Medical Schools. Academic Medicine, 82, 446-451.
https://doi.org/10.1097/ACM.0b013e31803e8a0a
[19] Indirect Costs, Budgeting + Finance (n.d.). UBC Research + Innovation, Support + Resources.
https://research.ubc.ca/support-resources/indirect-costs-budgeting-finance
[20] Kandasamy, T., & Fung, K. (2009). Interactive Internet-Based Cases for Undergraduate Otolaryngology Education. Otolaryngology—Head and Neck Surgery, 140, 398-402.
https://doi.org/10.1016/j.otohns.2008.11.033
[21] Keifenheim, K. E., Teufel, M., Ip, J., Speiser, N., Leehr, E. J., Zipfel, S., & Herrmann-Werner, A. (2015). Teaching History Taking to Medical Students: A Systematic Review. BMC Medical Education, 15, Article No. 159.
https://doi.org/10.1186/s12909-015-0443-x
[22] Kelly, M., & Murphy, A. (2004). An Evaluation of the Cost of Designing, Delivering and Assessing an Undergraduate Communication Skills Module. Medical Teacher, 26, 610-614.
https://doi.org/10.1080/01421590400005475
[23] Lichstein, P. R. (1990). The Medical Interview. In H. K. Walker, W. D. Hall, & J. W. Hurst (Eds.), Clinical Methods: The History, Physical, and Laboratory Examinations (3rd ed., Chapter 3). Boston, MA: Butterworths.
https://www.ncbi.nlm.nih.gov/books/NBK349
[24] McEwan, P. J. (2012). Cost-Effectiveness Analysis of Education and Health Interventions in Developing Countries. Journal of Development Effectiveness, 4, 189-213.
https://doi.org/10.1080/19439342.2011.649044
[25] Medical Education/CyberPatient (n.d.).
https://www.cyberpatient.ca
[26] Power/Sample Size Calculator (n.d.). Rollin Brant Professor (Emeritus) Department of Statistics University of British Columbia.
https://www.stat.ubc.ca/~rollin/stats/ssize/n2.html
[27] Prince, K. J. A. H., & Boshuizen, H. P. A. (2006). From Theory to Practice in Medical Education. In Professional Learning: Gaps and Transitions on the Way from Novice to Expert (pp. 121-139). Berlin: Springer.
https://doi.org/10.1007/1-4020-2094-5_7
[28] Qayumi, A. K., Kurihara, Y., Imai, M., Pachev, G., Seo, H., Hoshino, Y., Cheifetz, R., Matsuura, K., Momoi, M., Saleem, M., Lara-Guerra, H., Miki, Y., & Kariya, Y. (2004). Comparison of Computer-Assisted Instruction (CAI) versus Traditional Textbook Methods for Training in Abdominal Examination (Japanese Experience). Medical Education, 38, 1080-1088.
https://doi.org/10.1111/j.1365-2929.2004.01957.x
[29] Reznick, R. K., Smee, S., Baumber, J. S., Cohen, R., Rothman, A., Blackmore, D., & Bérard, M. (1993). Guidelines for Estimating the Real Cost of an Objective Structured Clinical Examination. Academic Medicine, 68, 513-517.
https://doi.org/10.1097/00001888-199307000-00001
[30] Reznick, R., Smee, S., Rothman, A., Chalmers, A., Swanson, D., Dufresne, L., Lacombe, G., Baumber, J., Poldre, P., Levasseur, L., Cohen, R., Mendez, J., Patey, P., Boudreau, D., & Berard, M. (1992). An Objective Structured Clinical Examination for the Licentiate: Report of the Pilot Project of the Medical Council of Canada. Academic Medicine, 67, 487-494.
https://doi.org/10.1097/00001888-199208000-00001
[31] Standardized Patients—UBC Faculty of Medicine (n.d.).
https://www.med.ubc.ca/about/careers/standardized-patient-program
[32] Swartz, M. H., Colliver, J. A., Bardes, C. L., Charon, R., Fried, E. D., & Moroff, S. (1997). The Validity of Standardized Patient Assessment Using Faculty-Physician Global Ratings as the Gold-Standard Criterion. In Advances in Medical Education (pp. 725-727). Berlin: Springer.
https://doi.org/10.1007/978-94-011-4886-3_219
[33] UBC 2018/2019 Budget (Issue April 2018) (2019).
https://vpfinance.ubc.ca/files/2018/06/2018_2019_Operating_Budget.pdf
[34] Vash, J. H., Yunesian, M., Shariati, M., Keshvari, A., & Harirchi, I. (2007). Virtual Patients in Undergraduate Surgery Education: A Randomized Controlled Study. ANZ Journal of Surgery, 77, 54-59.
https://doi.org/10.1111/j.1445-2197.2006.03978.x
