Validity Evidence for Assessments on a UK Graduate Entry Medical Course

Abstract

Graduate entry medical courses (GECs) have been introduced into the UK to increase the supply of doctors and to widen participation. In addition to evaluation against these outcomes, the educational process should also be evaluated. One aspect of that process is assessment, and different types of validity evidence should be provided for the assessments used. This paper provides validity evidence for the assessments on a UK GEC, focusing on the 2010/11 assessment diet. The types of validity evidence provided are content, internal structure, relationships with other variables, and consequences. Students’ GEC assessment results are used to determine whether they should progress to Year 3 of the traditional course. Of the learning outcome/body system combinations in the assessment specification for Years 1 and 2 of the traditional course, 66% were assessed in one assessment diet. Short answer questions performed best in terms of difficulty and discrimination. The reliability of three modules fell just outside the recommended range of 0.7 to 0.9. GEC performance is at least as good a predictor of final year performance as Year 1/2 performance on the traditional course. Across the six written modules for 2010/11, 12 scores (5%) were in the borderline range. Judgement regarding the validity of interpretations made from GEC assessment results is left to the reader, since such judgements should not be made by those providing the validity evidence. Similar studies should aim to use benchmarks so that results can be evaluated more objectively.
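
The psychometric quantities summarised above (item difficulty and discrimination, internal-structure reliability with its recommended 0.7 to 0.9 range, the standard error of measurement underlying the borderline range, and the comparison of GEC and traditional-course scores as predictors of final year performance) are standard indices. The sketch below shows, under stated assumptions, how such indices might be computed; it is not the analysis code used in the study, and all names (item_analysis, compare_predictors, scores, and so on) are hypothetical. The score matrix is assumed to hold one row per student and one column per item, with each item scaled 0 to 1.

```python
# Illustrative sketch only, not the authors' analysis code.
# Assumes a hypothetical student-by-item score matrix with items scaled 0..1.
import numpy as np
from scipy import stats


def item_analysis(scores: np.ndarray):
    """Return difficulty, discrimination, Cronbach's alpha and SEM for one module."""
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)

    # Difficulty (facility): mean item score; higher values mean easier items.
    difficulty = scores.mean(axis=0)

    # Discrimination: correlation of each item with the total of the remaining items.
    discrimination = np.array([
        np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
        for i in range(n_items)
    ])

    # Internal-consistency reliability (Cronbach's alpha); the abstract refers to a
    # recommended range of 0.7 to 0.9.
    item_vars = scores.var(axis=0, ddof=1)
    total_var = totals.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

    # Standard error of measurement; a score can be treated as "borderline" when its
    # 1-SEM band overlaps the pass mark.
    sem = totals.std(ddof=1) * np.sqrt(1 - alpha)
    return difficulty, discrimination, alpha, sem


def compare_predictors(r_gec: float, r_trad: float, n_gec: int, n_trad: int) -> float:
    """Fisher r-to-z test of whether two independent correlations with final year
    performance differ; this is one way of testing the 'at least as good a predictor'
    claim. Returns a two-tailed p value."""
    z1, z2 = np.arctanh(r_gec), np.arctanh(r_trad)
    se = np.sqrt(1.0 / (n_gec - 3) + 1.0 / (n_trad - 3))
    z = (z1 - z2) / se
    return 2 * (1 - stats.norm.cdf(abs(z)))
```

As a hypothetical usage example, `compare_predictors(0.55, 0.48, 60, 250)` would test whether a GEC-to-final-year correlation of 0.55 differs significantly from a traditional-course correlation of 0.48 given the two cohort sizes; the figures are invented for illustration and are not results from the paper.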

Share and Cite:

Taylor, C. & Zvauya, R. (2013). Validity Evidence for Assessments on a UK Graduate Entry Medical Course. Creative Education, 4, 15-19. doi: 10.4236/ce.2013.46A003.

Conflicts of Interest

The authors declare no conflicts of interest.
