The Usefulness of Global Student Rating Items under End Program Evaluation Surveys in Quality Improvements: An Institutional Experience in Higher Education, Saudi Arabia
Abdullah Al Rubaish
DOI: 10.4236/ib.2011.34047

Abstract

Program evaluation survey (PES) by students in higher education is one of a range of evaluations of academic programs; others include course evaluations, teaching skills evaluations, and surveys of facilities and services. The present study employs the available PES data collected in the Colleges of Dentistry and Medicine, University of Dammam (UD), Saudi Arabia. Our PES relates to students’ experience at the end of their academic program. The present paper analyses these data and discusses the usefulness of global item results vis-à-vis individual item results in the quality improvement of higher education. The respective percentages of participating students were 100 and 65. The PES results revealed that, in view of the poorly graded global items, there is a need to focus on global item results, leading to continuing improvements in all the areas covered in the questionnaire.
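
For illustration only (this is not drawn from the paper itself), the minimal Python sketch below assumes a five-point Likert PES containing several individual items plus one global ("overall") item, and shows how per-item summaries and a global-item summary might be compared. All item labels and response data here are hypothetical.

# Illustrative sketch: the study's actual questionnaire, item wording, and
# analysis are not reproduced here. Assumes a 5-point Likert scale where
# 1 = worst and 5 = best.
from statistics import mean, median

# Hypothetical responses: each row is one student's ratings.
# The item labels are invented, not the instrument used in the study.
responses = [
    {"curriculum": 4, "facilities": 2, "supervision": 3, "overall": 3},
    {"curriculum": 5, "facilities": 3, "supervision": 4, "overall": 4},
    {"curriculum": 3, "facilities": 2, "supervision": 2, "overall": 2},
    {"curriculum": 4, "facilities": 1, "supervision": 3, "overall": 3},
]

for item in responses[0].keys():
    ratings = [r[item] for r in responses]
    # The median is often preferred for ordinal Likert data; the mean is
    # printed alongside it for comparison.
    print(f"{item:12s} median={median(ratings)} mean={mean(ratings):.2f}")

# A poorly graded global item (a low "overall" rating) flags the need for
# improvement, while the individual items indicate which specific areas
# drive that rating.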

Share and Cite:

A. Al Rubaish, "The Usefulness of Global Student Rating Items under End Program Evaluation Surveys in Quality Improvements: An Institutional Experience in Higher Education, Saudi Arabia," iBusiness, Vol. 3 No. 4, 2011, pp. 353-358. doi: 10.4236/ib.2011.34047.

Conflicts of Interest

The author declares no conflicts of interest.


Copyright © 2024 by the author and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.