Evaluation, an Effective Tool for Performance Assessment of Workshops and Events towards Policy Direction in University Education in Ghana

Abstract

This paper reports the results of questionnaires completed by faculty of the School of Basic and Biomedical Sciences, University of Health and Allied Sciences, Ho, Ghana, after a workshop held to address the portions of the findings/observations and recommendations of the Ghana Tertiary Education Council (GTEC) visitation panel that affected the School. The workshop also included a training session on identified soft and technical skills to equip faculty for their teaching tasks and enhance their performance upon return. Participants were asked to appraise whether the intended objectives of the workshop were achieved and whether the knowledge and experience gained from the training session would enhance their knowledge and be applicable in their work. The study also sought to assess the Resource Person’s performance and to evaluate whether the training materials were related, relevant, and well delivered. All 46 faculty members who attended the workshop completed the questionnaire. The quantitative primary data collected were summarized and analyzed using the XLSTAT package. The results showed that participants understood the objectives of the workshop and could measure their achievement. Participants were generally satisfied that all the parameters of the workshop met the standards required for a successful workshop, and agreed that the workshop was beneficial and would positively affect their performance as faculty and help them attain better results. It is recommended that management include pre- and post-evaluation systems when planning training and development workshops, to ensure that the targets set for training and development strategies intended to improve faculty performance towards achieving the vision of the University are met.


1. Introduction

Over the years, evaluation has evolved into an accepted corporate tool for measuring the performance of projects, workshops, programmes, and other activities, including social events and training sessions. It is used for planning purposes to review performance before, during, and after workshops and training events and to measure outcomes against the objectives that were set [1].

According to Saarlas et al. (1994) [2], evaluation is a necessary component of all training, including workshops: it provides information about the teaching and learning that occurred during the workshop and documents the extent to which objectives were achieved afterwards. Saarlas et al. (1994) [2] concluded that evaluation can also benefit both organizers and participants by identifying areas for improvement, determining whether expected outcomes were achieved, measuring the effectiveness of the training, and holding learners and trainers accountable for practicing the new skills and knowledge acquired.

Calhoun (2021) [3] also noted that evaluation is a process that critically examines the organization and performance of a programme; it involves collecting and analyzing information about the programme’s activities, characteristics, and outcomes in order to make judgments that improve its effectiveness and inform future decisions.

Evaluation measures the effectiveness of training and holds learners and trainers accountable for practicing the new skills and knowledge acquired. This improves employee retention, since it draws employees into a more rational and emotional commitment to their jobs by making them feel more confident in their work as their skills are up-levelled [4]. According to Menezes (2022) [5], evaluation is about determining how successful an intervention has been and identifying areas for improvement. Evaluation plays an important part in policy- and plan-making processes, especially in the public sector, to ensure that resources are not wasted and that the objectives of programmes and events are met. Menezes (2022) [5] concluded that it is important to measure events periodically in order to adjust activities for effectiveness and efficiency.

Evaluation, when carried out well, can help identify areas for improvement, help realize goals, improve environmental education, and aid others in learning from and preparing well for similar events [6].

According to the Africa Association for Evaluation, evaluation enables event and programme organizers to validate programme success or progress by using the information collected to communicate the impact of the events to others. In an organizational setting, evaluation can help to improve work performance, public relations, and staff morale by attracting and retaining support from current staff and future job seekers, and it can help employees feel more confident in their work and up-level their skill sets after an event or training session.

Calhoun (2021) [3] noted that evaluation is a necessary component of all training, including workshops; in an academic setting, evaluation provides important information about whether the objectives of the teaching and learning that occurred during a period were achieved, and documents the extent to which long-term objectives were achieved after a session. According to Calhoun (2021) [3], when reviewing the performance of facilitators and resource persons, such results can only be obtained if constructive evaluation is carried out before, during, and, most especially, after the workshop.

Evaluation provides a systematic method to study a programme, practice, intervention, or initiative to understand how well it achieves its goals, and it helps to determine what works well and what could be improved in a programme or initiative [6].

This paper seeks to evaluate an off-campus training workshop held for faculty of the School of Basic and Biomedical Sciences, University of Health and Allied Sciences, Ho, Ghana. The workshop was organized to address the portions of the findings/observations and recommendations of the Ghana Tertiary Education Council (GTEC) visitation panel that affected the School, combined with a training session on identified soft and technical skills to equip faculty for the task and enhance their performance upon return.

The University of Health and Allied Sciences is one of the public universities in Ghana, with the mandate of training health professionals. The Ghana Tertiary Education Council (GTEC), as part of its oversight mandate, visits all universities in the country once their period of accreditation expires, to ensure that they have delivered on their mandate before reaccrediting them for another stated period.

2. Literature Review

Training evaluation is the systematic process of analyzing training programmes to ensure that they are delivered effectively and efficiently. It identifies training gaps and discovers opportunities for improving training programmes. By collecting feedback, trainers and professionals can assess whether training programmes achieve their intended outcomes and whether the training materials and resources used are aligned with company and industry standards [7].

Evaluation is a structured process that aims to create and synthesize information about interventions in order to make judgments regarding resultant changes, the desirability of an intervention, and the degree of fit between intended and unintended outcomes when the performance of both participants and resource persons is measured [8]. Kusek and Rist [9] offer a similar definition in the context of results-based monitoring and evaluation.

According to Hansen and Vedung [10], evaluation can also be a tool to measure cost-effectiveness in terms of the preparation, performance, and choice of resource persons for programmes and plans. Evaluation is especially important in public sector organizations because they are required, for accountability or legislative reasons, to demonstrate the benefits of their actions to the public.

Alexander (2006b) [11] also noted that cost-benefit analysis has been used as an evaluation method for many years because of its ability to measure, in monetary terms, the incidence of the benefits and costs generated by a plan. Although planning practitioners found that monetary valuation could not fully capture the complex nature of planning issues, the approach later evolved into cost-effectiveness analysis and then into a tool for the formulation and analysis of policies.

Laurian et al. (2010) [12] mentioned that in planning practice, monitoring and evaluation are often absent or incompletely explained in plans, and there is often disagreement over how to define and measure the success (or failure) of plans. This makes it difficult to establish clear causal linkages between plans and outcomes in terms of delivery, cost, and transfer of knowledge. Talen (1996b, 2017) [13] however noted that this is perplexing, as it is extremely difficult to determine plan effectiveness, impacts, and outcomes without proper evaluation.

Talen (2017) [13] stipulated that another notable difference is the use of evaluation in practice. According to Talen [13], pre-, mid-, and post-evaluations have remained relatively underused and overlooked in planning practice, although there is evidence of increasing interest in the subject. Talen [13] identified several factors that impede evaluation in planning practice, including a lack of resources (constraints of time, staff capacity, and financial resources), political realities, organizational culture, and poorly developed evaluation methods, while planners are often engaged in the “front-loading” of resources, whereby substantial resources are dedicated to the development of plans and limited resources are directed toward their evaluation once implemented. Oliveira and Pinho (2011) [14] also stated that this includes developing appropriate monitoring and indicator frameworks and allocating sufficient resources to carry out these tasks. On the other hand, the field of programme evaluation is well established, with some organizations regularly setting aside resources for evaluation through established and routine evaluation procedures and policies connected to budgetary processes.

Evaluation has long been considered an integral component of the planning standard, at least in theoretical terms. Evaluation can be used to enhance the quality and implementation of plans, improve the planning process, and demonstrate the effectiveness of plans [1]. Evaluation should play a critical role in ensuring that plans reflect the highest quality of thought and practice [15]. It can provide an objective and systematic approach to studying plans, improve the plan preparation process, and assess whether plans achieved their stated goals and objectives. Through evaluation, we can empirically document the deficiencies or strengths of plans and identify specific weaknesses that undermine implementation and plan effectiveness [16].

McDavid et al. [17] similarly describe evaluation as a structured process of creating and synthesizing information about interventions in order to make judgments regarding resultant changes, the desirability of an intervention, and the degree of fit between intended and unintended outcomes when the performance of both participants and resource persons is measured.

In public sector organizations, evaluation is founded on the principles that government interventions need to have demonstrable benefits and that decision-makers must be held accountable for their actions. In the realm of planning, evaluation is used to assess plans, the planning process, and the outcomes generated by plans, while taking into consideration the institutional context within which planning operates [16]. We note that the evaluation of plans, workshops, and associated programmes is well established within the circumstances in which the activity takes place.

3. Research Design

3.1. Problem Statement

There have been several views on the relevance of evaluation whether before, during, or after an event or a programme. Some scholars have argued that to improve future events, participants at any event must be taken through an evaluation either before, during, or after the event.

The problem addressed by this study is whether evaluation is necessary to ascertain the success rate and cost-effectiveness of a faculty workshop organized by the University of Health and Allied Sciences, Ho, Ghana.

3.2. Purpose of the Paper

The purpose of this paper was to evaluate the organizational strategies of the workshop in terms of its success rate and cost-effectiveness. The goal was also to measure the performance of the Resource Persons and to solicit participants’ experiences, in terms of benefits or challenges, during the workshop, for future decisions and for replication by other event organizers. The paper also seeks to determine whether the evaluation of workshops can help improve policy formulation and direction in organizing workshops in university education in Ghana.

3.3. Research Method

A quantitative descriptive research technique was used to survey the views of participants. The sample of this study was selected through non-probability convenience sampling, using all participants. Data were gathered through a self-designed, self-administered, closed-ended questionnaire (evaluation form). The evaluation form was administered to all 46 faculty members who participated in a workshop organised by the School of Basic and Biomedical Sciences of the University of Health and Allied Sciences, Ho. To meet the objectives of the study, the questions on the administered forms provided closed-ended responses on a five-point Likert scale to choose from.
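To make the data handling concrete, the sketch below shows one possible way of encoding the closed-ended five-point Likert responses numerically before summarizing them. The item names, answer labels, and the encoding function are illustrative assumptions; they do not reproduce the actual evaluation form used in the study.

```python
# Illustrative only: one way to encode 5-point Likert responses numerically.
# Item names and labels are hypothetical, not the study's actual instrument.

LIKERT_SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def encode_responses(raw_responses):
    """Convert a list of per-participant answer dicts (item -> label)
    into a dict of item -> list of numeric scores."""
    scores = {}
    for answers in raw_responses:
        for item, label in answers.items():
            scores.setdefault(item, []).append(LIKERT_SCALE[label])
    return scores

# Example: two hypothetical participants answering two hypothetical items.
raw = [
    {"Objectives were well defined": "Agree",
     "Time allotted was sufficient": "Neutral"},
    {"Objectives were well defined": "Strongly agree",
     "Time allotted was sufficient": "Agree"},
]
print(encode_responses(raw))
```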

A descriptive survey design was adopted for this study because, according to Cohen, Manion, & Morrison (2007) [18], this design is appropriate for examining opinions and practices that exist. A survey also permits the researchers to gather information from a large sample of people relatively quickly and inexpensively [19].

The survey research was deemed most appropriate since the study collected and examined the opinions of participants after a workshop for evaluation purposes. The drawback of descriptive surveys, however, is that they are time-consuming and response rates can be very low (Robson, 2002) [20]. In order not to fall victim to this drawback, the evaluation forms were distributed and collected on the same day. The researcher also explained the objectives and benefits of the exercise to the participants so that they would respond to the instruments immediately. The evaluation forms were self-administered among the participants after the workshop by the researcher, who also doubles as the School Officer, so the response rate was 100%. The primary data collected were subsequently summarized and analyzed using the XLSTAT package.

3.4. The Population and Sample of the Study

The target population for the study was the 46 faculty members, both male and female, from the School of Basic and Biomedical Sciences of the University of Health and Allied Sciences, Ho, Ghana, who participated in the workshop. The choice of population was based on the fact that they were the ones involved in the workshop. The population therefore also served as the sample for the study.

3.5. Organization of the Study

The study has been organized into eleven sections. The first section contains the introduction and background to the study. The second section deals with related literature, followed by the problem statement and the purpose of the study, in that order. The fifth part of the study deals with the methodology, explaining the design and instrument used in carrying out the study. The next part covers the population and sample size, together with the sampling procedures employed in gathering and analyzing the data. The concluding parts present a table of results from the analyzed data, followed by the discussion of the findings, the conclusions drawn, the limitations and challenges encountered during the study and, finally, recommendations on lessons learned for further study and future actions, in that order.

3.6. Data Analysis of the Results

Data collected from the field were analysed and presented in tabular form, showing the responses provided by the participants as means and standard deviations (Table 1).
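The study produced this summary with the XLSTAT package; the short pandas sketch below is only an assumed, illustrative equivalent of the per-item mean and standard deviation reported in Table 1. The item names and scores are made up for demonstration.

```python
# A minimal sketch of the kind of per-item summary reported in Table 1
# (mean and standard deviation of Likert scores). The study used XLSTAT;
# this pandas version is an assumed equivalent with invented data.

import pandas as pd

# Hypothetical scores (1-5) from five hypothetical participants per item.
responses = pd.DataFrame({
    "Objectives were well defined":      [4, 5, 4, 4, 5],
    "Resource person was knowledgeable": [4, 4, 5, 3, 4],
    "Time allotted was sufficient":      [3, 4, 4, 2, 4],
})

summary = pd.DataFrame({
    "Mean": responses.mean().round(2),
    "SD": responses.std(ddof=1).round(3),  # sample standard deviation
})
print(summary)
```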

4. Findings and Discussion

As noted earlier, evaluation has evolved into a commonly accepted tool for measuring the performance of projects, workshops, programmes, and other activities, including social events and training sessions, by reviewing performance before, during, and after such events to measure outcomes in terms of effectiveness, efficiency, and cost.

As indicated in the table, a mean of 4.2 with a standard deviation (SD) of 0.368 was recorded for participants’ agreement that the objectives of the workshop were well defined and accomplished. This confirms the statement in Readings [1] that evaluation is a necessary component of all training, including workshops, and especially in an academic setting, to measure the achievement of the objectives of the teaching and learning that occurred during the period and to document the extent to which long-term objectives were achieved after a session.

Table 1. Mean and standard deviation of responses given by participants.

As to whether participation and interaction were encouraged during the sessions of the workshop, a mean of 4.5 with an SD of 0.510 was recorded; participants also indicated that the workshop was relevant to their work. Again, the findings confirm the statement of McDavid et al. (2006) [17] that evaluation is a structured process that aims to create and synthesize information about interventions to make judgments regarding resultant changes, the desirability of an intervention, and the degree of fit between intended and unintended outcomes when the performance of both participants and resource persons is measured.

The results also recorded an average mean score of 4.0 for participants agreeing that the Resource Person was knowledgeable, that the training topics were well chosen, and that the training experience and materials would be useful to their work, upholding Hansen and Vedung’s (2010) [10] statement that evaluation can also be a tool to measure cost-effectiveness in terms of the preparation, performance, and choice of the Resource Persons of programmes.

Concerning whether the time allotted for the training was sufficient, and regarding the suitability of the meeting room, the adequacy of the facilities, and the comfort and sufficiency of the food, an approximate average score of 4.0 was recorded, indicating that participants generally agreed with these statements. As confirmed by Alexander (2006b) [11], cost-benefit analysis has long been used as an evaluation method because of its ability to measure, in monetary terms, the incidence of the benefits and costs generated by a plan, and it has served as a tool for the formulation and analysis of policies for future events.

Although a majority of the respondents generally agreed that the resource person was well prepared, that the training objectives were met, and that the time allotted for the training was sufficient, some participants were neutral in their responses to those items, while some respondents disagreed that the time allotted for the training was sufficient.

Overall, participants generally agreed, with a mean of 4.3, that the different aspects of the workshop were well met. As stated by Lyles et al. (2012) [16], through an evaluation we can empirically document the deficiencies or strengths of plans and identify specific weaknesses that undermine implementation and plan effectiveness.
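Throughout this section, mean item scores on the five-point scale are read as levels of agreement. The minimal sketch below shows one possible mapping from a mean score to an agreement category; the cut-off values are illustrative assumptions, not thresholds defined in the study.

```python
# Hypothetical mapping from an item's mean Likert score (1-5) to an
# agreement category. The cut-offs are assumptions for illustration only.

def agreement_level(mean_score: float) -> str:
    if mean_score >= 4.5:
        return "strongly agree"
    if mean_score >= 3.5:
        return "agree"
    if mean_score >= 2.5:
        return "neutral"
    if mean_score >= 1.5:
        return "disagree"
    return "strongly disagree"

print(agreement_level(4.0))  # -> "agree"
print(agreement_level(4.5))  # -> "strongly agree"
```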

5. Conclusions

The results of the study showed that participants understood the objectives and could measure their achievement; thus, identifying appropriate objectives is key to organizing a training workshop for faculty development. Respondents also stated that the workshop materials would be very useful in the delivery of their work and agreed that the knowledge gained was useful and would apply directly to their work. This could mean that the knowledge gained by participants was directly work-related and will therefore bring a change in attitude and benefit students through teaching upon return. It also reveals that respondents are concerned with the learning materials and the type of knowledge that will affect their performance and delivery when they attend a workshop, since they could equally research their specific areas without a Resource Person or without attending a workshop.

In assessing whether the Resource Person was rightly chosen and professional, the majority of respondents agreed that the resource person was professional and therefore rightly chosen. Perhaps respondents saw the resource person as an authority in the discipline selected to accomplish the objective of the workshop. Concerning satisfaction with the number of sessions and the time allocated for the workshop, more than 70 percent of the participants requested more workshop sessions in a year, most likely because of the positive impact the workshop had on them and how helpful it would be for their performance. Respondents were clearly concerned with the time and recommended more time and more sessions. Participants also preferred off-campus workshops to on-campus workshops; this could largely be due to the choice of a comfortable environment conducive to academic work, brainstorming, critical thinking, and learning.

Generally, participants were satisfied that all the parameters of the workshop met the standards required for a successful workshop, and agreed that the workshop was beneficial and would positively affect their performance as faculty and help them attain better results.

Thus, the findings of the workshop evaluation indicated what went right or wrong, as a basis for management’s decisions on what to improve moving forward in achieving the set targets of the training and development policy intended to improve faculty performance towards achieving the vision of the University.

Indeed, the results from this study can help to improve policy direction and implementation in organising training and development workshops and programmes in universities in Ghana.

6. Recommendation

The study recommends that:

Evaluation should be a major part of organising programmes, workshops, and events, to provide feedback on performance and inform improvement in future events.

Identification of appropriate evaluation tools should be part of the planning process for programmes, workshops, and events, to measure cost-effectiveness and expected results. This is to ensure that management achieves its set targets on training and development to improve faculty performance towards achieving the vision of the University.

Evaluation of programmes, workshops, and events should be given more attention and documented, and research should be conducted in the area of evaluation rather than just the presentation of reports.

7. Limitations of the Study

The research was restricted to a single faculty workshop of the School of Basic and Biomedical Sciences organized by the University of Health and Allied Sciences, Ho, Ghana. Also, although many factors affect the success of workshops and programmes, the study only evaluated the organizational strategies and the performance of the Resource Persons, and solicited participants’ experiences, in terms of benefits or challenges, during one workshop, for future decisions and for replication by other event organizers. Thus, although the study suggests that its findings will help to improve policy direction, the population used and the items measured are not enough to draw an emphatic conclusion. The nature of the research questions also did not require subjecting the work to rigorous statistical analysis, and the instrument did not include open-ended questions to solicit divergent views from the participants.

Finally, it was noted during the literature search that little research had been conducted in the area of evaluation. Most of the materials reviewed were reports on organized workshops and programmes, which made it challenging to find current research material on evaluation for this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Readings, K. (2022) Unit Ten: Monitoring and Evaluation. https://www.studocu.com/row/document/university-of-cape-coast/theories-of-management/pmp/17124234
[2] Saarlas, K.N., Paluku, K.M., Roungou, J.-B., Bryce, J.W., Naimoli, J.F. and Benzerroug, E.H. (1994) Multiple Methods for Workshop Evaluation. International Quarterly of Community Health Education, 15, 33-52.
[3] Calhoun, C., Sahay, S. and Wilson, M.L. (2021) Instructional Design Evaluation. In: McDonald, J.K. and West, R.E., Eds., Design for Learning: Principles, Processes, and Praxis, EdTech Books. https://edtechbooks.org/id/instructional_design_evaluation
[4] Urbancová, H., Vrabcová, V., Hudáková, M. and Petrů, G.J. (2021) Effective Training Evaluation: The Role of Factors Influencing the Evaluation of Effectiveness of Employee Training and Development. Sustainability, 13, Article No. 2721. https://doi.org/10.3390/su13052721
[5] Menezes, V.G., et al. (2022) Evaluation of Public Services Considering the Expectations of Users—A Systematic Literature Review. Information, 13, 162. https://doi.org/10.3390/info13040162
[6] Thomson, G., Hoffman, J. and Staniforth, S. (2014) Measuring the Success of Environmental Education Programs. Canadian Parks and Wilderness Society and Sierra Club of Canada.
[7] Omar, M., Gerein, N., Tarin, E., Butcher, C., Pearson, S., and Heidari, G. (2009) Training Evaluation: A Case Study of Training Iranian Health Managers. Human Resources for Health, 7, Article No. 20. https://doi.org/10.1186/1478-4491-7-20
[8] Chatten, K., Purssell, H., Banerjee, A.K., Soteriadou, S. and Ang, Y. (2018) Glasgow Blatchford Score and Risk Stratifications in Acute Upper Gastrointestinal Bleed: Can We Extend This to 2 for Urgent Outpatient Management? Clinical Medicine, 18, 118-122. https://doi.org/10.7861/clinmedicine.18-2-118
[9] Kusek, J.Z. and Rist, R. (2004) Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. World Bank, Washington DC.
[10] Hansen, M.B. and Vedung, E. (2010) Theory-Based Stakeholder Evaluation. American Journal of Evaluation, 31, 295-313. https://doi.org/10.1177/1098214010366174
[11] Alexander, B. (2006) Web 2.0: A New Wave of Innovation for Teaching and Learning? Educause Review, 41, 32-44.
[12] Laurian, L., Crawford, J., et al. (2010) Evaluating the Outcomes of Plans: Theory, Practice, and Methodology. Environment and Planning B: Planning and Design, 37, 740-757. https://doi.org/10.1068/b35051
[13] Talen Energy (2017) CCR Rule Compliance Data and Information-Raven Power. Lot 15-Annual Fugitive Dust Control Report 2017. Talen Energy, Allentown.
[14] Oliveira, V. and Pinho, P. (2011) Bridging the Gap between Planning Evaluation and Programme Evaluation: The Contribution of the PPR Methodology. Evaluation, 17, 293-307. https://doi.org/10.1177/1356389011411686
[15] Connell, D.J. and Daoust-Filiatrault, L.-A. (2018) Better Than Good: Three Dimensions of Plan Quality. Journal of Planning Education and Research, 38, 265-272.
[16] Lyles, W., Berke, P. and Smith, G. (2012) A Comparison of Local Hazard Mitigation Plan Quality in Six States, USA. Landscape and Urban Planning, 122, 89-99. https://doi.org/10.1016/j.landurbplan.2013.11.010
[17] McDavid, J.C., Huse, I. and Hawthorn, L.R. (2006) Program Evaluation and Performance Measurement: An Introduction to Practice. SAGE Publications, Thousand Oaks.
[18] Cohen, L., Manion, L. and Morrison, K. (2007) Research Methods in Education. Routledge, London. https://doi.org/10.4324/9780203029053
[19] Oluwatayo, J.A. (2012) Validity and Reliability Issues in Educational Research. Journal of Educational and Social Research, 2, 391. https://www.richtmann.org/journal/index.php/jesr/article/view/11851
[20] Robson, C. (2002) Real World Research: A Resource for Social Scientists and Practitioner-Researchers. 2nd Edition, Wiley-Blackwell, Hoboken.
