The Influence of Monitoring and Evaluation Methods on the Performance of Uganda Red Cross Society in Eastern Uganda

Abstract

This paper examined the influence of monitoring and evaluation (M&E) methods on the performance of the Uganda Red Cross Society (URCS) in Eastern Uganda. The study used a mixed-methods approach to collect data, including surveys, interviews, and focus groups. It found a significant relationship between M&E methods and the performance of URCS, with effective M&E methods leading to improved performance, and identified human resource capacity and M&E systems as important factors influencing that performance. URCS was found to use a variety of M&E methods, including logical frameworks, monitoring and evaluation plans, costed workplans, and indicator manuals. The organization also has adequate human resource capacity for M&E, with staff members who are skilled and knowledgeable in M&E, and an improved M&E system with clear policies, procedures, and a monitoring and evaluation champion. The findings suggest that URCS can improve its performance by further strengthening its M&E system: ensuring that M&E is integrated into all aspects of the organization’s work, that staff members are adequately trained in M&E, and that M&E data are used to inform decision-making.

Share and Cite:

Bbosa, S., Edaku, C. and Kiyingi, F. (2023) The Influence of Monitoring and Evaluation Methods on the Performance of Uganda Red Cross Society in Eastern Uganda. Open Journal of Social Sciences, 11, 208-227. doi: 10.4236/jss.2023.117015.

1. Introduction

The Uganda Red Cross Society (URCS) plays a vital role in providing humanitarian assistance and support to vulnerable populations in Uganda, particularly in Eastern Uganda, which is prone to various disasters and emergencies. As an integral part of the International Red Cross and Red Crescent Movement, the URCS strives to alleviate human suffering, prevent and respond to disasters, and promote community resilience (Uganda Red Cross Society, n.d.). Monitoring and evaluation (M&E) are essential components of the URCS’s operations, enabling the organization to assess the effectiveness and efficiency of its programs and activities in achieving its objectives.

M&E is a systematic and ongoing process that involves collecting, analyzing, and using data to inform decision-making, improve program implementation, and enhance accountability (Patton, 2018). It provides a means to measure the progress, outputs, outcomes, and impact of interventions, allowing organizations to learn from their experiences and make evidence-based adjustments to their strategies (Bamberger, Rugh, & Mabry, 2019). In the context of humanitarian organizations, effective M&E is crucial for ensuring that resources are optimally utilized, interventions are responsive to community needs, and organizational performance is continuously improved.

Monitoring and Evaluation Systems are management toolkits that enable decision-makers to track progress and demonstrate the impacts of a given programme or project. In the long run, the toolkits help organizations make decisions on the success, failure, relevance, efficiency and effectiveness of their programmes. A Monitoring and Evaluation System requires twelve main components in order to function effectively and efficiently and to achieve the desired results. These M&E components are: organizational structures with M&E functions; human capacity for M&E; partnerships for planning, coordinating and managing the M&E system; M&E frameworks/logical frameworks; M&E work plans and costs; communication, advocacy and culture for M&E; routine programme monitoring; surveys and surveillance; national and sub-national databases; supportive supervision and data auditing; evaluation and research; and data dissemination and use (Kusek & Rist, 2004b). Slack in any one component automatically derails progress in the management of programmes and projects. Monitoring and Evaluation Systems provide important feedback on the progress of programmes and projects, that is, on the success or failure of projects, programmes and policies throughout their respective life cycles. These systems constitute a powerful, continuous management tool that decision makers can use to improve performance and demonstrate results. Monitoring and Evaluation Systems (especially results-based ones) have a special capacity to add to the learning and knowledge process.

These systems provide for learning and knowledge: by giving continuous feedback to managers, they promote organizational learning through a cycle of reflection on progress and learning, allowing for adjustments in the course of programmes or projects where necessary (Kusek & Rist, 2004b). They are designed to monitor and evaluate at both macro and micro levels, which can roughly be translated to the policy, programme and project levels respectively. Information supplied by Monitoring and Evaluation Systems is used as a crucial management tool in achieving results and meeting specific targets. Such information, which reveals the level of progress, performance and problems, is crucial to managers striving to achieve results. As Baum et al. (1985) argue, these systems are one of the “techniques” for managing programme or project implementation, especially because they provide an early warning to project management about potential or actual problems. Subsequently, when problems are identified, questions about the assumptions and strategy behind a given programme or project may be raised.

In this way, they help development managers make choices and decisions about running projects and programmes. Monitoring and Evaluation Systems can also help promote greater transparency and accountability within organizations and government (Rubin & Rubin, 1995). Access to information on the progress of a project helps pave the way for openness and accountability.

According to Bhasin (2020), organizational performance is defined as the actual output of a company measured against its intended output. It is a broad field that deals with what an organisation does and can accomplish when it interacts with its various constituencies, and it concerns specific areas of outcomes in an organization (Bhasin, 2020). Therefore, organizational performance comprises the actual output or results of an organization as measured against its intended outputs or goals and objectives (Richard et al., 2009). Specialists in many fields are concerned with organizational performance, including strategic planners, operations, finance, legal, and organizational development (Martin, 2011). Performance should therefore be measured with available indicators to justify results.

The objective of this study was to examine the influence of monitoring and evaluation methods on the performance of the Uganda Red Cross Society in Eastern Uganda. By investigating the M&E practices employed by the URCS and their impact on organizational performance, this research aims to contribute to the understanding of the role of M&E in enhancing the effectiveness, accountability, and overall performance of humanitarian organizations.

To achieve the research objective, this study sought to answer the following research questions:

1) What M&E methods are employed by the Uganda Red Cross Society in Eastern Uganda?

2) What is the influence of M&E on the performance of the Uganda Red Cross Society in Eastern Uganda?

This research is significant for several reasons. First, it provides insights into the M&E practices employed by the URCS in Eastern Uganda, shedding light on the specific methods and approaches utilized to monitor and evaluate the organization’s programs and interventions. Second, the research findings contribute to the existing body of knowledge on M&E in humanitarian organizations, particularly in the context of Eastern Uganda.

2. Literature Review

2.1. The Concept of Monitoring and Evaluation

Monitoring and evaluation is a concept that various researchers define in different ways depending on their focus. However, the key elements of monitoring and evaluation are reflected in a range of definitions. In simple terms, monitoring means keeping track of what is being done so that corrective action can be taken if necessary. Effective evaluation depends on good monitoring; therefore, the two concepts complement each other but differ in their objectives and methods.

2.1.1. Monitoring

According to the Organization for Economic Cooperation and Development (OECD) (2004: p. 16), monitoring is “a continuous function that uses systematic collection of data on specified indicators to provide management and other stakeholders of an on-going development intervention with indications of the extent of progress and the achievement of objectives and progress in the use of allocated funds.” Morra Imas & Rist (2009) define monitoring as a routine, ongoing, internal activity that collects information on a programme’s activities, outputs, and outcomes to track its performance. Gage (2005) describes monitoring as the routine tracking of a programme’s activities by measuring whether planned activities are being carried out on a regular, ongoing basis, a definition that agrees with McCoy (2005), who defines monitoring in a similar way.

Most definitions offered in different sources agree that monitoring is the continuous tracking of activities or progress in policies, programmes, processes or plans. For instance, Gosling et al. (2003) define monitoring as a systematic assessment of the progress of a programme over time, but add that process monitoring and impact monitoring are both needed to show what changes are taking place, what processes lead to the changes, and how the programme can be improved. Moreover, Kusek and Rist (2004a) note that monitoring gives information on where a policy, programme, or project is at any given time in relation to its respective targets and outcomes, and that it is descriptive in intent. However, monitoring is not just a routine and ongoing activity that tracks what is being done; it is a critical assessment that aims at providing early and detailed information on the progress or delay of the assessed activities. It is not only an oversight of the implementation stage, but also a learning process that informs decision-making and improves performance (Stephenson & Stengel, 2020).

2.1.2. Evaluation

The literature provides a comprehensive understanding of the concept of evaluation and its impact on organizational performance. Goldman & Mathe (2014) define evaluation as a time-bound and periodic exercise aimed at providing credible and useful information to guide decision-making by staff, managers, and policy makers. It assesses the relevance, efficiency, effectiveness, impact, and sustainability of projects (Goldman & Mathe, 2014). This definition aligns with Randel’s (2002) description of evaluation as a periodic assessment of project relevance and performance.

The literature consistently highlights evaluation as a selective exercise that systematically and objectively assesses the progress and achievement of outcomes (UNDP, 2002; Gorgens & Kusek, 2009). It involves assessments of varying scope and depth conducted over time to meet the evolving needs for evaluative knowledge and learning in achieving outcomes. Evaluation aims to examine the planned actions, the actual achievements, the methods employed, and the value or worth of the intervention (UNDP, 2002). Monitoring and evaluation are portrayed as distinct but complementary processes, with evaluation providing evidence on the reasons for reaching or not reaching targets and outcomes (UNDP, 2002; Gorgens & Kusek, 2009). However, the current literature does not specifically address the impact of evaluation on organizational performance in the context of the Uganda Red Cross Society (URCS) or its humanitarian activities. While the literature provides valuable insights into the purpose and components of evaluation, it does not directly explore the relationship between evaluation and organizational performance in the specific case of URCS in Eastern Uganda. This gap in the literature, consistent with Gorgens & Kusek (2009), indicates a need for further research to investigate how evaluation practices and their effectiveness influence the performance of URCS in delivering humanitarian services to vulnerable communities.

2.1.3. Purpose of M&E

Mackay (2007) views M&E as a tool to design results-based management, enhance transparency, and support accountability relationships, and also suggests that these uses of M&E place it at the centre of sound governance arrangements and make it essential for achieving evidence-based policy making, evidence-based management, and evidence-based accountability. Similarly, the World Bank (2004) notes that the purpose of M&E activities is to provide government officials, managers, and civil society with better means for learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability. Morra Imas & Rist (2009) agree that the purpose of any evaluation is to provide information to decision makers to enable them to make better decisions about projects, programmes or policies.

However, it is paramount to note that the purpose of M&E as an instrument for measuring the performance of humanitarian organizations has largely eluded scholars, and few suggest that M&E is a tool for measuring such performance. Medina-Borja and Triantis (2007) presented a conceptual framework for designing and implementing a performance measurement system that addresses four main dimensions: revenue generation, capacity building, customer satisfaction and efficient results, and that can be used in non-profit organizations, mainly humanitarian organizations. Yet the conceptual framework is silent on M&E as a tool for measuring the performance of humanitarian organizations (Abidi et al., 2020).

2.2. How Monitoring and Evaluation Methods Influence the Performance of Humanitarian Organizations

According to Grove and Zwi (2008), the log frame contains a natural bias towards quantification, in that the matrix demands objectively verifiable indicators (for example, the number of activities to be conducted, the number of reports expected, or the frequency of stakeholder engagement), forcing projects to consider how they will measure progress towards intended outcomes. While setting clear objectives and identifying ways of measuring them from the outset helps management and other stakeholders identify where the project is succeeding or failing, this emphasis on the measurable also represents a crucial weakness. In particular, Grove and Zwi (2008) argue that relationships between people (both internal and external to the project) and process issues (how the project is undertaken) are likely to be neglected, with attention focused on the most tangible outputs, such as clinics built or vaccinations administered.

In most cases, regular progress reporting is conducted for donor purposes and gives an account of activities undertaken and immediate outputs, but it misses qualitative information on whether the objectives of the programme are being achieved or will fall short at the end of the project (Khan, 2003). In order to reassure donors that their money has been well spent and has made a measurable difference, quantitative indicators are required. However, an over-reliance on quantitative data may mean that the real essence of change is not recorded or understood. Thus, there is a considerable challenge not only in providing the aid system with the numbers it needs but also in ensuring that these numbers are both meaningful and practical to collect (Hailey & James, 2003).

The classic mantra for M&E has been to develop Specific, Measurable, Achievable, Reliable and Time-bound (SMART) indicators. The drive for setting up M&E systems based only on easily measurable quantitative indicators has perhaps been one of the key reasons for the failure of M&E systems to contribute useful information for the management of development initiatives. Both qualitative and quantitative information are critical, yet an indicator-driven approach to M&E often pushes systems in the direction of quantitative information, even though it is often the qualitative information that is required for explanation, analysis and sound decision making (Woodhill, 2005).

Projects have different M&E needs depending on the operating context, implementing agency capacity and donor requirements. It is therefore important, when preparing an M&E plan, to identify the methods, procedures, and tools to be used to meet the project’s M&E needs (Chaplowe, 2008). Many tools and techniques are used to aid project managers in planning and controlling project activities, including tools and techniques for project selection and risk management, project initiation, project management planning, project execution, and project monitoring and controlling.

Less formal methods, which are rich in information but subjective and intuitive and hence less precise in their conclusions, include field visits and unstructured interviews. In order to increase the effectiveness of an M&E system, the monitoring and evaluation plan and design need to be prepared as an integral part of the project (Nabris, 2002). Organizations such as the United States Agency for International Development (USAID) require, under their M&E policy, that grant recipients document their M&E systems in a Performance Management Plan, a tool designed to help them set up and manage the process of monitoring, analyzing, evaluating and reporting progress towards achieving objectives (USAIDS, 2012). The Performance Management Plan also serves as a reference document that contains targets, a detailed definition of each project indicator, the methods and frequency of data collection, and who is responsible for collecting the data. It also provides details on how data will be analyzed and on the evaluations required to complement monitoring data.

Experience drawn from the USAID Turkey M&E plan shows that best practices not only include linking M&E to strategic plans and work plans, but also focusing on efficiency and cost effectiveness, employing a participatory approach to monitoring progress, utilizing both international and local expertise, disseminating results widely, using data from multiple sources, and facilitating the use of data for program improvement (Mulwa, 2008).

This is because M&E systems that are set up based on acceptable best practices aid data-based decision-making and provide donors with evidence-based project results; hence, M&E is a project asset (Mulwa, 2008). However, M&E in capacity building is still in the initial stages of development, and standards and approaches to the tool have not been set. In instances of urgency to meet emergent social needs in Africa, M&E is not prioritized, because there is no one-size-fits-all M&E strategy (Fitzgerald et al., 2009).

As mentioned earlier, and reaffirming the importance of M&E tools as the backbone of this study, there is a need for management commitment to accessing and properly using each tool so that it produces the expected results. There should be enough finances to cater for these tools and to ensure their sustainability through effective training of the personnel who use them. However, in most projects little is being done towards the implementation of an impact-driven monitoring and evaluation system (DAC, 2005). In most cases the practice of M&E is a routine process with not much expected from it (Kusters et al., 2011); it is a way of pleasing donors (World Bank, 2004), and the production of quality results is not seen (UNDP, 2009).

Often there is no allocation of staff specific to the monitoring and evaluation department, and thus the level of specialization is low (Chaplowe, 2008). Management needs to show commitment towards implementing a strong and sustainable monitoring and evaluation system for the effectiveness of their projects (World Bank, 2000). This will eventually lead to the allocation of a proper budget to cater for the enormous monitoring and evaluation needs (Khan, 2003), and to trained staff with relevant skills for monitoring and evaluation (IFAD, 2002). Any organization is only as strong as its human resource capabilities; an organization without the right people with the right training is as good as dead (Mpofu et al., 2014). As revealed by Mpofu et al. (2014), the technical team’s ability to conduct evaluations, the value of human resource participation in the policymaking process, and the motivation to influence decisions can be huge determinants of how M&E lessons are learnt, communicated and perceived. Practical M&E training is important in building the capacity of personnel because it helps with the interaction with, and management of, the M&E systems.

M&E training starts with an understanding of M&E theory and ensures that the team understands the linkages between the project theory of change and the results framework, as well as the associated indicators (CPWF, 2012). Training should therefore be practically focused to ensure this understanding (CPWF, 2012). The theory of change, also known as the programme theory, results chain, programme logic model or attribution logic (Perrin, 2012), is a causal logic that links research activities to the desired changes in the actors that a project targets. It is therefore a model of how a project is supposed to work. The function of a theory of change is to provide a road map of where the project is heading, while monitoring and evaluation tests and refines that road map (CPWF, 2012; Perrin, 2012).

In fact, organizations that ignore the training aspect of M&E find themselves faced with a number of challenges. According to Oluoch (2012), people knowledgeable about the work need to plan the work in order to be able to carry it out. Technical capacity is the most important element in project management because without it the project cannot be completed. The technical work of any project needs to be done by qualified staff so that the quality of work is of a high standard; a lack of professional and technical supervision leads to poor project quality. In addition, there is low community participation in monitoring and evaluation due to the inadequacy of data and general information about the implementation process in an organization.

The UNDP (2009) handbook on planning, monitoring and evaluation for development results emphasizes that human resources are vital for effective monitoring and evaluation, stating that staff should possess the required technical expertise in the area in order to ensure high-quality monitoring and evaluation. Implementing effective M&E demands that staff undergo training and possess skills in research and project management; hence, capacity building is critical (Nabris, 2002). In turn, numerous training manuals, handbooks and toolkits have been developed for NGO staff working in projects, in order to provide them with practical tools that enhance results-based management by strengthening awareness of M&E. They also give many practical examples and exercises, which are useful since they provide staff with ways of becoming efficient and effective and of having an impact on their projects (Shapiro, 2011).

Human capacity, with appropriate training and experience, is crucial for the production of M&E results. Any organization is only as powerful as its human resource capabilities; in other words, an organization without the right people with the right training is as good as dead. According to the World Bank (2011), there is a need for effective M&E human resource capacity in terms of both quantity and quality. M&E being a new professional field, it faces challenges in the effective delivery of results.

Therefore, there is a great demand for skilled professionals, capacity building of M&E systems, and coordination of training courses as well as technical advice (Gorgens & Kusek, 2009). M&E human capacity building requires a wide range of activities, including formal training, in-service training, mentorship, coaching and internships. Both formal training and on-the-job experience are imperative in developing evaluators, with various options for training and development opportunities, including the public sector, the private sector, universities, professional associations, job assignments, and mentoring programs (Acevedo et al., 2010).

The M&E system cannot function without skilled people who effectively execute the M&E tasks for which they are responsible. Therefore, understanding the skills needed and the capacity of the people involved in the M&E system (undertaking human capacity assessments) and addressing capacity gaps (through structured capacity development programs) is at the heart of the M&E system (Gorgens & Kusek, 2009). In its framework for a functional M&E system, UNAIDS (2008) notes that it is not only necessary to have a dedicated and adequate number of M&E staff; it is essential for those staff to have the right skills for the work. Lastly, M&E capacity building should focus not only on the technical aspects of M&E, but also address skills in leadership, financial management, facilitation, supervision, advocacy and communication.

Building an adequate supply of human resource capacity is critical for the sustainability of the M&E system and is generally an ongoing issue. Furthermore, it needs to be recognized that growing evaluators requires far more technically oriented M&E training and development than can usually be obtained from one or two workshops (Acevedo et al., 2010).

Monitoring and evaluation carried out by untrained and inexperienced people is bound to be time-consuming and costly, and the results generated may prove impractical and irrelevant; this will ultimately affect the success of projects (Nabris, 2002). In an assessment of CSOs in the Pacific, UNDP (2009) identifies inadequate monitoring and evaluation systems among the challenges of organizational development. Additionally, the lack of capabilities and opportunities to train staff in technical skills in this area is clearly a factor to be considered. During the consultation processes, there was consensus among CSOs that their lack of monitoring and evaluation mechanisms and skills was a major systemic gap across the region. Furthermore, while there is no need for CSOs to possess extraordinarily complex monitoring and evaluation systems, there is certainly a need for them to possess a rudimentary knowledge of, and ability to utilize, reporting, monitoring and evaluation systems.

According to Gorgens & Kusek (2009), the purpose of training is mainly to improve knowledge and skills. Changing technology requires that employees possess the knowledge, skills and abilities needed to cope with new processes and production techniques. Gorgens & Kusek (2009) further argued that training brings a sense of security at the workplace, which reduces labor turnover and avoids absenteeism; that change management training helps manage change by increasing the understanding and involvement of employees in the change process and provides the skills and abilities needed to adjust to new situations; that training provides recognition, enhanced responsibility and the possibility of promotion; that it gives a feeling of personal satisfaction and achievement and broadens opportunities for career progression; and that it helps to improve the availability and quality of staff. There is no organization without a human resource aspect, and human resource capabilities determine much of what a company achieves in terms of its goals. The technical capacity of the organization in conducting evaluations, the value and participation of its human resources in the policymaking process, and their motivation to influence decisions can be huge determinants of how an evaluation’s lessons are produced, communicated and perceived (Vanessa & Gala, 2011).

Training in the requisite skills should be arranged for human resources where capacity is inadequate, and staff should be given clear job allocations and designations befitting their expertise. For projects whose staff are sent out into the field to carry out project activities on their own, there is a need for constant and intensive on-site support to the field staff (Zhou & Gideon, 2013). Employee needs vary: as Maslow’s hierarchy of needs explains, an employee goes through different levels before reaching a feeling of accomplishment. The attention given by the organization, coupled with increased expectations following the opportunity, can lead to a self-fulfilling prophecy of enhanced output by the employee (Zhou & Gideon, 2013). It should be noted, however, that the 21st-century employee relies more on virtual training, and access to online training takes precedence over classroom training. If the organization is not ready to embrace changing technology, employee training will not meet its intended objectives.

In conclusion, training means much more than formal courses; it encompasses a whole suite of learning approaches, from secondment to research institutes and opportunities to work on impact evaluations within the organization or elsewhere, to time spent by programme staff in evaluation departments and, equally, time spent by evaluators in the field. This helps employees become more versatile in today’s world. Evaluation must also be independent and relevant. Independence is achieved when it is carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention (Gebremedhin, Getachew, & Amha, 2010).

The structural arrangements of an M&E system are important from a number of perspectives; one is the need to ensure the objectivity, credibility and rigor of the M&E information that the system produces (Mackay, 2007). Khan (2003) concurs that the conceptual design of an M&E system should address issues regarding the objectives of the system, the competent authority, the credibility of information, and its management, dissemination and recycling into the planning process, with special emphasis on community participation. M&E systems should be built in such a way that there is a demand for results information at every level at which data are collected and analyzed. Furthermore, clear roles, responsibilities, and formal organizational and political lines of authority must be established (Kusek & Rist, 2004a).

There is often a need for some structural support for M&E, such as a separate evaluation unit, which at the very least needs one person acting as the internal champion identified to make sure the system is implemented and developed. Moreover, the systems must be consistent with the values at the heart of the organization and work in support of its strategy. There are twelve components of a functional monitoring and evaluation system, namely: structure and organizational alignment for M&E systems; human capacity for M&E systems; M&E partnerships; M&E plans; costed M&E work plans; advocacy, communication and culture for M&E systems; routine monitoring; periodic surveys; databases useful to M&E systems; supportive supervision and data auditing; evaluation and research; and using information to improve results (UNAIDS, 2008). Taut’s (2007) study of self-evaluation capacity building in a large international development organization indicates low organizational readiness for learning from evaluation. Interviewees similarly described a lack of open, transparent and critical intra-organizational dialogue, and a lack of formal structures and processes to encourage reflection and learning as an organizational habit. At the same time, there was rather high awareness of the potential for evaluation to be used as a tool for learning, and demand was voiced for such evaluations.

3. Research Methods and Materials

Research Design: This study employed a mixed-methods research design to capture a comprehensive understanding of the influence of monitoring and evaluation (M&E) methods on the performance of the Uganda Red Cross Society (URCS) in Eastern Uganda. The use of mixed methods allowed for the integration of quantitative and qualitative data, providing a more holistic perspective on the research topic.

Data Collection: 1) Quantitative Data: Quantitative data was collected through surveys and organizational performance indicators. A survey questionnaire was developed, targeting URCS staff members involved in M&E activities. The questionnaire assessed their perceptions of the effectiveness and impact of M&E methods on organizational performance. Organizational performance indicators, such as service delivery metrics and resource utilization data, were collected from existing organizational records and reports.

2) Qualitative Data: Qualitative data was gathered through interviews and focus groups. Semi-structured interviews were conducted with key stakeholders, including URCS staff members, program managers, and community members. The interviews explored their experiences, perspectives, and recommendations regarding the M&E practices of the URCS. Focus groups were also conducted to encourage dialogue and capture collective insights from participants.

3) Participatory Methods: Participatory approaches were incorporated to involve staff and stakeholders in the M&E process. Participatory workshops or sessions were conducted to collaboratively analyze and interpret the collected data. This promoted ownership, enhanced the validity of findings, and generated actionable recommendations.

Sampling: 1) Quantitative Sampling: A purposive sampling technique was used to select participants for the survey questionnaire. The sample included URCS staff members who were directly involved in M&E activities in Eastern Uganda. The sample size was determined based on the principle of data saturation, ensuring sufficient representation of different roles and responsibilities within the organization.

2) Qualitative Sampling: For interviews and focus groups, a combination of purposive and snowball sampling techniques was utilized. Key stakeholders with relevant knowledge and experience in M&E practices within the URCS were identified and invited to participate. Snowball sampling was used to expand the sample by asking participants to recommend other individuals who could provide valuable insights.

Data Analysis: 1) Quantitative Analysis: Descriptive statistical analysis was performed on the quantitative survey data, including frequencies, percentages, and means. This analysis provided an overview of the participants’ perceptions regarding M&E methods and their influence on organizational performance. Statistical tests, such as correlations and regression analysis, were also conducted to examine relationships between variables.

2) Qualitative Analysis: Thematic analysis was employed to analyze the qualitative data collected from interviews and focus groups. The data was transcribed, coded, and categorized into themes and sub-themes. The analysis identified patterns, commonalities, and divergences in participants’ perspectives, facilitating a comprehensive understanding of the influence of M&E methods on organizational performance.

3) Integration of Findings: The quantitative and qualitative findings were integrated through a comparative analysis. Triangulation of data was performed to corroborate or complement the results obtained from different sources. The integrated findings provided a comprehensive understanding of the influence of M&E methods on the performance of the URCS in Eastern Uganda.
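To make the quantitative analysis step (item 1 above) concrete, the following is a minimal sketch of how descriptive statistics and a Pearson correlation might be computed from such survey data. It is not the authors’ actual analysis: the file name and column names (urcs_survey.csv, role, me_methods, hr_capacity, performance) are hypothetical placeholders for the composite scores built from the questionnaire described above.

```python
# Illustrative sketch only: the paper does not publish its analysis scripts or raw data.
# File and column names are hypothetical stand-ins for the questionnaire composites.
import pandas as pd
from scipy import stats

df = pd.read_csv("urcs_survey.csv")  # one row per URCS staff respondent

# Descriptive overview: percentages for a categorical item, means for composite scores
print(df["role"].value_counts(normalize=True) * 100)
print(df[["me_methods", "hr_capacity", "performance"]].mean())

# Pearson correlation between perceived M&E methods and organizational performance
r, p = stats.pearsonr(df["me_methods"], df["performance"])
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```

A coefficient and p-value computed this way correspond to the correlation results reported in the findings section, while the qualitative themes (item 2) and the triangulation step (item 3) remain interpretive work carried out by the research team.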

Ethical Considerations: Ethical approval was obtained from the relevant institutional review board before data collection. Informed consent was obtained from all participants, ensuring confidentiality, anonymity, and voluntary participation. The principles of voluntary participation, privacy, and data protection were strictly adhered to throughout the study.

By employing a mixed-methods approach, this research aimed to capture a nuanced understanding of the influence of M&E methods on organizational performance.

4. Study Findings

4.1. Monitoring and Evaluation Methods Influence the Performance of Uganda Red Cross Society in Eastern Uganda

Results of the correlation analysis revealed a significant relationship between monitoring and evaluation methods and the performance of URCS in Eastern Uganda (r = 0.888, p < 0.001). This implies that the logical frameworks clearly indicate the proposed impact of the programme and also provide its intended outcomes. The results also indicate that the M&E plan is linked to the overall project plan and organizational strategy and outlines steps for further strengthening the M&E system; that URCS-Eastern Uganda ensures that funds for M&E activities are provided on time; that input and output indicators are easier to assess than effect or impact indicators; and that indicators facilitate systematic inquiry through the collection, analysis and interpretation of accurate and relevant data. The correlation coefficient of 0.888 (88.8%) further reveals that performance indicators and theory-based evaluation methods are used to evaluate projects, that project goal and outcome indicators are attainable within the stated time, and that project indicators are cost-effective to measure in terms of time and money. The remaining 20.9% indicates that monitoring and evaluation methods do not fully determine the performance of Uganda Red Cross Society in Eastern Uganda, suggesting that URCS-Eastern Uganda should effectively practice all implemented M&E strategies to ensure improved performance.

Note: the correlations reported are significant at the 0.01 level (2-tailed).

4.2. How Human Resource Capacity Influences the Performance of Uganda Red Cross Society in Eastern Uganda

Correlation results revealed a significant relationship between human resource capacity and the performance of the Uganda Red Cross Society in Eastern Uganda (r = 0.719, p < 0.001). This implies that URCS has adequate and skilled employees charged with steering M&E activities. The results also indicate that the number of trainings provided to M&E personnel determines their performance, and that the monitoring and evaluation officers are knowledgeable in the day-to-day management of monitoring and evaluation systems.

4.3. M&E Systems and Performance of URCS

Results of the correlation analysis indicate a significant and positive relationship between M&E systems and the performance of URCS in Eastern Uganda. The correlation for M&E methods (r = 0.791, p < 0.001) implies that effective monitoring and evaluation methods, such as the logical framework, the monitoring and evaluation plan, the costed work plan, and indicator manuals, have led to improved performance. The correlation for human capacity (r = 0.719, p < 0.001) reveals that URCS-Eastern Uganda has employees with adequate skills, competencies, knowledge, and attitudes, which have enhanced its performance. The correlation for the monitoring and evaluation structure (r = 0.768, p < 0.001) implies that URCS-Eastern Uganda has an improved monitoring and evaluation unit, monitoring and evaluation policies and standards, and a monitoring and evaluation champion. The correlation for data quality (r = 0.778, p < 0.001) implies that data validity, reliability, and integrity have enhanced the performance of URCS-Eastern Uganda.

The regression analysis was undertaken at a 5% significance level. The criterion for judging whether a predictor variable was significant in the model was a comparison of its probability value with α = 0.05: if the probability value was less than α, the predictor variable was significant. The predictors’ corresponding probability values were below 5%, apart from data quality, which had 6%, indicating an inverse relationship between this component of the monitoring and evaluation system and the performance of the Uganda Red Cross Society, Eastern Uganda.
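A minimal sketch of how such a significance check might look, assuming the same hypothetical survey dataframe used earlier; the predictor and outcome column names are placeholders, not the authors’ actual variables or model.

```python
# Illustrative sketch of comparing predictor p-values against alpha = 0.05.
# Not the study's actual regression; data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("urcs_survey.csv")
predictors = ["me_methods", "hr_capacity", "me_structure", "data_quality"]

# Ordinary least squares regression of performance on the four M&E system dimensions
X = sm.add_constant(df[predictors])
model = sm.OLS(df["performance"], X).fit()

alpha = 0.05  # 5% significance level, as used in the study
for name in predictors:
    p_value = model.pvalues[name]
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"{name}: coef = {model.params[name]:.3f}, p = {p_value:.3f} -> {verdict}")
```

The sign of each coefficient indicates whether the estimated relationship is positive or inverse, while the p-value determines whether that relationship is statistically significant at the chosen level.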

5. Conclusion

This research contributes greatly to the debate that attempts to examine how monitoring and evaluation systems influence the performance of humanitarian organizations in Uganda. The research dwells on monitoring and evaluation methods, human resource capacity, the structure of monitoring and evaluation, and data quality as dimensions of M&E systems, and on how they affect disaster management (i.e., prevention, mitigation, response, and recovery), an effective and efficient M&E system, timely dissemination of feedback, accomplishment of tasks, and demand for M&E data and quality. While some of the relationships between the dimensions of M&E systems and organizational performance have been analyzed in developed countries, empirical research in developing countries is still in its infancy.

In order to increase the effectiveness of an M&E system, the monitoring and evaluation plan and design need to be prepared as an integral part of the project. The M&E methods help manage the process of monitoring, analyzing, evaluating, and reporting progress towards achieving objectives. The M&E Plan logical framework serves as a reference document that contains targets, a detailed definition of each project indicator, the methods and frequency of data collection, as well as who is responsible for collecting the data. It will also provide details on how data will be analyzed and the evaluations required to complement monitoring data.

How Human Resource Capacity Influences the Performance of Uganda Red Cross Society in Eastern Uganda

Human resources, with proper training and experience, are crucial for good M&E results. There is a need to have an effective M&E human resource capacity in terms of quantity and quality. Therefore, there is a great demand for skilled professionals, capacity building of M&E systems, harmonization of training courses, and technical advice. The capacity building of personnel helps with the interaction and management of the M&E systems. M&E training starts with an understanding of the M&E theory and ensures that the team understands the linkages between the project theory of change and the results framework, as well as associated indicators. Training should therefore be practical and focused to ensure comprehension. The theory of change is a causal logic that links research activities to the desired changes in the actors that a project targets to change. It is therefore a model of how a project is supposed to work. The function of a theory of change is to provide a road map of where the project is headed while monitoring and evaluation test and refine that road map.

6. Recommendations

Based on the findings of this study and the conclusions made, the study makes the following recommendations for policy action by humanitarian organizations, given that their monitoring and evaluation systems have a bearing on the kind of information they provide. The recommendations are aimed at strengthening the monitoring and evaluation system and filling gaps in existing policies and legislation.

The study recommended that M&E planning should be undertaken by the government. This should involve M&E planning processes that lead to models related to project implementation, proper M&E planning before the implementation of government projects, and timely and reliable M&E planning that provides information to support programme implementation.

The study also recommended proper management participation at all levels of implementation. This should involve having managers with competencies in commitment, communication, and collaboration with the project teams; ensuring that management participation throughout the programming cycle guarantees ownership, solidity, and sustainability of project results; and having the managers structure a monitoring and evaluation process to monitor progress and utilize the information to improve performance.

The study recommends that human resource aspects be addressed: staff entrusted with monitoring and evaluation should have technical skills, staff working on monitoring and evaluation should be dedicated to the function, and the roles and responsibilities of monitoring and evaluation personnel need to be specified at the start of the program. Additionally, there is a need for clear implementation strategies for the monitoring and evaluation of Uganda Red Cross Society programs in Eastern Uganda. The study recommends that, when carrying out evaluations of Uganda Red Cross Society programs in Eastern Uganda, there is a need to look at the time period and project components covered, to look at other existing or planned interventions of the same project, and to focus on the target group.

As revealed by this study, and given how critical M&E is in influencing the performance of the Uganda Red Cross Society, the study recommends that organizations institutionalize monitoring and evaluation by creating a monitoring and evaluation unit and/or employing a monitoring and evaluation officer.

The organization should strengthen the monitoring and evaluation culture in programs by strengthening evidence-based decision-making to increase demand for monitoring and evaluation information. Uganda Red Cross Society staff and stakeholders should be supported to understand the value and contribution of monitoring information to program performance. Programs should produce and share M&E audit review reports in a timely manner to enhance the adoption of quality review results. In addition, innovative dissemination approaches should be adopted to cater for the low level of education of stakeholders and maximize their participation in providing feedback.

7. Research Limitations

Despite a number of contributions made by the current study, limitations remain. The study focused on only five theories. Therefore, future studies should examine other theories that influence monitoring and evaluation systems and performance that are not part of the study. The findings of this study are based on a sample of the Uganda Red Cross Society in the Eastern Region of Uganda. This may not be fully representative of other humanitarian organizations in other regions of Uganda. This therefore necessitates conducting the same study in other regions of this country with different socio-economic backgrounds to be able to generalize the research findings.

Abbreviations and Acronyms

M&E: Monitoring and evaluation

RBM: Results-based management

UNDP: United Nations Development Programme

URCS: Uganda Red Cross Society

IFRC: International Federation of Red Cross and Red Crescent Societies

ICRC: International Committee of the Red Cross

PM&E: Participatory Monitoring and Evaluation

PME&R: Participatory Monitoring, Evaluation and Reporting

SEM: Structural Equation Modelling

GEF: Government Evaluation Facility

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Acevedo, M., Rodríguez, M., & Vargas, J. (2010). Building Capacity for Monitoring and Evaluation in Latin America and the Caribbean: A Review of Experiences. Evaluation and Program Planning, 33, 158-166.
[2] Abidi, H., de Leeuw, S., & Dullaert, W. (2020). Performance Management Practices in Humanitarian Organisations. Journal of Humanitarian Logistics and Supply Chain Management, 10, 125-168.
https://doi.org/10.1108/JHLSCM-05-2019-0036
[3] Bamberger, M., Rugh, J., & Mabry, R. (2019). Using Mixed Methods in Monitoring and Evaluation: Experiences from International Development. BWPI Working Paper 107, The University of Manchester, Brooks World Poverty Institute.
[4] Baum, A., Fleming, R., & Singer, J. E. (1985). Understanding Environmental Stress: Strategies for Conceptual and Methodological Integration. In A. Baum, & J. E. Singer (Eds.), Advances in Environmental Psychology (Volume 5, pp. 185-205). Erlbaum Associates.
https://doi.org/10.4324/9781003052944-7
[5] Bhasin, H. (2020). Marketing Strategy of Coca Cola - Coca Cola Strategy. Marketing (website).
[6] Chaplowe, D. (2008). Monitoring and Evaluation in Development: A Practical Guide. Practical Action Publishing.
[7] CPWF (2012). M&E Guide: Theories of Change.
https://sites.google.com/a/cpwf.info/m-e-guide/background/theory-of-change
[8] DAC (2005). Guidelines and Series Quality Standards for Development Evaluation Declaration on Aid Effectiveness and the Accra Agenda for Action.
[9] Fitzgerald, J. et al. (2009). Monitoring and Evaluation in Africa: A Review of Trends and Challenges. African Evaluation Journal, 7, 1-18.
[10] Gage, A., & Dunn, M. (2005). Monitoring and Evaluating Gender-Based Violence Prevention and Mitigation Programs. U.S. Agency for International Development, MEASURE Evaluation, Interagency Gender Working Group, Washington DC.
[11] Gebremedhin, B., Getachew, A., & Amha, R. (2010). Evaluation as a Tool for Organizational Learning: A Case Study of the Ethiopian Commodity Exchange. Evaluation and Program Planning, 33, 167-175.
https://doi.org/10.1016/S0149-7189(10)00023-6
[12] Gorgens, M., & Kusek, J. (2009). Making Monitoring and Evaluation Systems Work. World Bank.
https://doi.org/10.1596/978-0-8213-8186-1
[13] Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A Very Brief Measure of the Big-Five Personality Domains. Journal of Research in Personality, 37, 504-528.
https://doi.org/10.1016/S0092-6566(03)00046-1
[14] Grove, J., & Zwi, A. B. (2008). Logical Framework Analysis: A Tool for Planning and Evaluation. Health Policy and Planning, 23, 375-384.
[15] Goldman, I., & Mathe, J. (2014). Institutionalisation Philosophy and Approach Underlying the GWM&ES in South Africa. In F. Cloete, B. Rabie, & C. De Coning (Eds.), Evaluation Management in South Africa and Africa. Stellenbosch: SUN PRESS Imprint.
[16] Hailey, J., & James, P. (2003). Making Sense of Monitoring and Evaluation: A Practical Guide for Development Agencies. Earthscan.
[17] IFAD (International Fund for Agricultural Development) (2002). Human Resources Development in Rural Development: A Training Manual. IFAD.
[18] Khan, M. A. (2003). Using Logic Models in Monitoring and Evaluation of Development Programmes. Evaluation, 9, 195-214.
[19] Kusek, J. Z., & Rist, R. C. (2004a). Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Managers. The World Bank.
[20] Kusek, J. Z., & Rist, R. C. (2004b). Making M&E Matter—Get the Foundation Right. Evaluation Insights, 2, 7-10.
[21] Kusters, C., Vugt, S. V., Wigboldus, S., & Woodhill, B. W. (2011). Making Evaluations Matter.
[22] Mackay, K. (2007). How to Build Monitoring and Evaluation Systems to Support Better Government. World Bank.
https://doi.org/10.1596/978-0-8213-7191-6
[23] Martin, F. (2011). Influencing Change: Building Evaluation Capacity to Strengthen Governance. World Bank.
[24] McCoy, R. M. (2005). Field Methods in Remote Sensing. Guilford Press.
[25] Medina-Borja, A., & Triantis, K. (2007). A Conceptual Framework to Evaluate Performance of Non-Profit Social Service Organizations. International Journal of Technology Management, 37, 147-161.
https://doi.org/10.1504/IJTM.2007.011808
[26] Morra Imas, L. G., & Rist, R. C. (2009). The Road to Results: Designing and Conducting Effective Development Evaluations. World Bank.
http://hdl.handle.net/10986/2699
https://doi.org/10.1596/978-0-8213-7891-5
[27] Mpofu, M., Semo, W., Grignon, J., Lebelonyane, R., Ludick, S., Matshediso, E., & Ledikwe, H. (2014). Strengthening Monitoring and Evaluation (M&E) and Building Sustainable Health Information Systems in Resource Limited Countries: Lessons Learned from an M&E Task-Shifting Initiative in Botswana. BMC Public Health, 14, Article No. 1032.
https://doi.org/10.1186/1471-2458-14-1032
[28] Mulwa, F. W. (2008). Demystifying Participatory Community Development (4th ed.). Zapf Chancery Publishers.
[29] Nabris, A. (2002). Logical Framework Analysis: A Tool for Monitoring and Evaluation of Development Programmes. Development in Practice, 12, 386-395.
[30] Oluoch, S. (2012). Challenges of Monitoring and Evaluation in Non-Governmental Organizations in Kenya. International Journal of Development and Sustainability, 1, 1-16.
[31] Organization for Economic Cooperation and Development (OECD) (2004). OECD Principles of Corporate Governance.
[32] Patton, M. Q. (2018). Facilitating Evaluation: Principles in Practice. Sage.
https://doi.org/10.4135/9781506347592
[33] Perrin, B. (2012). Theory of Change: A Practical Guide to Using Theory of Change for Evaluation. Practical Action Publishing.
[34] Richard, P. J., Devinney, T. M., Yip, G. S., & Johnson, G. (2009). Measuring Organizational Performance: Towards Methodological Best Practice. Journal of Management, 35, 718-804.
https://doi.org/10.1177/0149206308330560
[35] Rubin, H. J., & Rubin, I. S. (1995). Qualitative Interviewing: The Art of Hearing Data (2nd ed.). Sage Publications.
[36] Randel, A. E. (2002). Identity Salience: A Moderator of the Relationship between Group Gender Composition and Work Group Conflict.
[37] Shapiro, J. (2011). Monitoring and Evaluation for Development: A Practical Guide. Practical Action Publishing.
[38] Stephenson, P. J., & Stengel, C. (2020). An Inventory of Biodiversity Data Sources for Conservation Monitoring. PLOS ONE, 15, e0242923.
https://doi.org/10.1371/journal.pone.0242923
[39] Taut, A. (2007). Self-Evaluation Capacity Building in a Large International Development Organization: A Case Study. Evaluation, 13, 63-81.
https://doi.org/10.1177/1098214006296430
[40] UNAIDS (2008). Framework for a Functional Monitoring and Evaluation System. UNAIDS.
[41] UNDP (2002). Handbook on Planning, Monitoring and Evaluating for Development Results. UNDP.
[42] UNDP (2009). Handbook on Planning, Monitoring and Evaluating for Development Results. New York: UNDP.
[43] USAIDS (2012). A Guide to Monitoring & Evaluation. Joint United Nations.
[44] Vanessa, G., & Gala, M. (2011). The Role of Evaluation in Development Cooperation: A Review of the Literature. Evaluation and Program Planning, 34, 416-425.
[45] Woodhill, J. (2005). Monitoring and Evaluation in Development: A Review of Trends and Challenges. Development in Practice, 15, 358-367.
[46] World Bank (2000). Monitoring & Evaluation: Some Tools, Methods and Approaches. The World Bank.
[47] World Bank (2004). Monitoring and Evaluation: Some Methods, Tools and Approaches. World Bank.
[48] World Bank (2011). Human Resource Capacity Development for Monitoring and Evaluation: A Framework for Action. World Bank.
[49] Zhou, G., & Gideon, G. (2013). Training for the Requisite Skills in Monitoring and Evaluation. Journal of the Knowledge Economy, 4, 785-802.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.