Sequencing Error Reduction Initiatives in Services

Abstract

This paper uses data from actual service systems to develop and illustrate a planning methodology for sequencing error reduction initiatives. The proposed methodology reflects three levels of error reduction planning: 1) system visualization, 2) process measurement, and 3) fault detection. Guttman scaling is used to order the error reduction initiatives and identify which systems utilized comprehensive error reduction plans. Analysis of efficiency data reveals that the systems that did comprehensive error reduction planning outperformed those that did not.

Share and Cite:

Hensley, R. L., & Utley, J. S. (2021). Sequencing Error Reduction Initiatives in Services. Journal of Service Science and Management, 14, 651-662. doi: 10.4236/jssm.2021.146041.

1. Introduction

As service operations continue to grow in size and complexity, the opportunity for things to “go wrong” with service delivery also increases. In some situations, the resulting service failure can produce potentially devastating consequences such as disruption of a critical service offering, service quality degradation, significant waste of valuable resources and, in extreme cases, endangerment of customer safety and well-being (Song et al., 2013). Since companies must try to prevent serious service failures, error reduction initiatives should play an increasingly important role in service operations. Service managers must recognize that error reduction is no longer just a technical matter but rather a strategic issue that can affect customer welfare and key organizational metrics like productivity, cost, quality, customer retention and profitability (Madu, 2005; Kuei & Madu, 2003; Song et al., 2013).

Given the current strategic implications of error reduction in service operations, some researchers have begun to redefine the concept of error reduction. Once viewed as merely a component of reliability, error reduction now plays a key role in quality assurance. Reliability itself, and its role in quality management, is increasingly seen as important. Sun et al. (2008: p. 52) define reliability as “quality over time” while Madu (1999: p. 698) argues that “Quality and reliability are synonymous. A system cannot be reliable if it does not have high quality. Likewise a system cannot be of high quality if it is not reliable.”

Despite this attention to reliability, relatively few studies in the service management literature have focused on the role of error reduction as an important component of reliability improvement techniques (Hensley & Utley, 2011; Gunes & Deveci, 2002). Moreover, these studies dealt mainly with technical tools for analyzing service failure at the sub-system level while ignoring possible system-wide effects of reliability problems (Song et al., 2013; Gunawardane, 2004). Since these system-wide effects may be potentially devastating for a service company and its customers, a systems-based approach to undertaking reliability improvement initiatives is a planning imperative for service managers. This study proposes such an approach and illustrates how it can be used to sequence error reduction improvements in service contexts. Data from actual service operations are used to establish a sequencing framework. Efficiency data from these operations are analyzed to investigate how the extent of framework adoption affects system performance. As discussed in the next section, the proposed sequencing framework reflects three natural levels of error reduction.

2. Error Reduction as a Component of Reliability

A systems-based approach to improving service reliability involves three key components: 1) visualization of the entire system as a network of many inter-connected subsystems and components, 2) measurement and maintenance of measurement technologies, and 3) a fault detection methodology to help minimize the effect of individual service failures on overall system performance. These key concepts form three natural levels in the sequencing framework shown in Figure 1.

2.1. System Visualization

As Figure 1 illustrates, system visualization forms the base of the sequencing framework. While most past research on reliability has focused exclusively on subsystem reliability, relatively few studies have advocated a system-wide approach to reliability (Gorkemli & Ulusoy, 2010; Hensley & Utley, 2011; Song et al., 2013). A system-wide approach is essential to improving reliability because a lack of systems thinking frequently generates service errors (Kuei & Madu, 2003). In contrast with the many techniques used to analyze error at the component level, system visualization allows the service manager to consider the reliability of the system as a whole while simultaneously accounting for the effects of subsystem/component reliability on system performance (Madu, 2005; Hensley & Utley, 2011; Song et al., 2013).

Figure 1. Reliability sequence.

System visualization can take varying forms in practice. For instance, in a high contact service system, service blueprinting can tie together the multiple service phases the customer experiences during service delivery (Shostack, 1984). In a low contact service system such as a public utility, system visualization might entail a network diagram of the physical components of the service system and their inter-relationships. For example, in a public water utility the physical elements of the delivery system include distribution lines, water storage facilities, water purification facilities, pumping stations and the like.
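To make the idea concrete, a low contact system such as the water utility described above can be captured as a simple component network. The sketch below uses hypothetical component names and connections; it illustrates the visualization idea and is not a diagram from the study.

```python
# Illustrative sketch: a low contact service system represented as a network of
# physical components (nodes) and their connections (edges).
# All component names below are hypothetical, not data from the study.

water_system = {
    "reservoir": ["purification_plant"],
    "purification_plant": ["pumping_station_1"],
    "pumping_station_1": ["storage_tank_A", "distribution_line_north"],
    "storage_tank_A": ["distribution_line_south"],
    "distribution_line_north": [],
    "distribution_line_south": [],
}

def downstream_components(system, start):
    """Return every component reachable from `start` (depth-first walk)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in system.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A failure at the purification plant would ripple through these components:
print(downstream_components(water_system, "purification_plant"))
```

A representation like this lets a manager trace how a component-level failure could propagate to the rest of the system, which is the system-wide perspective the framework calls for.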

2.2. Measurement and Maintenance of Measurement Technology

The second level of the framework depicted in Figure 1 involves measurement. As in the case of service quality, it is difficult to analyze and improve reliability without meaningful metrics (Palm et al., 1997; Sulek, 2004). These metrics should align with the performance issues the company wishes to investigate. The exact form of the metrics will depend on the service context. For instance, reliability metrics developed for health care services will differ substantially from reliability measures used in the restaurant industry.

Once appropriate metrics are devised, the service will need to maintain the technology and equipment used in the measurement process. Maintainability is critical to reliability management for two reasons: 1) it reduces the probability that the measurement process is generating inaccurate data, and 2) it helps to prevent disruptions to the measurement process due to technical failure of the measurement equipment (Madu, 2005).

2.3. Fault Detection

The top level of the framework shown in Figure 1 deals with fault/error detection. Reliability management at this level requires closer analysis of the subsystems/components that were identified at the system visualization level (Hensley & Utley, 2011). The primary purpose of the fault detection level is to specify and enact reliability tests tailored to the specific system component(s) under consideration. Results from these tests should help the manager keep problems at the component level from spreading to system level performance problems (Song et al., 2013).

While the three levels of the framework presented in Figure 1 represent natural stages in a reliability planning process, they do not by themselves guide managers through the maze of potential reliability initiatives that are possible within a specific industry. What is needed is a straightforward methodology by which managers can apply industry-specific knowledge to sequence reliability improvements. Such a methodology is presented in the next section.

3. Sequencing Methodology

Sequencing reliability initiatives is a special case of the more general problem of developing a scale to order a set of binary questions. Guttman (1944) addressed this general problem by devising a scaling methodology based on an analysis of patterns in question responses (Abdi, 2010). Guttman’s approach is not only useful for positioning the binary questions on a single dimension but it also supports prediction of any outside variables affected by items on the scale (Guttman, 1944).
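The cumulative property that underlies Guttman's approach can be shown with a small sketch. The Python fragment below uses made-up answer patterns to check whether a respondent's ordered answers follow the error-free pattern a perfect Guttman scale implies: a "yes" to a harder item should be accompanied by a "yes" to every easier item.

```python
# Minimal sketch of the cumulative property of a perfect Guttman scale.
# Items are ordered from easiest (most endorsed) to hardest; each list is one
# respondent's answers (1 = yes, 0 = no). All patterns are hypothetical.

perfect_patterns = [
    [0, 0, 0, 0],   # scale score 0
    [1, 0, 0, 0],   # scale score 1
    [1, 1, 0, 0],   # scale score 2
    [1, 1, 1, 0],   # scale score 3
    [1, 1, 1, 1],   # scale score 4
]

def is_perfect_guttman(pattern):
    """A pattern is error-free if no 'yes' follows a 'no' in the ordered items."""
    return all(a >= b for a, b in zip(pattern, pattern[1:]))

print([is_perfect_guttman(p) for p in perfect_patterns])   # all True
print(is_perfect_guttman([1, 0, 1, 0]))                     # False: a scaling error
```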

While Guttman scaling has been used primarily in social psychology and education research (Abdi, 2010), there are some instances of its use in business contexts. For instance, an early application by Stagner et al. (1958) described the development of a ten-item Guttman scale to measure management attitude toward unions (0.085 error rate) and a nine-item scale to measure union attitude toward management (0.098 error rate).

Later business applications included both scale development and performance predictions based on scale scores. For example, Wood and LaForge (1979, 1981) devised and used a Guttman scale to measure comprehensiveness of planning efforts at large U.S. banks. They identified a six-item Guttman scale which was then compared to growth in net income and return on investment. Their results showed that comprehensive planners (those with high scores on the planning scale) out-performed banks that had less comprehensive planning processes in place. Robinson and Pearce (1988) used the Wood and LaForge (1979) scale in a study of manufacturing firms and found that “firms which engaged in a high-to-moderate level of sophistication in planning and were committed to a consistent and effective strategic orientation ranked in the highest performing group” (Robinson & Pearce, 1988: p. 56). Wood et al. (1995) created a planning scale for operational level planning in large U.S. banks. The five-item scale was compared to performance and showed that comprehensive planners out-performed those banks that did less comprehensive planning.

4. Case Application

The application context consists of a set of municipal water systems operating within a single state in the southeastern United States. Drought conditions, in some cases severe, occurred frequently in this state, particularly in its western counties. The effects of drought are evidenced by reduced water supplies, low lake/reservoir levels, failing wells and poor crop production. Municipalities are forced to seek additional water sources and additional water distribution infrastructure. Disputes over water rights and access to river flows may occur. Since hydroelectric production can be interrupted, replacement power costs may be incurred. There are also costs associated with curtailing the recreational use of water.

4.1. Study Sample

The water systems utilized in this study ranged from urban systems to small rural systems with water supply originating from rivers, reservoirs and wells. Some of these systems purchase water on an emergency basis from a neighboring system. Other systems regularly purchase supplemental water supply from nearby systems. Reliable water delivery and operating efficiency are crucial for this set of water systems. A total of 535 distinct water systems in the geographic area were surveyed. Surveys were initially completed by the manager of each water system and were then checked and revised, if necessary, by an engineering company hired to conduct the study. The survey instrument was comprehensive in that it asked respondents to report operational performance metrics as well as complete a set of binary questions dealing with their adoption of various reliability initiatives. One survey did not contain any answers to the binary questions and was dropped from further evaluation. This resulted in a usable sample size of 534.

The responses to the binary questions were analyzed with Guttman’s scaling methodology to determine the natural ordering for reliability initiatives in this service context. The specific steps in the scaling process are illustrated in the following subsection.

4.2. Guttman Scaling Process

Application of the Guttman scaling methodology consisted of a series of steps which are described below and summarized in Figure 2.

4.2.1. Summation of the “Yes” Answers for Each Binary Question under Consideration for the Scale

A total of seven binary questions on reliability initiatives were considered for inclusion in the scale (see Table 1). Each of these initiatives is used in practice by at least some of the systems surveyed. The number of “yes” answers was totaled for each question.

4.2.2. Ranking the Binary Questions by the Number of “Yes” Answers Found in Step 1

Table 1 shows that question 6 (Is the system mapped?) received the highest number of “yes” answers. A “yes” answer to this question means the locations of all distribution lines, pumping stations, and water processing and storage facilities are shown on a system diagram. Question 5 (Are all valves, hydrants and meters located?) received the second highest number of “yes” responses. The third most frequent “yes” answer occurred for question 3 (Is a meter replacement system in place?). Question 2 (Is a valve exercise program in place?) and question 1 (Is a leak detection program in place?) received the 4th and 5th most “yes” responses, respectively. Finally, question 7 (Is the system mapped in GIS format?) and question 4 (Has a leak detection study been done in the last five years?) received the 6th and 7th most “yes” responses, respectively.

Figure 2. Guttman scaling process.

Table 1. Binary questions considered for inclusion in the Guttman scale.
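The tallying and ranking in steps 4.2.1 and 4.2.2 are straightforward to script. The sketch below uses a small hypothetical response matrix rather than the actual survey data summarized in Table 1.

```python
# Sketch of steps 4.2.1 and 4.2.2: total the "yes" answers for each binary
# question and rank the questions by that total. The response matrix is
# hypothetical and much smaller than the 534-system sample.

responses = [            # each row: one water system's answers to Q1..Q7 (1 = yes)
    [1, 1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1, 0],
    [1, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 0],
]

yes_totals = {f"Q{j + 1}": sum(row[j] for row in responses)
              for j in range(len(responses[0]))}

# Rank questions from most to least frequent "yes" answers (step 4.2.2).
ranked = sorted(yes_totals.items(), key=lambda item: item[1], reverse=True)
for question, total in ranked:
    print(question, total)
```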

4.2.3. Structuring the Scale

Once the binary questions have been ranked by number of “yes” responses, it is important to calculate the difference in “yes” totals for each pair of adjacent questions (Table 1).

This provides a check on the spacing of adjacent items considered for the scale. If adjacent items have “yes” totals that are numerically close, then the analyst should consider dropping one of the adjacent items from the final scale. Stagner et al. (1958: p. 298) observe, “ideally, in Guttman scaling, the marginal entries should be widely and fairly uniformly spread; i.e., there should be a range from an item answered favorably by almost everyone to one which is answered unfavorably by most of the population, and items well-spaced between these two”. Application of this rule reduced the number of questions to six, with question 7 omitted. After considering the order logic, it was decided to also drop question 2 because valve exercising is a routine maintenance task rather than a system visualization technique, a process measurement technique or a fault detection test. Thus, it did not seem to fit with the focus of the framework. The resulting 5-level scale is shown in Table 2.
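The spacing check amounts to a single pass over the ranked totals. In the sketch below both the totals and the minimum gap are hypothetical; the paper applies the spacing rule judgmentally rather than with a stated numeric cutoff.

```python
# Sketch of the spacing check in step 4.2.3: compute the difference in "yes"
# totals for adjacent (ranked) questions and flag pairs that are numerically
# close. Totals and the MIN_GAP cutoff are assumed values for illustration.

ranked_totals = [("Q6", 420), ("Q5", 350), ("Q3", 250), ("Q2", 240),
                 ("Q1", 180), ("Q7", 120), ("Q4", 90)]
MIN_GAP = 20   # hypothetical threshold for "numerically close"

for (q_hi, n_hi), (q_lo, n_lo) in zip(ranked_totals, ranked_totals[1:]):
    gap = n_hi - n_lo
    note = "  <- consider dropping one item" if gap < MIN_GAP else ""
    print(f"{q_hi} vs {q_lo}: gap = {gap}{note}")
```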

4.2.4. Checking Scale Error

Since no scale is perfect, error must be measured (Guttman, 1944). On a perfect Guttman scale, for every respondent, a scale score of “2” means that the first two questions had “yes” answers while the remaining questions in the series received “no” answers. An error means that a deviation from the expected pattern has occurred. For example, an answer pattern that begins “yes, no, yes…” constitutes an error because a “yes” answer to the third question should imply that the answer to the second question was also “yes” (Wood & LaForge, 1981). The total number of errors in the proposed scale was 179. An error measure for the entire scale can be found with the formula:

$$\text{Scale Error} = \frac{\text{Total number of scaling errors}}{\text{Number of items} \times \text{Number of subjects}} = \frac{179}{5 \times 534} = 0.067 \quad (1)$$
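One common way to count these deviations is to compare each respondent's observed pattern with the perfect pattern implied by that respondent's scale score. The sketch below follows that convention with hypothetical answer patterns; the paper does not spell out its exact counting routine.

```python
# Sketch of the error count behind Equation (1), using one common convention:
# compare each observed pattern with the perfect pattern implied by the
# respondent's scale score. Answer patterns here are hypothetical.

def scaling_errors(pattern):
    """Count deviations from the perfect Guttman pattern for one respondent.

    Items are assumed ordered from most to least frequently endorsed, so a
    perfect pattern with score k is k ones followed by zeros.
    """
    score = sum(pattern)
    expected = [1] * score + [0] * (len(pattern) - score)
    return sum(o != e for o, e in zip(pattern, expected))

print(scaling_errors([1, 1, 0, 0, 0]))   # perfect pattern -> 0 errors
print(scaling_errors([1, 0, 1, 0, 0]))   # "yes, no, yes" -> 2 deviations

patterns = [[1, 1, 0, 0, 0], [1, 0, 1, 0, 0]]
total_errors = sum(scaling_errors(p) for p in patterns)
scale_error = total_errors / (5 * len(patterns))   # items x subjects, as in Eq. (1)
print(scale_error)
```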

Table 2. Final Guttman scale.

Guttman (1944) used total scale error to devise a measure known as the coefficient of reproducibility (denoted CR) which is defined by the formula:

$$CR = 1 - \frac{\text{Total number of scaling errors}}{\text{Number of items} \times \text{Number of subjects}} = 1 - \frac{179}{5 \times 534} = 0.933 \quad (2)$$

According to Guttman (1944: p.150), the coefficient of reproducibility (CR) is “the empirical relative frequency with which the attributes do correspond to the intervals of a scale variable”.

Thus, the coefficient of reproducibility can be thought of as the extent to which the proposed scale approaches a perfect (error-free) Guttman scale. A CR value greater than 0.9 (or, equivalently, a percent error less than 10%) is considered acceptable (Guttman, 1944). In this application context, the CR value of 93.3% is well above the 90% threshold (or, equivalently, the 6.7% error rate falls well below the 10% maximum rate).

Although the coefficient of reproducibility exceeds the 90% threshold, it is important to check that the CR is not inflated by the responses of extreme subjects who either answered “no” to all five questions or “yes” to all five questions. To address the possibility of inflated CR values, Menzel (1953) suggests the use of the coefficient of scalability (CS) which removes the extreme subjects from CR calculation. The coefficient of scalability is defined by the formula:

$$CS = 1 - \frac{\text{Total number of scaling errors}}{\text{Number of items} \times \text{Number of non-extreme subjects}} \quad (3)$$

In this application context, 22 respondents answered “no” to all questions while 51 respondents answered “yes” to all questions. The number of non-extreme respondents can be easily calculated:

$$\text{Number of non-extreme subjects} = 534 - (22 + 51) = 461 \quad (4)$$

The coefficient of scalability is also easily computed:

$$CS = 1 - \frac{\text{Total number of scaling errors}}{\text{Number of items} \times \text{Number of non-extreme subjects}} = 1 - \frac{179}{5 \times 461} = 0.9223 \quad (5)$$

Since the suggested threshold for the coefficient of scalability is 0.6 (Menzel, 1953), the computed CS of 0.9223 indicates that the responses of extreme subjects did not produce a misleading coefficient of reproducibility.
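For readers who wish to verify the arithmetic, the quantities in Equations (1) through (5) can be reproduced directly from the totals reported above.

```python
# Sketch reproducing Equations (1)-(5): scale error, coefficient of
# reproducibility (CR), and coefficient of scalability (CS) from the totals
# reported in the paper.

total_errors = 179
n_items = 5
n_subjects = 534
n_extreme = 22 + 51          # all-"no" plus all-"yes" respondents

scale_error = total_errors / (n_items * n_subjects)       # Eq. (1): 0.067
cr = 1 - scale_error                                       # Eq. (2): 0.933
n_non_extreme = n_subjects - n_extreme                     # Eq. (4): 461
cs = 1 - total_errors / (n_items * n_non_extreme)          # Eq. (5): 0.9223

print(f"scale error = {scale_error:.3f}, CR = {cr:.3f}, CS = {cs:.4f}")
```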

Given the low error rate (6.7%) and the high coefficient of scalability (92.23%) in this context, the proposed scale provides a natural sequencing of reliability initiatives for this set of water systems. In addition, this scale can be used to analyze outcome variables of interest to the water systems. An example of how the scale supports this analysis process is described in the following subsection.

4.3. Analysis of Performance

One important performance measure for water systems is the Percent Water Loss metric. This metric is an efficiency measure based on the ratio of annual water loss to total annual usage. The ratio approach controls for the size of the water system. The formula for Percent Water Loss is given below.

$$\text{Percent Water Loss} = \frac{\text{Average Monthly Unaccounted-for Water Use} \times 12}{\text{Total Annual Water Usage}} \times 100 \quad (6)$$
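Equation (6) translates directly into a small helper function; the volumes in the example below are hypothetical placeholders, not survey data.

```python
# Sketch of the Percent Water Loss metric in Equation (6).

def percent_water_loss(avg_monthly_unaccounted, total_annual_usage):
    """Annualize average monthly unaccounted-for water and express it as a
    percentage of total annual usage."""
    return (avg_monthly_unaccounted * 12) / total_annual_usage * 100

# Hypothetical example: 1.5 million gallons unaccounted for per month,
# 120 million gallons of total annual usage.
print(percent_water_loss(1.5, 120.0))   # 15.0 percent
```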

The Guttman scale shown in Table 2 was used to compare the Percent Water Loss for systems positioned at the low end of the scale (i.e., exhibited rudimentary reliability planning) and systems positioned at the upper end of the scale (i.e., exhibited more comprehensive reliability planning).

The analysis began with the reduction of the usable sample to the 305 systems that answered survey questions related to average monthly unaccounted water use and total annual water usage. These 305 respondents were divided into 4 groups: 1) those who were doing no reliability planning and thus scored zero on the Guttman scale, 2) those who scored a 1 or 2 on the scale and thus exhibited a low level of reliability planning, 3) those who scored a 3 on the scale and were therefore defined as being at the medium planning level, and 4) those who scored either a 4 or 5 and thus exhibited a high degree of reliability planning (Table 3). For purposes of further analysis, respondents who scored zero were omitted.
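The grouping rule can be expressed as a simple mapping from Guttman scale score to planning level, as sketched below with hypothetical scores.

```python
# Sketch of the grouping rule summarized in Table 3: map each respondent's
# Guttman scale score to a planning level. Scores below are hypothetical.

def planning_group(score):
    if score == 0:
        return "none"      # omitted from further analysis
    if score <= 2:
        return "low"
    if score == 3:
        return "medium"
    return "high"          # scores of 4 or 5

scores = [0, 1, 2, 3, 4, 5, 3, 2]
print([planning_group(s) for s in scores])
```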

Data analysis was conducted using SPSS. Descriptive statistics showed that the mean scores for the annual water loss percent ranged from a high of 20.5% for the respondents having a low degree of reliability planning to a low of 10.9% for those having a high degree of reliability planning (Table 4).

T-tests were run to compare the mean annual water loss percent across the three groups (see Table 5). Results showed that the low planners differed significantly from both the medium planners (p = 0.045) and the high planners (p = 0.026).
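The paper reports SPSS output; the sketch below shows how an equivalent comparison could be run in Python with SciPy. The water loss values are hypothetical, and the use of Welch's (unequal variance) t-test is an assumption, since the paper does not state which variant was used.

```python
# Sketch of the group comparison behind Table 5 using SciPy instead of SPSS.
# All water loss values are hypothetical, not the study data.

from scipy import stats

low_planners    = [22.1, 18.4, 25.0, 19.7, 17.3]   # percent water loss
medium_planners = [15.2, 12.8, 16.9, 13.5, 14.1]
high_planners   = [11.0, 9.8, 12.4, 10.2, 11.6]

# Welch's t-test (equal_var=False) is an assumption for illustration.
t_lm, p_lm = stats.ttest_ind(low_planners, medium_planners, equal_var=False)
t_lh, p_lh = stats.ttest_ind(low_planners, high_planners, equal_var=False)

print(f"low vs medium: t = {t_lm:.2f}, p = {p_lm:.4f}")
print(f"low vs high:   t = {t_lh:.2f}, p = {p_lh:.4f}")
```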

Table 3. Groupings.

Table 4. Descriptive statistics.

Table 5. T-test results¹.

¹Significant at p ≤ 0.05.

5. Discussion

This study proposed a systems-based approach to undertaking reliability initiatives. The approach identified three natural levels to sequencing reliability initiatives: 1) system visualization, 2) measurement and maintenance of measurement technology, and 3) fault detection. Guttman scaling was applied to survey results from a case application involving 534 water systems to determine if industry specific data reflected these three natural planning levels.

Scaling results from the case application supported the systems-based sequencing approach. The two lowest levels on the Guttman scale corresponded to system visualization. In this application, visualization included water system mapping (level 1 on the Guttman scale) and location of all valves, hydrants and meters (level 2 on the Guttman scale). The middle level on the Guttman scale (a meter replacement program in place) corresponded to the measurement/maintenance of measurement technology level in the sequencing framework. Finally, the 4th and 5th levels on the Guttman scale (leak detection program in place and leak detection study done in the last five years, respectively) mirrored the fault detection level, the highest level in the sequencing approach.

The scaling results were then used to categorize the respondents by the extent to which they adopted the reliability initiatives in the Guttman scale. Systems with minimal adoption (i.e., initiatives adopted related only to system visualization) were found to be significantly less efficient in water usage (as measured by the Percent Water Loss metric) than those positioned at both the middle level (measurement/maintenance of measurement technology) and the top level of the sequencing framework (fault detection level). This finding is not surprising since knowing the locations of distribution lines, meters, hydrants and valves is not the same as actually testing for water loss and having reliable meters. Given the frequent drought conditions occurring across the entire state, water loss represents a pressing problem for this set of water systems. The water systems positioned at the low end of the scale should consider adopting higher level reliability initiatives to better manage their water resources.

Although the results from the case application suggest that the systems-based framework can help a service manager sequence reliability initiatives, there are limitations in the current study that must be addressed in future research. First, this study dealt only with water systems; the results do not indicate how well the framework and the Guttman scaling methodology would work in other contexts. Thus, it is important to replicate the findings in other types of services, particularly high contact services. Second, the survey instrument used in this study dealt with performance and planning issues in water systems and thus is not suited to other types of services. This implies that additional research will be needed to construct a reliability planning survey for another type of service. Such a survey must capture both the types of initiatives and the performance metrics applicable in the new service setting.

While these limitations will need to be addressed in future research efforts, the methodology discussed in this study does offer practitioners a general approach to sequencing reliability initiatives. It also provides a way to classify service operations by using Guttman scaling results and thus to identify systems with relatively comprehensive reliability planning. Finally, the methodology helps both researchers and practitioners to link a system’s level in the sequencing framework to performance metrics that are tailored to the particular research context.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Abdi, H. (2010). Guttman Scaling. In N. Salkind (Ed.), Encyclopedia of Research Design (pp. 1-5). Sage.
https://www.utdallas.edu/~herve/abdi-GuttmanScaling2010-pretty.pdf
[2] Gorkemli, L., & Ulusoy, S.K. (2010). Fuzzy Bayesian Reliability and Availability Analysis of Production Systems. Computers & Industrial Engineering, 59, 690-696.
https://doi.org/10.1016/j.cie.2010.07.020
[3] Gunawardane, G. (2004). Measuring Reliability of Service Systems Using Failure Rates: Variations and Extensions. International Journal of Quality & Reliability Management, 21, 578-590.
https://doi.org/10.1108/02656710410536581
[4] Gunes, M., & Deveci, I. (2002). Reliability of Service Systems and an Application in Student Office. International Journal of Quality & Reliability Management, 19, 206-211.
https://doi.org/10.1108/02656710210413525
[5] Guttman, L. (1944). A Basis for Scaling Qualitative Data. American Sociological Review, 9, 139-150.
https://doi.org/10.2307/2086306
[6] Hensley, R. L., & Utley, J. S. (2011). Using Reliability Tools in Service Operations. International Journal of Quality and Reliability Management, 28, 587-598.
https://doi.org/10.1108/02656711111132599
[7] Kuei, C. H., & Madu, C. N. (2003). Customer-Centric Six Sigma Quality and Reliability Management. International Journal of Quality and Reliability Management, 20, 954-964.
https://doi.org/10.1108/02656710310493661
[8] Madu, C. N. (1999). Reliability and Quality Interface. International Journal of Quality and Reliability Management, 16, 691-698.
https://doi.org/10.1108/02656719910286198
[9] Madu, C. N. (2005). Strategic Value of Reliability and Maintainability Management. International Journal of Quality and Reliability Management, 22, 317-328.
https://doi.org/10.1108/02656710510582516
[10] Menzel, H. (1953). A New Coefficient for Scalogram Analysis. Public Opinion Quarterly, 17, 268-280.
[11] Palm, A. C., Rodriguez, R. N., Spiring, F. A., & Wheeler, D. J. (1997). Some Perspectives and Challenges for Control Chart Methods. Journal of Quality Technology, 29, 122-127.
https://doi.org/10.1080/00224065.1997.11979739
[12] Robinson, R. B., & Pearce, J. A. (1988). Planned Patterns of Strategic Behavior and Their Relationship to Business-Unit Performance. Strategic Management Journal, 9, 43-60.
https://doi.org/10.1002/smj.4250090105
[13] Shostack, G. L. (1984). Designing Services That Deliver. Harvard Business Review, 62, 133-139.
[14] Song, B., Lee, C., & Park, Y. (2013). Assessing the Risks of Service Failures Based on Ripple Effects: A Bayesian Network Approach. International Journal of Production Economics, 141, 493-504.
https://doi.org/10.1016/j.ijpe.2011.12.010
[15] Stagner, R., Chalmers, W. E., & Derber, M. (1958). Guttman-Type Scales for Union and Management Attitudes toward Each Other. Journal of Applied Psychology, 42, 293-300.
https://doi.org/10.1037/h0045269
[16] Sulek, J. (2004). Statistical Quality Control in Services. International Journal of Services Technology and Management, 5, 522-531.
[17] Sun, J., Xi, L., Du, S., & Ju, B. (2008). Reliability Modeling and Analysis of Serial-Parallel Hybrid Multi-Operational Manufacturing System Considering Dimensional Quality, Tool Degradation and System Configuration. International Journal of Production Economics, 114, 149-164.
https://doi.org/10.1016/j.ijpe.2008.01.002
[18] Wood, D. R., & LaForge, R. L. (1979). The Impact of Comprehensive Planning on Financial Performance. Academy of Management Journal, 22, 516-526.
[19] Wood, D. R., & LaForge, R. L. (1981). Toward the Development of a Planning Scale: An Example from the Banking Industry. Strategic Management Journal, 2, 209-216.
https://doi.org/10.1002/smj.4250020209
[20] Wood, D. R., Minor, E. D., & Hensley, R. L. (1995). Evaluating Operations Center Planning. Bankers Magazine, 178, 14-16.
