
Promoting Balance in Output Efficiencies for Cross-Efficiency Evaluation in Data Envelopment Analysis

PP. 664-685. DOI: 10.4236/jamp.2019.73047

ABSTRACT

Cross-efficiency evaluation is recognized as an effective way of assessing the efficiencies of a set of decision making units (DMUs) in the framework of data envelopment analysis (DEA). It has been generally suggested that secondary goals be introduced for cross-efficiency evaluation owing to the non-uniqueness of optimal solutions in self-evaluation. This paper develops a variety of secondary goals in the spirit of promoting balance in the output efficiencies of the DMU under evaluation. The proposed models attempt to make each output contribute as equally as possible to the self-evaluated efficiency. In this way, weight flexibility is reduced at the outset by the introduced secondary goals, which select from alternate optimal solutions, rather than relying solely on the dilution of flexibility in the subsequent peer-evaluation. The proposed approach may be applicable to evaluation problems in which multiple outputs are considered important and balance is encouraged so as to put all dimensions into sufficient use. The effectiveness of the proposed approach, and its comparisons with some relevant secondary goals, are illustrated empirically using numerical examples.

1. Introduction

Data envelopment analysis (DEA) was first developed by Charnes et al. [1] for measuring the relative efficiencies of a set of decision making units (DMUs) with multiple inputs and outputs. In the self-evaluation under the CCR model, named after Charnes, Cooper and Rhodes [1], each DMU can choose the input and output weights most favorable to itself for evaluating its own efficiency. More than one DMU is generally evaluated as efficient. The flexibility in determining the weights of inputs and outputs may sometimes lead to unrealistic weight schemes [2]. For example, a DMU specialized in one aspect may weight it heavily while ignoring the other aspects. This flexibility also makes the favorable weights differ from one DMU to another, implying that the efficiencies of all the DMUs obtained from self-evaluation are less comparable.

To address these problems, the cross-efficiency evaluation method was first introduced by Sexton et al. [3] and later investigated by Doyle and Green [2] [4]. In cross-efficiency evaluation, a DMU is peer-evaluated with the input and output weights of all the DMUs in the sample. Cross-efficiency evaluation improves the discrimination power of DEA [4] and reduces weight flexibility. However, as indicated in [2], the non-uniqueness of the weight schemes that are optimal solutions to the CCR model may reduce the usefulness of cross-efficiency evaluation. One remedy, suggested by Sexton et al. [3] and Doyle and Green [2], is to introduce a secondary goal to choose a weight scheme from the alternate optimal solutions. Previous secondary goals include those proposed by Anderson et al. [5], Liang et al. [6], Wang and Chin [7], and Wu et al. [8].

The conventional secondary goals for cross-efficiency evaluation are linked to the aggressive and benevolent ideas proposed by Sexton et al. [3] and Doyle and Green [2]. For example, Oral et al. [9] utilized the benevolent cross-efficiency technique to evaluate and select industrial R&D projects in a collective decision setting. Talluri and Sarkis [10] applied the aggressive cross-efficiency evaluation to an efficiency and productivity study on a cellular manufacturing system. Anderson et al. [5] employed the aggressive formulation to prove the fixed weighting nature of a cross-evaluation model in the case of one input and multiple outputs. Liang et al. [6] extended the cross-efficiency evaluation model by introducing a number of different secondary objective functions. Liang et al. [11] generalized the original cross-efficiency to game cross-efficiency, where each DMU is treated as a player that seeks to maximize its own efficiency while keeping the cross-efficiencies of the other DMUs from deteriorating. Wu et al. [12] introduced a modified DEA game cross-efficiency model under variable returns to scale, which was applied to Olympic rankings, considering each country as a competitor in a non-cooperative game. Flokou et al. [13] employed aggressive and benevolent cross-efficiency formulations to evaluate Greek NHS general hospitals. Liu et al. [14] applied aggressive cross-efficiency evaluation to an eco-efficiency analysis of coal-fired power plants considering undesirable output and ranking priority. Liu et al. [15] introduced an aggressive secondary model by which the cross-efficiencies of the other DMUs are minimized while the aggressive game cross-efficiency of the DMU under evaluation is guaranteed.

Besides the aggressive and benevolent cases, in practical situations a DMU in peer-evaluation may act neither aggressively nor benevolently toward others but be concerned only with itself when choosing a set of input and output weights. For instance, Wang and Chin [7] proposed a DEA model that determines a set of weights for each DMU to put each of the outputs into as much use as possible. In practice, promoting balance in output efficiencies is appropriate for various circumstances where all the considered outputs should be valued [16], such as financial portfolio selection, funding agencies, and new product development [17].

In light of promoting balance, this paper introduces a variety of secondary goals for DEA cross-efficiency evaluation to give decision makers (DMs) more methodological choices. Each of the proposed models represents an evaluation criterion that pursues balance in the output efficiencies of a DMU so as to make all of its outputs contribute as equally as possible to its self-evaluated efficiency. The advantage of the proposed secondary goals over various others is that weight flexibility can be reduced at the outset by promoting balance, rather than relying only on the dilution of flexibility through the peer-evaluation process, as done by the conventional cross-efficiency evaluation.

We begin in the next section with a brief description of the cross-efficiency evaluation process. Various secondary goals in the spirit of promoting balance in output efficiencies are developed in Section 3. In Section 4, numerical examples are provided to illustrate the effectiveness of the proposed approach and its comparisons to some previous methods, followed by concluding remarks.

2. The Cross-Efficiency Evaluation Process

Suppose there are $n$ peer DMUs $\{\mathrm{DMU}_j : j = 1, 2, \ldots, n\}$ to be evaluated, with $m$ positive inputs, $x_{ij}$ $(i = 1, 2, \ldots, m)$, and $s$ positive outputs, $y_{rj}$ $(r = 1, 2, \ldots, s)$. The self-evaluated efficiency of a particular $\mathrm{DMU}_o$, $o \in \{1, 2, \ldots, n\}$, can be measured by the following CCR model [1]:

$$\begin{aligned}
\max\ & \theta_{oo} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \\
\text{s.t.}\ & \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s,
\end{aligned} \tag{1}$$

where $v_{io}$ and $u_{ro}$ are, respectively, the weights of the $i$th input and the $r$th output of $\mathrm{DMU}_o$.

Note that model (1) is a non-linear program. By the transformation in Charnes and Cooper [18], it can be equivalently converted into the following linear program (LP) for computation.

$$\begin{aligned}
\max\ & \theta_{oo} = \sum_{r=1}^{s} u_{ro} y_{ro} \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{2}$$

Let $\{v_{io}^{*}, u_{ro}^{*}, \theta_{oo}^{*}\}$ be an optimal solution to (1) when $\mathrm{DMU}_o$ is under evaluation; the self-evaluated efficiency of $\mathrm{DMU}_o$ is then formulated as

$$\theta_{oo} = \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro}^{*} y_{ro}}{\sum_{i=1}^{m} v_{io}^{*} x_{io}}, \tag{3}$$

which is the maximum efficiency that DMUo can achieve relative to the other DMUs in the sample.
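For computation, the LP form (2) can be handed to any standard solver. The following sketch, which assumes SciPy's `linprog` and a small synthetic dataset (one input, two outputs, three DMUs; not the paper's data), illustrates the self-evaluation step; the function name `ccr_efficiency` is our own:

```python
# Minimal sketch of the CCR model (2) as an LP, assuming SciPy is available.
# Decision variables z = [v (m input weights), u (s output weights)].
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Self-evaluated CCR efficiency of DMU o. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([np.zeros(m), -Y[:, o]])          # maximize u.y_o
    A_eq = np.concatenate([X[:, o], np.zeros(s)])[None]  # v.x_o = 1
    A_ub = np.hstack([-X.T, Y.T])                        # u.y_j - v.x_j <= 0, all j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return -res.fun, res.x[:m], res.x[m:]                # (theta*, v*, u*)

# Toy data: one constant input, two outputs; DMU 3 is dominated.
X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
effs = [ccr_efficiency(X, Y, o)[0] for o in range(3)]
print(effs)  # DMUs 1 and 2 are efficient; DMU 3 scores 2/3
```

On this toy data, DMUs 1 and 2 reach efficiency 1 while DMU 3 reaches only 2/3, matching the observation that more than one DMU is generally self-evaluated as efficient.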

Similarly, the cross-efficiency of $\mathrm{DMU}_j$ $(j = 1, 2, \ldots, n,\ j \ne o)$ peer-evaluated by $\mathrm{DMU}_o$ can be formulated as

$$\theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro}^{*} y_{rj}}{\sum_{i=1}^{m} v_{io}^{*} x_{ij}}. \tag{4}$$

In this way, DMUj ( j = 1 , 2 , , n ) obtains n efficiencies, one from self-evaluation and the other n-1 from peer-evaluation. Averaging these n efficiencies of DMUj, we have

$$\theta_{j} = \frac{1}{n} \sum_{o=1}^{n} \theta_{oj}, \tag{5}$$

which is referred to as the cross-efficiency score of $\mathrm{DMU}_j$ $(j = 1, 2, \ldots, n)$.
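Continuing the sketch above (same synthetic data, SciPy assumed), the peer-evaluation step applies each DMU's CCR-optimal weights to every DMU via (4) and averages column-wise via (5). Note that, absent a secondary goal, the off-diagonal entries depend on which alternate optimum the solver happens to return, which is exactly the motivation for the secondary goals of Section 3:

```python
# Sketch: cross-efficiency matrix (4) and averaged scores (5) on toy data.
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, o):
    """Return (v*, u*) from the CCR LP (2) for DMU o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([np.zeros(m), -Y[:, o]])
    A_eq = np.concatenate([X[:, o], np.zeros(s)])[None]
    A_ub = np.hstack([-X.T, Y.T])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.x[:m], res.x[m:]

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
n = X.shape[1]
theta = np.zeros((n, n))
for o in range(n):
    v, u = ccr_weights(X, Y, o)
    theta[o] = (u @ Y) / (v @ X)   # row o: DMU o rates every DMU j via (4)
scores = theta.mean(axis=0)        # cross-efficiency scores via (5)
print(np.round(theta, 4))
print(np.round(scores, 4))
```

The diagonal reproduces the self-evaluated efficiencies, and every entry is at most 1 by the LP constraints.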

3. Secondary Goals for Promoting Balance in Output Efficiencies

As mentioned above, the weight flexibility in self-evaluation often results in unrealistic weight schemes. That is, the inputs and outputs favorable to the evaluated DMU will be heavily weighted whereas those unfavorable to it will be less weighted or even neglected. In practice, promoting balance in weights is appropriate for various circumstances where all the measures taken into consideration should be valued. By this promotion, the weight flexibility will be reduced. The resultant balanced weights will be more realistic for these circumstances. In the spirit of promoting balance, we propose various secondary goals below for cross-efficiency evaluation.

Note that, in practice, the units of measurement often vary from one measure to another. Model formulations based on absolute weights may not be appropriate, because absolute weights depend on the units of measurement, and comparisons between them are meaningless. For instance, when evaluating industrial robots, it is difficult to compare the absolute weights on measures like load capacity and repeatability, which have different units of measurement. In contrast, a virtual weight, defined in Sarrico and Dyson [19] as the product of a measure's value and the absolute weight assigned to that measure, is units invariant. Using virtual weights instead of absolute weights is therefore more appropriate for comparisons. Moreover, for managerial implications, it is difficult to ascertain meaningful restrictions on absolute weights. With restrictions on virtual weights, however, DMs can intuitively identify the contribution of a DMU's performance under each dimension to its efficiency [19] [20]. Based on these discussions, our proposed secondary goals will be modeled using virtual weights.

3.1. Minimizing the Differences between Output Efficiencies

Dimitrov and Sutton [16] defined a so-called measure of “symmetry” as

$$Z_{kl}^{o} = \left| u_{ko} y_{ko} - u_{lo} y_{lo} \right|, \quad \forall k, l, \tag{6}$$

where $Z_{kl}^{o}$ is the difference between output $k$ and output $l$ for $\mathrm{DMU}_o$. Based on this measure, the difference between the output efficiencies of output $k$ and output $l$ for $\mathrm{DMU}_o$ can be formulated as

$$D_{kl}^{o} = \left| \frac{u_{ko} y_{ko}}{\sum_{i=1}^{m} v_{io} x_{io}} - \frac{u_{lo} y_{lo}}{\sum_{i=1}^{m} v_{io} x_{io}} \right|, \quad \forall k, l, \tag{7}$$

where $\frac{u_{ko} y_{ko}}{\sum_{i=1}^{m} v_{io} x_{io}}$ $(\forall k)$ is defined as the output efficiency of the $k$th output of $\mathrm{DMU}_o$, because the sum of all $s$ output efficiencies, i.e., $\sum_{k=1}^{s} \frac{u_{ko} y_{ko}}{\sum_{i=1}^{m} v_{io} x_{io}}$, is defined as the self-evaluated efficiency of $\mathrm{DMU}_o$ in the CCR model.

In pursuit of balance in output efficiencies, a reasonable way is to force $D_{kl}^{o}$ $(\forall k, l)$ to be no more than a level $\gamma$. By virtue of this, a model minimizing the differences between output efficiencies is constructed as a secondary goal as follows:

$$\begin{aligned}
\min\ & \gamma \\
\text{s.t.}\ & \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}}, \\
& \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& D_{kl}^{o} = \left| \frac{u_{ko} y_{ko}}{\sum_{i=1}^{m} v_{io} x_{io}} - \frac{u_{lo} y_{lo}}{\sum_{i=1}^{m} v_{io} x_{io}} \right| \le \gamma, \quad k, l = 1, 2, \ldots, s,\ k \ne l, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s,
\end{aligned} \tag{8}$$

where $v_{io}$ $(i = 1, 2, \ldots, m)$, $u_{ro}, u_{ko}, u_{lo}$ $(r, k, l = 1, 2, \ldots, s,\ k \ne l)$ and $\gamma$ are decision variables.

Model (8) aims to derive a weight scheme that minimizes the pairwise differences between the output efficiencies of $\mathrm{DMU}_o$ while keeping its self-evaluated efficiency unchanged. In so doing, all the output efficiencies of $\mathrm{DMU}_o$ may get closer in value, with fewer zero output efficiencies. This method might be applicable, for example, to DEA-based multi-criteria decision making (MCDM) problems such as inventory classification, in which the criteria are viewed as multiple outputs produced by a constant input [21]. Criteria such as average unit cost, annual dollar usage and lead time are all considered important in assessing an inventory item, and thus all of them should be valued in some way.

Theorem 1. Model (8) is equivalent to model (9).

$$\begin{aligned}
\min\ & \gamma = \beta - \alpha \\
\text{s.t.}\ & \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}}, \\
& \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& \alpha \le \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \le \beta, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& \alpha \ge 0,
\end{aligned} \tag{9}$$

where $v_{io}$ $(i = 1, 2, \ldots, m)$, $u_{ro}$ $(r = 1, 2, \ldots, s)$, $\alpha$, $\beta$ and $\gamma$ are decision variables.

Proof. Note that model (8) can be equivalently expressed as below.

$$\begin{aligned}
\min\ & \gamma = \max_{k, l \in \{1, 2, \ldots, s\},\ k \ne l} \left\{ \left| \frac{u_{ko} y_{ko}}{\sum_{i=1}^{m} v_{io} x_{io}} - \frac{u_{lo} y_{lo}}{\sum_{i=1}^{m} v_{io} x_{io}} \right| \right\} \\
\text{s.t.}\ & \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}}, \\
& \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{10}$$

Let $E_o = \max_{r \in \{1, 2, \ldots, s\}} \left\{ \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \right\}$ and $e_o = \min_{r \in \{1, 2, \ldots, s\}} \left\{ \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \right\}$. The objective function of model (10) can be converted into (11).

$$\begin{aligned}
\min\ & \gamma \\
\text{s.t.}\ & E_o - e_o \le \gamma.
\end{aligned} \tag{11}$$

Next, the constraints $\alpha \le \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \le \beta$, $r = 1, 2, \ldots, s$, in model (9) can be replaced by $\beta \ge E_o$ and $\alpha \le e_o$. Since the objective minimizes $\gamma = \beta - \alpha$, at optimality $\beta = E_o$ and $\alpha = e_o$, so that $E_o - e_o = \beta - \alpha = \gamma$ in model (9). Comparing with (11), it is easily obtained that models (8) and (9) are equivalent. $\square$

Note that the constraints $D_{kl}^{o} \le \gamma$, $k, l = 1, 2, \ldots, s$, $k \ne l$, in model (8) result in $s(s-1)$ constraints, whereas $\alpha \le \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} \le \beta$, $r = 1, 2, \ldots, s$, in model (9) lead to only $2s$ constraints. Theorem 1 is thus important for easing the computation of model (8), especially when the number of outputs is large.

Model (9) can be interpreted as seeking a weight scheme that minimizes the range of output efficiencies for $\mathrm{DMU}_o$ with its self-evaluated efficiency unchanged. By the Charnes and Cooper transformation, model (9) can be linearized as model (12) for solution.

$$\begin{aligned}
\min\ & \gamma = \beta - \alpha \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& u_{ro} y_{ro} \le \beta, \quad r = 1, 2, \ldots, s, \\
& u_{ro} y_{ro} \ge \alpha, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& \alpha \ge 0.
\end{aligned} \tag{12}$$
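A sketch of model (12) on the same toy data used earlier may clarify the construction. The decision vector stacks $[v, u, \alpha, \beta]$; SciPy's `linprog` is assumed and the function name `min_range_weights` is our own:

```python
# Sketch of secondary-goal model (12): among DMU o's alternate CCR optima,
# pick weights minimizing the range beta - alpha of its output efficiencies.
import numpy as np
from scipy.optimize import linprog

def min_range_weights(X, Y, o, theta_star):
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([np.zeros(m + s), [-1.0, 1.0]])  # minimize beta - alpha
    A_eq = np.zeros((2, m + s + 2))
    A_eq[0, :m] = X[:, o]                                # v.x_o = 1
    A_eq[1, m:m + s] = Y[:, o]                           # u.y_o = theta*
    rows = []
    for j in range(n):                                   # u.y_j - v.x_j <= 0
        rows.append(np.concatenate([-X[:, j], Y[:, j], [0.0, 0.0]]))
    for r in range(s):                                   # u_r y_ro <= beta
        row = np.zeros(m + s + 2); row[m + r] = Y[r, o]; row[-1] = -1.0
        rows.append(row)
    for r in range(s):                                   # alpha <= u_r y_ro
        row = np.zeros(m + s + 2); row[m + r] = -Y[r, o]; row[-2] = 1.0
        rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0, theta_star], bounds=(0, None),
                  method="highs")
    return res.x[:m], res.x[m:m + s], res.fun            # (v*, u*, gamma)

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
v, u, gamma = min_range_weights(X, Y, 0, theta_star=1.0)
out_eff = u * Y[:, 0] / (v @ X[:, 0])
print(np.round(out_eff, 4), round(gamma, 4))
```

For DMU 1 of the toy data, perfectly equal output efficiencies are infeasible, and the model settles at output efficiencies (2/3, 1/3) with range 1/3, i.e., as balanced as the constraints allow.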

3.2. Minimizing the Deviations of Output Efficiencies from the Mean

In pursuit of balance, an ideal point for $\mathrm{DMU}_o$ is intuitively one where all the output efficiencies equal the mean, i.e.,

$$\bar{\theta}_{oo} = \theta_{oo}^{*} / s. \tag{13}$$

The absolute deviation of the $r$th output efficiency from the mean can then be formulated as

$$\Theta_{r}^{o} = \left| \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} - \bar{\theta}_{oo} \right|, \quad r = 1, 2, \ldots, s. \tag{14}$$

To approach such an ideal point, one way is to make the difference between each output efficiency and the mean as small as possible. Specifically, the following model formalizes this secondary goal.

$$\begin{aligned}
\min\ & \sigma \\
\text{s.t.}\ & \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}}, \\
& \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& \Theta_{r}^{o} = \left| \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} - \frac{\theta_{oo}^{*}}{s} \right| \le \sigma, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s,
\end{aligned} \tag{15}$$

where $v_{io}$ $(i = 1, 2, \ldots, m)$, $u_{ro}$ $(r = 1, 2, \ldots, s)$ and $\sigma$ are decision variables.

By model (15), $\mathrm{DMU}_o$ searches for a weight scheme that minimizes the largest absolute deviation of its output efficiencies from the mean while its self-evaluated efficiency remains unchanged. In this way, model (15) attempts to provide balanced output efficiencies for $\mathrm{DMU}_o$ by promoting centralization on the mean, which more directly aims at equal output efficiencies. As a result, the output efficiencies may exhibit less variation than in the previous case. This model might be appropriate for settings where all the outputs count equally and should contribute as equally as possible to the self-evaluated efficiency. An example would be the performance evaluation of new product development projects, as in Swink et al. [17], with a large sample size and a fairly parsimonious set of dimensions.

In addition, for computation, model (15) is equivalent to the LP model below.

$$\begin{aligned}
\min\ & \sigma \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& u_{ro} y_{ro} - \sigma \le \frac{\theta_{oo}^{*}}{s}, \quad r = 1, 2, \ldots, s, \\
& u_{ro} y_{ro} + \sigma \ge \frac{\theta_{oo}^{*}}{s}, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{16}$$
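The LP form (16) can be sketched in the same style as before (toy data and SciPy assumed; `min_max_deviation` is our own name). The decision vector stacks $[v, u, \sigma]$:

```python
# Sketch of LP model (16): minimize the largest absolute deviation sigma of
# DMU o's output efficiencies from their mean theta*/s, keeping theta* fixed.
import numpy as np
from scipy.optimize import linprog

def min_max_deviation(X, Y, o, theta_star):
    m, n = X.shape
    s = Y.shape[0]
    mean = theta_star / s
    c = np.concatenate([np.zeros(m + s), [1.0]])         # minimize sigma
    A_eq = np.zeros((2, m + s + 1))
    A_eq[0, :m] = X[:, o]                                 # v.x_o = 1
    A_eq[1, m:m + s] = Y[:, o]                            # u.y_o = theta*
    rows, rhs = [], []
    for j in range(n):                                    # u.y_j - v.x_j <= 0
        rows.append(np.concatenate([-X[:, j], Y[:, j], [0.0]]))
        rhs.append(0.0)
    for r in range(s):                                    # |u_r y_ro - mean| <= sigma
        up = np.zeros(m + s + 1); up[m + r] = Y[r, o]; up[-1] = -1.0
        dn = np.zeros(m + s + 1); dn[m + r] = -Y[r, o]; dn[-1] = -1.0
        rows += [up, dn]; rhs += [mean, -mean]
    res = linprog(c, A_ub=np.array(rows), b_ub=rhs, A_eq=A_eq,
                  b_eq=[1.0, theta_star], bounds=(0, None), method="highs")
    return res.x[:m], res.x[m:m + s], res.fun             # (v*, u*, sigma)

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
v, u, sigma = min_max_deviation(X, Y, 0, theta_star=1.0)
print(np.round(u * Y[:, 0], 4), round(sigma, 4))
```

On the toy data, DMU 1's mean output efficiency is 0.5, and the best achievable maximal deviation is 1/6, again at output efficiencies (2/3, 1/3).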

3.3. Minimizing the Total Deviation of Output Efficiencies from the Mean

In light of promoting centralization on the mean, model (15) restricts the absolute deviations $\Theta_{r}^{o}$ $(r = 1, 2, \ldots, s)$ to no more than a level $\sigma$ and minimizes the value of $\sigma$. Alternatively, one might minimize the total absolute deviation $\Theta^{o} = \sum_{r=1}^{s} \Theta_{r}^{o}$, which yields another form of secondary goal for cross-efficiency evaluation by the following model.

$$\begin{aligned}
\min\ & \Theta^{o} = \sum_{r=1}^{s} \Theta_{r}^{o} \\
\text{s.t.}\ & \theta_{oo}^{*} = \frac{\sum_{r=1}^{s} u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}}, \\
& \theta_{oj} = \frac{\sum_{r=1}^{s} u_{ro} y_{rj}}{\sum_{i=1}^{m} v_{io} x_{ij}} \le 1, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& \Theta_{r}^{o} = \left| \frac{u_{ro} y_{ro}}{\sum_{i=1}^{m} v_{io} x_{io}} - \frac{\theta_{oo}^{*}}{s} \right|, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s,
\end{aligned} \tag{17}$$

where $v_{io}$ $(i = 1, 2, \ldots, m)$, $u_{ro}$ $(r = 1, 2, \ldots, s)$, $\Theta_{r}^{o}$ and $\Theta^{o}$ are decision variables.

Model (17) tries to intensify the balance in the output efficiencies of $\mathrm{DMU}_o$ by minimizing the total absolute deviation of the output efficiencies from the mean while the self-evaluated efficiency is kept unchanged. Model (17) therefore operates on an equalization principle for each output of $\mathrm{DMU}_o$. In practice, model (17) would be suitable for cases similar to those model (15) adapts to, e.g., the performance evaluation of new product development projects, but it aims more strongly at balance across the multiple outcomes of the projects. When a considerable number of dimensions is involved in the project analysis, using model (17) may significantly mitigate the partial emphasis on favorable dimensions as well as the neglect of unfavorable ones, hence putting all the dimensions into as much use as possible.

Note that model (17) is non-linear due to the absolute values in the equality constraints. However, when minimizing the objective function of model (17), the equalities in the third group of constraints can be relaxed to inequalities, because any optimal solution will ultimately satisfy them with equality. Based on this transformation, model (17) becomes the LP model (18), with $v_{io}$ $(i = 1, 2, \ldots, m)$, $u_{ro}$ $(r = 1, 2, \ldots, s)$, $\tau_{r}^{o}$ and $\tau^{o}$ as decision variables.

$$\begin{aligned}
\min\ & \tau^{o} = \sum_{r=1}^{s} \tau_{r}^{o} \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& u_{ro} y_{ro} - \tau_{r}^{o} \le \frac{\theta_{oo}^{*}}{s}, \quad r = 1, 2, \ldots, s, \\
& u_{ro} y_{ro} + \tau_{r}^{o} \ge \frac{\theta_{oo}^{*}}{s}, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{18}$$
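Model (18) differs from model (16) only in carrying one deviation variable per output and summing them. A sketch on the same toy data (SciPy assumed; `min_total_deviation` is our own name):

```python
# Sketch of LP model (18): minimize the total absolute deviation of DMU o's
# output efficiencies from the mean theta*/s, one variable tau_r per output.
import numpy as np
from scipy.optimize import linprog

def min_total_deviation(X, Y, o, theta_star):
    m, n = X.shape
    s = Y.shape[0]
    mean = theta_star / s
    c = np.concatenate([np.zeros(m + s), np.ones(s)])    # minimize sum tau_r
    A_eq = np.zeros((2, m + 2 * s))
    A_eq[0, :m] = X[:, o]                                 # v.x_o = 1
    A_eq[1, m:m + s] = Y[:, o]                            # u.y_o = theta*
    rows, rhs = [], []
    for j in range(n):                                    # u.y_j - v.x_j <= 0
        rows.append(np.concatenate([-X[:, j], Y[:, j], np.zeros(s)]))
        rhs.append(0.0)
    for r in range(s):                                    # |u_r y_ro - mean| <= tau_r
        up = np.zeros(m + 2 * s); up[m + r] = Y[r, o]; up[m + s + r] = -1.0
        dn = np.zeros(m + 2 * s); dn[m + r] = -Y[r, o]; dn[m + s + r] = -1.0
        rows += [up, dn]; rhs += [mean, -mean]
    res = linprog(c, A_ub=np.array(rows), b_ub=rhs, A_eq=A_eq,
                  b_eq=[1.0, theta_star], bounds=(0, None), method="highs")
    return res.x[m:m + s], res.fun                        # (u*, total deviation)

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
u, total = min_total_deviation(X, Y, 0, theta_star=1.0)
print(np.round(u * Y[:, 0], 4), round(total, 4))
```

On the toy data the two criteria coincide, as they do in Example 1 of Section 4; larger instances can separate them, as Example 2 shows.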

3.4. Relationships with Some Previous Secondary Goals

Notably, the model proposed by Wang and Chin [7], shown below, has an implicit goal of reducing the differences between output efficiencies.

$$\begin{aligned}
\max\ & \delta \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& u_{ro} y_{ro} \ge \delta, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& \delta \ge 0.
\end{aligned} \tag{19}$$

Note that model (19) imposes a lower bound on the output efficiencies and maximizes that bound. Symmetrically, we can formulate a model that imposes an upper bound on the output efficiencies and minimizes it, as below.

$$\begin{aligned}
\min\ & \mu \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n,\ j \ne o, \\
& u_{ro} y_{ro} \le \mu, \quad r = 1, 2, \ldots, s, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{20}$$

Comparing model (12) with models (19) and (20), model (12) might be deemed an enhancement of both, as it more directly aims at promoting balance by bounding the output efficiencies from both above and below.
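The contrast between the two bound-based models can be seen on the same toy data (SciPy assumed; `bound_model` is our own name). One function covers both, switching the bound variable's role:

```python
# Sketch of models (19) and (20): (19) maximizes a lower bound delta on DMU
# o's output efficiencies; (20) minimizes an upper bound mu. Model (12)
# effectively combines both bounds.
import numpy as np
from scipy.optimize import linprog

def bound_model(X, Y, o, theta_star, lower=True):
    """lower=True: model (19), max delta; lower=False: model (20), min mu."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([np.zeros(m + s), [-1.0 if lower else 1.0]])
    A_eq = np.zeros((2, m + s + 1))
    A_eq[0, :m] = X[:, o]                                 # v.x_o = 1
    A_eq[1, m:m + s] = Y[:, o]                            # u.y_o = theta*
    rows = []
    for j in range(n):                                    # u.y_j - v.x_j <= 0
        rows.append(np.concatenate([-X[:, j], Y[:, j], [0.0]]))
    for r in range(s):
        row = np.zeros(m + s + 1)
        if lower:                                         # delta - u_r y_ro <= 0
            row[m + r] = -Y[r, o]; row[-1] = 1.0
        else:                                             # u_r y_ro - mu <= 0
            row[m + r] = Y[r, o]; row[-1] = -1.0
        rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0, theta_star], bounds=(0, None),
                  method="highs")
    return res.x[-1]                                      # optimal bound value

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
delta = bound_model(X, Y, 0, 1.0, lower=True)   # best worst-output efficiency
mu = bound_model(X, Y, 0, 1.0, lower=False)     # smallest best-output efficiency
print(round(delta, 4), round(mu, 4))
```

For DMU 1 of the toy data, the worst output efficiency can be raised to 1/3 and the best lowered to 2/3; model (12) attains both simultaneously.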

Another secondary goal, proposed in Jahanshahloo et al. [22], employs the weight assignment technique introduced in Dimitrov and Sutton [16]. While both are in the spirit of promoting balance, their model seeks to minimize the total difference between all the output efficiencies, whereas model (8) minimizes the maximum difference between output efficiencies. Since all the models discussed in this study are input-oriented, we present an input-oriented version of their model below for the numerical comparisons in the next section.

$$\begin{aligned}
\min\ & \sum_{k, l} \varphi_{kl}^{o} \\
\text{s.t.}\ & \sum_{i=1}^{m} v_{io} x_{io} = 1, \\
& \sum_{r=1}^{s} u_{ro} y_{ro} = \theta_{oo}^{*}, \\
& \sum_{r=1}^{s} u_{ro} y_{rj} - \sum_{i=1}^{m} v_{io} x_{ij} \le 0, \quad j = 1, 2, \ldots, n, \\
& u_{ko} y_{ko} - u_{lo} y_{lo} \le \varphi_{kl}^{o}, \quad \forall k, l, \\
& -u_{ko} y_{ko} + u_{lo} y_{lo} \le \varphi_{kl}^{o}, \quad \forall k, l, \\
& v_{io} \ge 0, \quad i = 1, 2, \ldots, m, \\
& u_{ro} \ge 0, \quad r = 1, 2, \ldots, s.
\end{aligned} \tag{21}$$
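For completeness, model (21) can be sketched in the same style (toy data, SciPy assumed; `min_total_difference` is our own name, and we take one variable $\varphi_{kl}^{o}$ per ordered pair $k \ne l$, which is one reading of the $\forall k, l$ constraints):

```python
# Sketch of input-oriented model (21): minimize the total pairwise difference
# between output efficiencies, with one phi_kl per ordered pair k != l.
import numpy as np
from scipy.optimize import linprog

def min_total_difference(X, Y, o, theta_star):
    m, n = X.shape
    s = Y.shape[0]
    pairs = [(k, l) for k in range(s) for l in range(s) if k != l]
    p = len(pairs)
    c = np.concatenate([np.zeros(m + s), np.ones(p)])    # minimize sum phi_kl
    A_eq = np.zeros((2, m + s + p))
    A_eq[0, :m] = X[:, o]                                 # v.x_o = 1
    A_eq[1, m:m + s] = Y[:, o]                            # u.y_o = theta*
    rows = []
    for j in range(n):                                    # u.y_j - v.x_j <= 0
        rows.append(np.concatenate([-X[:, j], Y[:, j], np.zeros(p)]))
    for idx, (k, l) in enumerate(pairs):                  # |u_k y_ko - u_l y_lo| <= phi_kl
        for sign in (1.0, -1.0):
            row = np.zeros(m + s + p)
            row[m + k] = sign * Y[k, o]
            row[m + l] = -sign * Y[l, o]
            row[m + s + idx] = -1.0
            rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0, theta_star], bounds=(0, None),
                  method="highs")
    return res.x[m:m + s], res.fun                        # (u*, total difference)

X = np.array([[1.0, 1.0, 1.0]])
Y = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
u, total = min_total_difference(X, Y, 0, 1.0)
print(np.round(u * Y[:, 0], 4), round(total, 4))
```

With only two outputs, the total-difference criterion of (21) and the max-difference criterion of (8) select the same weights; with more outputs they can diverge, as the examples in Section 4 illustrate.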

4. Illustrative Examples

In practice, performance evaluation is often conducted based on multiple output dimensions, each of which is important, and the impact of any dimension on the evaluation results cannot be arbitrarily ignored. For example, in addition to talent cultivation and faculty strength, the evaluation of university departments may also examine their scientific research output; in addition to output quantity, the evaluation of manufacturing systems also considers quality factors. When multiple dimensions need to be examined, the approach proposed in this paper aims to let all the dimensions play a role in the evaluation and, further, to balance their importance. In this section we therefore apply the proposed secondary goals to two illustrative examples for cross-efficiency evaluation and show the differences between the related models. In particular, the evaluation features are illustrated through the results of using different secondary goals.

4.1. Seven Academic Departments in a University

The data of seven academic departments in a university are derived from Wong and Beasley [20] and documented in Table 1. Each department is observed as a DMU with three inputs (Input 1: number of academic staff, Input 2: academic staff salaries (£’000), Input 3: support staff salaries (£’000)) and three outputs (Output 1: number of undergraduate students, Output 2: number of postgraduate students, Output 3: number of research papers).

Note that Output 3 was discarded by Wang and Chin [7], since it has no effect on the self-evaluated efficiencies of the seven departments as reported in the last column of Table 1. This means that in self-evaluation, Output 3 may or may not be assigned a zero weight, depending on which of the alternative optimal solutions (if any) to model (2) the solver finds first. However, as pointed out in Wong and Beasley [20], "general expectations about what constitutes a university department lead us to believe that research output should be an important component of total departmental output". Hence, assigning a zero weight to Output 3 is generally unsatisfactory. Moreover, having no impact on the self-evaluated efficiencies does not mean that Output 3 has no influence on the cross-efficiencies of the seven departments. It is therefore preferable to retain Output 3 for cross-efficiency evaluation and to promote balance across the three departmental outputs.

Based on the above discussion, Output 3 remains in our study. As a result of using model (2), six out of the seven departments are self-evaluated as efficient. To further discriminate between them, cross-efficiency evaluation with various secondary goals is implemented. Table 2 presents the three output efficiencies of the seven departments resulting from models (8), (15), (17), (19), (20) and (21). It is found that the six models provide balanced output efficiencies for DMUs 1, 5, 6 and 7. These four DMUs are more likely to be all-round performers because they can be self-evaluated as efficient with balanced output efficiencies. The six models also provide non-zero output efficiencies for DMU 3, although they do not agree on the result. DMU 2 obtains non-zero output efficiencies from models (8), (19) and (21), whereas it receives a zero output efficiency on Output 2 from the other three models. Finally, DMU 4 obtains the least balanced output efficiencies, since only Output 1 is rated with non-zero efficiency by the six models. From Table 1 it can be seen that DMU 4 is the worst performer on Outputs 2 and 3, and second to last even on Output 1; it is therefore the only inefficient department. As inefficient DMUs have a unique optimal solution to the CCR model, i.e., model (2), it is not surprising that the secondary-goal models render the same three output efficiencies for it.

Tables 3-8 contain the cross-efficiency matrices resulting from using the six models, respectively. The cross-efficiency scores calculated by (5) and the ranking are reported in the last two columns of each table.

We first focus on the comparison of the results from using models (8) and (21). The two models lead to both different cross-efficiency scores and inconsistent rankings for the seven departments. As seen from Table 3 and Table 4, DMU 5 ranks 3rd and DMU 1 ranks 4th by model (8), whereas by model (21)

Table 1. Data of the seven university departments.

Table 2. Output efficiencies of the seven departments by using different models.

Table 3. Cross-efficiencies of the seven departments by using model (8).

Table 4. Cross-efficiencies of the seven departments by using model (21).

Table 5. Cross-efficiencies of the seven departments by using model (19).

Table 6. Cross-efficiencies of the seven departments by using model (20).

Table 7. Cross-efficiencies of the seven departments by using model (15).

Table 8. Cross-efficiencies of the seven departments by using model (17).

these two DMUs exchange their positions. The difference is traceable to the rating DMUs 3, 5 and 6, whose weight schemes determined by model (8) differ from those determined by model (21), leading to distinct cross-efficiencies for the other DMUs. The cross-efficiencies of the six DMUs rated by the weight scheme of DMU 3 derived from model (21) are all no greater than those derived from model (8). Conversely, most of the DMUs are rated better by the weight schemes of DMUs 5 and 6 derived from model (21) than by those from model (8). For DMU 1, the weight schemes of DMU 3 derived from both models (8) and (21) rate it as efficient, while the weight schemes of DMUs 5 and 6 derived from model (21) rate it better than those derived from model (8) do. As for DMU 5, the weight schemes of DMUs 3 and 6 derived from model (8) rate it better than those derived from model (21) do. As a result, DMUs 1 and 5 obtain different ranking positions under models (8) and (21).

Next, we turn to the results of using models (19) and (20). As shown in Table 5 and Table 6, the two models lead to different cross-efficiency scores for the seven departments; however, they yield a consistent cross-efficiency ranking. DMU 6 is rated as the most efficient, followed by DMUs 2, 1, 5, 3 and 7, while DMU 4 is regarded as the least efficient. The cross-efficiency scores of the five top-ranked departments resulting from model (19) are greater than those resulting from model (20). This occurs mainly because model (19) maximizes the output efficiency of the worst-performing output, which directly reduces the number of zero weights, whereas model (20) minimizes the output efficiency of the best-performing output, which does less to reduce zero weights. Clearly, with fewer zero weights more outputs can be put into use in peer-evaluation. For the bottom-ranked DMUs 4 and 7, DMUs 5 and 3 are, respectively, the only raters whose weight schemes derived from model (19) rate them better than those derived from model (20). Thus, the cross-efficiency scores of DMUs 4 and 7 resulting from model (19) are lower than those resulting from model (20).

The results of using models (15) and (17) are reported in Table 7 and Table 8. For this example, the two models yield identical cross-efficiency scores. DMUs 6 and 4 are identified as the most and the least efficient, respectively. Since both models attempt to promote centralization on the mean of the output efficiencies, it is not surprising to obtain the same or similar results in some cases. The identical result implies that, for this example, models (15) and (17) are equally effective in promoting balance in output efficiencies.

4.2. Twelve Flexible Manufacturing Systems

Consider another example regarding the evaluation of twelve flexible manufacturing systems (FMSs), each viewed as a DMU with two inputs (Input 1: capital and operating costs ($00 000), Input 2: floor space requirements (000 ft2)) and four outputs (Output 1: improvements in qualitative factors (%), Output 2: work-in-process (10), Output 3: percentage of tardy jobs, Output 4: yield (00)). Table 9 presents the data extracted from Sheng and Sueyoshi [23]. The last column contains the self-evaluated efficiencies of the twelve FMSs.

The four output efficiencies of the twelve FMSs obtained from the six models are documented in Table 10. The six models provide balanced output efficiencies for DMUs 2, 5 and 7, all of which are self-evaluated as efficient and therefore can be deemed as all-round performers among the twelve DMUs. The six models also provide non-zero output efficiencies for DMUs 1 and 4, indicating that these two DMUs can be self-evaluated as efficient without ignoring their performance on any output. DMU 6 receives non-zero output efficiencies from all the models except model (20), under which Output 1 is ignored by DMU 6. DMU 9 obtains non-zero output efficiencies from models (15) and (19), while under the other four models Output 3 is ignored by DMU 9. For the other five inefficient DMUs, each of them is provided with the same four output efficiencies by the six models and all of them receive some zero output efficiencies from each of the six models. DMU 10, which is self-evaluated as inefficient, obtains the least balanced output efficiencies because only Output 2 is given a non-zero efficiency by the six models.

Table 9. Data of the twelve flexible manufacturing systems.

Table 10. Output efficiencies of the twelve FMSs by using different models.

By means of models (8) and (21), the cross-efficiencies of the twelve FMSs are reported in Table 11 and Table 12, respectively. The two models lead to different rankings. Four of the twelve FMSs get new ranking positions when model (8) is used instead of model (21). Specifically, by model (8) DMUs 3 and 2 come in the 5th and 6th positions, respectively, while by model (21) the contrary is the case. Also, with model (8) DMUs 10 and 12 occupy the 10th and 11th positions, respectively, whereas the converse is true with model (21). These differences are caused by DMUs 4 and 6, whose weight schemes determined by model (8) differ from those determined by model (21). Most of the DMUs are rated better with the weight schemes of DMUs 4 and 6 obtained from model (21) than with those obtained from model (8).

Table 11. Cross-efficiencies of the twelve FMSs by using model (8).

Table 12. Cross-efficiencies of the twelve FMSs by using model (21).

The results of using models (19) and (20) are reported in Table 13 and Table 14. With both models, DMU 5 is rated as the most efficient, followed by DMUs 4, 1 and 7, while DMU 9 is rated as the least efficient. The difference lies in the 10th and 11th positions, where DMU 12 outperforms DMU 10 by model (19) but is inferior to DMU 10 by model (20). The two cross-efficiency matrices show that for each of DMUs 1, 4, 6 and 9, the weight schemes determined by models (19) and (20) lead to distinct cross-efficiencies for the other DMUs. Specifically, the weight schemes of DMUs 1 and 6 determined by model (19) provide higher cross-efficiencies than those determined by model (20), while for DMUs 4 and 9 the opposite holds. Thus, the cross-efficiency scores resulting from models (19) and (20) differ. It is seen that the cross-efficiency scores of all the DMUs except DMUs 6 and 9 obtained from model (20) are greater than those obtained from model (19).

Table 13. Cross-efficiencies of the twelve FMSs by using model (19).

Table 14. Cross-efficiencies of the twelve FMSs by using model (20).

The results of using models (15) and (17) are documented in Table 15 and Table 16. The two models also yield different cross-efficiency scores, which changes the relative ranking of DMUs 10 and 12. This is attributable to the weight schemes of DMUs 1, 4, 6 and 9. Comparing the corresponding columns of the two tables for each of these four rating DMUs shows that the weight schemes of DMUs 4 and 9 determined by model (17) give most of the DMUs greater cross-efficiencies than those determined by model (15), whereas for DMUs 1 and 6 the weight schemes derived from model (15) rate most of the DMUs as more cross-efficient than those derived from model (17). Unlike in Example 1, then, models (15) and (17) differ in this example in how effectively they promote balance in output efficiencies for some DMUs.

Table 15. Cross-efficiencies of the twelve FMSs by using model (15).

Table 16. Cross-efficiencies of the twelve FMSs by using model (17).

5. Conclusions

The original DEA allows each DMU to evaluate its efficiency relative to the other DMUs with its most favorable weight scheme. Owing to this flexibility in weight selection, the resulting weight scheme is often unrealistic. To reduce the flexibility and, moreover, to promote balance in output efficiencies, this study introduces a variety of secondary goals for cross-efficiency evaluation. Under a particular evaluation criterion, each of the proposed models seeks a weight scheme that makes the output efficiencies of the DMU being evaluated contribute as equally as possible to its self-evaluated efficiency. The proposed approach is then applied to two empirical datasets to validate the effectiveness of the introduced secondary goals. For managerial implications, the proposed approach may be suitable for DEA-based multi-criteria evaluation problems, such as inventory classification and the selection of new product development projects, in which multiple outputs are considered important. It is especially relevant in settings with a large sample size and many dimensions, where balance is strongly encouraged so that all dimensions are put to use as much as possible.

As for future directions, the balance-promoting secondary goals could be extended in several ways, for example by incorporating decision makers' preference structures, by considering the competitive relationships among DMUs [24] , or by adapting them to two-stage [25] and network [26] systems. Another extension concerns aggregation: the proposed approach uses simple averaging to aggregate cross-efficiencies, and other aggregation schemes could be adopted instead [27] [28] .
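The simple-averaging aggregation mentioned above can be sketched as follows. The matrix entries below are purely illustrative values, not data from the paper's examples; in practice each entry would come from solving the relevant DEA model.

```python
# Cross-efficiency aggregation by simple averaging.
# E[d][j] = efficiency of DMU j computed with the optimal weight scheme of
# rating DMU d; the diagonal E[j][j] is the self-evaluated efficiency.
# The 3x3 matrix below is illustrative only.
E = [
    [1.00, 0.72, 0.85],
    [0.64, 1.00, 0.78],
    [0.91, 0.69, 1.00],
]

n = len(E)

# Cross-efficiency score of DMU j: the average of column j over all rating DMUs.
scores = [sum(E[d][j] for d in range(n)) / n for j in range(n)]

# Rank the DMUs in descending order of score (rank 1 = best).
order = sorted(range(n), key=lambda j: -scores[j])
rank = {j: r + 1 for r, j in enumerate(order)}

for j in range(n):
    print(f"DMU {j + 1}: score = {scores[j]:.4f}, rank = {rank[j]}")
```

Because each DMU's score averages evaluations under every DMU's weights, a unit that looks strong only under its own self-chosen weights is penalized relative to one that performs well across all weight schemes.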

Acknowledgements

This study was supported by the National Natural Science Foundation of China (71702187).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Chen, J. (2019) Promoting Balance in Output Efficiencies for Cross-Efficiency Evaluation in Data Envelopment Analysis. Journal of Applied Mathematics and Physics, 7, 664-685. doi: 10.4236/jamp.2019.73047.

References

[1] Charnes, A., Cooper, W.W. and Rhodes, E. (1978) Measuring the Efficiency of Decision Making Units. European Journal of Operational Research, 2, 429-444.
https://doi.org/10.1016/0377-2217(78)90138-8
[2] Doyle, J. and Green, R. (1994) Efficiency and Cross-Efficiency in DEA: Derivations, Meanings and Uses. Journal of the Operational Research Society, 45, 567-578.
https://doi.org/10.1057/jors.1994.84
[3] Sexton, T.R., Silkman, R.H. and Hogan, A.J. (1986) Data Envelopment Analysis: Critique and Extensions. In: Silkman, R.H., Ed., Measuring Efficiency: An Assessment of Data Envelopment Analysis, Jossey-Bass Inc., San Francisco, 8-10.
https://doi.org/10.1002/ev.1441
[4] Doyle, J.R. and Green, R.H. (1995) Cross-Evaluation in DEA: Improving Discrimination among DMUs. INFOR Information Systems and Operational Research, 33, 205-222.
https://doi.org/10.1080/03155986.1995.11732281
[5] Anderson, T.R., Hollingsworth, K. and Inman, L. (2002) The Fixed Weighting Nature of A Cross-Evaluation Model. Journal of Productivity Analysis, 17, 249-255.
https://doi.org/10.1023/A:1015012121760
[6] Liang, L., Wu, J., Cook, W.D. and Zhu, J. (2008) Alternative Secondary Goals in DEA Cross-Efficiency Evaluation. International Journal of Production Economics, 113, 1025-1030.
https://doi.org/10.1016/j.ijpe.2007.12.006
[7] Wang, Y.-M. and Chin, K.S. (2010) A Neutral DEA Model for Cross-Efficiency Evaluation and Its Extension. Expert Systems with Applications, 37, 3666-3675.
https://doi.org/10.1016/j.eswa.2009.10.024
[8] Wu, J., Chu, J., Sun, J., Zhu, Q. and Liang, L. (2016) Extended Secondary Goal Models for Weights Selection in DEA Cross-Efficiency Evaluation. Computers & Industrial Engineering, 93, 143-151.
https://doi.org/10.1016/j.cie.2015.12.019
[9] Oral, M., Kettani, O. and Lang, P. (1991) A Methodology for Collective Evaluation and Selection of Industrial R&D Projects. Management Science, 37, 871-885.
https://doi.org/10.1287/mnsc.37.7.871
[10] Talluri, S. and Sarkis, J. (1997) Extensions in Efficiency Measurement of Alternate Machine Component Grouping Solutions via Data Envelopment Analysis. IEEE Transactions on Engineering Management, 44, 299-304.
https://doi.org/10.1109/17.618171
[11] Liang, L., Wu, J., Cook, W.D. and Zhu, J. (2008) The DEA Game Cross-Efficiency Model and Its Nash Equilibrium. Operations Research, 56, 1278-1288.
https://doi.org/10.1287/opre.1070.0487
[12] Wu, J., Liang, L. and Chen, Y. (2009) DEA Game Cross-Efficiency Approach to Olympic Rankings. Omega, 37, 909-918.
https://doi.org/10.1016/j.omega.2008.07.001
[13] Flokou, A., Kontodimopoulos, N. and Niakas, D. (2011) Employing Post-DEA Cross-Evaluation and Cluster Analysis in A Sample of Greek NHS Hospitals. Journal of Medical Systems, 35, 1001-1014.
https://doi.org/10.1007/s10916-010-9533-9
[14] Liu, X., Chu, J., Yin, P. and Sun, J. (2017) DEA Cross-Efficiency Evaluation Considering Undesirable Output and Ranking Priority: A Case Study of Eco-Efficiency Analysis of Coal-Fired Power Plants. Journal of Cleaner Production, 142, 877-885.
https://doi.org/10.1016/j.jclepro.2016.04.069
[15] Liu, W., Wang, Y.-M. and Lv, S. (2017) An Aggressive Game Cross-Efficiency Evaluation in Data Envelopment Analysis. Annals of Operations Research, 259, 241-258.
https://doi.org/10.1007/s10479-017-2524-1
[16] Dimitrov, S. and Sutton, W. (2010) Promoting Symmetric Weight Selection in Data Envelopment Analysis: A Penalty Function Approach. European Journal of Operational Research, 200, 281-288.
https://doi.org/10.1016/j.ejor.2008.11.043
[17] Swink, M., Talluri, S. and Pandejpong, T. (2006) Faster, Better, Cheaper: A Study of NPD Project Efficiency and Performance Tradeoffs. Journal of Operations Management, 24, 542-562.
https://doi.org/10.1016/j.jom.2005.09.004
[18] Charnes, A. and Cooper, W.W. (1962) Programming with Linear Fractional Functionals. Naval Research Logistics Quarterly, 9, 181-185.
https://doi.org/10.1002/nav.3800090303
[19] Sarrico, C.S. and Dyson, R.G. (2004) Restricting Virtual Weights in Data Envelopment Analysis. European Journal of Operational Research, 159, 17-34.
https://doi.org/10.1016/S0377-2217(03)00402-8
[20] Wong, Y.H.B. and Beasley, J.E. (1990) Restricting Weight Flexibility in Data Envelopment Analysis. Journal of the Operational Research Society, 41, 829-835.
https://doi.org/10.1057/jors.1990.120
[21] Ramanathan, R. (2006) ABC Inventory Classification with Multiple-Criteria Using Weighted Linear Optimization. Computers & Operations Research, 33, 695-700.
https://doi.org/10.1016/j.cor.2004.07.014
[22] Jahanshahloo, G.R., Lotfi, F.H., Jafari, Y. and Maddahi, R. (2011) Selecting Symmetric Weights as A Secondary Goal in DEA Cross-Efficiency Evaluation. Applied Mathematical Modelling, 35, 544-549.
https://doi.org/10.1016/j.apm.2010.07.020
[23] Sheng, J. and Sueyoshi, T. (1995) A Unified Framework for the Selection of a Flexible Manufacturing System. European Journal of Operational Research, 85, 297-315.
https://doi.org/10.1016/0377-2217(94)00041-A
[24] Yang, Z. and Wei, X. (2019) The Measurement and Influences of China’s Urban Total Factor Energy Efficiency under Environmental Pollution: Based on the Game Cross-Efficiency DEA. Journal of Cleaner Production, 209, 439-450.
https://doi.org/10.1016/j.jclepro.2018.10.271
[25] Orkcü, H.H., Ozsoy, V.S., Orkcü, M. and Bal, H. (2019) A Neutral Cross Efficiency Approach for Basic Two Stage Production Systems. Expert Systems with Applications, 125, 333-344.
https://doi.org/10.1016/j.eswa.2019.01.067
[26] Kao, C. and Liu, S.-T. (2019) Cross Efficiency Measurement and Decomposition in Two Basic Network Systems. Omega, 83, 70-79.
https://doi.org/10.1016/j.omega.2018.02.004
[27] Oukil, A. (2018) Embedding OWA under Preference Ranking for DEA Cross-Efficiency Aggregation: Issues and Procedures. International Journal of Intelligent Systems, 34, 947-965.
https://doi.org/10.1002/int.22082
[28] Carrillo, M. and Jorge, J.M. (2018) Integrated Approach for Computing Aggregation Weights in Cross-Efficiency Evaluation. Operations Research Perspectives, 5, 256-264.
https://doi.org/10.1016/j.orp.2018.08.005


Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.