Research on the Establishment of Evaluation Index System for Military Software Suppliers

Abstract

Establishing a scientifically justifiable system of evaluation indices is crucial for selecting and evaluating military software suppliers. Based on an initial screening of the evaluation indices, the grey-rough set method was used to reduce and select the evaluation indices. As a result, a two-stage evaluation index system covering both qualification examination and supplier evaluation was ultimately established, and the meanings and applications of each evaluation index were explained. The results show that the grey-rough set method can effectively reduce and screen the evaluation indices for military software suppliers.


1. Introduction

In recent years, with the continuous progress of national defense and military modernization, military informatization has advanced steadily (Foreman, Favaró, Saleh, & Johnson, 2015). The number of military software development projects has increased (Cho, Hwang, Shin, Kim, & In, 2021), and a growing number of software vendors participate in the development and maintenance of military software systems (Merola, 2006). Selecting the most suitable vendor from this multitude of suppliers has therefore become a focus of attention for military units.

2. Literature Review

Regarding the issue of supplier evaluation and selection, numerous domestic and foreign scholars have conducted extensive research. In 1966, Dickson (1966) published a research article titled “An Analysis of Vendor Selection Systems and Decisions”, which laid a pioneering foundation for research on supplier evaluation and selection by constructing 23 evaluation indicators encompassing past performance, technical capability, after-sales service, etc., and ranking their importance. In the five decades of research that followed, studies on supplier evaluation and selection have covered various fields, including general manufacturing, traditional construction, telecommunications services, modern logistics, and others. Song, Wang, Guo, Lu and Liu (2021) employed a combination of structural equation modeling (SEM) and the intuitionistic fuzzy analytic hierarchy process (IFAHP) to conduct a comprehensive evaluation of prefabricated modular building suppliers. Uygun, Kacamak and Kahraman (2015) combined the Decision-Making Trial and Evaluation Laboratory (DEMATEL) and the Fuzzy Analytic Network Process (Fuzzy ANP) to study the selection and evaluation of outsourcing suppliers for telecommunications companies. Ghorbani and Ramezanian (2020) designed a scenario-based two-stage stochastic programming model for the evaluation and selection of carriers in humanitarian relief operations.

Regarding the selection of software vendors, some scholars (Li et al., 2021) focused on evaluating and selecting management software vendors, providing a scientific basis for procurement decisions in universities. Khan, Niazi and Ahmad (2011) identified, through a systematic literature review (SLR), the factors that negatively impact the selection of offshore software development outsourcing vendors. Rashid, Khan, Khan and Ilyas (2021) designed and developed a multi-level green-agile maturity model (GAMM) to assess the maturity level of global software vendors in agile software development. Some scholars (Huang et al., 2018) constructed an evaluation indicator system for BIM software vendors based on the characteristics of BIM software, providing a reference for scientifically selecting BIM software vendors. Other scholars (Wang et al., 2022) constructed an evaluation indicator system for third-party testing vendors of military software and validated its objectivity and usability through examples. Currently, there is relatively little research on the selection of military software vendors. Because military software projects differ significantly from general software service projects, with high confidentiality requirements, long service cycles, and complex technical performance, issues arise when traditional software supplier evaluation and selection methods are applied directly. Therefore, it is necessary to establish a targeted and practical evaluation indicator system for selecting military software vendors.

3. The Structure of the Paper

This article is structured as follows. Chapter 1 presents the research background and purpose. Chapter 2 reviews important literature in the field of supplier selection. Chapter 3 briefly describes the composition and structure of the article. Chapter 4 introduces the fundamental principles and main process of constructing the evaluation index system for military software suppliers. Chapter 5 covers the initial selection of evaluation indicators for military software suppliers. Chapter 6 first introduces the grey-rough set-based indicator screening method, then screens the evaluation index system for military software suppliers, and explains the meaning and application of each indicator in the system. Chapter 7 concludes the article and highlights the generality of the results.

4. The Principles and Process of Constructing the Index System for Military Software Suppliers

4.1. Principles for Constructing the Evaluation Index System

The construction of the evaluation index system should fully consider the uniqueness of military software. Based on the analysis of evaluation factors mentioned earlier, the following principles are formulated:

1) Combining Practicality with Operability

The evaluation index system for military software suppliers should align with the practical context of evaluating these suppliers. The quantifiable parameters of the constructed indicators should be easy to collect and calculate, enabling their practical application in the selection process of military software suppliers. The evaluation results should comprehensively and objectively reflect the suppliers’ overall capabilities, assisting military units in identifying the best suppliers during the software outsourcing process.

2) Combining Scientific Rigor with Purposefulness

Scientific rigor and purposefulness should be considered when constructing the evaluation index system. The constructed index system must adhere to scientific principles, ensuring its rationality. Simultaneously, it must also align with the purpose of supplier evaluation, facilitating subsequent supplier selection. The selection of indicators should be scientifically reasonable, accurately reflecting the characteristics of military software suppliers. The construction of indicators should exhibit distinct hierarchies and differentiation.

3) Combining Universality with Specialty

When constructing evaluation indicators for military software suppliers, it is necessary to compare different types of military software and establish evaluation indicators that encompass a wide range and have common characteristics. This ensures that the evaluation indicators apply to various types of military software outsourcing projects. Additionally, it is important to set evaluation indicators distinct from those used for general software suppliers, taking into account the unique aspects of military software, thereby ensuring both universality and representativeness.

4) Combining Qualitative and Quantitative Approaches

The evaluation index system for military software suppliers should reflect various aspects of the suppliers’ capabilities. It should include both qualitative and quantitative indicators. The selection of quantitative indicators should ensure data accessibility and operational simplicity. For factors that cannot be quantitatively described, qualitative indicators should be used to provide a comprehensive reflection of the suppliers’ overall capabilities. It is important to define the relevant meanings of the indicators and quantify them through expert ratings or other methods.

4.2. Process of Constructing an Evaluation Index System

Through comprehensive analysis of the evaluation factors for military software suppliers, combined with current research on software supplier evaluation selection at home and abroad, as well as actual investigations of military units, the initial selection of evaluation indicators is conducted. The grey-rough set method is employed to optimize and reduce the indicators, ultimately determining the evaluation indicator system for military software suppliers. The construction process of the evaluation indicator system for military software suppliers is illustrated in Figure 1.

1) Analysis of Evaluation Factors and Initial Selection of Indicators

By collecting, summarizing, and integrating research on software supplier selection both domestically and internationally, and combining it with the research findings from relevant military units, a preliminary evaluation indicator system for military software suppliers is synthesized and organized.

2) Optimization of Indicators Based on Grey-Rough Set Theory

Research and analysis are conducted on the selected military software suppliers. The “Survey Questionnaire for the Construction of Evaluation Indicator System for Military Software Suppliers” is designed, and experts, project managers, procurement personnel, and military software supplier representatives involved in software engineering project management are invited to rate the indicators. By combining grey relational analysis and rough set theory, the evaluation indicators are reduced, and the indicators that are most representative for evaluating military software suppliers are retained.

3) Determination and Analysis of Evaluation Indicators

The selected indicators are explained and analyzed to establish quantitative calculation or qualitative judgment methods for each indicator. Ultimately, a comprehensive evaluation indicator system for military software suppliers is formed, enabling it to fully reflect the suppliers’ comprehensive capabilities and provide practical guidance.

Figure 1. Construction process of evaluation indicator system for military software suppliers.

5. The Initial Selection of Evaluation Indicators

Based on the analysis of evaluation factors for military software suppliers, combined with relevant domestic and international research on software supplier evaluation and selection as well as surveys of military units, the evaluation indicators for selecting military software suppliers are preliminarily selected. Following the actual steps of selecting military software suppliers, the indicators are divided into two stages: supplier qualification review and supplier evaluation and selection. The framework for the initial selection of supplier indicators is presented in Table 1.

Table 1. Indicator system for initial supplier selection.

6. Evaluating Indicator Selection

The previous text discussed the initial selection of evaluation indicators for selecting military software suppliers. However, during the practical application, the inherent relationships and logical redundancies between these indicators, as well as their appropriateness for evaluation, may have been overlooked. These factors could affect the accuracy of the evaluation and make it difficult to directly apply the selected indicators. Therefore, it is necessary to further screen the initial set of indicators.

Common methods for simplifying indicator systems include Analytic Hierarchy Process (AHP), Principal Component Analysis (PCA), Factor Analysis (FA), and Linear Discriminant Analysis (LDA). However, traditional evaluation methods like AHP tend to be subjective, PCA can only handle linearly correlated problems, and FA and LDA require a high sample size with a large amount of data. Hence, this study adopts a combination of Grey Relation Analysis (GRA) and Rough Set Theory (RST) to simplify the evaluation indicator system.

6.1. Indicator Selection Method based on Grey-Rough Set

Grey Relation Analysis (GRA) is a method that uses grey system theory to characterize the influence of multiple factors on the target factor. It has the advantages of simplicity, wide applicability, robustness, and ease of interpretation. Rough Set Theory (RST), on the other hand, is a data model used to analyze and process incomplete and uncertain data. It possesses strong feature extraction capabilities, good interpretability, and simple and reliable algorithms.

By combining these two methods, it is possible to calculate the correlation among the various indicators while identifying redundant ones. This allows for the selection and optimization of the indicator system. The specific calculation process is as follows:

STEP 1: Establish the rating matrix.

Through questionnaire surveys, experienced experts in the field of supplier selection are invited to rate the initial indicators based on four dimensions: representativeness, necessity, scientificity, and systematicity. Each dimension is scored out of 25, with a total of 100 points. The total score for each indicator across the four dimensions represents the expert’s rating. The evaluation indicator system for military software suppliers is treated as a multi-attribute decision information system.

S = {U, A, V, f}

where U is the set of experts, A = C ∪ D is the set of attributes, C is the subset of conditional attributes (expert attributes), D is the subset of decision attributes (indicator attributes), V is the set of attribute values, and f: U × A → V is an information function that gives the attribute value of each object (i.e., each expert's score for each indicator).

Due to the differences in scoring among various indicators, it is necessary to standardize the raw data of expert ratings. The range method, known for its simplicity, applicability, and preservation of the original data distribution, is widely used for data standardization in various scenarios. Therefore, this article employs the range method for data standardization. As the experts’ ratings reflect the importance of each indicator and all indicators are considered beneficial, standardization should be conducted using Equation (1).

$$w_i(j) = \frac{v_i(j) - \min_i v_i(j)}{\max_i v_i(j) - \min_i v_i(j)} \quad (1)$$

where v_i(j) represents the combined score given by the expert for the indicator, and the normalized scoring matrix is W = [w_ij], with i = 1, 2, …, m; j = 1, 2, …, n.
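
To make Equation (1) concrete, the following Python sketch applies range standardization to a small matrix of raw expert scores. The orientation (rows as experts, columns as indicators) and the example data are assumptions for illustration only.

```python
import numpy as np

def range_standardize(v: np.ndarray) -> np.ndarray:
    """Range (min-max) standardization as in Equation (1).

    Each column is rescaled to [0, 1] using that column's minimum and maximum.
    Here rows are assumed to be experts and columns indicators, so every
    indicator is normalized over the experts' scores.
    """
    v = np.asarray(v, dtype=float)
    v_min, v_max = v.min(axis=0), v.max(axis=0)
    span = np.where(v_max > v_min, v_max - v_min, 1.0)  # avoid division by zero
    return (v - v_min) / span

# Hypothetical raw scores: 3 experts (rows) x 4 indicators (columns), out of 100.
raw = np.array([[82, 74, 90, 66],
                [78, 80, 85, 70],
                [88, 69, 92, 75]])
print(range_standardize(raw))
```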

STEP 2: Establishing the correlation matrix.

Let W_0 = {w_01, w_02, …, w_0n} be the reference data column. For any i ≠ j, i, j = 1, 2, …, m, the correlation coefficient of the comparative data column W_i with respect to the reference data column W_0 on the k-th term can be obtained using Equation (2).

$$\theta_i(k) = \frac{\min_i \min_k \left| w_0(k) - w_i(k) \right| + \rho \max_i \max_k \left| w_0(k) - w_i(k) \right|}{\left| w_0(k) - w_i(k) \right| + \rho \max_i \max_k \left| w_0(k) - w_i(k) \right|} \quad (2)$$

Here, ρ ∈ [0, 1] represents the discrimination (resolution) coefficient. The correlation coefficients can be used to construct the correlation matrix Z, as shown in Equation (3).

$$Z = \begin{pmatrix} \theta_{11} & \theta_{12} & \cdots & \theta_{1n} \\ \theta_{21} & \theta_{22} & \cdots & \theta_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \theta_{n1} & \theta_{n2} & \cdots & \theta_{nn} \end{pmatrix} \quad (3)$$
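
The grey relational computation of Equations (2) and (3) can be sketched as follows. This is a minimal illustration: each row of the standardized matrix is treated as one data series, the point-wise coefficients of Equation (2) are computed with every series in turn as the reference, and they are averaged over k to obtain a single value per pair of series. The averaging step and all names and data are assumptions for illustration; the paper does not prescribe this exact aggregation.

```python
import numpy as np

def grey_relational_matrix(w: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Pairwise grey relational values between the rows of w (Equations (2)-(3)).

    For each pair (i, j), row j is taken as the reference series w0 and row i as
    the comparative series; the point-wise coefficients of Equation (2) are
    averaged over k to fill Z[i, j]. rho is the discrimination coefficient.
    Assumes the rows are not all identical (so the maximum difference is > 0).
    """
    w = np.asarray(w, dtype=float)
    m = w.shape[0]
    z = np.ones((m, m))
    for j in range(m):                       # reference series w0 = w[j]
        diff = np.abs(w - w[j])              # |w0(k) - wi(k)| for every i, k
        d_min, d_max = diff.min(), diff.max()
        coeff = (d_min + rho * d_max) / (diff + rho * d_max)
        z[:, j] = coeff.mean(axis=1)         # relational degree of series i w.r.t. j
    return z

# Hypothetical standardized scores: 4 indicators (rows) x 3 experts (columns).
W = np.array([[0.2, 0.8, 1.0],
              [0.0, 0.6, 0.9],
              [1.0, 0.1, 0.3],
              [0.4, 0.5, 0.7]])
print(np.round(grey_relational_matrix(W), 4))
```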

STEP 3: Determining the optimal threshold using the F-statistic.

Since the classification of indicators can be influenced by the threshold λ, this study introduces the F-statistic method to determine the threshold, so that the classification results are more scientific and objective.

Assume that B = {B_1, B_2, …, B_m} is the set of evaluation objects to be classified, and that for any B_i, b_ik represents the score of the i-th evaluation object on the k-th indicator (where i = 1, 2, …, m; k = 1, 2, …, n).

Assuming that the number of categories under the threshold λ is r, the average score of the objects in the j-th category on the k-th indicator is calculated as shown in Equation (4), where o_j denotes the number of objects included in the j-th category.

$$\bar{b}_{jk} = \frac{1}{o_j} \sum_{i=1}^{o_j} b_{ik}, \quad k = 1, 2, \ldots, n \quad (4)$$

The average of the scores of all evaluation objects on the k-th indicator is calculated as shown in Equation (5).

$$\bar{b}_k = \frac{1}{m} \sum_{i=1}^{m} b_{ik}, \quad k = 1, 2, \ldots, n \quad (5)$$

The F-statistic can then be calculated:

$$F = \frac{\sum_{j=1}^{r} o_j \sum_{k=1}^{n} \left( \bar{b}_{jk} - \bar{b}_k \right)^2 / (r - 1)}{\sum_{j=1}^{r} \sum_{i=1}^{o_j} \sum_{k=1}^{n} \left( b_{ik} - \bar{b}_{jk} \right)^2 / (m - r)} \quad (6)$$

In Equation (6), m represents the total number of objects to be classified, and F follows an F distribution with (r − 1, m − r) degrees of freedom. The numerator measures the distance between the groups and the denominator measures the distance of the samples within each group, so a larger F value indicates a better classification. According to the significance test in mathematical statistics, if F > F_α(r − 1, m − r), where α = 0.05, the differences between the groups are significant and the classification is considered reasonable. If more than one value of λ satisfies the inequality F > F_α(r − 1, m − r), the value of (F − F_α)/F_α is further calculated, and the λ with the larger result is selected.
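
As a worked illustration of Equations (4)-(6), the sketch below computes the F-statistic for a given partition of a score matrix; the data, function names, and partition are hypothetical, and the critical value F_α could be obtained from scipy.stats.f if a formal test is needed.

```python
import numpy as np

def f_statistic(b: np.ndarray, groups: list[list[int]]) -> float:
    """F-statistic of Equation (6) for a partition of the m objects in b.

    b is an m x n score matrix (objects x indicators); groups lists the row
    indices of each class. The numerator captures between-group dispersion,
    the denominator within-group dispersion.
    """
    b = np.asarray(b, dtype=float)
    m = b.shape[0]
    r = len(groups)
    b_bar = b.mean(axis=0)                      # Equation (5): overall mean per indicator
    between = within = 0.0
    for idx in groups:
        cls = b[idx]
        cls_bar = cls.mean(axis=0)              # Equation (4): class mean per indicator
        between += len(idx) * np.sum((cls_bar - b_bar) ** 2)
        within += np.sum((cls - cls_bar) ** 2)
    return (between / (r - 1)) / (within / (m - r))

# Hypothetical scores of 6 objects on 2 indicators, split into 3 classes.
b = np.array([[0.90, 0.80], [0.85, 0.75],
              [0.20, 0.30], [0.25, 0.35],
              [0.50, 0.55], [0.45, 0.60]])
print(round(f_statistic(b, [[0, 1], [2, 3], [4, 5]]), 3))
# A formal test would compare this value against F_alpha(r - 1, m - r),
# e.g. scipy.stats.f.ppf(0.95, 2, 3) for this hypothetical case.
```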

STEP 4: Indicator reduction based on rough set theory

As mentioned earlier, let S = {U, A, V, f} be an information system. When a non-empty subset of attributes P ⊆ A is taken from this system, the indiscernibility relation of P is denoted Ind(P). It divides the universe of discourse U into k equivalence classes, which can be represented as:

$$U / Ind(P) = \{ W_1, W_2, \ldots, W_k \} \quad (7)$$

In an information system S = {U, A, V, f}, assume that H ⊆ A defines an equivalence relation and h ∈ H. If Ind(H) = Ind(H − {h}), then h is said to be redundant in H; otherwise, h is said to be necessary for H. If every h ∈ H is necessary for H, then H is said to be independent. If two attribute sets satisfy M ⊆ N, M is independent, and Ind(N) = Ind(M), then M is said to be a reduct of the attribute set N on the universe U.
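
A minimal sketch of the rough-set notions used here: computing the equivalence classes U/Ind(P) and testing whether an attribute h is redundant via Ind(H) = Ind(H − {h}). The decision table, object names, and attribute values below are hypothetical and assumed to be already discretized.

```python
from collections import defaultdict

def partition(table: dict[str, dict[str, int]], attrs: list[str]) -> set[frozenset[str]]:
    """U / Ind(attrs): objects are equivalent if they agree on every attribute in attrs."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return {frozenset(c) for c in classes.values()}

def is_redundant(table: dict[str, dict[str, int]], attrs: list[str], h: str) -> bool:
    """h is redundant in attrs if removing it leaves the indiscernibility relation unchanged."""
    return partition(table, attrs) == partition(table, [a for a in attrs if a != h])

# Hypothetical discretized information table: objects x attributes.
U = {
    "x1": {"c1": 1, "c2": 0, "c3": 1},
    "x2": {"c1": 1, "c2": 0, "c3": 1},
    "x3": {"c1": 0, "c2": 1, "c3": 0},
    "x4": {"c1": 0, "c2": 1, "c3": 1},
}
C = ["c1", "c2", "c3"]
print(partition(U, C))                          # equivalence classes under all attributes
print({a: is_redundant(U, C, a) for a in C})    # attributes whose removal changes nothing
```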

STEP 5: Comprehensive evaluation analysis of indicators.

This study uses rough set theory to calculate weights from the experts' ratings: the weight of each attribute is derived from its importance in the information system, and the weighted ratings then yield a comprehensive evaluation value for each indicator. The specific calculation process is as follows:

In the information system S = {U, A, V, f}, A = C ∪ D is the set of attributes. To measure the importance of each attribute, the concept of information quantity is introduced: if K ⊆ A and U/Ind(K) = {x_1, x_2, …, x_n}, then the information quantity of K is given by Equation (8).

$$I(K) = \sum_{i=1}^{n} \frac{|x_i|}{|U|} \left[ 1 - \frac{|x_i|}{|U|} \right] = 1 - \frac{1}{|U|^2} \sum_{i=1}^{n} |x_i|^2 \quad (8)$$

If C = {c_1, c_2, …, c_n} is the subset of conditional attributes and c_i is one of its attributes, then the importance of c_i relative to C, Sig_C(c_i), can be found as shown in Equation (9).

$$Sig_C(c_i) = I(C) - I(C - \{ c_i \}) \quad (9)$$

Then the weight ω_i of each attribute is calculated as:

$$\omega_i = \frac{Sig_C(c_i)}{\sum_{i=1}^{n} Sig_C(c_i)} \quad (10)$$

It is further possible to calculate a comprehensive evaluation value for each indicator:

$$S_j = \sum_{i=1}^{n} \omega_i s_{ij} \quad (11)$$

In Equation (11), S_j represents the comprehensive evaluation result of the j-th indicator, and s_ij represents the value of the j-th indicator under the i-th attribute.
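
The weight calculation of Equations (8)-(11) can be tied together in a short sketch. The discretized expert-score table, the raw scores, and all names below are hypothetical; the point is only to show how information quantity, attribute importance, attribute weights, and the comprehensive evaluation value are chained.

```python
import numpy as np
from collections import defaultdict

def info_quantity(table: dict[str, dict[str, int]], attrs: list[str]) -> float:
    """I(K) of Equation (8) for the attribute subset attrs."""
    sizes = defaultdict(int)
    for row in table.values():
        sizes[tuple(row[a] for a in attrs)] += 1
    n_u = len(table)
    return 1.0 - sum(s ** 2 for s in sizes.values()) / n_u ** 2

def attribute_weights(table: dict[str, dict[str, int]], attrs: list[str]) -> dict[str, float]:
    """Sig_C(c_i) of Equation (9), normalized into weights by Equation (10)."""
    i_c = info_quantity(table, attrs)
    sig = {a: i_c - info_quantity(table, [x for x in attrs if x != a]) for a in attrs}
    total = sum(sig.values()) or 1.0            # guard against an all-zero corner case
    return {a: s / total for a, s in sig.items()}

# Hypothetical discretized table: 5 objects (indicators) x 3 expert attributes.
U = {
    "u1": {"J1": 2, "J2": 1, "J3": 2},
    "u2": {"J1": 2, "J2": 1, "J3": 1},
    "u3": {"J1": 0, "J2": 2, "J3": 0},
    "u4": {"J1": 0, "J2": 0, "J3": 0},
    "u5": {"J1": 1, "J2": 2, "J3": 1},
}
experts = ["J1", "J2", "J3"]
w = attribute_weights(U, experts)
print(w)

# Equation (11): comprehensive value S_j of each indicator from hypothetical scores s_ij.
scores = np.array([[85, 78, 90],     # indicator 1 as scored by J1, J2, J3
                   [70, 72, 68]])    # indicator 2
print(scores @ np.array([w[j] for j in experts]))
```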

6.2. Steps in Screening Indicators

STEP 1: Establishing a scoring matrix

Because the supplier qualification review stage involves numerous criteria, the 11 secondary indicators in Table 1 are denoted R1–R11 for ease of subsequent calculation. Through a questionnaire survey, 7 experienced experts J1–J7 specializing in supplier selection research were invited to rate the indicators across the 4 dimensions, and each expert's score for each indicator was recorded (each dimension ranging from 0 to 25, for a total of 100 points). Table 2 presents the original scoring statistics for the supplier qualification review indicators as assessed by the expert panel.

The data are standardized using Equation (1); the standardized data are presented in Table 3.

STEP 2: Establishing the Association Matrix

Table 2. Original scoring statistics for supplier qualification review indicators by expert panel.

Table 3. Standardized data for supplier qualification review indicators by expert panel.

Figure 2. Dynamic clustering diagram of comprehensive indicators for supplier qualification review.

By applying formula (2), the grey correlation matrix is computed for all indicators. The optimal grey correlation effect is achieved when the resolution coefficient is ρ = 0.5, and the grey correlation matrix Z is obtained as shown below. Based on this correlation matrix, the dynamic clustering of supplier qualification review indicators is generated, as depicted in Figure 2.

$$Z = \begin{pmatrix}
1 & 0.6238 & 0.6182 & 0.6059 & 0.6364 & 0.6455 & 0.6246 & 0.7217 & 0.5181 & 0.5768 & 0.6616 \\
  & 1 & 0.6051 & 0.6224 & 0.6442 & 0.6436 & 0.6309 & 0.5007 & 0.6453 & 0.5504 & 0.6595 \\
  & & 1 & 0.6095 & 0.8846 & 0.7022 & 0.5959 & 0.6784 & 0.6675 & 0.5182 & 0.6595 \\
  & & & 1 & 0.5665 & 0.6398 & 0.7496 & 0.6140 & 0.5501 & 0.5778 & 0.5463 \\
  & & & & 1 & 0.6705 & 0.5411 & 0.6865 & 0.6225 & 0.5531 & 0.7102 \\
  & & & & & 1 & 0.7036 & 0.5977 & 0.6527 & 0.5241 & 0.5995 \\
  & & & & & & 1 & 0.6659 & 0.6428 & 0.5680 & 0.5455 \\
  & & & & & & & 1 & 0.5030 & 0.6211 & 0.5852 \\
  & & & & & & & & 1 & 0.6002 & 0.7040 \\
  & & & & & & & & & 1 & 0.7589 \\
  & & & & & & & & & & 1
\end{pmatrix}$$

STEP 3: Determining the Optimal Threshold through F-Statistic

To ensure a more scientific and objective classification of indicators, this study employs the F-statistic method to determine the threshold for evaluating the indicator system. By utilizing formulas (4), (5), and (6), Table 4 is obtained, presenting the analysis of the optimal threshold for the supplier qualification review stage (where α = 0.05 ).

Table 4. Analysis of optimal threshold for supplier qualification review stage.

The optimal threshold is determined by selecting the maximum value of (F − F_α)/F_α in the table and taking the corresponding λ. From Table 4, it is evident that the optimal threshold is λ = 0.7217 and the optimal number of classes is 7. At this threshold, the optimal classification is as follows:

U/J = {{R10, R11}, {R9}, {R6}, {R4, R7}, {R3, R5}, {R2}, {R1, R8}}
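
To illustrate how the seven classes above follow from the correlation matrix, the sketch below applies a simple λ-cut to Z: two indicators are linked whenever their correlation coefficient reaches the threshold, and the connected groups are read off. This is a simplified stand-in for the dynamic clustering of Figure 2, not the paper's exact procedure, but with the matrix values given above it reproduces the classification at λ = 0.7217.

```python
import numpy as np

# Upper-triangular grey correlation matrix for R1..R11 (values transcribed from Z above).
upper = [
    [1, 0.6238, 0.6182, 0.6059, 0.6364, 0.6455, 0.6246, 0.7217, 0.5181, 0.5768, 0.6616],
    [1, 0.6051, 0.6224, 0.6442, 0.6436, 0.6309, 0.5007, 0.6453, 0.5504, 0.6595],
    [1, 0.6095, 0.8846, 0.7022, 0.5959, 0.6784, 0.6675, 0.5182, 0.6595],
    [1, 0.5665, 0.6398, 0.7496, 0.6140, 0.5501, 0.5778, 0.5463],
    [1, 0.6705, 0.5411, 0.6865, 0.6225, 0.5531, 0.7102],
    [1, 0.7036, 0.5977, 0.6527, 0.5241, 0.5995],
    [1, 0.6659, 0.6428, 0.5680, 0.5455],
    [1, 0.5030, 0.6211, 0.5852],
    [1, 0.6002, 0.7040],
    [1, 0.7589],
    [1],
]
n = len(upper)
Z = np.zeros((n, n))
for i, row in enumerate(upper):
    Z[i, i:] = row
Z = np.maximum(Z, Z.T)                          # symmetrize

def lambda_cut_classes(Z: np.ndarray, lam: float) -> list[set[str]]:
    """Group indicators whose pairwise correlation reaches the threshold lam."""
    labels = list(range(Z.shape[0]))            # naive union by relabeling
    for i in range(Z.shape[0]):
        for j in range(i + 1, Z.shape[0]):
            if Z[i, j] >= lam:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    groups = {}
    for idx, lab in enumerate(labels):
        groups.setdefault(lab, set()).add(f"R{idx + 1}")
    return list(groups.values())

print(lambda_cut_classes(Z, 0.7217))
# -> {R1, R8}, {R2}, {R3, R5}, {R4, R7}, {R6}, {R9}, {R10, R11}
```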

STEP 4: Indicator Reduction Based on Rough Set Theory

Using the same method, the optimal classification results can be obtained by sequentially removing the ratings of each expert. Table 5 presents the results of the optimal classification after removing the ratings of each expert.

STEP 5: Comprehensive Evaluation Analysis of Indicators

By applying formula (8), the information content of the optimal classification after sequentially removing attributes can be obtained. Table 6 presents the information content statistics for the optimal classification method after removing each attribute.

By substituting the above results into formulas (9) and (10), the importance and weights of each attribute can be calculated. Table 7 presents the statistics for attribute importance and weights.

According to formula (11), the comprehensive evaluation results for each indicator in the supplier qualification review stage can be obtained. Table 8 presents the statistics for the comprehensive evaluation results of supplier qualification review indicators.

Table 5. Optimal classification results after removing each attribute.

Table 6. Information content statistics for the optimal classification method after removing attributes J1–J7.

Table 7. Statistics of attribute importance and weights.

Table 8. Statistics of comprehensive evaluation results for supplier qualification review indicators.

From the above evaluation, it is evident that the two indicators R4 and R6 rank last, and their overall evaluation values differ significantly from those of the other indicators, so they are considered non-core. The indicator concerning the establishment time requirement lacks the necessary relevance for supplier qualification review, while the indicator concerning reputation overlaps with the content of the intellectual property examination; both should therefore be removed. Thus, the final set of qualification review indicators is obtained.

Similarly, indicators can be reduced and screened for the supplier evaluation and selection stage. The calculation results indicate that four indicators rank last and that their overall evaluation values differ significantly from those of the other indicators; these indicators are considered non-core. The indicator A2 duplicates the content of market share and market size; another indicator includes system fault tolerance (B5), which is already covered by an existing indicator; the indicator measuring service attitude is difficult to quantify and should not be used as an evaluation indicator; and the indicators of personnel technical level and technical capability have similar content. Therefore, all four indicators are removed.

6.3. Determination and Analysis of Evaluation Indicator System

Based on the previous application of the grey-rough set theory to screen and reduce the evaluation indicators for military software suppliers, the final determination of the evaluation indicator system for military software suppliers, along with their respective meanings and applications, is presented in Table 9.

7. Conclusion

This paper focuses on the evaluation and selection characteristics of military software suppliers. Drawing on previous domestic and international research, the initial selection of evaluation indicators for military software suppliers is conducted. The grey-rough set method is employed to reduce and select the indicator system, resulting in the final construction of the evaluation indicator system for military software suppliers. The main conclusions are as follows:

Table 9. Meaning and application of evaluation indicators for military software suppliers.

1) A grey-rough set model for indicator selection is constructed, which calculates the correlation between indicators and identifies redundant ones, leading to a more objective and fair reduction result.

2) An evaluation indicator system for military software suppliers is established, consisting of two stages: qualification review and evaluation selection. The meanings and applications of relevant indicators are explained, providing theoretical guidance for military units in selecting military software suppliers.

Overall, this study validates the applicability of the grey-rough set model in the process of indicator reduction, and the constructed indicator system aligns well with the procurement bidding process. By considering Supplier Strength, Product Technical Solution, Service Level, Product Pricing, and Implementation Capability, a comprehensive evaluation of military software suppliers can be conducted. While incorporating general supplier evaluation aspects, the study also highlights the unique requirements of evaluating military software suppliers. The constructed indicator system therefore demonstrates broad applicability and practicality and aims to provide valuable insights for future research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Cho, S., Hwang, S., Shin, W., Kim, N., & In, H. P. (2021). Design of Military Service Framework for Enabling Migration to Military SaaS Cloud Environment. Electronics, 10, 572.
https://doi.org/10.3390/electronics10050572
[2] Dickson, G. W. (1966). An Analysis of Vendor Selection Systems and Decisions. Journal of Purchasing, 2, 5-17.
https://doi.org/10.1111/j.1745-493X.1966.tb00818.x
[3] Foreman, V. L., Favaró, F. M., Saleh, J. H., & Johnson, C. W. (2015). Software in Military Aviation and Drone Mishaps: Analysis and Recommendations for the Investigation Process. Reliability Engineering & System Safety, 137, 101-111.
https://doi.org/10.1016/j.ress.2015.01.006
[4] Ghorbani, M., & Ramezanian, R. (2020). Integration of Carrier Selection and Supplier Selection Problem in Humanitarian Logistics. Computers & Industrial Engineering, 144, Article ID: 106473.
https://doi.org/10.1016/j.cie.2020.106473
[5] Huang, Y. J., Liu, Y. Y., Liu, E. L. et al. (2018). Evaluation and Selection of BIM Software Suppliers Based on FAHP. Mathematics in Practice and Theory, 48, 51-58. (In Chinese)
[6] Khan, S. U., Niazi, M., & Ahmad, R. (2011). Barriers in the Selection of Offshore Software Development Outsourcing Vendors: An Exploratory Study Using a Systematic Literature Review. Information and Software Technology, 53, 693-706.
https://doi.org/10.1016/j.infsof.2010.08.003
[7] Li, J., Liu, X. D., & Rao, Y. (2021). Method for Supplier Selection for Military Enterprise Based on Prospect-Regret Theory. Journal of Air Force Engineering University (Natural Science Edition), 22, 97-103. (In Chinese)
[8] Merola, L. (2006). The COTS Software Obsolescence Threat. In 5th International Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (p. 7). The Institute of Electrical and Electronics Engineers.
[9] Rashid, N., Khan, S. U., Khan, H. U., & Ilyas, M. (2021). Green-Agile Maturity Model: An Evaluation Framework for Global Software Development Vendors. IEEE Access, 9, 71868-71886.
https://doi.org/10.1109/ACCESS.2021.3079194
[10] Song, Y., Wang, J., Guo, F., Lu, J., & Liu, S. (2021). Research on Supplier Selection of Prefabricated Building Elements from the Perspective of Sustainable Development. Sustainability, 13, Article No. 6080.
https://doi.org/10.3390/su13116080
[11] Uygun, O., Kacamak, H., & Kahraman, U. A. (2015). An Integrated DEMATEL and Fuzzy ANP Techniques for Evaluation and Selection of Outsourcing Provider for a Telecommunication Company. Computers & Industrial Engineering, 86, 137-146.
https://doi.org/10.1016/j.cie.2014.09.014
[12] Wang, L., Wu, G. J., Xie, L. et al. (2022). An Evaluation Method of the Third Party Test Suppliers Selection for Military Software. Electronic Warfare Technology, 37, 92-96. (In Chinese)

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.