Generalized Soft Expert Set and Its Applications

Abstract

Soft expert set theory plays an important part in decision making. Because it incorporates several experts, it can convey the opinion of every expert in a single model. In this study, the idea of generalization (attaching a degree of possibility to each approximation) is combined with soft expert sets. First, the notion of a generalized soft expert set is presented; then its basic operations, namely the complement, union, intersection, AND and OR, are defined and their properties are examined. An application of the generalized soft expert set to a decision-making problem is then presented with an example. The study also introduces the idea of a generalized soft expert matrix and demonstrates how it may be used to solve decision-making problems. The main contribution of this paper is to present the concepts of generalized soft expert sets and generalized soft expert matrices and to apply both to the same decision-making scenario.

Liu, X. (2023) Generalized Soft Expert Set and Its Applications. Journal of Applied Mathematics and Physics, 11, 2444-2460. doi: 10.4236/jamp.2023.118156.

1. Introduction

It is frequently impossible to analyze problems using traditional research methods because many fields, including economics, engineering, environmental science, sociology, and medicine, involve ambiguous data. Therefore, new mathematical techniques are required to deal with uncertainty, and they must function as efficient tools for addressing the various forms of uncertainty and imprecision embedded in such problems. The idea of soft sets, which Molodtsov [1] first articulated in 1999, has gained a great deal of momentum due to its adaptability and breadth of application. Soft set operations and applications have been examined by Chen et al. [2] and Maji et al. [3] [4] . Maji et al. [5] also presented the idea of fuzzy soft sets and investigated their characteristics. This idea was also used to address various decision-making problems by Roy and Maji [6] . As a generalization of soft sets, Alkhazaleh et al. [7] presented the idea of soft multisets. Additionally, they provided definitions for possibility fuzzy soft sets and fuzzy parameterized interval-valued fuzzy soft sets in [8] [9] , together with examples of how they might be used in decision making and medical diagnosis. The idea of generalized fuzzy soft sets and the related operations, as well as their use for decision making and medical diagnosis, was introduced by Majumdar and Samanta in 2010 [10] .

Although effective, the models above typically involve only one expert. If the opinions of several experts are to be taken into account, a number of operations, such as unions and intersections, must be carried out, which is inconvenient for the user. To overcome this issue, Alkhazaleh and Salleh [11] [12] established the ideas of soft expert sets and fuzzy soft expert sets, in which the user can view the opinions of all experts in one model without carrying out any additional operations. Serdar and Hilal [13] made several adjustments to soft expert sets, eliminating some inconsistencies with the underlying notion of soft sets. Hazaymeh et al. [14] also introduced generalized fuzzy soft expert sets (GFSESs), which are used to assess a decision problem. In the context of soft expert sets, Lancy and Arockiarani [15] identified various matrix types and suggested a decision model based on soft expert sets.

The definition of the generalized soft expert set is introduced in this study. We describe its basic operations, including the complement, union, intersection, AND, and OR, and examine their properties. We give an illustration of a decision problem where this concept is used. The definition of a generalized soft expert matrix and a method for solving a decision problem with it are also provided. The remainder of this paper is organized as follows. Section 2 reviews some preliminary notions. Section 3 introduces the idea of generalized soft expert sets. Section 4 provides the basic operations and some properties of generalized soft expert sets. In Section 5, an application of the generalized soft expert set is demonstrated. Section 6 describes the generalized soft expert matrix and how to utilize it to resolve decision-making problems.

2. Preliminaries

In this section, we review several fundamental notions that are relevant to this study.

Definition 2.1 Let V be a universe set and H a set of parameters. Let P ( V ) denote the power set of V and A ⊆ H . A pair ( F , A ) is called a soft set over V, where F is a mapping

F : A → P ( V ) .

In other words, a soft set over V is a parameterized family of subsets of the universe V. For ε ∈ A , F ( ε ) is one of the approximate components of ( F , A ) .

Definition 2.2 Let V be a universe set and H a set of parameters. Let I V denote the set of all fuzzy subsets of V and A ⊆ H . A pair ( F , A ) is called a fuzzy soft set over V, where F is defined by

F : A → I V .

Definition 2.3 Let V = { v 1 , v 2 , ⋯ , v n } be the universal set of elements and H = { h 1 , h 2 , ⋯ , h m } the set of parameters. The pair ( V , H ) will be called a soft universe. Let F : H → I V and let μ be a fuzzy subset of H; that is, μ : H → I = [ 0 , 1 ] , where I V is the collection of all fuzzy subsets of V. Then F μ : H → I V × I is defined as

F μ ( h ) = ( F ( h ) , μ ( h ) ) .

Then F μ is called a generalized fuzzy soft set (GFSS) over the soft set ( V , H ) . Here for each parameter h i , F μ ( h i ) = ( F ( h i ) , μ ( h i ) ) indicates not only the degree of belongingness of the elements of V in F ( h i ) but also the degree of possibility of such belongingness which is represented by μ ( h i ) . So we can write as follows:

F μ ( h i ) = ( { v 1 / F ( h i ) ( v 1 ) , v 2 / F ( h i ) ( v 2 ) , ⋯ , v n / F ( h i ) ( v n ) } , μ ( h i ) ) ,

where F ( h i ) ( v 1 ) , F ( h i ) ( v 2 ) , ⋯ , F ( h i ) ( v n ) are the degrees of belongingness and μ ( h i ) is the degree of possibility of such belongingness.

Definition 2.4 Let V be a universe set, H a parameters set, X an experts set, O = {1 = agree, 0 = disagree} an opinions set. Let U = H × X × O and A ⊆ U . Then ( F , A ) is known as a soft expert set over V, where F is given by

F : A → P ( V ) ,

where P ( V ) denotes the power set of V.

Example 2.1 Let V = { v 1 , v 2 , v 3 } be a universe set, H = { h 1 , h 2 } a parameters set and X = { p 1 , p 2 } an experts set, and let U = H × X × O . We define a function

F : U → P ( V )

as follows:

F ( h 1 , p 1 , 1 ) = { v 1 , v 2 , v 3 } , F ( h 1 , p 2 , 1 ) = { v 2 , v 3 } ,

F ( h 2 , p 1 , 1 ) = { v 1 , v 2 } , F ( h 2 , p 2 , 1 ) = { v 2 } ,

F ( h 1 , p 1 , 0 ) = ∅ , F ( h 1 , p 2 , 0 ) = { v 1 } ,

F ( h 2 , p 1 , 0 ) = { v 3 } , F ( h 2 , p 2 , 0 ) = { v 1 , v 3 } .

Then ( F , U ) consists of the following approximate sets:

( F , U ) = { ( ( h 1 , p 1 , 1 ) , { v 1 , v 2 , v 3 } ) , ( ( h 1 , p 2 , 1 ) , { v 2 , v 3 } ) , ( ( h 2 , p 1 , 1 ) , { v 1 , v 2 } ) , ( ( h 2 , p 2 , 1 ) , { v 2 } ) , ( ( h 1 , p 1 , 0 ) , ∅ ) , ( ( h 1 , p 2 , 0 ) , { v 1 } ) , ( ( h 2 , p 1 , 0 ) , { v 3 } ) , ( ( h 2 , p 2 , 0 ) , { v 1 , v 3 } ) } .
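
Computationally, a soft expert set is just a mapping from ( parameter , expert , opinion ) triples to subsets of V. The following minimal Python sketch (the variable names are ours, chosen for illustration) stores the soft expert set of Example 2.1:

# Soft expert set (F, U) of Example 2.1: each (parameter, expert, opinion)
# triple is mapped to a subset of the universe V.
V = {"v1", "v2", "v3"}

F = {
    ("h1", "p1", 1): {"v1", "v2", "v3"},
    ("h1", "p2", 1): {"v2", "v3"},
    ("h2", "p1", 1): {"v1", "v2"},
    ("h2", "p2", 1): {"v2"},
    ("h1", "p1", 0): set(),
    ("h1", "p2", 0): {"v1"},
    ("h2", "p1", 0): {"v3"},
    ("h2", "p2", 0): {"v1", "v3"},
}

# Every approximation is a subset of V, as Definition 2.4 requires.
assert all(subset <= V for subset in F.values())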

Definition 2.5 Let V be a set of universe, H a set of parameters, X a set of experts, and O an opinions set. Let U = H × X × O and A ⊆ U . Let μ be a fuzzy set of U defined by μ : U → I = [ 0 , 1 ] . Then ( F μ , A ) is known as a generalized fuzzy soft expert set over V, where F μ is given by

F μ : A → I V × I ,

where I V denotes the set of all fuzzy subsets of V.

Definition 2.6 Let V = { v 1 , v 2 , ⋯ , v m } be a universe set, H = { h 1 , h 2 , ⋯ , h n } a parameters set and X a set of experts. Let O = {1 = agree, 0 = disagree} be an opinions set, U = H × X × O and A ⊆ U , and let F be a mapping given by

F : A → P ( V ) .

Then the matrix representation of the soft expert set ( F , A ) over V is defined as

A = [ a i j ] m × n or A = [ a i j ] ,

where

a i j = { ( a g j ( v i ) , d g j ( v i ) ) , if h j ∈ A ; ( 0 , 1 ) , if h j ∉ A .

a g j ( v i ) represents the level of acceptance of v i in the soft expert set F ( h j ) , and d g j ( v i ) represents the level of non-acceptance of v i in the soft expert set F ( h j ) .

Definition 2.7 Let A = [ a i j ] m × n and B = [ b i j ] m × n be two soft expert matrices, then we define addition of A and B as A + B = [ c i j ] m × n , where

c i j = ( max ( a g A , a g B ) , min ( d g A , d g B ) ) , ∀ i , j .

Definition 2.8 Let A = [ a i j ] m × n and B = [ b i j ] m × n be two soft expert matrices, then we define subtraction of A and B as A B = [ c i j ] m × n , where

c i j = ( min ( a g A , a g B ) , max ( d g A , d g B ) ) , ∀ i , j .

3. Generalized Soft Expert Set

In this part, we develop the generalized soft expert set idea and investigate some of its aspects.

Let V be a universe set, H a parameters set, X an experts set, and O = {1 = agree, 0 = disagree} a set of opinions. Let U = H × X × O and A ⊆ U , and let μ be a fuzzy set of U; that is, μ : U → I = [ 0 , 1 ] .

Definition 3.1 A pair ( F μ , A ) is called a generalized soft expert set (GSES in short) over V, where F μ is given by

F μ : A → P ( V ) × I , (1)

where P ( V ) denotes the collection of all subsets of V. Here, for each a ∈ A , F μ ( a ) = ( F ( a ) , μ ( a ) ) indicates not only which elements of V belong to F ( a ) , but also the degree of possibility of such belongingness, which is represented by μ ( a ) .
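
In computational terms, a GSES simply attaches a possibility degree in [ 0 , 1 ] to every approximation of a soft expert set. A minimal Python sketch of the data structure (the type aliases are ours, not part of the paper):

from typing import Dict, FrozenSet, Tuple

# A GSES maps each (parameter, expert, opinion) triple in A
# to a pair: (subset of V, possibility degree in [0, 1]).
Triple = Tuple[str, str, int]
GSES = Dict[Triple, Tuple[FrozenSet[str], float]]

# Two entries of Example 3.1 written in this representation:
F_mu: GSES = {
    ("h1", "p1", 1): (frozenset({"v1", "v2", "v4"}), 0.2),
    ("h1", "p1", 0): (frozenset({"v3"}), 0.6),
}
assert all(0.0 <= mu <= 1.0 for _, mu in F_mu.values())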

Example 3.1 Let V = { v 1 , v 2 , v 3 , v 4 } be a universe set, H = { h 1 , h 2 , h 3 } a parameters set and X = { p 1 , p 2 , p 3 } an experts set. Let U = H × X × O and let μ be a fuzzy set of U; that is, μ : U → I = [ 0 , 1 ] . Define a function F μ : U → P ( V ) × I as follows:

F μ ( h 1 , p 1 , 1 ) = ( { v 1 , v 2 , v 4 } , 0.2 ) , F μ ( h 1 , p 2 , 1 ) = ( { v 2 , v 4 } , 0.3 ) ,

F μ ( h 1 , p 3 , 1 ) = ( { v 4 } , 0.6 ) , F μ ( h 2 , p 1 , 1 ) = ( { v 1 , v 4 } , 0.5 ) ,

F μ ( h 2 , p 2 , 1 ) = ( { v 2 , v 3 } , 0.6 ) , F μ ( h 2 , p 3 , 1 ) = ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ,

F μ ( h 3 , p 1 , 1 ) = ( { v 1 , v 2 , v 3 } , 0.5 ) , F μ ( h 3 , p 2 , 1 ) = ( { v 2 , v 3 , v 4 } , 0.7 ) ,

F μ ( h 3 , p 3 , 1 ) = ( { v 3 , v 4 } , 0.4 ) , F μ ( h 1 , p 1 , 0 ) = ( { v 3 } , 0.6 ) ,

F μ ( h 1 , p 2 , 0 ) = ( { v 1 , v 3 } , 0.5 ) , F μ ( h 1 , p 3 , 0 ) = ( { v 1 , v 2 , v 3 } , 0.6 ) ,

F μ ( h 2 , p 1 , 0 ) = ( { v 2 , v 3 } , 0.4 ) , F μ ( h 2 , p 2 , 0 ) = ( { v 1 , v 4 } , 0.4 ) ,

F μ ( h 2 , p 3 , 0 ) = ( ∅ , 0.3 ) , F μ ( h 3 , p 1 , 0 ) = ( { v 4 } , 0.6 ) ,

F μ ( h 3 , p 2 , 0 ) = ( { v 1 } , 0.5 ) , F μ ( h 3 , p 3 , 0 ) = ( { v 1 , v 2 } , 0.2 ) .

Then ( F μ , U ) consists of the following approximate sets:

( F μ , U ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 } , 0.3 ) ) , ( ( h 1 , p 3 , 1 ) , ( { v 4 } , 0.6 ) ) , ( ( h 2 , p 1 , 1 ) , ( { v 1 , v 4 } , 0.5 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.5 ) ) , ( ( h 3 , p 2 , 1 ) , ( { v 2 , v 3 , v 4 } , 0.7 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( h 1 , p 1 , 0 ) , ( { v 3 } , 0.6 ) ) ,

( ( h 1 , p 2 , 0 ) , ( { v 1 , v 3 } , 0.5 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 2 , v 3 } , 0.4 ) ) , ( ( h 2 , p 2 , 0 ) , ( { v 1 , v 4 } , 0.4 ) ) , ( ( h 2 , p 3 , 0 ) , ( ∅ , 0.3 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 4 } , 0.6 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.5 ) ) , ( ( h 3 , p 3 , 0 ) , ( { v 1 , v 2 } , 0.2 ) ) } .

Definition 3.2 Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V. ( F μ , A 1 ) is called a generalized soft expert subset of ( G δ , A 2 ) , denoted by ( F μ , A 1 ) ⊆̃ ( G δ , A 2 ) , if A 1 ⊆ A 2 and F μ ( a ) = G δ ( a ) for all a ∈ A 1 . If ( F μ , A 1 ) ⊆̃ ( G δ , A 2 ) , then ( G δ , A 2 ) is known as a generalized soft expert superset of ( F μ , A 1 ) .

Example 3.2 Consider example 3.1. Let A 1 , A 2 be defined as follows:

A 1 = { ( h 1 , p 1 , 1 ) , ( h 1 , p 2 , 1 ) , ( h 2 , p 2 , 1 ) , ( h 2 , p 3 , 1 ) , ( h 3 , p 1 , 1 ) , ( h 3 , p 3 , 1 ) , ( h 1 , p 3 , 0 ) , ( h 2 , p 1 , 0 ) , ( h 3 , p 2 , 0 ) } ,

A 2 = { ( h 1 , p 1 , 1 ) , ( h 1 , p 2 , 1 ) , ( h 2 , p 2 , 1 ) , ( h 2 , p 3 , 1 ) , ( h 1 , p 3 , 0 ) , ( h 2 , p 1 , 0 ) } .

Clearly, A 2 ⊆ A 1 . Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V as follows:

( F μ , A 1 ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 } , 0.3 ) ) , , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.5 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 2 , v 3 } , 0.4 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.5 ) ) } ,

and

( G δ , A 2 ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 } , 0.3 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 2 , v 3 } , 0.4 ) ) } .

Therefore ( G δ , A 2 ) ⊆̃ ( F μ , A 1 ) .

Proposition 3.1 Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V, then

( F μ , A 1 ) ⊆̃ ( G δ , A 2 ) ⟹ A 1 ⊆ A 2 . (2)

Definition 3.3 Two GSESs ( F μ , A 1 ) and ( G δ , A 2 ) over V are said to be equal if ( F μ , A 1 ) ⊆̃ ( G δ , A 2 ) and ( G δ , A 2 ) ⊆̃ ( F μ , A 1 ) .

Proposition 3.2 If ( F μ , A 1 ) , ( G δ , A 2 ) and ( H σ , A 3 ) are three GSESs over V, then

1) ( F μ , A 1 ) = ( G δ , A 2 ) ⟹ A 1 = A 2 ,

2) ( F μ , A 1 ) = ( G δ , A 2 ) and ( G δ , A 2 ) = ( H σ , A 3 ) ⟹ ( F μ , A 1 ) = ( H σ , A 3 ) .

Definition 3.4 The subset ( F μ , A 1 ) 1 of ( F μ , A 1 ) which is called an agree-GSES is defined as

( F μ , A 1 ) 1 = { ( a , F μ ( a ) ) : a ∈ A 1 ∩ U 1 } , (3)

where U 1 = H × X × { 1 } .

Definition 3.5 The subset ( F μ , A 1 ) 0 of ( F μ , A 1 ) which is called a disagree-GSES is defined as

( F μ , A 1 ) 0 = { ( a , F μ ( a ) ) : a ∈ A 1 ∩ U 0 } , (4)

where U 0 = H × X × { 0 } .

Example 3.3 Consider Example 3.1. The agree-GSES ( F μ , U ) 1 over V is

( F μ , U ) 1 = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 } , 0.3 ) ) , ( ( h 1 , p 3 , 1 ) , ( { v 4 } , 0.6 ) ) , ( ( h 2 , p 1 , 1 ) , ( { v 1 , v 4 } , 0.5 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.5 ) ) , ( ( h 3 , p 2 , 1 ) , ( { v 2 , v 3 , v 4 } , 0.7 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.4 ) ) } .

The disagree-GSES ( F μ , U ) 0 over V is

( F μ , U ) 0 = { ( ( h 1 , p 1 , 0 ) , ( { v 3 } , 0.6 ) ) , ( ( h 1 , p 2 , 0 ) , ( { v 1 , v 3 } , 0.5 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 2 , v 3 } , 0.4 ) ) , ( ( h 2 , p 2 , 0 ) , ( { v 1 , v 4 } , 0.4 ) ) , ( ( h 2 , p 3 , 0 ) , ( ∅ , 0.3 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 4 } , 0.6 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.5 ) ) , ( ( h 3 , p 3 , 0 ) , ( { v 1 , v 2 } , 0.2 ) ) } .

Definition 3.6 The complement of ( F μ , A 1 ) , denoted by ( F μ , A 1 ) C ˜ , is defined as ( F μ , A 1 ) C ˜ = ( F μ C ˜ , ¬ A 1 ) , where F μ C ˜ : ¬ A 1 → P ( V ) × I is given by

F μ C ˜ ( a ) = C ˜ ( F μ ( ¬ a ) ) (5)

for all a ∈ ¬ A 1 . Here ¬ A 1 = { ¬ a : a ∈ A 1 } ⊆ H × X × O , where ¬ a is the triple obtained from a by reversing the opinion, and C ˜ is the generalized complement, which replaces the approximation by its relative complement in V and the possibility degree μ by 1 − μ .
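
The complement can be computed mechanically from this definition: reverse the opinion in each triple, replace the approximation by its relative complement in V, and replace μ by 1 − μ . A hedged Python sketch (the function name is ours), checked against one entry of Example 3.4:

def gses_complement(F_mu, V):
    """Complement of a GSES in the sense of Definition 3.6: flip the
    opinion, take the set complement in V, and replace mu by 1 - mu."""
    return {
        (h, p, 1 - o): (frozenset(V) - subset, round(1.0 - mu, 10))
        for (h, p, o), (subset, mu) in F_mu.items()
    }

V = {"v1", "v2", "v3", "v4"}
F_mu = {("h1", "p1", 1): (frozenset({"v1", "v2", "v4"}), 0.2)}
# Example 3.4 lists ((h1, p1, 0), ({v3}, 0.8)) for this entry.
assert gses_complement(F_mu, V)[("h1", "p1", 0)] == (frozenset({"v3"}), 0.8)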

Example 3.4 Consider Example 3.1. The complement ( F μ , U ) C ˜ is

( F μ , U ) C ˜ = { ( ( h 1 , p 1 , 0 ) , ( { v 3 } , 0.8 ) ) , ( ( h 1 , p 2 , 0 ) , ( { v 1 , v 3 } , 0.7 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.4 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 2 , v 3 } , 0.5 ) ) , ( ( h 2 , p 2 , 0 ) , ( { v 1 , v 4 } , 0.4 ) ) , ( ( h 2 , p 3 , 0 ) , ( ∅ , 0.5 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 4 } , 0.5 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.3 ) ) , ( ( h 3 , p 3 , 0 ) , ( { v 1 , v 2 } , 0.6 ) ) , ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.4 ) ) ,

( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 } , 0.5 ) ) , ( ( h 1 , p 3 , 1 ) , ( { v 4 } , 0.4 ) ) , ( ( h 2 , p 1 , 1 ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.7 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.4 ) ) , ( ( h 3 , p 2 , 1 ) , ( { v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.8 ) ) } .

Proposition 3.3 If ( F μ , A 1 ) is a GSES over V, then

1) ( ( F μ , A 1 ) C ˜ ) C ˜ = ( F μ , A 1 ) ,

2) ( F μ , A 1 ) 1 C ˜ = ( F μ , A 1 ) 0 ,

3) ( F μ , A 1 ) 0 C ˜ = ( F μ , A 1 ) 1 .

4. Some Operations of the GSESs

In this part, we will present several GSES operations, deduce their features, and provide some examples.

Definition 4.1 The union of two GSESs ( F μ , A 1 ) and ( G δ , A 2 ) over V is the GSES ( H Ω , B ) = ( F μ , A 1 ) ∪̃ ( G δ , A 2 ) , where B = A 1 ∪ A 2 ⊆ H × X × O and, for all a ∈ B ,

H Ω ( a ) = { F μ ( a ) , if a ∈ A 1 − A 2 ; G δ ( a ) , if a ∈ A 2 − A 1 ; F μ ( a ) = G δ ( a ) , if a ∈ A 1 ∩ A 2 . (6)
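
A hedged Python sketch of this union, following the three cases of Equation (6) (on the overlap A 1 ∩ A 2 the two GSESs are assumed to coincide, so we simply keep the common value):

def gses_union(F_mu, G_delta):
    """Union of two GSESs as in Definition 4.1 (Equation (6))."""
    H_omega = {}
    for a in set(F_mu) | set(G_delta):
        if a in F_mu and a not in G_delta:
            H_omega[a] = F_mu[a]            # a in A1 - A2
        elif a in G_delta and a not in F_mu:
            H_omega[a] = G_delta[a]         # a in A2 - A1
        else:
            H_omega[a] = F_mu[a]            # a in A1 ∩ A2, F_mu(a) = G_delta(a)
    return H_omega

Applied to the two GSESs of Example 4.1, this returns exactly the eight entries of ( H Ω , B ) listed there.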

Example 4.1 Consider Example 3.1 and suppose

A 1 = { ( h 1 , p 1 , 1 ) , ( h 2 , p 2 , 1 ) , ( h 3 , p 1 , 1 ) , ( h 3 , p 3 , 1 ) , ( h 1 , p 3 , 0 ) , ( h 3 , p 2 , 0 ) } ,

A 2 = { ( h 1 , p 1 , 1 ) , ( h 2 , p 3 , 1 ) , ( h 1 , p 3 , 0 ) , ( h 3 , p 1 , 0 ) } .

Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V such that

( F μ , A 1 ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.5 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.5 ) ) } ,

and

( G δ , A 2 ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 4 } , 0.6 ) ) } .

Then ( F μ , A 1 ) ∪̃ ( G δ , A 2 ) = ( H Ω , B ) , where

( H Ω , B ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.5 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.5 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 4 } , 0.6 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 1 } , 0.5 ) ) } .

Proposition 4.1 Let ( F μ , A 1 ) , ( G δ , A 2 ) and ( H σ , A 3 ) be three GSESs over V. Then

1) ( F μ , A 1 ) ∪̃ ( F μ , A 1 ) = ( F μ , A 1 ) ,

2) ( F μ , A 1 ) ∪̃ ( G δ , A 2 ) = ( G δ , A 2 ) ∪̃ ( F μ , A 1 ) ,

3) ( F μ , A 1 ) ∪̃ ( ( G δ , A 2 ) ∪̃ ( H σ , A 3 ) ) = ( ( F μ , A 1 ) ∪̃ ( G δ , A 2 ) ) ∪̃ ( H σ , A 3 ) .

Definition 4.2 The intersection of two GSESs ( F μ , A 1 ) and ( G δ , A 2 ) over V is denoted by ( H Ω , B ) = ( F μ , A 1 ) ∩̃ ( G δ , A 2 ) , where B = A 1 ∩ A 2 ⊆ H × X × O and, for all a ∈ B ,

H Ω ( a ) = { F μ ( a ) = G δ ( a ) , if B ≠ ∅ ; ∅ , otherwise . (7)
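
A corresponding sketch for the intersection (Equation (7)): only the triples that lie in both A 1 and A 2 are kept, and on those triples the two GSESs are assumed to carry the same value.

def gses_intersection(F_mu, G_delta):
    """Intersection of two GSESs as in Definition 4.2 (Equation (7))."""
    common = set(F_mu) & set(G_delta)       # B = A1 ∩ A2
    return {a: F_mu[a] for a in common}     # F_mu(a) = G_delta(a) on B

On the data of Example 4.1 this yields the two entries shown in Example 4.2.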

Example 4.2 Consider Example 4.1. We have ( F μ , A 1 ) ∩̃ ( G δ , A 2 ) = ( H Ω , B ) ,

where

( H Ω , B ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 2 , v 4 } , 0.2 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 1 , v 2 , v 3 } , 0.6 ) ) } .

Proposition 4.2 If ( F μ , A 1 ) , ( G δ , A 2 ) and ( H σ , A 3 ) are three GSESs over V, then

1) ( F μ , A 1 ) ∩̃ ( F μ , A 1 ) = ( F μ , A 1 ) ,

2) ( F μ , A 1 ) ∩̃ ( G δ , A 2 ) = ( G δ , A 2 ) ∩̃ ( F μ , A 1 ) ,

3) ( F μ , A 1 ) ∩̃ ( ( G δ , A 2 ) ∩̃ ( H σ , A 3 ) ) = ( ( F μ , A 1 ) ∩̃ ( G δ , A 2 ) ) ∩̃ ( H σ , A 3 ) ,

4) ( F μ , A 1 ) ∩̃ ( ( G δ , A 2 ) ∪̃ ( H σ , A 3 ) ) = ( ( F μ , A 1 ) ∩̃ ( G δ , A 2 ) ) ∪̃ ( ( F μ , A 1 ) ∩̃ ( H σ , A 3 ) ) ,

5) ( F μ , A 1 ) ∪̃ ( ( G δ , A 2 ) ∩̃ ( H σ , A 3 ) ) = ( ( F μ , A 1 ) ∪̃ ( G δ , A 2 ) ) ∩̃ ( ( F μ , A 1 ) ∪̃ ( H σ , A 3 ) ) .

Definition 4.3 Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V. Then the AND operation

( F μ , A 1 ) ∧ ( G δ , A 2 ) is defined as

( F μ , A 1 ) ∧ ( G δ , A 2 ) = ( H Ω , A 1 × A 2 ) , (8)

where, for all ( a 1 , a 2 ) ∈ A 1 × A 2 , H Ω ( a 1 , a 2 ) = F μ ( a 1 ) ∩̃ G δ ( a 2 ) , that is, the intersection of the two approximations together with the minimum of the two possibility degrees.

Definition 4.4 Let ( F μ , A 1 ) and ( G δ , A 2 ) be two GSESs over V. Then the OR operation

( F μ , A 1 ) ∨ ( G δ , A 2 ) is defined as

( F μ , A 1 ) ∨ ( G δ , A 2 ) = ( H Ω , A 1 × A 2 ) , (9)

where, for all ( a 1 , a 2 ) ∈ A 1 × A 2 , H Ω ( a 1 , a 2 ) = F μ ( a 1 ) ∪̃ G δ ( a 2 ) , that is, the union of the two approximations together with the maximum of the two possibility degrees.
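
The AND and OR operations can be sketched in the same style. Definitions 4.3 and 4.4 only say that the value at ( a 1 , a 2 ) is F μ ( a 1 ) ∩̃ G δ ( a 2 ) , respectively F μ ( a 1 ) ∪̃ G δ ( a 2 ) ; in Example 4.3 this works out as the intersection (union) of the two approximations together with the minimum (maximum) of the two possibility degrees, and that is the reading assumed below:

def gses_and(F_mu, G_delta):
    """AND of two GSESs (Definition 4.3): set intersection of the
    approximations and minimum of the possibility degrees."""
    return {
        (a1, a2): (s1 & s2, min(m1, m2))
        for a1, (s1, m1) in F_mu.items()
        for a2, (s2, m2) in G_delta.items()
    }

def gses_or(F_mu, G_delta):
    """OR of two GSESs (Definition 4.4): set union of the
    approximations and maximum of the possibility degrees."""
    return {
        (a1, a2): (s1 | s2, max(m1, m2))
        for a1, (s1, m1) in F_mu.items()
        for a2, (s2, m2) in G_delta.items()
    }

Running these two functions on the GSESs of Example 4.3 reproduces the twelve pairs listed there.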

Example 4.3 Let

( F μ , A 1 ) = { ( ( h 1 , p 2 , 1 ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( h 3 , p 3 , 1 ) , ( { v 2 , v 4 } , 0.3 ) ) , ( ( h 1 , p 1 , 0 ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 4 } , 0.5 ) ) } ,

and

( G δ , A 2 ) = { ( ( h 1 , p 2 , 1 ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 4 } , 0.3 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 3 } , 0.3 ) ) } .

Then

( F μ , A 1 ) ∧ ( G δ , A 2 ) = { ( ( ( h 1 , p 2 , 1 ) , ( h 1 , p 2 , 1 ) ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( ( h 1 , p 2 , 1 ) , ( h 2 , p 3 , 1 ) ) , ( { v 4 } , 0.3 ) ) , ( ( ( h 1 , p 2 , 1 ) , ( h 3 , p 1 , 1 ) ) , ( ∅ , 0.3 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 1 , p 2 , 1 ) ) , ( { v 4 } , 0.3 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 2 , p 3 , 1 ) ) , ( { v 4 } , 0.3 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 3 , p 1 , 1 ) ) , ( ∅ , 0.3 ) ) ,

( ( ( h 1 , p 1 , 0 ) , ( h 1 , p 2 , 1 ) ) , ( { v 4 } , 0.4 ) ) , ( ( ( h 1 , p 1 , 0 ) , ( h 2 , p 3 , 1 ) ) , ( { v 4 } , 0.3 ) ) , ( ( ( h 1 , p 1 , 0 ) , ( h 3 , p 1 , 1 ) ) , ( { v 3 } , 0.3 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 1 , p 2 , 1 ) ) , ( { v 4 } , 0.5 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 2 , p 3 , 1 ) ) , ( { v 4 } , 0.3 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 3 , p 1 , 1 ) ) , ( ∅ , 0.3 ) ) } .

( F μ , A 1 ) ∨ ( G δ , A 2 ) = { ( ( ( h 1 , p 2 , 1 ) , ( h 1 , p 2 , 1 ) ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( ( h 1 , p 2 , 1 ) , ( h 2 , p 3 , 1 ) ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( ( h 1 , p 2 , 1 ) , ( h 3 , p 1 , 1 ) ) , ( { v 1 , v 3 , v 4 } , 0.6 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 1 , p 2 , 1 ) ) , ( { v 1 , v 2 , v 4 } , 0.6 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 2 , p 3 , 1 ) ) , ( { v 2 , v 4 } , 0.3 ) ) , ( ( ( h 3 , p 3 , 1 ) , ( h 3 , p 1 , 1 ) ) , ( { v 2 , v 3 , v 4 } , 0.3 ) ) ,

( ( ( h 1 , p 1 , 0 ) , ( h 1 , p 2 , 1 ) ) , ( { v 1 , v 3 , v 4 } , 0.6 ) ) , ( ( ( h 1 , p 1 , 0 ) , ( h 2 , p 3 , 1 ) ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( ( h 1 , p 1 , 0 ) , ( h 3 , p 1 , 1 ) ) , ( { v 3 , v 4 } , 0.4 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 1 , p 2 , 1 ) ) , ( { v 1 , v 4 } , 0.6 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 2 , p 3 , 1 ) ) , ( { v 4 } , 0.5 ) ) , ( ( ( h 3 , p 2 , 0 ) , ( h 3 , p 1 , 1 ) ) , ( { v 3 , v 4 } , 0.5 ) ) } .

Proposition 4.3 If ( F μ , A 1 ) , ( G δ , A 2 ) and ( H σ , A 3 ) are three GSESs over V, then

1) ( ( F μ , A 1 ) ∧ ( G δ , A 2 ) ) C ˜ = ( F μ , A 1 ) C ˜ ∨ ( G δ , A 2 ) C ˜ ,

2) ( ( F μ , A 1 ) ∨ ( G δ , A 2 ) ) C ˜ = ( F μ , A 1 ) C ˜ ∧ ( G δ , A 2 ) C ˜ ,

3) ( ( F μ , A 1 ) ∧ ( G δ , A 2 ) ) ∧ ( H σ , A 3 ) = ( F μ , A 1 ) ∧ ( ( G δ , A 2 ) ∧ ( H σ , A 3 ) ) ,

4) ( ( F μ , A 1 ) ∨ ( G δ , A 2 ) ) ∨ ( H σ , A 3 ) = ( F μ , A 1 ) ∨ ( ( G δ , A 2 ) ∨ ( H σ , A 3 ) ) .

5. An Application of GSES in Decision-Making

We provide an application of generalized soft expert set theory to a decision-making issue in this section.

Suppose an enterprise needs to hire an employee. Let V = { v 1 , v 2 , v 3 , v 4 , v 5 , v 6 } be the set of applicants and H = { h 1 , h 2 , h 3 , h 4 } a set of parameters, where h i ( i = 1 , 2 , 3 , 4 ) indicate “good attitude”, “cheerful personality”, “good English” and “good communication skills”, respectively. To make a fair selection, three experts form a committee X = { p 1 , p 2 , p 3 } . Let O = {1 = agree, 0 = disagree} be the set of the experts' opinions. The following algorithm may be used to fill the position.

Algorithm 1:

1) Input the GSES ( F μ , U ) .

2) Find the agree-GSES and the disagree-GSES.

3) Compute t j = ∑ i λ i u i j from the agree-GSES (Table 1), where λ i denotes the possibility degree attached to the i-th row and u i j is defined below.

4) Compute r j = ∑ i λ i u i j from the disagree-GSES (Table 2).

5) Compute s j = t j − r j (a Python sketch of steps 3)-6) is given after the algorithm).

6) Find m, for which s m = max s j . If m has more than one value, then the company can choose any one of them.
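
Under the reading of steps 3) and 4) given above (each row of Table 1 and Table 2 is weighted by its possibility degree λ i , and u i j picks out the members of the corresponding approximation), Algorithm 1 can be sketched in Python as follows; the function name and data layout are ours:

def algorithm1(gses, candidates):
    """Steps 3)-6) of Algorithm 1: weighted agree/disagree scores."""
    t = {v: 0.0 for v in candidates}        # agree scores t_j
    r = {v: 0.0 for v in candidates}        # disagree scores r_j
    for (h, p, o), (subset, mu) in gses.items():
        target = t if o == 1 else r
        for v in subset:                    # u_ij = 1 exactly when v is in the set
            target[v] += mu                 # weight by the possibility degree
    s = {v: t[v] - r[v] for v in candidates}
    best = max(s, key=s.get)                # any maximiser may be chosen
    return best, s

Applied to the committee's GSES listed below, this reading also selects v 3 , in agreement with the result read off from Table 3.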

After careful consideration, the committee obtains the GSES as follows:

( F μ , U ) = { ( ( h 1 , p 1 , 1 ) , ( { v 1 , v 4 , v 6 } , 0.5 ) ) , ( ( h 1 , p 2 , 1 ) , ( { v 2 , v 4 , v 5 } , 0.3 ) ) , ( ( h 1 , p 3 , 1 ) , ( { v 1 , v 4 , v 5 , v 6 } , 0.2 ) ) , ( ( h 2 , p 1 , 1 ) , ( { v 2 , v 3 } , 0.4 ) ) , ( ( h 2 , p 2 , 1 ) , ( { v 2 , v 3 , v 5 , v 6 } , 0.7 ) ) , ( ( h 2 , p 3 , 1 ) , ( { v 2 , v 3 , v 4 } , 0.8 ) ) , ( ( h 3 , p 1 , 1 ) , ( { v 3 , v 5 , v 6 } , 0.3 ) ) , ( ( h 3 , p 2 , 1 ) , ( { v 1 , v 3 , v 4 } , 0.5 ) ) ,

( ( h 3 , p 3 , 1 ) , ( { v 1 , v 2 , v 3 } , 0.4 ) ) , ( ( h 4 , p 1 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 } , 0.4 ) ) , ( ( h 4 , p 2 , 1 ) , ( { v 1 , v 2 , v 3 , v 4 , v 6 } , 0.5 ) ) , ( ( h 4 , p 3 , 1 ) , ( { v 2 , v 3 , v 4 , v 5 } , 0.3 ) ) , ( ( h 1 , p 1 , 0 ) , ( { v 2 , v 3 , v 5 } , 0.3 ) ) , ( ( h 1 , p 2 , 0 ) , ( { v 1 , v 3 , v 6 } , 0.6 ) ) , ( ( h 1 , p 3 , 0 ) , ( { v 2 , v 3 } , 0.3 ) ) , ( ( h 2 , p 1 , 0 ) , ( { v 1 , v 4 , v 5 , v 6 } , 0.4 ) ) ,

( ( h 2 , p 2 , 0 ) , ( { v 1 , v 4 } , 0.5 ) ) , ( ( h 2 , p 3 , 0 ) , ( { v 1 , v 5 , v 6 } , 0.6 ) ) , ( ( h 3 , p 1 , 0 ) , ( { v 1 , v 2 , v 4 } , 0.3 ) ) , ( ( h 3 , p 2 , 0 ) , ( { v 2 , v 5 , v 6 } , 0.2 ) ) , ( ( h 3 , p 3 , 0 ) , ( { v 4 , v 5 , v 6 } , 0.4 ) ) , ( ( h 4 , p 1 , 0 ) , ( { v 5 , v 6 } , 0.6 ) ) , ( ( h 4 , p 2 , 0 ) , ( { v 5 } , 0.3 ) ) , ( ( h 4 , p 3 , 0 ) , ( { v 1 , v 6 } , 0.7 ) ) } .

We show the agree-GSES and the disagree-GSES in Table 1 and Table 2, where

u i j = 1 if v i ∈ F 1 ( ε ) and u i j = 0 if v i ∉ F 1 ( ε ) , and u i j = 1 if v i ∈ F 0 ( ε ) and u i j = 0 if v i ∉ F 0 ( ε ) .

Now according to the formula s j = t j − r j , we can find the best choices for the company to fill the position. From Table 1 and Table 2, we get Table 3.

Table 1. Agree-GSES.

Table 2. Disagree-GSES.

Table 3. Score sheet.

Because max s j = s 3 , the best option is v 3 .

6. An Application of the Expert Matrix of a GSES in Decision-Making

In this part, we define the expert matrices of GSESs and give their complement, addition, and subtraction operations. Then, we show how these expert matrices can be used in a decision-making situation.

Definition 6.1 Let V = { v 1 , v 2 , ⋯ , v m } be a set of universe, H = { h 1 , h 2 , ⋯ , h n } a parameters set and X an experts set. Let O = {1 = agree, 0 = disagree} be a set of opinions, U = H × X × O and A ⊆ U . Let μ be a fuzzy set of U defined by μ : U → I = [ 0 , 1 ] , and let F μ be given by

F μ : A → P ( V ) × I . (10)

Then the expert matrix of the GSES ( F μ , A ) is defined as

A m × ( n + 1 ) = [ a i j μ i 1 , μ i 0 ] or A = [ a i j μ i 1 , μ i 0 ] , (11)

where a i j = ( a g j ( v i ) , d g j ( v i ) ) . Here a g j ( v i ) indicates the acceptance level of v i in F μ ( h j ) , d g j ( v i ) indicates the non-acceptance level of v i in F μ ( h j ) , μ i 1 indicates the possibility degree of the acceptance level, and μ i 0 indicates the possibility degree of the non-acceptance level.
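
For a fixed expert, the matrix of Definition 6.1 can be read off directly from the GSES. The sketch below (the function name is ours) follows the layout of Example 6.1: one row per parameter, one column per element of V, and a trailing pair of possibility degrees per row.

def expert_matrix(gses, V, H, expert):
    """Build the expert matrix of a GSES for one expert.
    V and H are ordered lists; each row is (entries, (mu_agree, mu_disagree))."""
    rows = []
    for h in H:
        agree_set, mu1 = gses[(h, expert, 1)]
        disagree_set, mu0 = gses[(h, expert, 0)]
        entries = [(int(v in agree_set), int(v in disagree_set)) for v in V]
        rows.append((entries, (mu1, mu0)))
    return rows

Applied to the GSES of Example 3.1 with expert p 1 , this reproduces the matrix A of Example 6.1.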

Example 6.1 Consider Example 3.1. With three experts making the decision, three expert matrices can be produced.

A = [ ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) 0.2 , 0.6 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.5 , 0.4 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.5 , 0.6 ] ,

B = [ ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) 0.3 , 0.5 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.6 , 0.4 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.5 ] ,

C = [ ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.6 , 0.6 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.3 ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.4 , 0.2 ] .

Definition 6.2 If A = [ a i j μ i 1 , μ i 0 ] and B = [ b i j δ i 1 , δ i 0 ] are two expert matrices of a GSES, then A is equal to B if, for all i , j ,

a g A = a g B , d g A = d g B , μ i 1 = δ i 1 and μ i 0 = δ i 0 . (12)

Definition 6.3 Let A = [ a i j μ i 1 , μ i 0 ] be an expert matrix of a GSES, where a i j = ( a g j ( v i ) , d g j ( v i ) ) . Then the complement of the expert matrix is defined as A o = [ a i j o μ i 1 o , μ i 0 o ] , where, for all i , j ,

a i j o = ( d g j ( v i ) , a g j ( v i ) ) , μ i 1 o = 1 − μ i 1 , μ i 0 o = 1 − μ i 0 . (13)

Definition 6.4 If A = [ a i j μ i 1 , μ i 0 ] and B = [ b i j δ i 1 , δ i 0 ] are two expert matrices of the same form over the GSESs, then the addition of A and B is denoted by A + B = [ c i j λ i 1 , λ i 0 ] , where

c i j = ( max ( a g A , a g B ) , min ( d g A , d g B ) ) , λ i 1 = max ( μ i 1 , δ i 1 ) , λ i 0 = min ( μ i 0 , δ i 0 ) . (14)

Definition 6.5 If A = [ a i j μ i 1 , μ i 0 ] and B = [ b i j δ i 1 , δ i 0 ] are two expert matrices over the GSESs, then the subtraction of A and B is denoted by A B = [ c i j λ i 1 , λ i 0 ] , where

c i j = ( min ( a g A , a g B ) , max ( d g A , d g B ) ) , λ i 1 = min ( μ i 1 , δ i 1 ) , λ i 0 = max ( μ i 0 , δ i 0 ) . (15)
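
Using the row format produced by the sketch above, the three operations of Definitions 6.3-6.5 become elementwise computations. A hedged Python sketch:

def m_complement(A):
    """Definition 6.3: swap (agree, disagree) in every entry and
    replace each possibility degree mu by 1 - mu."""
    return [([(d, a) for (a, d) in entries], (1 - mu1, 1 - mu0))
            for entries, (mu1, mu0) in A]

def m_add(A, B):
    """Definition 6.4: entrywise (max of agree, min of disagree);
    degrees (max, min)."""
    return [([(max(a1, a2), min(d1, d2)) for (a1, d1), (a2, d2) in zip(e1, e2)],
             (max(m1, n1), min(m0, n0)))
            for (e1, (m1, m0)), (e2, (n1, n0)) in zip(A, B)]

def m_sub(A, B):
    """Definition 6.5: entrywise (min of agree, max of disagree);
    degrees (min, max)."""
    return [([(min(a1, a2), max(d1, d2)) for (a1, d1), (a2, d2) in zip(e1, e2)],
             (min(m1, n1), max(m0, n0)))
            for (e1, (m1, m0)), (e2, (n1, n0)) in zip(A, B)]

These functions reproduce A o , B o , A + B and A − B of Example 6.2 (up to floating-point rounding of the degrees).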

Example 6.2 If A and B are two expert matrices over the GSESs as follows:

A = [ ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.5 , 0.3 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.7 ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.8 , 0.4 ] ,

B = [ ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.5 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.6 ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.4 , 0.2 ] .

Then

A o = [ ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.7 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.3 ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.2 , 0.6 ] ,

B o = [ ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.5 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.3 , 0.4 ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.6 , 0.8 ] ,

A + B = [ ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.5 , 0.3 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.6 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.8 , 0.2 ] ,

A − B = [ ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.5 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.7 ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.4 ] .

Proposition 6.1 If A and B are two expert matrices of the same form over the GSESs, then

1) A + B = B + A ,

2) A − B = B − A .

Using the expert matrices of a GSES, we now describe a different approach to the problem raised in Section 5. The committee may employ the following algorithm.

Algorithm 2:

1) Input the GSES ( F μ , U ) .

2) Find the expert matrices over the GSES ( F μ , U ) .

3) Find the complement expert matrices.

4) Find the addition expert matrices.

5) Find the adaptation matrices.

6) Compute score ( v i ) for each candidate, as sketched below; the candidate with the highest score is selected.
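
Steps 4)-6) are not spelled out formally in the text, but the worked example below suggests the following reading: the adaptation step multiplies each ( agree , disagree ) entry of row i by that row's possibility degrees, V ( ⋅ ) replaces every adapted pair ( x , y ) by x − y , and score ( v i ) is the i-th column sum of the final difference matrix. A hedged Python sketch under that reading, reusing m_add and m_complement from the sketch after Definition 6.5:

def adapt(M):
    """Multiply each entry of row i by that row's possibility degrees."""
    return [[(a * mu1, d * mu0) for (a, d) in entries]
            for entries, (mu1, mu0) in M]

def value(M_ada):
    """Replace every adapted pair (x, y) by x - y."""
    return [[x - y for (x, y) in row] for row in M_ada]

def algorithm2(matrices, candidates):
    """Algorithm 2 under our reading of the adaptation and V(.) steps.
    `candidates` must be listed in the column order of the matrices."""
    total = matrices[0]
    comp_total = m_complement(matrices[0])
    for M in matrices[1:]:
        total = m_add(total, M)
        comp_total = m_add(comp_total, m_complement(M))
    diff_rows = value(adapt(total))
    comp_rows = value(adapt(comp_total))
    scores = {
        v: sum(diff_rows[i][j] - comp_rows[i][j] for i in range(len(diff_rows)))
        for j, v in enumerate(candidates)
    }
    return max(scores, key=scores.get), scores

On the three expert matrices below, this reading again selects v 3 as the best candidate, matching the conclusion of the paper.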

Consider the problem of Section 5, where the committee obtained the GSES ( F μ , U ) listed above. Then we can obtain the expert matrices over the GSES ( F μ , U ) :

A = [ ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) 0.5 , 0.3 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.4 ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.3 , 0.3 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.6 ] ,

B = [ ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.3 , 0.6 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.5 ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.5 , 0.2 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) 0.5 , 0.3 ] ,

C = [ ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.2 , 0.3 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.8 , 0.6 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.4 , 0.4 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) 0.3 , 0.7 ] .

And the complement expert matrices are

A o = [ ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) 0.5 , 0.7 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.6 ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.7 , 0.7 ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.4 ] ,

B o = [ ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.7 , 0.4 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) 0.3 , 0.5 ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.8 ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 0 , 1 ) 0.5 , 0.7 ] ,

C o = [ ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) 0.8 , 0.7 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.2 , 0.4 ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.6 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) 0.7 , 0.3 ] ,

We complete the remaining steps of the algorithm. As the matrices below show, the adaptation step multiplies each entry ( a , d ) in row i by the corresponding possibility degrees, giving ( a ⋅ λ i 1 , d ⋅ λ i 0 ) , and V ( ⋅ ) replaces each adapted pair ( x , y ) by the value x − y :

A + B + C = [ ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.3 ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.8 , 0.4 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.2 ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.5 , 0.3 ] ,

A o + B o + C o = [ ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.8 , 0.4 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.6 , 0.4 ( 1 , 0 ) ( 1 , 0 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.6 ( 1 , 0 ) ( 0 , 1 ) ( 0 , 1 ) ( 0 , 1 ) ( 1 , 0 ) ( 1 , 0 ) 0.7 , 0.3 ] ,

( A + B + C ) A d a = [ ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0 , 0.3 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 )
( 0 , 0.4 ) ( 0.8 , 0 ) ( 0.8 , 0 ) ( 0.8 , 0 ) ( 0.8 , 0 ) ( 0.8 , 0 )
( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 )
( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ( 0.5 , 0 ) ] ,

( A o + B o + C o ) A d a = [ ( 0.8 , 0 ) ( 0.8 , 0 ) ( 0.8 , 0 ) ( 0 , 0.4 ) ( 0.8 , 0 ) ( 0.8 , 0 )
( 0.6 , 0 ) ( 0 , 0.4 ) ( 0 , 0.4 ) ( 0.6 , 0 ) ( 0.6 , 0 ) ( 0.6 , 0 )
( 0.7 , 0 ) ( 0.7 , 0 ) ( 0 , 0.6 ) ( 0.7 , 0 ) ( 0.7 , 0 ) ( 0.7 , 0 )
( 0.7 , 0 ) ( 0 , 0.3 ) ( 0 , 0.3 ) ( 0 , 0.3 ) ( 0.7 , 0 ) ( 0.7 , 0 ) ] ,

V ( ( A + B + C ) A d a ) = [ 0.5 0.5 − 0.3 0.5 0.5 0.5
− 0.4 0.8 0.8 0.8 0.8 0.8
0.5 0.5 0.5 0.5 0.5 0.5
0.5 0.5 0.5 0.5 0.5 0.5 ] ,

V ( ( A o + B o + C o ) A d a ) = [ 0.8 0.8 0.8 − 0.4 0.8 0.8
0.6 − 0.4 − 0.4 0.6 0.6 0.6
0.7 0.7 − 0.6 0.7 0.7 0.7
0.7 − 0.3 − 0.3 − 0.3 0.7 0.7 ] ,

V ( ( A + B + C ) A d a ) − V ( ( A o + B o + C o ) A d a ) = [ − 0.3 − 0.3 − 1.1 0.9 − 0.3 − 0.3
− 1 1.2 1.2 0.2 0.2 0.2
− 0.2 − 0.2 1.1 − 1.2 − 0.2 − 0.2
− 0.2 0.8 0.8 0.8 − 0.2 − 0.2 ] .

Finally, we compute score ( v i ) , the sum of the i-th column of the matrix above, as follows:

score ( v 1 ) = − 0.3 − 1 − 0.2 − 0.2 = − 1.7 ,

score ( v 2 ) = − 0.3 + 1.2 − 0.2 + 0.8 = 1.5 ,

score ( v 3 ) = − 1.1 + 1.2 + 1.1 + 0.8 = 2 ,

score ( v 4 ) = 0.9 + 0.2 − 1.2 + 0.8 = 0.7 ,

score ( v 5 ) = − 0.3 + 0.2 − 0.2 − 0.2 = − 0.5 ,

score ( v 6 ) = − 0.3 + 0.2 − 0.2 − 0.2 = − 0.5 .

Since

score ( v 3 ) > score ( v 2 ) > score ( v 4 ) > score ( v 5 ) = score ( v 6 ) > score ( v 1 ) ,

the decision is v 3 .

7. Conclusion

We presented the idea of the generalized soft expert set in this paper and examined some of its characteristics. The complement, union, intersection, AND and OR operations have been defined on generalized soft expert sets, and this theory has been put to use to resolve a decision-making problem. We also gave the definition of the generalized soft expert matrix together with its complement, addition, and subtraction operations. Finally, we showed how generalized soft expert matrices can be used in a decision-making scenario. Two methods are applied in the paper to the same decision problem; although the conclusions are the same, the method based on generalized soft expert matrices is significantly more straightforward.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Molodtsov, D. (1999) Soft Set Theory-First Results. Computers and Mathematics with Applications, 37, 19-31.
https://doi.org/10.1016/S0898-1221(99)00056-5
[2] Chen, D., Tsang, E.C.C., Yeung, D.S., et al. (2005) The Parameterization Reduction of Soft Sets and Its Applications. Computers and Mathematics with Applications, 49, 757-763.
https://doi.org/10.1016/j.camwa.2004.10.036
[3] Maji, P.K., Biswas, R. and Roy, A.R. (2003) Soft Set Theory. Computers and Mathematics with Applications, 45, 555-562.
https://doi.org/10.1016/S0898-1221(03)00016-6
[4] Maji, P.K., Roy, A.R. and Biswas, R. (2002) An Application of Soft Sets in a Decision Making Problem. Computers and Mathematics with Applications, 44, 1077-1083.
https://doi.org/10.1016/S0898-1221(02)00216-X
[5] Maji, P.K., Biswas, R. and Roy, A.R. (2001) Fuzzy Soft Sets. Journal of Fuzzy Mathematics, 9, 589-602.
[6] Roy, A.R. and Maji, P.K. (2007) A Fuzzy Soft Set Theoretic Approach to Decision Making Problems. Journal of Computational and Applied Mathematics, 203, 412-418.
https://doi.org/10.1016/j.cam.2006.04.008
[7] Alkhazaleh, S., Salleh, A.R. and Hassan, N. (2011) Soft Multisets Theory. Applied Mathematical Sciences, 5, 3561-3573.
[8] Alkhazaleh, S., Salleh, A.R. and Hassan, N. (2011) Possibility Fuzzy Soft Set. Advances in Decision Sciences, 2011, Article ID: 479756.
https://doi.org/10.1155/2011/479756
[9] Alkhazaleh, S., Salleh, A.R. and Hassan, N. (2011) Fuzzy Parameterized Interval-Valued Fuzzy Soft Set. Applied Mathematical Sciences, 5, 3335-3346.
[10] Majumdar, P. and Samanta, S.K. (2010) Generalised Fuzzy Soft Sets. Computers and Mathematics with Applications, 59, 1425-1432.
https://doi.org/10.1016/j.camwa.2009.12.006
[11] Alkhazaleh, S. and Salleh, A.R. (2011) Soft Expert Sets. Advances in Decision Sciences, 2011, Article ID: 757868.
https://doi.org/10.1155/2011/757868
[12] Alkhazaleh, S. and Salleh, A.R. (2014) Fuzzy Soft Expert Sets and Its Application. Applied Mathematics, 5, 1349-1368.
https://doi.org/10.4236/am.2014.59127
[13] Serdar, E. and Hilal, D. (2015) On Soft Expert Sets. Journal of New Theory, 9, 68-81.
[14] Hazaymeh, A.A., Abdullah, I.B., Balkhi, Z.T., et al. (2012) Generalized Fuzzy Soft Expert Set. Journal of Applied Mathematics, 2012, Article ID: 328195.
https://doi.org/10.1155/2012/328195
[15] Lancy, A.A. and Arockiarani, I. (2013) A Fusion of Soft Expert Set and Matrix Models. International Journal of Research in Engineering and Technology, 2, 530-535.
https://doi.org/10.15623/ijret.2013.0212088

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.