IEEE TRANSACTIONS ON CYBERNETICS, VOL. 44, NO. 10, OCTOBER 2014

Quantifiers Induced by Subjective Expected Value of Sample Information

Kaihong Guo

Abstract—The ordered weighted averaging (OWA) operator provides a unified framework for multiattribute decision making (MADM) under uncertainty. In this paper, we attempt to tackle some issues arising from the quantifier guided aggregation using OWA operators. This allows us to consider a more general case involving the generation of a quantifier targeted at the specified decision maker (DM) by using sample information. To do so, we first develop a repeatable interactive procedure in which, given the sample values and the expected values the DM provides according to personal preferences, we build nonlinear optimization models to extract from the DM information about his/her decision attitude in an OWA weighting vector form. After that, with the obtained attitudinal weighting vectors we suggest a suitable quantifier for this particular DM by means of piecewise linear interpolation. The obtained quantifier is derived entirely from the behavior of the DM involved and is thus inherently characterized by his/her own attitudinal character. Owing to the nature of this type of quantifier, we call it the subjective expected value of sample information (SEVSI)-induced quantifier. We show some properties of the developed quantifier. We also prove the consistency of OWA aggregation guided by this type of quantifier. In contrast with parameterized quantifiers, our quantifiers are oriented toward the specified DMs with proper consideration of their decision attitudes or behavior characteristics, thus bringing about more intuitively appealing and convincing results in the quantifier guided OWA aggregation.

Index Terms—OWA aggregation, quantifier, subjective expected value of sample information (SEVSI).

I. Introduction

AFTER Bellman and Zadeh's [5] pioneering work on fuzzy decision making, the fuzzy set theory introduced by Zadeh [43] has been extensively used as a tool to develop and model multiattribute decision making (MADM) problems. In this framework, the attributes are represented as fuzzy subsets over the space of alternatives, and fuzzy set operators are used to aggregate the individual attributes to form the overall decision function. Classic logic uses only two quantifiers, there exists and for all, which are closely related to the OR and AND connectives, respectively. In fact, natural language contains many other linguistic quantifiers, exemplified by terms such as about 5, nearly half, almost all, as many as possible, and few. Zadeh [42] suggested a formal representation of these linguistic quantifiers using fuzzy sets. He further distinguished between two classes of linguistic quantifiers: absolute and relative. Absolute quantifiers, such as about 5 or more than 10, can be represented as a fuzzy subset Q of the nonnegative real numbers, R+. In this representation, for any x ∈ R+, we use Q(x) to indicate the degree to which x satisfies the concept conveyed by the linguistic quantifier. Relative quantifiers, such as most or at least half, can be expressed as a fuzzy subset Q of the unit interval I = [0, 1]. Again in this representation, for any x ∈ I, Q(x) indicates the degree to which the proportion x satisfies the concept conveyed by the term Q. Thus, any quantifier of natural language can be represented as an absolute quantifier or a relative one. Yager [39] further distinguished three categories of relative quantifiers: the regular increasing monotone (RIM) quantifier such as most, many, at least half; the regular decreasing monotone (RDM) quantifier such as few, at most half; and the regular unimodal (RUM) quantifier such as about half. He also pointed out that the RIM quantifier is the basis of all kinds of relative quantifiers, since a RIM quantifier's antonym is a RDM quantifier, and any RUM quantifier can be expressed as the intersection of a RIM and a RDM quantifier. Liu [22], [23] proposed a formal representation for the RIM quantifier using a generating function technique, on the basis of which the equidifferent RIM quantifier was introduced. Currently, the fuzzy quantifier has become a basic tool in the modeling of uncertain systems and in the theory of computing with words, especially for the ordered weighted averaging (OWA) operator introduced by Yager [36], aiming to reveal the preference information in the aggregation of MADM.

Manuscript received October 22, 2012; revised July 2, 2013; accepted December 9, 2013. Date of publication January 9, 2014; date of current version September 12, 2014. This paper was recommended by Associate Editor Q. Shen. The author is with the School of Information, Liaoning University, Shenyang 110036, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TCYB.2013.2295316
This OWA operator is different from the classical weighted average in that the coefficients are not associated directly with a particular attribute but rather with an ordered position. Yager [33] investigated various families of OWA aggregation operators, and Yager [39] used a fuzzy set representation of linguistic quantifiers to obtain decision functions in the form of OWA aggregations. Later, in the spirit of the OWA operator for discrete data, Yager [38] further introduced a continuous OWA (C-OWA) operator that extended the OWA operator to the case in which the given argument is a continuous interval rather than a finite set of arguments. After Yager's pioneering work, the OWA operator has been greatly extended in different directions; notable forms include the ordered weighted geometric (OWG) operator [32], the continuous OWG (C-OWG) operator [41], the induced OWG (IOWG) operator [9], the induced OWA (IOWA) operator [35], the generalized OWA (GOWA) operator [34], the induced ordered weighted averaging–weighted average (IOWAWA) operator [25], etc., which have been widely used in many areas, notably those related to decision making under uncertainty [6]–[10], [17]–[20], [28], [36], [38], [41], [44], [45]. Further studies on OWA operators can be found in [3], [21], [24], [26], [37], and [40].

In essence, the OWA operator is the inner product of an ordered input (or argument) vector and a weighting vector that acts as a collection of parameters determining the type of aggregation to be performed. In this sense, the OWA weighting vector plays a key role in the aggregation process. A number of techniques have been suggested for generating the weights associated with the operator [14], [31]. Yager [38], [39] suggested an effective approach to obtain the associated OWA weights by using linguistic quantifiers, and considered the extension of this approach to the more general case in which the argument values have importance weights. O'Hagan [27] presented a maximum entropy approach, which involves a constrained nonlinear optimization problem. The resulting OWA operator is called the maximum entropy OWA (MEOWA) operator and has been widely cited in follow-up studies. Filev and Yager [12] explored the analytic properties of the MEOWA operator, and Fullér and Majlender [15] showed that this maximum entropy model can be transformed into a polynomial equation that can be solved analytically. Xu [31] proposed a normal distribution-based method to determine OWA weights in an attempt to reduce the influence of unfair arguments on decision results by weighting these arguments with small values.
Roughly speaking, most studies in the literature on the determination of OWA weights, whether involving OWA operators themselves or their applications, can be divided into four categories: 1) the experience- or subjectivity-based approach [6], [7], [9], [25], [32]; 2) the learning-based approach using observed data [2], [13], [35]; 3) the optimization-based approach under a given orness level [1], [3], [4], [11], [12], [15], [21], [27], [29], [30], [40]; and 4) the quantifier guided approach [8], [10], [16]–[20], [22]–[24], [33], [36], [38], [39], [41], [44], [45]. Obviously, the first type of approach, despite its simplicity, may lead to a subjective or arbitrary OWA aggregation. The second and third types provide a firm mathematical foundation for the determination of OWA weights. However, they are, in general, quite complex: most of them require solving a constrained nonlinear optimization problem or a high-order nonlinear algebraic equation. Furthermore, in some situations, a DM or analyzer may be unable to effectively specify the associated aggregated values or the orness level in advance for many reasons, such as lack of knowledge or limited experience in the problem domain. In more complicated cases, even experienced DMs or analyzers may fall into confusion and be unable to express themselves precisely. The fourth type of approach has also been widely used owing to its simplicity and convenience, and many analyses associated with it focus on the properties of some underlying quantifiers. It should be noted that almost all of the quantifiers used in such cases are parameterized ones with preassigned parameter values, regardless of whether these selected quantifiers or preassigned values are suitable for particular DMs. In fact, not all of the preassigned quantifiers or parameter values can rationally reveal the diverse characteristics of DMs. Unsuitable quantifiers or assignments may lead, to some extent, to counter-intuitive aggregation. For instance, Yager [39] considered the parameterized family of RIM quantifiers Q(x) = x^t (t ≥ 0), with the particular function for t = 2 representing the fuzzy linguistic quantifier "most of." This function is strictly increasing but, when used with OWA or IOWA operators, associates high weighting values with low consistent values [20].

In this paper, we pay attention to the problems arising from the fourth type of approach mentioned above, and investigate a generation technique for a suitable quantifier closely associated with the specified DM by using sample information. To do so, we first develop an interactive procedure in which the DM involved is asked to provide his/her expected values for a collection of given ordered sample values. With the two kinds of observed data, one being the given sample values and the other the corresponding subjective expected values the DM provides, we build nonlinear optimization models to extract from the DM information about his/her decision attitude or behavior characteristic in an OWA weighting vector form, thus building a strong relationship between the given sample values and the attitude or behavior the DM exhibits. This testing procedure can be carried out for several rounds, until the distributions of the obtained attitudinal weighting vectors show a steady tendency toward some fixed positions. After that, with these attitudinal weighting vectors we suggest a suitable quantifier for this particular DM by means of piecewise linear interpolation.
The obtained quantifier is derived entirely from the behavior of the DM involved and is thus inherently characterized by his/her own attitudinal character. Owing to the nature of this type of quantifier, it can be referred to as the subjective expected value of sample information (SEVSI)-induced quantifier. We show some properties of the developed quantifier, and prove the consistency of OWA aggregation guided by this type of quantifier. In contrast with parameterized quantifiers, our quantifiers are oriented toward the specified DMs with proper consideration of their decision attitudes or behavior characteristics, thus bringing about more intuitively appealing and convincing results in the quantifier guided OWA aggregation. This is the main motivation for this paper.

The rest of this paper is organized as follows. Section II briefly introduces the OWA operator and some of its features. In Section III, we first develop an interactive procedure that can be used to extract from the specified DM information about his/her decision attitude or behavior characteristic, and then investigate a generation technique for the SEVSI-induced quantifier in considerable detail. Section IV summarizes the steps of the generation technique for the developed quantifier and provides an approach to fuzzy MADM with decision attitude. Section V makes use of two numerical examples to illustrate and examine the developed method, followed by conclusions in Section VI.


II. Preliminaries

Yager [36] introduced the concept of the OWA operator, defined as follows. The OWA operator of dimension n is a mapping F: R^n → R with an associated weighting vector W = (w_1, w_2, \ldots, w_n)^T such that

F_W(x_1, x_2, \ldots, x_n) = \sum_{j=1}^{n} w_j y_j

where \sum_{j=1}^{n} w_j = 1, w_j \in [0, 1], and y_j is the jth largest of the x_i (i = 1, 2, \ldots, n). Using vector notation, we can express F_W(x_1, x_2, \ldots, x_n) = W^T Y, where Y = (y_1, y_2, \ldots, y_n)^T is the vector whose components are the y_j (j = 1, 2, \ldots, n). We call Y the OWA argument vector.

As we mentioned before, the type of OWA aggregation is determined by the weighting vector W. By selecting different weighting vectors, we can obtain different types of aggregation operator. Some notable examples of this operator are worth pointing out. If W^* = (1, 0, \ldots, 0)^T, then F_{W^*}(x_1, x_2, \ldots, x_n) = \max_{1 \le i \le n} \{x_i\}. If W_* = (0, 0, \ldots, 1)^T, then F_{W_*}(x_1, x_2, \ldots, x_n) = \min_{1 \le i \le n} \{x_i\}. These are called the Max and Min type operators, respectively. They can also be seen as special cases of a more general class of weights, w_k = 1 and w_j = 0 for j \ne k; we shall denote the associated weighting vector as W^{[k]}. It is obvious that W^{[1]} = W^* and W^{[n]} = W_*. Another important example of OWA is the simple average, with associated weighting vector W_{Ave} = (1/n, 1/n, \ldots, 1/n)^T, for which F_{W_{Ave}}(x_1, x_2, \ldots, x_n) = \frac{1}{n}\sum_{i=1}^{n} x_i. For other types of aggregation operator, such as the median and the olympic average, please refer to [33] and [37].

Yager [36] introduced two characterizing measures associated with an OWA weighting vector W. The first is called the attitudinal character, which characterizes an OWA weighting vector with respect to any distinction between preferences for large or small argument values. The attitudinal character is defined as

AC(W) = \sum_{j=1}^{n} \frac{n-j}{n-1} w_j.    (1)

It can be shown that AC(W) \in [0, 1]. Larger values of AC(W), closer to 1, indicate a preference for larger argument values in the aggregation, while lower values, closer to 0, indicate a preference for smaller argument values. Values of AC(W) near 0.5 reflect no preference for either large or small argument values. Obviously, we see that AC(W^*) = 1, AC(W_*) = 0, and AC(W_{Ave}) = 0.5. More generally, AC(W^{[k]}) = (n-k)/(n-1) (k = 1, 2, \ldots, n).

The second characterizing measure is called the dispersion, which reflects how uniformly the weights are distributed. The dispersion is defined as

Disp(W) = -\sum_{j=1}^{n} w_j \ln w_j.    (2)

It is well known that Disp(W) reaches its maximum of \ln n when all w_j = 1/n (j = 1, 2, \ldots, n) and its minimum of zero when w_k = 1 for some k and all other w_j = 0. From this, we see that Disp(W_{Ave}) = \ln n and Disp(W^{[k]}) = 0.

Yager [33], [36], [39] suggested an effective approach to obtain the OWA weighting vector via fuzzy linguistic quantifiers, especially the RIM quantifiers denoted by a fuzzy subset Q with the following properties: 1) Q(0) = 0; 2) Q(1) = 1; and 3) Q(x) \ge Q(y) if x \ge y. These quantifiers were denoted as basic unit-interval monotonic (BUM) functions in [38], where some notable examples of such quantifiers Q are also mentioned. Using a RIM quantifier Q, we can obtain the OWA weights as

w_j = Q\left(\frac{j}{n}\right) - Q\left(\frac{j-1}{n}\right), \quad j = 1, 2, \ldots, n.    (3)

It can be shown that these weights satisfy the conditions w_j \in [0, 1] and \sum_{j=1}^{n} w_j = 1. The quantifier guided OWA aggregation can then be expressed as

F_Q(x_1, x_2, \ldots, x_n) = F_W(x_1, x_2, \ldots, x_n) = \sum_{j=1}^{n} \left[ Q\left(\frac{j}{n}\right) - Q\left(\frac{j-1}{n}\right) \right] y_j.    (4)

Thus, we can see that in the quantifier guided OWA aggregation, the quantifier Q makes a critical difference. Notwithstanding the above observation, the technique of quantifier guided aggregation can also be applied with other types of quantifier, such as RDM or RUM. Since the RIM quantifier is the basis of all kinds of relative quantifiers, all quantifiers used in this paper are assumed to be RIM.

In a quantifier guided OWA aggregation, the measure of attitudinal character of the aggregation, i.e., (1), can be rewritten as

AC(Q) = \sum_{j=1}^{n} \frac{n-j}{n-1} \left[ Q\left(\frac{j}{n}\right) - Q\left(\frac{j-1}{n}\right) \right].    (5)

Further algebraic manipulation of the formula leads to the following simple form:

AC(Q) = \frac{1}{n-1} \sum_{j=1}^{n-1} Q\left(\frac{j}{n}\right).    (6)

Obviously, if n \to +\infty, then the concept of attitudinal character can be associated directly with a quantifier Q. In this case, we let

\lambda_Q = \lim_{n \to \infty} AC(Q) = \int_0^1 Q(x)\,dx.    (7)
Yager [39] considered the extension of the quantifier guided OWA aggregation to the more general case in which the argument values have importance weights. Let A be a set of m alternatives, and let C be a set of n criteria with different importance weights. For \forall a \in A, let c_i(a) \in R be the value of the alternative a with respect to the ith criterion, y_j(a) be the jth largest of the c_i(a) (i = 1, 2, \ldots, n), and u_j(a) be the importance associated with the criterion that has the value y_j(a). The associated OWA weights can then be obtained by

w_j(a) = Q\left(\frac{S_j(a)}{T}\right) - Q\left(\frac{S_{j-1}(a)}{T}\right), \quad a \in A, \ j = 1, 2, \ldots, n    (8)

where S_j(a) = \sum_{k=1}^{j} u_k(a), and T = \sum_{k=1}^{n} u_k(a) is the total sum of importance weights. Thus, for each alternative a \in A, the quantifier guided OWA aggregation can be expressed as

F_Q(c_1(a), c_2(a), \ldots, c_n(a)) = F_W(c_1(a), c_2(a), \ldots, c_n(a)) = \sum_{j=1}^{n} \left[ Q\left(\frac{S_j(a)}{T}\right) - Q\left(\frac{S_{j-1}(a)}{T}\right) \right] y_j(a), \quad a \in A.    (9)

It should be noted that the weights obtained by (8) will generally be different for each alternative a. This is because the ordering of the c_i(a) (i = 1, 2, \ldots, n) will be different and in turn lead to different u_j(a) (j = 1, 2, \ldots, n). Furthermore, the degree of attitudinal character is also different for each alternative. In this case, for \forall a \in A we have

AC(Q, a) = \sum_{j=1}^{n} \frac{n-j}{n-1} w_j(a) = \sum_{j=1}^{n} \frac{n-j}{n-1} \left[ Q\left(\frac{S_j(a)}{T}\right) - Q\left(\frac{S_{j-1}(a)}{T}\right) \right].    (10)

Similarly, by some algebraic manipulation, we have the following simple form:

AC(Q, a) = \frac{1}{n-1} \sum_{j=1}^{n-1} Q\left(\frac{S_j(a)}{T}\right), \quad a \in A.    (11)

From the above observation, we can see the important role a quantifier plays in the quantifier guided aggregation. For a specific DM, unsuitable quantifiers may lead to counter-intuitive aggregation, as we mentioned before. In the following, we will explore a generation technique for a suitable quantifier closely related to the specified DM by using sample information. With the nature of this type of quantifier in mind, we refer to it as the SEVSI-induced quantifier.
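The importance-weighted scheme (8) and (9) differs from (3) and (4) only in that the quantifier is sampled at cumulative importance proportions S_j(a)/T rather than at j/n. A minimal sketch for a single alternative (names illustrative):

```python
# Importance-weighted quantifier-guided OWA, Eqs. (8)-(9), for one
# alternative: order criterion values descending, carry each value's
# importance along, and take quantifier increments over S_j / T.

def weighted_owa(quantifier, values, importances):
    """F_Q with importance weights: sum_j [Q(S_j/T) - Q(S_{j-1}/T)] * y_j."""
    # Pair values with their importances, then order by value, largest first.
    pairs = sorted(zip(values, importances), key=lambda p: p[0], reverse=True)
    T = sum(importances)                       # total importance
    total, S = 0.0, 0.0
    for y_j, u_j in pairs:
        S_prev, S = S, S + u_j                 # S_{j-1} and S_j
        total += (quantifier(S / T) - quantifier(S_prev / T)) * y_j
    return total

Q = lambda x: x ** 2
print(weighted_owa(Q, [0.9, 0.4, 0.7], [1.0, 2.0, 1.0]))
```

With equal importances the expression reduces to the plain quantifier-guided OWA of (4), since then S_j/T = j/n.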

III. SEVSI-Induced Quantifier

A. Interactive Procedure for SEVSI

Motivated by the Delphi method, we consider here a relatively simple interactive procedure, which we can use to extract from the specified DM information about his/her decision attitude or behavior characteristic. Participants can be divided into three groups. The first serves as organizers in charge of setting goals, making requests, and providing desired results for testing. The second group acts as analyzers in charge of collecting sample observations, testing results, and handling technical problems throughout the process. The last group is made up of the objects under test, namely, the specified DMs.

At first, the analyzer provides the specified DM with a collection of multiattribute sample information to be evaluated, with no prior indication of how the DM involved will respond. This set of sample information should be characterized by conciseness and meaningfulness, and represented in a decision matrix form. Without loss of generality, we consider the given sample information with m alternatives and n criteria, denoted by the decision matrix D = (d_{ij})_{m \times n}, where d_{ij} \in R. Allowing for cost criteria, the decision matrix D should be further normalized to \bar{D} = (\bar{d}_{ij})_{m \times n}, where

\bar{d}_{ij} = \begin{cases} d_{ij} \Big/ \sqrt{\sum_{i=1}^{m} d_{ij}^2}, & j \in J^+ \\ (1/d_{ij}) \Big/ \sqrt{\sum_{i=1}^{m} (1/d_{ij})^2}, & j \in J^- \end{cases} \qquad i = 1, 2, \ldots, m, \ j = 1, 2, \ldots, n.    (12)

Here, J^+ represents the set of benefit criteria, and J^- represents the set of cost ones. It is clear that \bar{d}_{ij} \in [0, 1] (i = 1, 2, \ldots, m; j = 1, 2, \ldots, n). For the normalized decision matrix \bar{D} = (\bar{d}_{ij})_{m \times n}, we rearrange the values of each criterion in descending order based on the idea of OWA aggregation. For convenience, the reordered normalized decision matrix is still denoted by \bar{D} = (\bar{d}_{ij})_{m \times n}.

Next, facing the given decision matrix \bar{D}, the specified DM is asked to provide his/her expected or satisfactory value for each criterion according to personal preferences. The set of subjective expected values of criteria this DM provides can be expressed as a vector \bar{\nu} = (\bar{v}_1, \bar{v}_2, \ldots, \bar{v}_n)^T, where \bar{v}_j \in [0, 1] (j = 1, 2, \ldots, n). The vector \bar{\nu}, as a collection of subjective expected values, in essence builds a strong relationship between the given sample information and the attitude or behavior the DM exhibits, and this attitude or behavior can demonstrate, from an ethological perspective, the mentality and personality of the DM involved in the decision making process. With this understanding, we calculate the weighted sum of squared deviations of \bar{d}_{ij} from \bar{v}_j for each criterion based on the idea of OWA aggregation, denoted by

c_j = \sum_{i=1}^{m} s_i^2 (\bar{d}_{ij} - \bar{v}_j)^2, \quad j = 1, 2, \ldots, n

where s_i \ge 0 (i = 1, 2, \ldots, m) and \sum_{i=1}^{m} s_i = 1.
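The data-preparation steps just described, normalization by (12), descending reordering of every criterion column, and the weighted squared deviations c_j, can be sketched as follows; the decision matrix, the expected values, and all names are illustrative.

```python
# Sample preparation for the SEVSI procedure: Eq. (12) vector
# normalization (reciprocals for cost criteria), per-column descending
# reorder, and the deviations c_j = sum_i s_i^2 (dbar_ij - vbar_j)^2.
import math

def normalize(D, cost_cols):
    """Eq. (12): benefit columns d_ij/sqrt(sum_i d_ij^2); cost columns use 1/d_ij."""
    m, n = len(D), len(D[0])
    Dbar = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [1.0 / D[i][j] if j in cost_cols else D[i][j] for i in range(m)]
        norm = math.sqrt(sum(x * x for x in col))
        ordered = sorted((x / norm for x in col), reverse=True)  # descending, as OWA requires
        for i in range(m):
            Dbar[i][j] = ordered[i]
    return Dbar

def deviations(Dbar, v, s):
    """c_j for each criterion j, given expected values v and weights s."""
    m, n = len(Dbar), len(Dbar[0])
    return [sum(s[i] ** 2 * (Dbar[i][j] - v[j]) ** 2 for i in range(m)) for j in range(n)]

D = [[7.0, 0.5], [5.0, 0.8], [6.0, 0.2]]       # 3 alternatives, 2 criteria; column 1 is a cost
Dbar = normalize(D, cost_cols={1})
print(deviations(Dbar, v=[0.6, 0.6], s=[1 / 3, 1 / 3, 1 / 3]))
```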

B. Attitudinal Information Extracted From SEVSI

In order to extract from the DM information about his/her decision attitude during the procedure, we take into account the following constrained nonlinear optimization model:

Minimize: Z = \sum_{j=1}^{n} c_j = \sum_{i=1}^{m} \sum_{j=1}^{n} s_i^2 (\bar{d}_{ij} - \bar{v}_j)^2
subject to: \sum_{i=1}^{m} s_i = 1, \quad s_i \ge 0 \text{ for all } i = 1, 2, \ldots, m.    (13)


In order to obtain the s_i (i = 1, 2, \ldots, m), let

L(s_1, s_2, \ldots, s_m, \rho) = \sum_{i=1}^{m} \sum_{j=1}^{n} s_i^2 (\bar{d}_{ij} - \bar{v}_j)^2 + \rho \left( \sum_{i=1}^{m} s_i - 1 \right)    (14)

be the Lagrange function of the constrained optimization model (13), where \rho \in R. The partial derivatives of L can then be calculated as

\frac{\partial L}{\partial \rho} = \sum_{i=1}^{m} s_i - 1 = 0
\frac{\partial L}{\partial s_i} = 2 s_i \sum_{j=1}^{n} (\bar{d}_{ij} - \bar{v}_j)^2 + \rho = 0, \quad i = 1, 2, \ldots, m.

From these results, we can easily get

\rho = -\frac{2}{\sum_{i=1}^{m} \left[ 1 \Big/ \sum_{j=1}^{n} (\bar{d}_{ij} - \bar{v}_j)^2 \right]}

s_i = \frac{1 \Big/ \sum_{j=1}^{n} (\bar{d}_{ij} - \bar{v}_j)^2}{\sum_{k=1}^{m} \left[ 1 \Big/ \sum_{j=1}^{n} (\bar{d}_{kj} - \bar{v}_j)^2 \right]}, \quad i = 1, 2, \ldots, m.    (15)
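The closed form (15) says that each s_i is proportional to the reciprocal of alternative i's total squared deviation from the expected values: rows of the sample matrix that the DM's expectations match closely receive large weight. A minimal sketch (names and data illustrative):

```python
# Eq. (15): s_i = (1/e_i) / sum_k (1/e_k), where
# e_i = sum_j (dbar_ij - vbar_j)^2 is row i's total squared deviation.

def attitudinal_weights(Dbar, v):
    """Attitudinal weighting vector S from Eq. (15)."""
    e = [sum((dij - vj) ** 2 for dij, vj in zip(row, v)) for row in Dbar]
    inv = [1.0 / ei for ei in e]               # closer rows get larger weight
    total = sum(inv)
    return [x / total for x in inv]

# Expectations near the top (larger-valued) row pull the weight upward,
# which Section III-B reads as an optimistic attitude.
Dbar = [[0.9, 0.8], [0.6, 0.5], [0.3, 0.1]]
s = attitudinal_weights(Dbar, v=[0.85, 0.75])
print(s)                                       # weight concentrates on the first position
```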

Collectively, we can denote the set of weights s_i (i = 1, 2, \ldots, m) as a vector S = (s_1, s_2, \ldots, s_m)^T, where the s_i are determined by (15).

Before starting a discussion on S = (s_1, s_2, \ldots, s_m)^T, we first review the component weights with respect to their positions in an OWA weighting vector. Clearly, the weights near the top of the vector are associated with the larger argument values, while those near the bottom are associated with the smaller-valued arguments; those in the middle are associated with the middle-valued arguments. Let us now look at the implications of the vector S = (s_1, s_2, \ldots, s_m)^T and the resulting operators to get a better understanding of the process. We note that the values of each criterion in \bar{D} = (\bar{d}_{ij})_{m \times n} have been arranged in descending order as required by the OWA aggregation. Allowing for the nature of the model (13), we can see that as the specified DM gives more preference to the larger-valued arguments, more of the total weight moves to the top positions. The vector W^* = (1, 0, \ldots, 0)^T is the extreme example of this, and the resulting operator is the maximum. On the other hand, by giving more preference to the smaller-valued arguments, more of the total weight moves to the bottom. The vector W_* = (0, 0, \ldots, 1)^T is the extreme example of this, and the resulting operator is the minimum. We also note that if the DM shows a definite preference for the middle-valued arguments between the large and small ones, then more of the total weight is uniformly distributed around the middle positions. In this case, the resulting operator can be regarded as a median type. Thus, we have good reason to believe that the distribution of the obtained vector S = (s_1, s_2, \ldots, s_m)^T, from the viewpoint of OWA aggregation, well reveals the decision attitude or behavior characteristic of the DM. In this sense, this piece of extracted information in an OWA weighting vector form can derive an appropriate operator that is associated directly with the DM under test.

The above testing procedure can be carried out under the control of the organizers for several rounds. In each round, a different collection of multiattribute sample information is given in a decision matrix form with the same number of criteria. We shall denote the sample information for the kth round as D^k = (d_{ij}^k)_{m \times n} (k = 1, 2, \ldots, l), where d_{ij}^k \in R. Similarly, the normalized and rearranged sample data are denoted by \bar{D}^k = (\bar{d}_{ij}^k)_{m \times n}, where \bar{d}_{ij}^k \in [0, 1], and the subjective expected value vectors this DM provides are denoted by \bar{\nu}^k = (\bar{v}_1^k, \bar{v}_2^k, \ldots, \bar{v}_n^k)^T, where \bar{v}_j^k \in [0, 1]. Applying the preceding solutions to these observed data, we can obtain the attitudinal weighting vector for the kth round, denoted by S^k = (s_1^k, s_2^k, \ldots, s_m^k)^T. It is worth noting that the distributions of the vectors S^k may be scattered in the first few rounds. Nevertheless, in subsequent rounds, with favorable information fed back and exchanged, significant weights will be increasingly concentrated on some fixed positions in S^k. In this case, some further steps need to be taken to obtain a fittest vector \hat{S} = (\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_m)^T that best matches the collection of vectors S^k (k = 1, 2, \ldots, l). For instance, we may use the least squares fit.
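As one concrete reading of the suggested least squares fit (an equal-weighting assumption on our part, not spelled out in the text): if every round counts equally, the vector \hat{S} minimizing \sum_k \|\hat{S} - S^k\|^2 is the componentwise mean of the S^k, which automatically remains a weighting vector because each S^k sums to one.

```python
# Least-squares fit of a single attitudinal vector S_hat to the round
# vectors S^k under equal round weights: the componentwise mean.
# The equal-weight reading is an illustrative assumption.

def fit_attitudinal_vector(rounds):
    """Componentwise mean of the per-round attitudinal weighting vectors."""
    l, m = len(rounds), len(rounds[0])
    return [sum(S[i] for S in rounds) / l for i in range(m)]

rounds = [[0.60, 0.30, 0.10],
          [0.55, 0.35, 0.10],
          [0.65, 0.25, 0.10]]
print(fit_attitudinal_vector(rounds))          # approximately [0.6, 0.3, 0.1]
```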

C. SEVSI-Induced Quantifier

Assume S = (s_1, s_2, \ldots, s_m)^T is an attitudinal weighting vector associated directly with the specified DM by the previous procedures. With the nature of this type of vector in mind, we now consider a more general case involving the generation of a quantifier Q targeted at the DM involved. According to Yager [33], [36], [39], we can assign some values to this quantifier, i.e.

Q\left(\frac{i}{m}\right) = \sum_{k=1}^{i} s_k, \quad i = 0, 1, 2, \ldots, m.    (16)

In this way, we only get some discrete values of the quantifier Q; we must provide for the values of Q(x) between these fixed points. We tackle this problem with piecewise linear interpolation as follows. For (i-1)/m \le x \le i/m, we have

Q(x) = \frac{x - \frac{i-1}{m}}{\frac{i}{m} - \frac{i-1}{m}} Q\left(\frac{i}{m}\right) + \frac{\frac{i}{m} - x}{\frac{i}{m} - \frac{i-1}{m}} Q\left(\frac{i-1}{m}\right)
= -(mx - i) Q\left(\frac{i-1}{m}\right) + (mx - i + 1) Q\left(\frac{i}{m}\right)
= -(mx - i) \left[ Q\left(\frac{i}{m}\right) - s_i \right] + (mx - i + 1) Q\left(\frac{i}{m}\right)
= (mx - i) s_i + Q\left(\frac{i}{m}\right).

From this result, we finally derive, from the given \bar{D} = (\bar{d}_{ij})_{m \times n} and \bar{\nu} = (\bar{v}_1, \bar{v}_2, \ldots, \bar{v}_n)^T, the SEVSI-induced quantifier Q_{\bar{D},\bar{\nu}} for this DM, which is the piecewise linear function

Q_{\bar{D},\bar{\nu}}(x) = (mx - i) s_i + \sum_{k=1}^{i} s_k, \quad \frac{i-1}{m} \le x \le \frac{i}{m}, \ i = 1, 2, \ldots, m    (17)


Here the s_i (i = 1, 2, \ldots, m) are determined by (15).

We first investigate the attitudinal character associated with Q_{\bar{D},\bar{\nu}}. Note that

\int_{(i-1)/m}^{i/m} Q_{\bar{D},\bar{\nu}}(x)\,dx = \int_{(i-1)/m}^{i/m} (mx - i)s_i\,dx + \frac{1}{m}\sum_{k=1}^{i} s_k
= s_i \left[ \frac{m}{2}x^2 - ix \right]_{(i-1)/m}^{i/m} + \frac{1}{m}\sum_{k=1}^{i} s_k
= \frac{1}{m}\left( \sum_{k=1}^{i} s_k - \frac{1}{2}s_i \right).

From this result, we further have

\int_0^1 Q_{\bar{D},\bar{\nu}}(x)\,dx = \sum_{i=1}^{m} \int_{(i-1)/m}^{i/m} Q_{\bar{D},\bar{\nu}}(x)\,dx = \frac{1}{m}\sum_{i=1}^{m}\left( \sum_{k=1}^{i} s_k - \frac{1}{2}s_i \right)
= \frac{1}{m}\left( \sum_{i=1}^{m} (m - i + 1)s_i - \frac{1}{2} \right) = 1 - \frac{1}{m}\sum_{i=1}^{m} i s_i + \frac{1}{2m}.

Thus, the attitudinal character of aggregations performed by this DM can be measured by

\lambda_{Q_{\bar{D},\bar{\nu}}} = AC(Q_{\bar{D},\bar{\nu}}) = 1 - \frac{1}{m}\sum_{i=1}^{m} i s_i + \frac{1}{2m}    (18)

where the s_i (i = 1, 2, \ldots, m) are determined by (15). Some special cases are worth pointing out. As the DM involved gives more preference to larger sample values with higher SEVSI, the attitudinal weighting vector S \to (1, 0, \ldots, 0)^T. In this case, we have \lambda_{Q_{\bar{D},\bar{\nu}}} = 1 - 1/(2m); when m \to +\infty, then \lambda_{Q_{\bar{D},\bar{\nu}}} = 1. This is clearly an optimistic attitude. On the other hand, by giving more preference to smaller sample values with lower SEVSI, we have S \to (0, 0, \ldots, 1)^T and \lambda_{Q_{\bar{D},\bar{\nu}}} = 1/(2m); when m \to +\infty, then \lambda_{Q_{\bar{D},\bar{\nu}}} = 0. This is clearly pessimistic. If the DM gives preference to sample values in the middle between the large and small ones, then the significant weights will be dispersed as evenly as possible around the middle positions in S. In this case, we have \lambda_{Q_{\bar{D},\bar{\nu}}} \approx 0.5, clearly a neutral attitude. When S = (1/m, 1/m, \ldots, 1/m)^T, we get \lambda_{Q_{\bar{D},\bar{\nu}}} = 0.5, complete neutrality. In particular, if the DM has a tendency to provide his/her SEVSI at random, the resulting vector S and value of \lambda_{Q_{\bar{D},\bar{\nu}}} will present some kind of overall evaluation of the performance of the DM; however, the quantifier Q_{\bar{D},\bar{\nu}} obtained in such a case is almost meaningless.

We now look at the consistency of OWA aggregation guided by Q_{\bar{D},\bar{\nu}}.

Theorem 1: Let Q be a SEVSI-induced quantifier derived from the attitudinal weighting vector S = (s_1, s_2, \ldots, s_m)^T, and let \lambda_Q \in [0, 1] be the attitudinal character of OWA aggregations guided by Q. If the quantifier \bar{Q} is derived from the reverse order of S, \bar{S} = (s_m, s_{m-1}, \ldots, s_1)^T, then \lambda_{\bar{Q}} = 1 - \lambda_Q.

Proof: From (18), it is clear that \lambda_Q = 1 - \frac{1}{m}\sum_{i=1}^{m} i s_i + \frac{1}{2m}. Given that \bar{S} = (s_m, s_{m-1}, \ldots, s_1)^T, we have

\lambda_{\bar{Q}} = 1 - \frac{1}{m}(s_m + 2s_{m-1} + 3s_{m-2} + \cdots + m s_1) + \frac{1}{2m}
= 1 - \frac{1}{m}\sum_{i=1}^{m} i s_{m-i+1} + \frac{1}{2m}
= 1 - \frac{1}{m}\sum_{i=1}^{m} (m - i + 1)s_i + \frac{1}{2m}.

Since \sum_{i=1}^{m} s_i = 1, this gives

\lambda_{\bar{Q}} = 1 - \frac{m+1}{m} + \frac{1}{m}\sum_{i=1}^{m} i s_i + \frac{1}{2m} = \frac{1}{m}\sum_{i=1}^{m} i s_i - \frac{1}{2m} = 1 - \lambda_Q. ∎

Theorem 2: Let Q, Q' be two SEVSI-induced quantifiers derived from the attitudinal weighting vectors S = (s_1, s_2, \ldots, s_m)^T and S' = (s'_1, s'_2, \ldots, s'_m)^T from the same collection of sample data, respectively. For S, S', let r_i = \sum_{k=1}^{i} s_k and r'_i = \sum_{k=1}^{i} s'_k, i = 1, 2, \ldots, m. If r_i \ge r'_i for any i = 1, 2, \ldots, m, then 1) F_Q(X) \ge F_{Q'}(X) for any argument vector X = (x_1, x_2, \ldots, x_n)^T; and 2) \lambda_Q \ge \lambda_{Q'}.

Proof: 1) For X = (x_1, x_2, \ldots, x_n)^T, we suppose that x_1 \ge x_2 \ge \cdots \ge x_n. From (4), and using Q(0) = 0 and Q(1) = 1, we have

F_Q(X) = \sum_{j=1}^{n} w_j x_j = \sum_{j=1}^{n} \left[ Q\left(\frac{j}{n}\right) - Q\left(\frac{j-1}{n}\right) \right] x_j = \sum_{j=1}^{n-1} Q\left(\frac{j}{n}\right)(x_j - x_{j+1}) + x_n.

Similarly, we get

F_{Q'}(X) = \sum_{j=1}^{n-1} Q'\left(\frac{j}{n}\right)(x_j - x_{j+1}) + x_n.

We then calculate

F_Q(X) - F_{Q'}(X) = \sum_{j=1}^{n-1} \left[ Q\left(\frac{j}{n}\right) - Q'\left(\frac{j}{n}\right) \right](x_j - x_{j+1})    (19)

where, for (i-1)/m \le j/n \le i/m (i = 1, 2, \ldots, m; j = 1, 2, \ldots, n),

Q\left(\frac{j}{n}\right) = \left( m \cdot \frac{j}{n} - i \right)s_i + \sum_{k=1}^{i} s_k, \qquad Q'\left(\frac{j}{n}\right) = \left( m \cdot \frac{j}{n} - i \right)s'_i + \sum_{k=1}^{i} s'_k.

Before we check the sign of (19), let us focus on the following calculation first:

Q\left(\frac{j}{n}\right) - Q'\left(\frac{j}{n}\right) = \left( m \cdot \frac{j}{n} - i \right)(s_i - s'_i) + \sum_{k=1}^{i} (s_k - s'_k)
= \left( 1 + m \cdot \frac{j}{n} - i \right)(s_i - s'_i) + \sum_{k=1}^{i-1} (s_k - s'_k).    (20)


Given the condition r_i ≥ r′_i for any i = 1, 2, . . ., m, it is known for sure that r_{i−1} ≥ r′_{i−1}, namely, Σ_{k=1}^{i−1} (s_k − s′_k) ≥ 0. We also note that j/n ≤ i/m ⇔ m·j/n − i ≤ 0, and (i−1)/m ≤ j/n ⇔ 1 + m·j/n − i ≥ 0; hence, 0 ≤ 1 + m·j/n − i ≤ 1. When s_i ≥ s′_i, then Q(j/n) − Q′(j/n) ≥ 0 follows directly from (20). On the other hand, when s_i ≤ s′_i, the fact that Σ_{k=1}^{i} s_k ≥ Σ_{k=1}^{i} s′_k implies Σ_{k=1}^{i−1} (s_k − s′_k) ≥ −(s_i − s′_i) for any i = 1, 2, . . ., m, and then

Q(j/n) − Q′(j/n) ≥ (1 + m·j/n − i)(s_i − s′_i) − (s_i − s′_i) = (m·j/n − i)(s_i − s′_i) ≥ 0.

That is to say, whether s_i ≥ s′_i or s_i ≤ s′_i, Q(j/n) − Q′(j/n) ≥ 0 can always hold.

Let us return now to (19). Given that Q(j/n) − Q′(j/n) ≥ 0 and x_j ≥ x_{j+1}, it is certain that F_Q(X) ≥ F_{Q′}(X).

2) Since Σ_{k=1}^{i} s_k ≥ Σ_{k=1}^{i} s′_k (i = 1, 2, . . ., m) and Σ_{k=1}^{m} s_k = Σ_{k=1}^{m} s′_k = 1, then

s_1 ≥ s′_1
s_1 + s_2 ≥ s′_1 + s′_2
s_1 + s_2 + s_3 ≥ s′_1 + s′_2 + s′_3
. . .
s_1 + s_2 + · · · + s_m ≥ s′_1 + s′_2 + · · · + s′_m.

Accumulating the above inequalities, we get

m·s_1 + (m − 1)·s_2 + · · · + s_m ≥ m·s′_1 + (m − 1)·s′_2 + · · · + s′_m

i.e., Σ_{i=1}^{m} (m − i + 1)·s_i ≥ Σ_{i=1}^{m} (m − i + 1)·s′_i. After algebraic manipulation of this formula, using Σ_{i=1}^{m} s_i = Σ_{i=1}^{m} s′_i = 1, we get

Σ_{i=1}^{m} i·s_i ≤ Σ_{i=1}^{m} i·s′_i.

According to (18), it is certain that λ_Q ≥ λ_{Q′}. □

Theorem 3: Let S = (s_1, s_2, . . ., s_m)^T and S′ = (s′_1, s′_2, . . ., s′_m)^T be two attitudinal weighting vectors determined by (15) from the same collection of sample data. For S, S′, let r_i = Σ_{k=1}^{i} s_k, r′_i = Σ_{k=1}^{i} s′_k, i = 1, 2, . . ., m. If s_k·s′_l ≥ s′_k·s_l for any l ≥ k (k, l = 1, 2, . . ., m), then r_i ≥ r′_i for any i = 1, 2, . . ., m.
Proof: Similar to [24, Th. 2]. Omitted. □

Theorem 4: Let S = (s_1, s_2, . . ., s_m)^T and S′ = (s′_1, s′_2, . . ., s′_m)^T be two attitudinal weighting vectors determined by (15) from the same collection of sample data. For S, S′, let r_i = Σ_{k=1}^{i} s_k, r′_i = Σ_{k=1}^{i} s′_k, i = 1, 2, . . ., m. If s_j − s_{j+1} ≥ s′_j − s′_{j+1} for any j = 1, 2, . . ., m − 1, then r_i ≥ r′_i for any i = 1, 2, . . ., m.
Proof: Similar to [24, Th. 3]. Omitted. □

Theorem 5: Let Q, Q′ be two SEVSI-induced quantifiers. For any argument vector X = (x_1, x_2, . . ., x_n)^T, F_Q(X) ≥ F_{Q′}(X) and λ_Q ≥ λ_{Q′} if and only if Q(x) ≥ Q′(x) for any x ∈ [0, 1].
Proof: Similar to [24, Th. 4]. Omitted. □
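To make the construction concrete, the piecewise-linear quantifier and the quantifier guided aggregation used in the theorems above can be sketched numerically. This is an illustrative sketch, not code from the paper: the function names are ours, the quantifier form Q(x) = (m·x − i)·s_i + Σ_{k=1}^{i} s_k on [(i−1)/m, i/m] is the one used in the proof of Theorem 2, and the OWA weights are taken as w_j = Q(j/n) − Q((j−1)/n), as in standard quantifier guided aggregation.

```python
# Sketch: SEVSI-induced quantifier from an attitudinal weighting vector S,
# and the quantifier guided OWA aggregation F_Q. Names are illustrative.

def make_quantifier(s):
    """Piecewise-linear Q with breakpoints at i/m and Q(i/m) = s_1 + ... + s_i."""
    m = len(s)
    def Q(x):
        if x <= 0.0:
            return 0.0
        if x >= 1.0:
            return 1.0
        i = min(int(x * m) + 1, m)        # segment index with (i-1)/m <= x <= i/m
        r_i = sum(s[:i])                  # cumulative weight r_i
        return (m * x - i) * s[i - 1] + r_i
    return Q

def owa_aggregate(Q, xs):
    """F_Q(X) = sum_j [Q(j/n) - Q((j-1)/n)] * x_(j), with x sorted descending."""
    xs = sorted(xs, reverse=True)
    n = len(xs)
    w = [Q(j / n) - Q((j - 1) / n) for j in range(1, n + 1)]
    return sum(wj * xj for wj, xj in zip(w, xs))

# Neutral attitudinal vector from Example 1 (values as printed in the paper)
S = [0.058, 0.244, 0.467, 0.138, 0.093]
Q = make_quantifier(S)
F = owa_aggregate(Q, [0.691, 0.713, 0.756])   # F ≈ 0.718, as reported for a1
```

Applying the same two helpers with the pessimistic vector of Example 1 gives a smaller aggregated value on the same arguments, a numerical instance of the consistency stated in Theorems 2 and 5.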

Obviously, the SEVSI-induced quantifier is totally derived from the behavior of the DM involved and is thus inherently characterized by his/her own attitudinal character. Moreover, this type of quantifier has good properties and ensures the consistency of the resulting OWA aggregations, as discussed and proved above. We believe that such a quantifier is vital to guarantee, in theory as well as in application, high efficiency in the quantifier guided OWA aggregation, thus bringing about more intuitively appealing and convincing results. It can also help predict, with good reason, the consequences of future decisions for the DM involved.

IV. Approach to Fuzzy MADM With Decision Attitude

We now summarize the steps of the generation technique for the developed quantifier and provide an approach to fuzzy MADM with decision attitude as follows.

Step 1. Select an appropriate quantifier for the specified DM, then go to Step 6. If there is no available quantifier for this DM, we may generate a specific one for him from sample data as follows.

Step 2. Prepare a collection of multiattribute sample information in a decision matrix form, denoted by D = (d_ij)_{m×n}. Use (12) to normalize the given sample data and then rearrange the values of each criterion in descending order, denoted by D̄ = (d̄_ij)_{m×n}, where d̄_ij ∈ [0, 1].

Step 3. According to D̄ = (d̄_ij)_{m×n}, ask the DM to provide his expected or satisfactory value for each criterion with personal preferences. The set of subjective expected values is denoted by a vector ν̄ = (v̄_1, v̄_2, . . ., v̄_n)^T, where v̄_j ∈ [0, 1] (j = 1, 2, . . ., n).

Step 4. With the observed data D̄ = (d̄_ij)_{m×n} and ν̄ = (v̄_1, v̄_2, . . ., v̄_n)^T, construct the constrained nonlinear optimization model (13), and then use (15) to determine the attitudinal weighting vector S = (s_1, s_2, . . ., s_m)^T, the distribution of which reflects the decision attitude or behavior characteristic of the DM.

Step 4′. If necessary, Steps 2–4 can be implemented for several rounds, until the distributions of the obtained attitudinal weighting vectors show a steady tendency toward some fixed positions. In each round, a different collection of multiattribute sample information is given in a decision matrix form with the same number of criteria. For the set of different attitudinal weighting vectors obtained over the rounds, we may use the least squares fit to get the vector that best matches this set.

Step 5. With the obtained S = (s_1, s_2, . . ., s_m)^T, use (17) to generate a specific quantifier Q_{D̄,ν̄} for this DM. If necessary, use (18) to measure the attitudinal character of aggregations performed by this DM. Select this quantifier to guide the OWA aggregation for this DM in his future decisions.

Step 6. Finally, with the selected quantifier, use (4) to implement the quantifier guided aggregation. In those general cases in which the criteria are associated with different importance weights, (9) should be applied.

V. Illustrative Examples

In this section, we utilize two numerical examples (adapted from [18]) to illustrate and examine the developed quantifier.

Example 1: Quantifier generation.

Step 1. Assume there is no available quantifier for the DM involved. We decide to generate a specific one for him by our developed method.

Step 2. Prepare a collection of multiattribute sample information involving the assessments of five prototype missiles, the specifications of which are shown in Table I. Obviously, the attribute Price (C4) is a cost criterion and the others are benefit criteria.

TABLE I: Specifications of Alternative Prototype Missiles

For exact values in Table I, we use (12) to normalize them; for intervals d̃_ij = [d̃_ij^L, d̃_ij^U], we denote their normalized values as d̄_ij = [d̄_ij^L, d̄_ij^U], where

d̄_ij^L = d̃_ij^L / [Σ_{k=1}^{5} (d̃_kj^U)²]^{1/2}          if j ∈ J⁺
d̄_ij^L = (1/d̃_ij^U) / [Σ_{k=1}^{5} (1/d̃_kj^L)²]^{1/2}    if j ∈ J⁻

d̄_ij^U = d̃_ij^U / [Σ_{k=1}^{5} (d̃_kj^L)²]^{1/2}          if j ∈ J⁺
d̄_ij^U = (1/d̃_ij^L) / [Σ_{k=1}^{5} (1/d̃_kj^U)²]^{1/2}    if j ∈ J⁻

for i = 1, 2, . . ., 5 and j = 1, 2, 3, 4. Here, J⁺ represents the set of benefit criteria, and J⁻ represents the set of cost criteria. It is clear that d̄_ij ⊆ [0, 1] (i = 1, 2, . . ., 5; j = 1, 2, 3, 4). We take here the inferior limits of the normalized intervals for criteria C3 and C4. After rearranging the values of each criterion in descending order, we get

D̄ = (d̄_ij)_{5×4} =
  [ 0.574  0.494  0.511  0.397
    0.478  0.460  0.465  0.376
    0.421  0.443  0.325  0.376
    0.382  0.426  0.279  0.362
    0.344  0.409  0.279  0.345 ].

Step 3. According to D̄ = (d̄_ij)_{5×4}, ask the DM to provide his expected or satisfactory value for each criterion with personal preferences. Assume the DM provides the set of expected values ν̄ = (0.45, 0.45, 0.38, 0.35)^T. Comparing D̄ with ν̄, we can see that this DM shows a preference for sample values in the middle between the large and small ones. This is clearly a neutral attitude.

Step 4. With the observed data D̄ and ν̄, construct the constrained nonlinear optimization model (13), and then use (15) to obtain the attitudinal weighting vector

S = (0.058, 0.244, 0.467, 0.138, 0.093)^T.

We can clearly see that the significant weights are evenly dispersed around the middle positions in the vector S, implying that this DM is characterized by a neutral attitude.

Step 5. With the obtained vector S, use (17) to generate the specific quantifier Q_{D̄,ν̄} for this DM, shown as

Q_{D̄,ν̄}(x) =
  0.2922x             0.0 ≤ x ≤ 0.2
  1.2202x − 0.1856    0.2 ≤ x ≤ 0.4
  2.3349x − 0.6315    0.4 ≤ x ≤ 0.6
  0.6896x + 0.3557    0.6 ≤ x ≤ 0.8
  0.4632x + 0.5368    0.8 ≤ x ≤ 1.0.

Furthermore, we may use (18) to measure the attitudinal character of aggregations performed by this DM, that is, λ_{Q_{D̄,ν̄}} = 0.508, which confirms the aforementioned conclusion concerning decision attitude. We will select this quantifier to guide the OWA aggregation for this DM in his future decisions. □

Let us look at some different SEVSI the DM provides for the same sample data D̄ = (d̄_ij)_{5×4}. Assume the DM provides his set of expected values as ν̄ = (0.35, 0.40, 0.30, 0.35)^T, which shows a preference for lower sample values. With this vector, we then have

S′ = (0.004, 0.009, 0.053, 0.190, 0.744)^T.

Clearly, the distribution of S′ presents a one-sided tendency toward the bottom, implying that this DM is characterized by a pessimistic attitude. In this case, the specific quantifier Q′_{D̄,ν̄} is represented as

Q′_{D̄,ν̄}(x) =
  0.0205x             0.0 ≤ x ≤ 0.2
  0.0453x − 0.0050    0.2 ≤ x ≤ 0.4
  0.2648x − 0.0928    0.4 ≤ x ≤ 0.6
  0.9492x − 0.5034    0.6 ≤ x ≤ 0.8
  3.7202x − 2.7202    0.8 ≤ x ≤ 1.0.

The attitudinal character of aggregations performed by this DM can then be measured as λ_{Q′_{D̄,ν̄}} = 0.168, which supports the conclusion concerning decision attitude.
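As a check on the two quantifiers above, the attitudinal character of a RIM quantifier equals the area under its curve, λ_Q = ∫₀¹ Q(x) dx, which a trapezoidal sum over the breakpoints computes exactly for piecewise-linear pieces. The sketch below uses our own helper name, with breakpoint values read off the printed segments:

```python
# Sketch: attitudinal character of a piecewise-linear RIM quantifier as the
# area under its curve (trapezoid rule is exact on linear segments).

def attitudinal_character(breakpoints):
    """breakpoints: list of (x, Q(x)) pairs covering [0, 1], sorted by x."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0   # exact trapezoid per segment
    return area

# Breakpoint values of the neutral quantifier Q_{D̄,ν̄} printed above
neutral = [(0.0, 0.0), (0.2, 0.0584), (0.4, 0.3025),
           (0.6, 0.7694), (0.8, 0.9074), (1.0, 1.0)]
# ... and of the pessimistic quantifier Q'_{D̄,ν̄}
pessimistic = [(0.0, 0.0), (0.2, 0.0041), (0.4, 0.0131),
               (0.6, 0.0661), (0.8, 0.2560), (1.0, 1.0)]

print(round(attitudinal_character(neutral), 3))      # ≈ 0.508
print(round(attitudinal_character(pessimistic), 3))  # ≈ 0.168
```

Both values agree with the reported λ_{Q_{D̄,ν̄}} = 0.508 and λ_{Q′_{D̄,ν̄}} = 0.168.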


Alternatively, if the DM provides his set of expected values as ν̄ = (0.55, 0.49, 0.50, 0.38)^T, which indicates a preference for higher sample values, then we have

S″ = (0.848, 0.116, 0.017, 0.010, 0.009)^T.

Clearly, the distribution of S″ this time presents a one-sided tendency toward the top, implying that this DM is characterized by an optimistic attitude. The specific quantifier Q″_{D̄,ν̄} is then represented as

Q″_{D̄,ν̄}(x) =
  4.2393x             0.0 ≤ x ≤ 0.2
  0.5799x + 0.7319    0.2 ≤ x ≤ 0.4
  0.0858x + 0.9295    0.4 ≤ x ≤ 0.6
  0.0521x + 0.9497    0.6 ≤ x ≤ 0.8
  0.0429x + 0.9571    0.8 ≤ x ≤ 1.0.

The attitudinal character of aggregations performed by this DM can then be measured as λ_{Q″_{D̄,ν̄}} = 0.857, which supports, again, the conclusion concerning decision attitude. The geometrical representation of the exemplified quantifiers is shown in Fig. 1, in an effort to make our SEVSI-induced quantifiers well understood.

Fig. 1. Geometrical representation of SEVSI-induced quantifiers exemplified.

Example 2: Quantifier examination. Assume there is a normalized multiattribute decision matrix with four alternatives a_i (i = 1, 2, 3, 4) and three benefit criteria, shown as

P = (p_ij)_{4×3} =
  [ 0.691  0.713  0.756
    0.739  0.730  0.279
    0.672  0.705  0.733
    0.711  0.722  0.325 ].

We let here

X_{a1} = (0.691, 0.713, 0.756)^T,  X_{a2} = (0.739, 0.730, 0.279)^T
X_{a3} = (0.672, 0.705, 0.733)^T,  X_{a4} = (0.711, 0.722, 0.325)^T.

To make a preliminary evaluation of a_i (i = 1, 2, 3, 4), we select the quantifier Q_{D̄,ν̄} derived in Example 1 to guide the OWA aggregation. We first use (3) to calculate the OWA weighting vector, denoted by W = (0.221, 0.594, 0.185)^T, which is clearly an indication of preference for argument values in the middle between the large and small ones. Equation (4) can then be used to implement the aggregation for each alternative, that is,

F_{Q_{D̄,ν̄}}(X_{a1}) = 0.718,  F_{Q_{D̄,ν̄}}(X_{a2}) = 0.649
F_{Q_{D̄,ν̄}}(X_{a3}) = 0.705,  F_{Q_{D̄,ν̄}}(X_{a4}) = 0.642.

Thus, we have the ranking of alternatives guided by Q_{D̄,ν̄} (a reflection of a neutral attitude with λ_{Q_{D̄,ν̄}} = 0.508), that is, a1 ≻ a3 ≻ a2 ≻ a4.
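The attitude-dependent rankings of Example 2 can also be double-checked directly from OWA weighting vectors. The sketch below is ours; it uses the weight vectors printed in the text for the pessimistic and optimistic quantifiers (rounded to three decimals, so aggregated values may deviate in the third decimal place):

```python
# Sketch: re-checking the attitude-dependent rankings of Example 2 with the
# printed OWA weighting vectors of the pessimistic and optimistic quantifiers.

P = {"a1": [0.691, 0.713, 0.756], "a2": [0.739, 0.730, 0.279],
     "a3": [0.672, 0.705, 0.733], "a4": [0.711, 0.722, 0.325]}

def owa(w, xs):
    """OWA value: weights applied to arguments sorted in descending order."""
    return sum(wj * xj for wj, xj in zip(w, sorted(xs, reverse=True)))

def ranking(w):
    """Alternatives ordered from best to worst under the given weights."""
    return sorted(P, key=lambda a: owa(w, P[a]), reverse=True)

W_pess = [0.010, 0.119, 0.871]   # from Q'_{D̄,ν̄}, lambda ≈ 0.168
W_opt  = [0.925, 0.059, 0.016]   # from Q''_{D̄,ν̄}, lambda ≈ 0.857

print(ranking(W_pess))  # ['a1', 'a3', 'a4', 'a2']
print(ranking(W_opt))   # ['a1', 'a2', 'a3', 'a4']
```

Both rankings match those reported in the text for the pessimistic and optimistic cases.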

Let us look at other cases involving

different quantifiers for the same decision matrix P = (p_ij)_{4×3}. When selecting Q′_{D̄,ν̄}, we then have the corresponding OWA weighting vector W′ = (0.010, 0.119, 0.871)^T, which is clearly an indication of preference for lower argument values. The aggregation for each alternative will be

F_{Q′_{D̄,ν̄}}(X_{a1}) = 0.694,  F_{Q′_{D̄,ν̄}}(X_{a2}) = 0.337
F_{Q′_{D̄,ν̄}}(X_{a3}) = 0.677,  F_{Q′_{D̄,ν̄}}(X_{a4}) = 0.375.

Thus, we have the ranking of alternatives guided by Q′_{D̄,ν̄} (a reflection of a pessimistic attitude with λ_{Q′_{D̄,ν̄}} = 0.168), that is, a1 ≻ a3 ≻ a4 ≻ a2.

Alternatively, when selecting Q″_{D̄,ν̄}, we have the corresponding OWA weighting vector W″ = (0.925, 0.059, 0.016)^T, which is clearly an indication of preference for higher argument values. In this case, the aggregation for each alternative will be

F_{Q″_{D̄,ν̄}}(X_{a1}) = 0.752,  F_{Q″_{D̄,ν̄}}(X_{a2}) = 0.731
F_{Q″_{D̄,ν̄}}(X_{a3}) = 0.730,  F_{Q″_{D̄,ν̄}}(X_{a4}) = 0.715.

Thus, we have the ranking of alternatives guided by Q″_{D̄,ν̄} (a reflection of an optimistic attitude with λ_{Q″_{D̄,ν̄}} = 0.857), that is, a1 ≻ a2 ≻ a3 ≻ a4.

Let us look at some aggregations guided by another type of quantifier to make a direct comparison. For convenience, we consider here the parameterized family of RIM quantifiers Q(x) = x^t (t ≥ 0) introduced in [39]. Details of the different aggregations guided by them are shown in Table II, and the geometrical representation of the quantifiers with different parameters is shown in Fig. 2. From Table II and Fig. 2, we clearly see that the quantifier with t_1 = 2 is a reflection of a pessimistic attitude, with exactly the same ranking of alternatives as ours guided

by Q′_{D̄,ν̄}. The quantifier with t_2 = 1 is a reflection of a completely neutral attitude, and the resulting aggregation is the simple average. Those quantifiers with t < 1 are definitely an indication of an optimistic attitude: the closer t is to zero, the more optimistic the aggregation to be performed. We also note that the ranking of alternatives under t_4 = 0.07 is different from that under t_5 = 0.02, the latter being exactly the same as ours guided by Q″_{D̄,ν̄}, though both parameterized quantifiers are definitely a reflection of an extremely optimistic attitude. For attitudes on this level, we do not believe there is a significant difference between these two parameterized quantifiers that should lead to different rankings. By contrast, our developed quantifiers are oriented toward the specified DMs with proper consideration of their decision attitudes or behavior characteristics, thus bringing about more intuitively appealing and convincing results.

TABLE II: Quantifier Guided OWA Aggregations With Different Parameters

Fig. 2. Geometrical representation of parameterized quantifiers exemplified.

As mentioned at the beginning of this paper, all quantifiers used in this paper are assumed to be RIM. The use of a RIM quantifier to guide the aggregation essentially implies that the more criteria are satisfied, the better the solution. In this sense, our developed quantifiers may represent the fuzzy linguistic quantifier "most of," though their geometrical representations (shown in Fig. 1) do not look as smooth as those of the parameterized quantifiers (shown in Fig. 2). In fact, in view of the diverse personalities of people in the real world, it is quite difficult to find a DM whose decision attitude exactly matches a smooth curve in Fig. 2. Because of this, the parameterized quantifiers seem to be over-idealized and therefore may suffer from some limitations in real-world applications, mainly concerning the treatment of unanticipated attitudes. By contrast, we develop a repeatable interactive procedure with which we can extract from DMs information about their decision attitudes or behavior characteristics in an OWA weighting vector form. Thus, our resulting quantifiers can be used more efficiently and flexibly to handle the complex case in which DMs are allowed to have unusual, even extreme, decision attitudes.

VI. Conclusion

In view of some issues arising from the quantifier guided aggregation using OWA operators, we consider in this paper a generation technique for an appropriate quantifier closely associated with the specified DM by using sample information. We call it the SEVSI-induced quantifier. We first develop a relatively simple interactive procedure in which the DM involved is asked to provide his/her expected values, with personal preferences, for a collection of given sample information. With these sample values and expected values, we build nonlinear optimization models to extract from the DM information about his/her decision attitude or behavior characteristic in an OWA weighting vector form. This procedure can be carried out for several rounds, until the distributions of the obtained attitudinal vectors show a steady tendency toward some fixed positions. After that, with these attitudinal vectors we suggest a suitable quantifier just for this DM by means of piecewise linear interpolation. The developed quantifier is totally derived from the behavior of the DM involved and is thus inherently characterized by his/her own attitudinal character. We show some properties of the developed quantifier and prove the consistency of OWA aggregation guided by this type of quantifier. Experimental studies show that most existing parameterized quantifiers seem to be over-idealized and therefore may suffer from some limitations in real-world applications, mainly concerning the treatment of unanticipated attitudes. Our developed quantifier, as an alternative, can be used more efficiently and flexibly to handle the complex case in which DMs are allowed to have unusual, even extreme, decision attitudes. Therefore, we believe that such a quantifier is vital to guarantee high efficiency in the quantifier guided OWA aggregation, thus bringing about more intuitively appealing and convincing results. It can also help predict, with good reason, the consequences of future decisions for the DM involved.


Acknowledgment This work was partially supported by the Young Research Foundation of Social Science of the Ministry of Education of China under Grant 10YJC630063, the Ph.D. Research Startup Foundation of Liaoning Province under Grant 20121030, and the Ph.D. Research Startup Foundation of Liaoning University. The author would like to thank the Editor-in-Chief, Prof. E. Santos, Jr., the Associate Editor, and the three anonymous reviewers for their constructive comments and suggestions, which have greatly improved the presentation of this paper.

References [1] B. S. Ahn, “Parameterized OWA operator weights: An extreme point approach,” Int. J. Approx. Reason., vol. 51, no. 7, pp. 820–831, 2010. [2] B. S. Ahn, “Preference relation approach for obtaining OWA operators weights,” Int. J. Approx. Reason., vol. 47, no. 2, pp. 166–178, 2008. [3] B. S. Ahn, “Some quantifier functions from weighting functions with constant value of orness,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 2, pp. 540–546, Apr. 2008. [4] B. S. Ahn and H. Park, “Least-squared ordered weighted averaging operator weights,” Int. J. Intell. Syst., vol. 23, no. 1, pp. 33–49, 2008. [5] R. E. Bellman and L. A. Zadeh, “Decision-making in a fuzzy environment,” Manag. Sci., vol. 17, no. 4, pp. 141–164, 1970. [6] Q. W. Cao and J. Wu, “The extended COWG operators and their application to multiple attributive group decision making problems with interval numbers,” Appl. Math. Model., vol. 35, no. 5, pp. 2075–2086, 2011. [7] S. M. Chen and S. J. Niou, “Fuzzy multiple attributes group decisionmaking based on fuzzy induced OWA operators,” Expert Syst. Applicat., vol. 38, no. 4, pp. 4097–4108, 2011. [8] H. Y. Chen and L. G Zhou, “An approach to group decision making with interval fuzzy preference relations based on induced generalized continuous ordered weighted averaging operator,” Expert Syst. Applicat., vol. 38, no. 10, pp. 13432–13440, 2011. [9] F. Chiclana, E. Herrera-Viedma, F. Herrera, and S. Alonso, “Induced ordered weighted geometric operators and their use in the aggregation of multiplicative preference relations,” Int. J. Intell. Syst., vol. 19, no. 3, pp. 233–255, 2004. [10] F. Chiclana, E. Herrera-Viedma, F. Herrera, and S. Alonso, “Some induced ordered weighted averaging operators and their use for solving group decision-making problems based on fuzzy preference relations,” Eur. J. Oper. Res., vol. 182, no. 1, pp. 383–399, 2007. [11] A. Emrouznejad and G. R. 
Amin, “Improving minimax disparity model to determine the OWA operator weights,” Inf. Sci., vol. 180, no. 8, pp. 1477–1485, 2010. [12] D. Filev and R. R. Yager, “Analytic properties of maximal entropy OWA operators,” Inf. Sci., vol. 85, nos. 1–3, pp. 11–27, 1995. [13] D. Filev and R. R. Yager, “On the issue of obtaining OWA operator weights,” Fuzzy Sets Syst., vol. 94, no. 2, pp. 157–169, 1998. [14] R. Full´er, “On obtaining OWA operator weights: A short survey of recent developments,” in Proc. 5th IEEE Int. Conf. Comput. Cybern., Oct. 2007, pp. 241–244. [15] R. Full´er and P. Majlender, “An analytic approach for obtaining maximal entropy OWA operator weights,” Fuzzy Sets Syst., vol. 124, no. 1, pp. 53–57, 2001. [16] K. H. Guo, “Amount of information and attitudinal based method for ranking Atanassov’s intuitionistic fuzzy values,” IEEE Trans. Fuzzy Syst., vol. 21, no. 6, pp. 1–12, 2013. [17] K. H. Guo and W. L. Li, “A C-OWA operator-based method for aggregating intuitionistic fuzzy information and its application to decision making under uncertainty,” Int. J. Dig. Content Technol. Applicat., vol. 4, no. 7, pp. 140–147, 2010. [18] K. H. Guo and W. L. Li, “An attitudinal-based method for constructing intuitionistic fuzzy information in hybrid MADM under uncertainty,” Inf. Sci., vol. 208, pp. 28–38, Nov. 2012. [19] E. Herrera-Viedma, S. Alonso, F. Chiclana, and F. Herrera, “A consensus model for group decision making with incomplete fuzzy preference relations,” IEEE Trans. Fuzzy Syst., vol. 15, no. 5, pp. 863–877, 2007. [20] E. Herrera-Viedma, F. Chiclana, F. Herrera, and S. Alonso, “Group decision-making model with incomplete fuzzy preference relations based on additive consistency,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 1, pp. 176–189, Feb. 2007.

[21] X. W. Liu, “A general model of parameterized OWA aggregation with given orness level,” Int. J. Approx. Reason., vol. 48, no. 2, pp. 598–627, 2008. [22] X. W. Liu, “On the properties of equidifferent RIM quantifier with generating function,” Int. J. General Syst., vol. 34, no. 5, pp. 579–594, 2005. [23] X. W. Liu, “Some properties of the weighted OWA operator,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 1, pp. 118–127, Feb. 2006. [24] X. W. Liu and S. L. Han, “Orness and parameterized RIM quantifier aggregation with OWA operators: A summary,” Int. J. Approx. Reason., vol. 48, no. 1, pp. 77–97, 2008. [25] J. M. Merig´o, “A unified model between the weighted average and the induced OWA operator,” Expert Syst. Applicat., vol. 38, no. 9, pp. 11560–11572, 2011. [26] J. M. Merig´o, “Probabilities in the OWA operator,” Expert Syst. Applicat., vol. 39, no. 13, pp. 11456–11467, 2012. [27] M. O’Hagan, “Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic,” in Proc. 22nd Annu. IEEE Asilomar Conf. Signals, Syst. Comput., 1988, pp. 681–689. [28] Y. M. Wang and K. S. Chin, “The use of OWA operator weights for cross-efficiency aggregation,” Omega, vol. 39, no. 5, pp. 493–503, 2011. [29] Y. M. Wang and C. Parkan, “A minimax disparity approach for obtaining OWA operator weights,” Inf. Sci., vol. 175, nos. 1–2, pp. 20–29, 2005. [30] Y. M. Wang and C. Parkan, “A preemptive goal programming method for aggregating OWA operator weights in group decision making,” Inf. Sci., vol. 177, no. 8, pp. 1867–1877, 2007. [31] Z. S. Xu, “An overview of methods for determining OWA weights,” Int. J. Intell. Syst., vol. 20, no. 8, pp. 843–865, 2005. [32] Z. S. Xu and Q. L. Da, “The ordered weighted geometric averaging operators,” Int. J. Intell. Syst., vol. 17, no. 7, pp. 709–716, 2002. [33] R. R. Yager, “Families of OWA operators,” Fuzzy Sets Syst., vol. 59, no. 2, pp. 125–143, 1993. [34] R. R. 
Yager, “Generalized OWA aggregation operators,” Fuzzy Optimiz. Decision Making, vol. 2, no. 1, pp. 93–107, 2004. [35] R. R. Yager, “Induced aggregation operators,” Fuzzy Sets Syst., vol. 137, no. 1, pp. 59–69, 2003. [36] R. R. Yager, “On ordered weighted averaging aggregation operators in multi-criteria decision making,” IEEE Trans. Syst., Man, Cybern., vol. 18, no. 1, pp. 183–190, Jan./Feb. 1988. [37] R. R. Yager, “On the dispersion measure of OWA operators,” Inf. Sci., vol. 179, no. 22, pp. 3908–3919, 2009. [38] R. R. Yager, “OWA aggregation over a continuous interval argument with applications to decision making,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 5, pp. 1952–1963, Oct. 2004. [39] R. R. Yager, “Quantifier guided aggregation using OWA operators,” Int. J. Intell. Syst., vol. 11, no. 1, pp. 49–73, 1996. [40] R. R. Yager, “Weighted maximum entropy OWA aggregation with application to decision making under risk,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 3, pp. 555–564, May 2009. [41] R. R. Yager and Z. S. Xu, “The continuous ordered weighted geometric operator and its application to decision making,” Fuzzy Sets Syst., vol. 157, no. 10, pp. 1393–1402, 2006. [42] L. A. Zadeh, “A computational approach to fuzzy quantifiers in natural languages,” Comput. Math. Applicat., vol. 9, no. 1, pp. 149–184, 1983. [43] L. A. Zadeh, “Fuzzy sets,” Inf. Control, vol. 8, no. 3, pp. 338–353, 1965. [44] M. Zarghami and F. Szidarovszky, “Revising the OWA operator for multicriteria decision making problems under uncertainty,” Eur. J. Oper. Res., vol. 198, no. 1, pp. 259–265, 2009. [45] L. G. Zhou and H. Y. Chen, “Continuous generalized OWA operator and its application to decision making,” Fuzzy Sets Syst., vol. 168, no. 1, pp. 18–34, 2011. Kaihong Guo received the Ph.D. degree in management science and engineering from the Dalian University of Technology, Dalian, China, in 2011. 
He is currently a Lecturer with the School of Information, Liaoning University, Shenyang, China. He has published many papers in journals, including the IEEE Transactions on Fuzzy Systems, Information Sciences, Expert Systems with Applications, the International Journal of Innovative Computing, Information and Control, and the Chinese Journal of Management Sciences. His current research interests include information fusion and decision making under uncertainty; his research has been funded by the Ministry of Education of China and the Department of Science and Technology of Liaoning Province, China.
