This article was downloaded by: [Queensland University of Technology] On: 31 October 2014, At: 05:28 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Multivariate Behavioral Research Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hmbr20

Using the Analytic Hierarchy Process to Analyze Multiattribute Decisions Eric E. Spires Published online: 10 Jun 2010.

To cite this article: Eric E. Spires (1991) Using the Analytic Hierarchy Process to Analyze Multiattribute Decisions, Multivariate Behavioral Research, 26:2, 345-361, DOI: 10.1207/s15327906mbr2602_8 To link to this article: http://dx.doi.org/10.1207/s15327906mbr2602_8


Multivariate Behavioral Research, 26 (2), 345-361 Copyright © 1991, Lawrence Erlbaum Associates, Inc.

Using the Analytic Hierarchy Process to Analyze Multiattribute Decisions

Eric E. Spires Faculty of Accounting and MIS The Ohio State University

Since its introduction a decade ago, the Analytic Hierarchy Process (AHP) has been used by decision makers in many contexts. The vast majority of these applications has involved allocating resources or making choices from among alternatives. This article discusses how AHP may be used in a different manner: to assist researchers in the analysis of decisions. AHP is briefly compared with other decision-analysis techniques, such as multiattribute utility measurement, conjoint measurement, and general linear models (regression and analysis of variance). Some of the possible insights into decision processes that can be obtained using AHP are illustrated with data gathered from practicing auditors of financial statements.

The Analytic Hierarchy Process (AHP) (Saaty, 1980) is a technique used to evaluate multiattribute decision alternatives. It involves the derivation of priority weights, positive numbers that sum to one and reflect the value or importance of the alternatives. AHP has been successfully applied in many diverse areas (see Zahedi, 1986) and much work has been done to improve the method's practical applicability (e.g., Weiss & Rao, 1987) and analytic properties (e.g., de Jong, 1984). To date, AHP has typically been used to help decision makers choose a course of action or allocate resources. Although these applications have shown AHP to be a useful decision aid, their emphasis on aiding decisions has caused AHP to be virtually ignored by judgment and decision making researchers. In the current study, AHP is discussed as a method for analyzing decisions, thus making it of more direct use to researchers. Three ways in which AHP can enable researchers to analyze decisions and gain insights into decision processes are illustrated.

In the next section, AHP is briefly described and compared to three other techniques of analyzing multiattribute decisions: (a) multiattribute utility measurement, (b) conjoint measurement, and (c) general linear models. In subsequent sections, methods by which AHP can be used to analyze decisions are presented. The methods are illustrated using data collected from practicing auditors of financial statements. Each section is organized such that, for readers not interested in the auditing application, illustrations can be skipped without loss of continuity.

APRIL 1991

345

E. Spires


The Analytic Hierarchy Process

AHP requires development of a hierarchy that describes the decision or evaluation to be made. Decision alternatives form the lowest level of the hierarchy and the general objective of the decision is the highest level. The intermediate levels (usually ranging from one to four in number) represent various levels of attributes of the decision. Strict application of AHP requires that attributes within each intermediate level be independent, although many applications of AHP have not addressed this requirement (Kamenetzky, 1982). As an example of a hierarchy, assume a family is choosing where to go on vacation. The lowest level of this hierarchy would consist of various vacation sites and the uppermost level would be the general objective, success of the vacation. The single intermediate level would consist of attributes that would be considered in assessing the vacation's success, such as relaxation opportunities, social atmosphere, and learning opportunities.¹ Figure 1 presents the hierarchy. To use AHP to choose a vacation site, elements in each level of the hierarchy are compared on a pairwise basis with respect to each element of the next higher level of the hierarchy. For example, the vacation attributes (level 2) would be compared on a pairwise basis in terms of their importance in making the vacation a success. Similarly, the four vacation sites would be compared on a pairwise basis with respect to RE (relaxation opportunities), then with respect to SO, etc. Because there are four vacation sites, 4(3)/2 = 6 pairwise comparisons would be made with respect to each element of the attributes-of-vacation-success level of the hierarchy. The pairwise comparisons are made using a nine-point intensity-of-importance scale. In this scale, a value of "1" implies that the two attributes (or alternatives)

[Figure 1. Vacation Hierarchy. Overall objective of vacation: success of vacation (top level); attributes of vacation success: RE = relaxation opportunities, SO = social atmosphere, LE = learning opportunities (intermediate level); vacation sites 1-4 (lowest level).]

¹ Of course, there are many other attributes, such as affordability, that one might consider in choosing a vacation site. Only three were chosen for simplicity.

MULTIVARIATE BEHAVIORAL RESEARCH


are of equal importance, a value of "5" implies that one attribute (or alternative) is strongly more important than the other, and a value of "9" implies absolute importance (see Saaty, 1980, for further description). Because pairwise comparisons are used, a judge using AHP is not required to explicitly define a measurement scale. AHP generally requires that all possible pairs be compared using the nine-point scale. Because all possible pairs are compared, redundant information is obtained. For example, if a judge assigns a value of "3" in comparing vacation site 1 to vacation site 2 for relaxation opportunities, and assigns a value of "2" in comparing vacation site 2 to vacation site 3, then (assuming perfect consistency) a value of "6" would be assigned in comparing vacation site 1 to vacation site 3. Although the relationship between vacation sites 1 and 3 can be calculated in this manner, the judge also directly compares vacation sites 1 and 3, thus providing redundant information. This redundant information adds to the robustness of the priority weights (discussed in the next paragraph) and is used to assess consistency of judgments (see Saaty, 1980).

To summarize thus far, elements in level h of the hierarchy (h = 2 to v, where v is the number of levels in the hierarchy and 1 represents the highest, i.e., single-node, level) are compared on a pairwise basis with respect to each element in level h-1. For each level, n sets of comparisons (where n is the number of elements in hierarchy level h-1) are required. Priority weights are calculated for each set of comparisons by solving a maximum eigenvalue problem. Define the resulting vector of weights as w_i, where i is an element in hierarchy level h-1 (i = 1 to n). The weights in these vectors sum to 1 and represent the priority or importance of each item as revealed by the paired comparisons. Each w_i is a column in the performance matrix W_h. In terms of Figure 1, W_2 would be a 3 x 1 matrix (i.e., a 3-element column vector; three attributes evaluated along one criterion) and W_3 would be a 4 x 3 matrix (four vacation sites evaluated along three attributes). The priority weights are combined using a hierarchical composition scheme, as follows:

U = W_v W_{v-1} ... W_2    (1)

where U is a p-element column vector of final priority or importance weights and p is the number of elements in the lowest level of the hierarchy. In terms of Figure 1, Equation 1 would be

U = W_3 W_2    (2)

and U would have four elements (corresponding to the four vacation sites). U, then, is a vector of priority weights that is based on the decision attributes and represents the priority of each vacation site (lowest level) in being a success (highest level).
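The eigenvector derivation of priority weights and the hierarchical composition just described can be sketched in a few lines of Python (numpy assumed). The comparison matrix and the site-by-attribute columns below are hypothetical illustrations, not data from the article:

```python
import numpy as np

def priority_weights(A):
    """Priority weights of a pairwise comparison matrix A (a_ij = importance
    of element i relative to element j): the normalized principal eigenvector."""
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))          # index of the maximum eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()                     # weights sum to 1

# Hypothetical nine-point comparisons of the attributes RE, SO, LE with
# respect to "success of vacation" (RE judged 3x as important as SO, etc.).
A2 = np.array([[1.0, 3.0, 5.0],
               [1/3, 1.0, 2.0],
               [1/5, 1/2, 1.0]])
w2 = priority_weights(A2)                  # the single column of W_2 (3 x 1)

# Hypothetical site-on-attribute priority vectors (each column sums to 1);
# in practice each column would come from its own 4 x 4 comparison matrix.
W3 = np.array([[.40, .30, .10],
               [.30, .30, .20],
               [.20, .20, .30],
               [.10, .20, .40]])

U = W3 @ w2                                # hierarchical composition: U = W_3 W_2
print(U)                                   # four site priorities summing to 1
```

The redundant comparisons also permit a consistency check: the maximum eigenvalue of a pairwise comparison matrix equals its order n only under perfect consistency, and its excess over n underlies Saaty's consistency index.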


Applications of AHP have included portfolio selection (Saaty, Rogers, & Pell, 1980), energy policy analysis (Gholamnezhad & Saaty, 1982), microcomputer selection (Arbel & Seidmann, 1984), capital expenditure decisions (Lusk, 1979), and many others. Virtually all of these applications involve making choices among alternatives or allocating resources.


Relation of AHP to Other Multiattribute Methods

AHP is only one of many multiattribute decision techniques. Three more widely used techniques are multiattribute utility measurement, conjoint measurement, and general linear model approaches (e.g., multiple regression and analysis of variance). Each technique is briefly described below. Then similarities and differences between AHP and these other techniques are discussed to accomplish two purposes: (a) to place AHP in the context of more mainstream judgment and decision making research and (b) to point out possible benefits and detriments of AHP relative to the other methods.

Multiattribute utility measurement (Keeney & Raiffa, 1976) generally requires assessment of (a) a unidimensional utility function for each attribute and (b) attribute weights. In terms of the vacation hierarchy (Figure 1), a judge would assess unidimensional utility functions of the vacation sites on each of the three attributes RE, SO and LE. The judge also would assess attribute weights (importance weights) for the three second-level attributes. The utility for each vacation site may be calculated by summing the products of the utility of the site on each attribute and the attribute weight. These final utilities are similar in interpretation to the weights in AHP's U vector (see Jensen, 1983; Kamenetzky, 1982; Weiss & Rao, 1987).

Conjoint measurement (Krantz, Luce, Suppes, & Tversky, 1971; Luce & Tukey, 1964) involves determining whether there exist scales of measurement (for both the dependent and the independent variables) that satisfy a proposed composition rule (e.g., additive). Axiomatic conjoint measurement is used to evaluate the fit of different composition rules (usually in three variables). Numeric conjoint measurement produces part-worth functions, which are similar to attribute importance weights in AHP, for a particular composition rule. The method requires judges to evaluate cases in which the attributes have been systematically varied. That is, judges make case-based judgments, as opposed to the attribute-based judgments required in AHP and multiattribute utility measurement.

General linear models, consisting primarily of multiple regression and analysis of variance approaches, also require judges to evaluate cases in which attributes have been systematically varied. These methods produce coefficients or weights that are similar to attribute importance weights in AHP, although if the


predictor variables in a multiple regression model are intercorrelated, interpretation of the weights is problematic (Darlington, 1968). The general linear models approaches allow, to a certain degree, assessment of how attributes are combined.


Comparison of the Methods

The primary characteristic that distinguishes AHP from multiattribute utility measurement, conjoint measurement, and general linear models is AHP's use of pairwise comparisons. By using pairwise comparisons, judges are not required to explicitly define a measurement scale for each attribute. Thus, AHP is very useful for attributes that have ill-defined scales. Such attributes include social atmosphere, ability, pleasure, trustworthiness, and a host of others. The premise underlying the use of pairwise comparisons is that it is easier (and possibly more accurate), for example, to rate vacation site 1 relative to vacation site 2 in terms of social atmosphere than it is to devise a social atmosphere rating scale for all vacation sites. The other methods require, in essence, that rating scales be explicitly defined.

Whenever judgments are elicited, judgmental inconsistency must be considered in assessing the validity of the findings. Conjoint measurement and the general linear models have well-defined mechanisms for measuring judgmental inconsistency, whereas multiattribute utility measurement (relatively speaking) neglects judgmental inconsistency (Kamenetzky, 1982; Schoemaker & Waid, 1982). AHP has a standardized measure for judgmental inconsistency, although the cutoff for assessing inconsistency appears ad hoc and it is not clear what should be done if judgments are inconsistent (see Jensen, 1984).

The ease of using AHP relative to the other methods and the relative quality of results obtained using the methods are empirical issues that have not been adequately studied. However, Schoemaker and Waid (1982) compared AHP's, multiattribute utility measurement's, and multiple regression's methods for eliciting attribute weights. They reported that the three methods, for all practical purposes, performed equally well (p. 192), although there were substantial differences across individuals. Also, subjects perceived AHP as being less difficult to apply and more trustworthy in capturing attribute preferences than the other two methods, although differences in trustworthiness were not statistically significant (p. 191).

The purpose of the above discussion is not to suggest that one method is superior to others, but rather to suggest that each method may have advantages in particular situations. AHP is not usually viewed as a decision analysis technique, but as an aid to decision makers in making complex decisions. Identifying potential benefits of AHP and illustrating (below) how AHP can be used to analyze decisions (similar to other techniques in some cases) indicates that AHP may be viewed as an additional viable method with which to study judgment.


Three ways in which AHP can be used to analyze decisions are presented. The first is based on Jensen (1983) and involves assessing the descriptive validity of the hierarchical composition scheme implicit in AHP. The second involves estimating intermediate-level priority weights. This may be useful, as illustrated below, in analyzing decisions that are normally made without explicitly using a hierarchy (but that could be made using a hierarchy) and in testing hypotheses about intermediate-level priority weights. The third involves analyzing the effects of combining lowest-level outcomes (i.e., implementing more than one decision alternative). As shown below, this can be used to address hypotheses about interactions between two (or more) decision alternatives. The next section provides background information necessary to understand the auditing-based illustrations of how AHP can be used to analyze multiattribute decisions. It and the illustrations may be skipped without loss of continuity by readers more interested in the method than in the auditing illustration. Subsequent sections describe the three types of analysis.

Auditors' Evaluations of Internal Control

The primary purpose of audits of financial statements is to assess whether the financial statements are free of material error. Auditors make this assessment by gathering evidence about the financial statements. An audit may be approached in two ways. First, auditors can gather evidence that relates directly to the financial statements. Alternatively, they can gather evidence about the process by which the financial statements were prepared. Under this second alternative, auditors reason that if the process of statement preparation is strong and likely to prevent errors from occurring (i.e., the process is well controlled), not as much direct evidence about the financial statements would have to be gathered, thus making for a more efficient audit. The assessment of the process of financial statement preparation is generally referred to as evaluation of internal control.

To evaluate internal control, auditors gather evidence using auditing procedures called tests of controls (TCs). The purpose of using TCs is to gain assurance about two objectives of internal control: (a) that the controls have been applied properly (i.e., without error) and (b) that the controls have been applied by independent auditee employees. Auditee employees are employees of the company whose financial statements are being audited and a control is a procedure performed by the auditee that either prevents or detects errors from occurring in the financial statements. If controls are applied by auditee employees who are independent of other auditee employees (the second objective of internal control), errors have a better chance of being prevented or detected. Auditors can use five TCs (both singly and in combination) to gather evidence about the objectives of internal control:


1. Document inspection (D) - inspection of documents for indication that a control has been performed by particular auditee employees.
2. Inquiry (I) - discussion with auditee employees about the performance of a control.
3. Observation (O) - observation of the performance of a control.
4. Reperformance (R) - performance by auditors of the same control performed by the auditee.
5. Scanning (S) - quick reviews by auditors of documents for obvious errors that relate to a control.

These TCs differ in the strength of evidence they produce. In accord with recent auditing research interest in the strength of various auditing procedures (see Spires & Yardley, 1989), the hierarchy in Figure 2 was developed to measure strength of TCs. The strength attributes in the hierarchy are based on authoritative auditing standards. Ten auditors evaluated the hierarchy, using a method and scale similar to those used in AHP. The auditors were chosen from the six largest public accounting firms in the United States and each auditor had experience using TCs. Their responses are used below to illustrate ways in which AHP can be used to analyze decisions.

[Figure 2. Test of Control (TC) Strength Hierarchy. Overall objective of TCs: control is operating (top level); control objectives: x1 = control performed properly, x2 = control performed independently (second level); strength attributes: VA = validity, CO = coverage, VE = verifiability (third level); tests of controls: D = document inspection, I = inquiry, O = observation, R = reperformance, S = scanning (lowest level).]


Using AHP to Analyze Decisions

Testing the Descriptive Validity of the Aggregation Scheme


Method

When combining multiple attributes in a hierarchy, the matrix multiplication of AHP (see Equation 1) implies a compensatory aggregation scheme. That is, low weights on some attributes can be offset by high weights on other attributes. In the vacation hierarchy (Figure 1), a low weight on LE for vacation site 1, for example, could be compensated for by high weights on RE and SO. That is, vacation site 1 could have the highest U-vector weight (on which the decision is based), even though it had the lowest weight on one of the attributes. As Jensen (1983) illustrates, this can lead to a final outcome (U) vector that may not be entirely compatible with the judge's values, especially if the judge's utility is not consistent with the compensatory aggregation scheme. Jensen recommends having the judge pairwise compare the lowest-level elements with respect to the general objective while considering all attributes simultaneously, thus avoiding the AHP aggregation altogether.

How this subjective aggregation might be used depends on the purpose of employing AHP. If the purpose is simply to choose the best alternative, one need determine only whether the top-ranked alternative in the deduced final vector (i.e., deduced using the AHP scheme) is the same as the top-ranked alternative in the elicited final vector (i.e., the subjective aggregation vector). This is the purpose in Jensen's (1983) study. Rather than being used to make choices, however, AHP might be used to test formal models of judgment and decision processes. For example, a researcher may be interested in whether the compensatory aggregation scheme is descriptively valid. For this purpose of AHP, the deduced and elicited vectors must be compared in more detail. The root mean square deviation (RMS) or median absolute deviation about the median (MAD) can be used to assess how similar the vectors are (Saaty, 1980, pp. 37-39). Both of these measure the deviation of one set of numbers from another set. To standardize and assess the significance of the measures, Saaty recommends dividing them by 1/p, where p is the number of elements in the U vectors being compared (p = 4 for the vacation hierarchy). If one or both of the standardized measures are less than .10, the vectors are similar (Saaty, 1980, p. 39).²

² Saaty also has tried the chi-square goodness-of-fit test to assess similarity of vectors, but has not found it useful (1977, p. 247).


If the vectors are found to be similar, the compensatory model would be considered descriptive. If the vectors are not similar, noncompensatory (or partially noncompensatory) models, such as conjunctive, disjunctive, lexicographic, or elimination-by-aspects models (a type of lexicographic model), might be more descriptive. (See Tversky, 1969, 1972; Einhorn, 1970; and Payne, 1976 for discussion of noncompensatory models.) Comparison of the deduced vector with the elicited vector with reference to the intermediate-level weights would help determine which type of aggregation model is most descriptive.

Illustration

To illustrate this use of AHP, consider the auditors' average vectors in Table 1.³ The deduced vector is based on the AHP compensatory aggregation scheme, whereas the elicited vector is based on a subjective aggregation scheme.⁴ The rank order of the two vectors is the same and the standardized MAD is .05, which is less than the .10 cutoff. Thus, the vectors are similar. In this case, one could conclude that the compensatory aggregation scheme inherent in AHP is descriptive of how the auditors combined the attributes into overall strength ratings for TCs.

Table 1
Comparison of Compensatory with Subjective Aggregation

TCᵃ    Deduced vectorᵇ    Elicited vectorᶜ
D      .44                .49
I      .17                .17
O      .17                .17
R      .14                .10
S      .08                .07

ᵃ D = document inspection; I = inquiry; O = observation; R = reperformance; S = scanning.
ᵇ AHP compensatory aggregation.
ᶜ Subjective aggregation.

³ The vectors in Table 1 represent arithmetic means for the ten respondents. Saaty (1980, p. 68) recommends using geometric means if there is more than one judge, but Saaty and Vargas (1980) use arithmetic means. (See also Aczel & Saaty, 1983.) In the present case, the results are about the same whether arithmetic or geometric means are used. The same analysis could, of course, be done on an individual-judge basis, instead of using means.

⁴ As discussed later, respondents were not requested to compare directly the control objectives (second level of the Figure 2 hierarchy). The deduced vector in Table 1 uses for illustrative purposes a priority vector of <.5, .5> for the second level of the hierarchy. In the next section of this article, the actual weights implied by the auditors' judgments are estimated.
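The similarity test can be reproduced numerically. The sketch below takes MAD to be the median absolute deviation of the element-wise differences about their median (one reading of Saaty's measure) and standardizes by dividing by 1/p, i.e., multiplying by p; applied to the Table 1 vectors it recovers the .05 value reported in the text:

```python
import numpy as np

def standardized_rms(u, v):
    """Root mean square deviation between two priority vectors, divided by 1/p."""
    d = np.asarray(u) - np.asarray(v)
    return float(np.sqrt((d ** 2).mean())) * len(d)

def standardized_mad(u, v):
    """Median absolute deviation (about the median) of the element-wise
    differences between two priority vectors, divided by 1/p."""
    d = np.asarray(u) - np.asarray(v)
    return float(np.median(np.abs(d - np.median(d)))) * len(d)

deduced  = [.44, .17, .17, .14, .08]   # Table 1: AHP compensatory aggregation
elicited = [.49, .17, .17, .10, .07]   # Table 1: subjective aggregation

print(standardized_mad(deduced, elicited))   # approximately .05, below the .10 cutoff
```

With these vectors the standardized RMS works out to about .14, which illustrates why the "one or both measures below .10" form of the criterion matters.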


Estimating Intermediate Level Priority Weights


Method

A premise that underlies most AHP applications, as well as other decision aids, is that a judge is generally not able (or finds it very difficult) to evaluate the alternatives (lowest level) directly with respect to the general objective (highest level), but that evaluations involving intermediate levels can be made. In some situations, however, a judge might be accustomed to making direct evaluations on the general objective, without explicitly making intermediate-level evaluations. In these situations, AHP may not be needed as a decision aid, but still may be used to analyze decisions. For example, AHP may be used to assess the implied importance of intermediate-level attributes. To illustrate, consider the vacation hierarchy in Figure 1. If a judge rated the four vacation sites directly on "success of vacation" and also rated the vacation sites on each of the three second-level attributes, AHP could be used to assess the judge's preferences about the relative importance of RE, SO and LE. The following expression, derived from AHP methodology, could be used to estimate the weights for RE, SO and LE:

O = WΦ + E    (3)

where E = a 4-element vector of measurement errors, O = the "overall" vector (the judge's direct assessment of the vacation sites on the overall objective), W = a 4 x 3 matrix of weights for the vacation sites on the three attributes (based on the judge's pairwise comparisons), and Φ = a 3-element vector of estimated weights for the three attributes. The vector Φ would be estimated by minimizing the sum of the absolute values of the elements in E. Of course, other measures, such as squared error (the sum of the squares of the elements in E), could also be used as the minimization criterion.

This use of AHP could be especially useful when judges lack the willingness or ability to make the explicit tradeoffs required by pairwise comparisons (see, e.g., Shapira, 1981), but nonetheless implicitly make such tradeoffs when choosing alternatives. In the vacation hierarchy, for example, a judge may not be willing or able to assess the importance of learning opportunities, but in ranking the vacation sites, the judge would reveal his or her preferences (as measured by Equation 3). Also, in some cases, a judge may assign one set of weights when explicitly confronted, but the judge's actions may imply a different set of weights. For example, Saaty, Rogers, and Pell's (1980) hierarchical scheme for selecting stock


portfolios requires, among others, weights for several investor objectives (e.g., profit, security, excitement). A researcher interested in studying how well real-world decisions reflect a judge's expressed beliefs might take a judge's existing portfolio and estimate (using the procedure illustrated above) the implied investor-objective weights. These weights could then be compared to the judge's explicit statement of his or her investor-objective weights. This process might be especially useful for comparing explicitly stated attitudes toward risk with actual actions that reflect risk.⁵

This use of AHP requires that the hierarchy be well specified and that the judge be well acquainted with the implied measurement scale of AHP. If, for example, attributes that are normally used in making a decision are missing from the hierarchy, the Equation 3 minimization procedure may yield weights that are inaccurate.

Illustration

In terms of the hierarchy in Figure 2, auditors normally (in an audit) judge directly the strength of evidence obtained from TCs (i.e., make a direct general assessment of the lowest-level elements with respect to the highest level). Asking auditors directly about the relative importance of attributes (intermediate levels) may cause responses not to reflect auditors' normal judgments. First, auditors may not be able to adequately analyze their judgments. Second, because they are being asked in a controlled setting to make unnatural types of judgments, they may tend to respond in the manner they believe they should respond, which may not be descriptive of their actual judgments in an audit. That is, demand effects might bias the responses. Discussions with practicing auditors and pretesting the AHP questionnaire revealed that auditors had difficulty evaluating the two control objectives, but could evaluate the strength attributes. Therefore, the auditors were not requested to directly compare the control objectives.

An hypothesis of interest was whether auditors' judgments of TC strength reflect statements in most firms' auditing manuals and several auditing textbooks that the independently objective is generally more important than the properly objective. Because the auditors did not assess directly the relative importance of the control objectives, the hypothesis could not be tested directly; instead, the AHP methodology was used to derive estimates of the priority weights for the objectives. These implied priority weights were used to test the hypothesis. Table 2 shows the relevant data that were gathered from the auditors. In essence, the auditors evaluated a hierarchy for each control objective (vectors Y₁

⁵ Differences between attitudes toward risk and actions that reflect risk might have several causes. For example, a judge may not be able to adequately express risk preferences or a judge may incorrectly assess the risk of various investments.


and Y₂ in Table 2). As mentioned previously, the auditors also assessed directly the TCs on the general objective (vector O in Table 2). Equation 4 was used to estimate the control objective priority weights:

O = Φ̂₁Y₁ + Φ̂₂Y₂ + E    (4)

where E = a 5-element vector of measurement errors, O = the "overall" vector (the auditors' direct assessments of the TCs on the overall objective), Yᵢ = the vectors of priorities for properly (i = 1) and independently (i = 2) (these are columns of the matrix formed by W₄W₃), and Φ̂ᵢ = the estimated weight for the properly (i = 1) and independently (i = 2) control objectives (these together form the vector Φ̂). The vector Φ̂ was estimated by minimizing the sum of the absolute values of the elements in E. The resulting Φ̂ weighted the independently objective much more heavily than the properly objective. This is consistent with the hypothesis discussed above, which suggests that auditor judgments are consistent with auditor training and auditing manuals.⁶

Table 2
Test of Control (TC) Ratings by Control Objective Vectors(a)

TC(b)    Y1    Y2    O    Calculated
[The numeric entries of Table 2 are not legible in the source.]

(a) Y1 = properly control objective; Y2 = independently control objective; O = overall objective. (b) D = document inspection; I = inquiry; O = observation; R = reperformance; S = scanning.

4 Because the priority weight for R is zero in one of the objective vectors, the two vectors, in essence, have different numbers of elements. Thus, elements of the vector containing the zero are (unjustly) weighted relatively more than elements of the other. Saaty (1980, pp. 42-43) recommends either using 0s (as done in the text) or adjusting that vector's priority weights by the ratio of the number of nonzero elements in the two vectors (5/4). If the weights are adjusted, minimization of E yields priority weights of .30 and .70 for properly and independently, respectively. The conclusion about the hypothesis does not change.


One could carry the analysis further by using the estimated weights to calculate an overall vector (similar to U). The resulting vector is shown in Table 2 (labeled "Calculated"). It is very close to the "overall" vector, as the standardized RMS (.07) and MAD (.00) are both less than .10. This suggests that measurement error (E in Equations 3 and 4) is small, lending more support to the validity of the estimated weights.
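The fit measures mentioned above can be sketched as follows. The vectors are hypothetical (Table 2's entries are not reproduced here), and the exact standardization used in the article may differ; this shows the plain forms of the two measures.

```python
# Illustrative sketch of the fit measures in the text: root-mean-square
# (RMS) deviation and mean absolute deviation (MAD) between the elicited
# "overall" vector and the vector calculated from the estimated weights.
# Vectors are hypothetical; the standardization may differ from the article's.
import math

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def mad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

overall    = [0.33, 0.15, 0.22, 0.14, 0.16]  # hypothetical elicited vector
calculated = [0.32, 0.16, 0.21, 0.15, 0.16]  # hypothetical model-based vector

# Both measures well under .10 here would, as in the text, suggest that
# the measurement error E is small.
fit_rms = rms(overall, calculated)
fit_mad = mad(overall, calculated)
```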


Assessing Effects of Combining Alternatives

Method

Most AHP applications involve the choice of one alternative (lowest-level element) to implement. Often, however, decision-makers may want to implement more than one alternative. Researchers may be interested in how alternatives interact when considered in combination, and how this interaction affects judges' choices. In some cases, an additive combination model might be appropriate in assessing combinations of alternatives. In other cases, however, it may not be sufficient simply to add the individual alternatives' priority weights to estimate the priority weight of a combination; that is, a nonadditive model would be appropriate. AHP can be used to assess the effects of combining alternatives.

Additive and nonadditive combination models, which are discussed in this section, should be distinguished from compensatory and noncompensatory aggregation schemes, which were discussed earlier. The combination models take place over alternatives (lowest level of a hierarchy) and deal with how alternatives are combined within attributes. Aggregation schemes, on the other hand, take place over attributes (middle levels of a hierarchy). Combination models are of interest only when a judge considers alternatives singly and in combination, whereas aggregation schemes are an inherent part of AHP whenever a hierarchy has more than two levels.

There are several contexts in which analysis of combination models might be useful. For example, in selecting an investment portfolio, more than one investment alternative is generally chosen to maximize profit. Because of possible interactions between some alternatives, it may not be optimal simply to choose the alternatives with the highest priority (i.e., U) weights. For example, two types of stock investments may be the two highest-rated individual types of investments, but their combination would not be very useful for maximizing profit because they would react the same way to changes in the economy.
In other words, in evaluating this combination of investments along the profit attribute, one would hypothesize that interactions exist. Specifically, the w weight (the weight of the alternatives along the profit attribute) for the two stock investments together would probably be less than the sum of the w weights for the individual stock investments.


Depending on how alternatives were combined along the other decision attributes, the interaction within the profit attribute could cause a similar effect for the final priority (U) weights. That is, U weights for the combination would be less than the sum of the U weights of the individual stock investments.

To illustrate how AHP can be used to assess the effect of combining alternatives, consider again the vacation hierarchy (Figure 1). Assume that because vacation sites 1 and 2 are inexpensive, the family could afford to go to both. Then the hierarchy would be modified to include five lowest-level alternatives: sites 1, 2, 3, and 4 individually and the combination of sites 1 and 2 (denoted site 12). If one were interested in assessing how a judge combined vacation sites 1 and 2 considering all attributes, the U weights could be compared. Define ui as the U weight for single alternative i (i = 1 to p), uj as the U weight for single alternative j (j = 1 to p, i ≠ j), and uij as the U weight for the combination of alternatives i and j. If uij is approximately equal to (ui + uj), then an additive model is descriptive. If uij is substantially greater than (ui + uj), then a superadditive combination model holds. Conversely, if uij is substantially less than (ui + uj), then a subadditive combination model holds. A measure of the degree of nonadditivity is given by uij/(ui + uj).

Although assessing effects of combining alternatives using U weights may be valuable in some situations, the cause of any nonadditivity cannot be assessed by examining U weights only. Rather, w weights (i.e., weights along individual attributes) must be analyzed. The method of analysis using w weights is the same as illustrated above using U weights. It is possible that alternatives are combined under a nonadditive model for some attributes, but not for others.
In such cases, the effect on final U weights depends on both (a) the degree of nonadditivity within the attributes and (b) the attribute weights themselves. For example, if a strongly superadditive combination model is associated with an attribute that has a small weight, the effect of the superadditivity on the U weights will not be large.

This use of AHP requires that all alternatives (both single and combinations) of interest be included in the hierarchy. That is, conclusions about the combination models are specific to the alternatives included.

Illustration

It is usually the case in auditing that more than one type of evidence is needed to provide appropriate audit assurance. That is, auditors normally use more than one TC at a time. It stands to reason that some combinations of TCs might provide more benefits than others because of the characteristics of the TCs, especially within the validity strength attribute. For example, R and S both address the same underlying component of validity: the quality of the performance of the control. Thus, R and S would probably be characterized by a subadditive combination model. On the other hand, D and R involve different components of validity in that D deals with whether the control was performed (as opposed to the quality of performance). Thus, D and R might be characterized by an additive, or even superadditive, combination model. AHP can be used to assess whether auditors combine the TCs additively or whether nonadditivity is more descriptive.

The auditors who evaluated the TCs considered on an individual basis also evaluated the TCs when considered in combination. To help the auditors associate the AHP task with their real-world task, only commonly used TC combinations were included, and comparisons were made directly on the overall objective of TCs. The combinations were DR, DS, IO, IOS, and D (by itself), where DR is the combination of D and R, etc. D was included in the set of combinations because it is commonly used by itself and because it serves as a standardizing element for computing U. Note that instead of requiring one set of comparisons for single TCs and another set for TC combinations, it would have been possible to assess the five single TCs and the four TC combinations using one hierarchy (i.e., the lowest level would have nine elements).

Table 3 (next page) presents the elicited vector of TC combination ratings, to which the ratings for single TCs (the elicited vector in Table 1) have been concatenated (see Saaty, 1980, pp. 80-83). The original u weights (Table 1) were rescaled using the common element D to normalize the vector in Table 3 so that weights sum to 1. That is, absolute weights for single TCs are not the same as those in Table 1, but relative weights for single TCs are the same as in Table 1 (e.g., for D and S, .49/.07 from Table 1 equals .091/.013 from Table 3). This rescaling and concatenation of vectors is required only because the single TCs and TC combinations were judged in separate sets of comparisons.
If a single hierarchy with nine elements in the lowest level were used, as discussed above, rescaling and concatenation would not be necessary.

Adding various u weights and comparing them to elicited weights for a combination reveals that the magnitudes of differences between some pairs of TC combinations are not the same. This is most apparent in the comparison of DR to DS. Additive weights show the two to be nearly the same (.091 + .019 = .110 for DR; .091 + .013 = .104 for DS), whereas the elicited combination ratings indicate a much larger difference (.408 for DR; .199 for DS). Thus, it seems apparent that an additive model does not describe the auditors' method of combining TCs. To provide further insight on how the auditors combined TCs, Table 3 also presents measures of nonadditivity for the four combinations, computed as uij/(ui + uj), as described above. Superadditivity is associated with most of the combinations (measures of nonadditivity greater than 1), with the additive model seemingly most descriptive for IO. This abundance of superadditive models for TC combinations commonly used in practice suggests that auditors may be choosing combinations that provide synergistic benefit.
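The nonadditivity computation can be sketched directly. Only the weights actually quoted in the text above are used (D = .091, R = .019, S = .013 from the concatenated vector; elicited DR = .408 and DS = .199); the remaining Table 3 entries are not reproduced here.

```python
# Sketch of the nonadditivity measure u_ij / (u_i + u_j), using only the
# weights quoted in the text; the rest of Table 3 is not reproduced here.
u = {"D": 0.091, "R": 0.019, "S": 0.013,   # single TCs (rescaled vector)
     "DR": 0.408, "DS": 0.199}             # elicited combination ratings

def nonadditivity(u, combo, parts):
    """Ratio of a combination's weight to the sum of its parts' weights:
    > 1 suggests superadditivity, < 1 subadditivity, ~1 additivity."""
    return u[combo] / sum(u[p] for p in parts)

m_dr = nonadditivity(u, "DR", ["D", "R"])  # 0.408 / 0.110, about 3.7
m_ds = nonadditivity(u, "DS", ["D", "S"])  # 0.199 / 0.104, about 1.9
```

Both ratios exceed 1, matching the text's conclusion that superadditive models describe these commonly used combinations.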


Table 3
Test of Control (TC) Combinations(a)

TC(s)    Ratings    Measure of Nonadditivity
[The numeric entries of Table 3 are not legible in the source.]

(a) DR = combination of D and R; DS = combination of D and S; IOS = combination of I, O, and S; D = document inspection (by itself); IO = combination of I and O; I = inquiry (by itself); O = observation (by itself); R = reperformance (by itself); S = scanning (by itself).

As mentioned above, auditors' judgments of the TC alternatives were made directly on the overall objective (highest level of the hierarchy). This allows analysis of combination models considering all attributes, illustrated earlier using U weights. However, the attributes that cause the nonadditivity cannot be determined because no w weights were elicited for the combinations.

Conclusion

Saaty's (1980) AHP originally was conceived as a decision aid that is especially useful for complex decisions which include attributes that are difficult to measure using traditional measurement scales. In this article, AHP was viewed as a method for analyzing decisions. AHP can be used to test specific hypotheses about decision processes and to describe and gain insights into decision processes. Although these uses were illustrated using data gathered in an auditing context, they can be applied in many other contexts.

References

Aczel, J., & Saaty, T. L. (1983). Procedures for synthesizing ratio judgments. Journal of Mathematical Psychology, 27, 93-102.

Arbel, A., & Seidmann, A. (1984). Selecting a microcomputer for process control and data acquisition. IIE Transactions, 16, 73-80.


Darlington, R. B. (1968). Multiple regression in psychological research and practice. Psychological Bulletin, 69, 161-182.

Einhorn, H. J. (1970). The use of nonlinear, noncompensatory models in decision making. Psychological Bulletin, 73, 221-230.

Gholamnezhad, H., & Saaty, T. L. (1982). A desired energy mix for the United States in the year 2000: An analytic hierarchy approach. International Journal of Policy Analysis and Information Systems, 6, 47-64.

Jensen, R. E. (1983). Aggregation (composition) schema for eigenvector scaling of criteria priorities in hierarchical structures. Multivariate Behavioral Research, 18, 63-84.

Jensen, R. E. (1984). An alternative scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 28, 317-332.

Jong, P. de. (1984). A statistical approach to Saaty's scaling method for priorities. Journal of Mathematical Psychology, 28, 467-478.

Kamenetzky, R. D. (1982). The relationship between the analytic hierarchy process and the additive value function. Decision Sciences, 13, 702-713.

Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: John Wiley.

Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement (Vol. 1). New York: Academic Press.

Luce, R. D., & Tukey, J. W. (1964). Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology, 1, 1-27.

Lusk, E. J. (1979). Analysis of hospital capital decision alternatives: A priority assignment model. Journal of the Operational Research Society, 30, 439-448.

Payne, J. W. (1976). Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 16, 366-387.

Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234-281.

Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.

Saaty, T. L., Rogers, P. C., & Pell, R. (1980). Portfolio selection through hierarchies. The Journal of Portfolio Management, 6, 16-21.

Saaty, T. L., & Vargas, L. G. (1980). Hierarchical analysis of behavior in competition: Prediction in chess. Behavioral Science, 25, 180-191.

Schoemaker, P. J. H., & Waid, C. C. (1982). An experimental comparison of different approaches to determining weights in additive utility models. Management Science, 28(2), 182-196.

Shapira, Z. (1981). Making trade-offs between job attributes. Organizational Behavior and Human Performance, 28, 331-355.

Spires, E. E., & Yardley, J. (1989). Empirical studies on the reliability of auditing procedures. Journal of Accounting Literature, 8, 49-75.

Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31-48.

Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281-299.

Weiss, E. N., & Rao, V. R. (1987). AHP design issues for large-scale systems. Decision Sciences, 18, 43-61.

Zahedi, F. (1986). The analytic hierarchy process - A survey of the method and its applications. Interfaces, 16, 96-108.
