© 2014 American Psychological Association 1045-3830/14/$12.00 http://dx.doi.org/10.1037/spq0000009

School Psychology Quarterly 2014, Vol. 29, No. 4, 536-552

Exploratory and Confirmatory Factor Analyses of the WISC-IV With Gifted Students

Ellen W. Rowe, Jessica Dandridge, Alexandra Pawlush, Dawna F. Thompson, and David E. Ferrier
George Mason University

These 2 studies investigated the factor structure of the Wechsler Intelligence Scale for Children-4th edition (WISC-IV; Wechsler, 2003a) with exploratory factor analysis (EFA; Study 1) and confirmatory factor analysis (CFA; Study 2) among 2 independent samples of gifted students. The EFA sample consisted of 225 children who were referred for a cognitive assessment as part of the application for gifted programming in their schools. The CFA sample consisted of 181 students who were tested the following year. All students included in the analyses were either accepted to school-based gifted programs following the assessment or were already participating in one. Across the 2 studies, there were approximately equal numbers of boys (205) and girls (201), with a mean age of 8 years. The mean composite scores for both samples varied from high average to superior and evidenced substantial differences among the index scores. In the EFA, the 2-, 3-, 4-, and 5-factor solutions were considered. The first 3 models, as well as an additional model reflecting the Verbal and Performance (V-P) IQ scores from previous versions of the WISC, were then compared using CFA. The CFA fit indices and parameter estimates supported the 4-factor, first-order WISC-IV model; this is the measurement model that includes the 4 index scores. Parameter estimates for the higher-order model reflecting g suggested that combining factors into a single, overarching score may not be the best way to represent the varying cognitive scores of gifted students.

Keywords: gifted identification, gifted students, assessment, WISC-IV, factor analysis

The No Child Left Behind (NCLB) Act of 2002 provides a federal definition for gifted students, and intellectual giftedness is a type of giftedness specified in the definition.1 Although NCLB presents a definition for gifted students, as McIntosh, Dixon, and Pierson (2012) note, the federal law does not delineate a process for the identification of gifted students. Instead, the process of identifying gifted students is at the discretion of state departments of education or local education authorities (LEAs). Although

This article was published Online First September 22, 2014. Ellen W. Rowe, Jessica Dandridge, Alexandra Pawlush, Dawna F. Thompson, and David E. Ferrier, Department of Psychology, George Mason University. Further information about the data sets is available from the first author. Correspondence concerning this article should be addressed to Ellen W. Rowe, Department of Psychology, George Mason University, 10340 Democracy Lane, Suite 202, Fairfax, VA 22030. E-mail: [email protected]

some differences exist from school system to school system, the federal definition is the basis for most state definitions. Therefore, there are also many consistencies in the identification of gifted students (Worrell & Erwin, 2011). For example, an assessment of some form is frequently a core component of the process (National Association for Gifted Children [NAGC], 2008), and standardized intelligence tests are used regularly (Volker & Smerbeck, 2009; Winner, 2000; Worrell & Erwin, 2011). Among individually administered intelligence tests, the Wechsler Intelligence Scale for Children-4th edition (WISC-IV; Wechsler, 2003a) is the most popular for this purpose (Rimm, Gilman, & Silverman, 2008; Sparrow, Pfeiffer, & Newman, 2005; Volker & Smerbeck, 2009). Moreover, previous editions of Wechsler child scales have been used to identify gifted students for decades.

1 NCLB highlights several types of giftedness including creative and artistic, but because the focus of this article is intellectually or cognitively gifted students, we use the term "gifted" to denote intellectually gifted.

536

GIFTED WISC-IV STRUCTURE

Despite the fact that the WISC-IV is the most commonly used intelligence test for the identification of gifted students, debate remains about which composite score from the WISC-IV is best suited for this purpose. Among the score options recommended for gifted students are the Full Scale IQ (FSIQ), the General Ability Index (GAI; Prifitera, Weiss, & Saklofske, 1998), as well as the Verbal Comprehension Index (VCI) or Perceptual Reasoning Index (PRI) used individually (NAGC, 2010; Rimm et al., 2008; Sparrow et al., 2005; Volker & Smerbeck, 2009). In fact, the GAI is frequently referenced as the most appropriate composite for use with gifted students. On the WISC-IV, the GAI is obtained from the verbal comprehension and perceptual reasoning index subtest scores. Thus, the GAI is considered a measure of cognitive ability that minimizes the impact of working memory and processing speed, as scores for these scales are not included. One reason given for using the GAI is that scores for the Working Memory Index (WMI) and Processing Speed Index (PSI) are often lower than scores for the VCI and PRI among gifted students (NAGC, 2010; Rimm et al., 2008). However, recommending that students be selected for gifted programs based on only VCI and/or PRI scores because those scores tend to be the highest among gifted students seems somewhat circular. Moreover, there has been little research with the WISC-IV among gifted students, so the construct validity of these composite scores remains unknown. A number of studies explored the factor structure of the Wechsler child scales among gifted students with previous versions of the measure.
In these studies, the question was raised as to whether or not the intelligence of gifted students is qualitatively different from, and not simply higher than, that of students who are not gifted (Williams, McIntosh, Dixon, Newton, & Youman, 2010). The early studies with gifted students used the Wechsler Intelligence Scale for Children-Revised (WISC-R; Wechsler, 1974) and typically employed exploratory factor analyses (EFA). Among these studies, Karnes and Brown (1980) found support for two- and three-factor solutions, but they concluded that the two-factor solution resembling the Verbal and Performance (V-P) IQ scores was the most stable. Macmann, Plasket, Barnett, and Siler (1991) also found support for a two-factor solution with the WISC-R. As was the case for Karnes and Brown (1980), the two-factor solution was parallel to the V-P IQ scales. Subsequently, Watkins, Greenawalt, and Marcell (2002) investigated the factor structure of the Wechsler Intelligence Scale for Children-3rd edition (WISC-III; Wechsler, 1991) with data from 505 students enrolled in gifted programs via EFA. In their analyses, Watkins et al. included the 10 core subtests from the WISC-III, which were the same core subtests as the WISC-R. Watkins and his colleagues also interpreted a two-factor solution with factors that were similar to verbal comprehension and perceptual organization. In this two-factor solution, arithmetic and coding did not load on either of the two factors. Arithmetic and coding had not aligned clearly with one factor or another in the WISC-R analyses either. Likewise, digit span was not included in any of these analyses because it was a supplemental subtest on the WISC-R and WISC-III. These findings led Watkins et al. to conclude that the WISC-III was essentially a two-factor test among gifted students, and they recommended use of the GAI for the identification of gifted students. The WISC-III was relatively similar to the WISC-R, and the findings of Watkins et al. (2002) are fairly consistent with research using the WISC-R. It is worth noting that most of the subtests analyzed in these studies with the WISC-R and WISC-III are measures of verbal or visual-spatial abilities. In the framework of the Cattell-Horn-Carroll theory (CHC; Carroll, 1993; Cattell, 1943; Horn & Cattell, 1966; McGrew, 2009), most verbal comprehension tasks are considered crystallized abilities (Gc), and the visual-spatial tasks are visual processing (Gv) abilities.
Arithmetic and coding, however, are not considered measures of Gc or Gv abilities (Alfonso, Flanagan, & Radwan, 2005; Carroll, 1993; Flanagan, Alfonso, & Ortiz, 2012). According to Alfonso et al., coding is predominantly a measure of processing speed (Gs), and arithmetic seems to measure a combination of abilities including working memory (Gsm) and fluid reasoning (Gf) (Keith, Fine, Taub, Reynolds, & Kranzler, 2006). Therefore, the finding that arithmetic and coding subtests did not load


ROWE, DANDRIDGE, PAWLUSH, THOMPSON, AND FERRIER

on verbal- and perceptual-oriented factors is not surprising in retrospect.

Although previous versions of the Wechsler child scales were generally similar to one another, the WISC-IV (Wechsler, 2003a) is substantially different from its predecessors. In fact, five of the 10 core subtests on the WISC-IV were not core subtests on the WISC-III. As noted in the WISC-IV Technical Manual (Wechsler, 2003b), one of the primary reasons for many of the changes was to bring the measure more in line with current theories of and research regarding cognitive abilities, particularly CHC theory (Keith et al., 2006). Thus, subtests were added to the WISC-IV in order to increase the measurement of Gf, Gsm, and Gs. The resulting core subtests on the WISC-IV include three subtests measuring Gc or verbal comprehension and two subtests each for Gsm and Gs (Keith et al., 2006). The perceptual reasoning composite includes subtests measuring both Gf and Gv. Block design is a measure of Gv, and matrix reasoning and picture concepts are considered measures of Gf (Alfonso et al., 2005; Keith et al., 2006). Moreover, one now interprets the four index scores (VCI, PRI, WMI, and PSI) instead of the V-P IQ scores. As a result of the substantial changes to the Wechsler child measure, the structure of the WISC-IV with gifted students remains in question.

There are no factor analytic studies of the WISC-IV with cognitively gifted students, but several studies have considered the factor pattern underlying the WISC-IV with various samples of children. The first of these appears in the WISC-IV Technical Manual (Wechsler, 2003b). The WISC-IV developers considered the factor structure of the measure using both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) with the normative sample. The developers hypothesized a four-factor model reflecting the four index composites: the VCI, PRI, WMI, and PSI (Wechsler, 2003b).
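The CHC classifications summarized above can be kept explicit when grouping subtests in analysis code. The sketch below is illustrative only: the snake_case subtest labels and the `subtests_by_ability` helper are our own names, while the Gc/Gv/Gf/Gsm/Gs assignments follow the sources cited in the text (Alfonso et al., 2005; Keith et al., 2006).

```python
# CHC classifications of the ten WISC-IV core subtests, as summarized in
# the text: three Gc subtests, one Gv, two Gf, two Gsm, and two Gs.
from collections import defaultdict

CHC_MAP = {
    "similarities": "Gc", "vocabulary": "Gc", "comprehension": "Gc",
    "block_design": "Gv",
    "picture_concepts": "Gf", "matrix_reasoning": "Gf",
    "digit_span": "Gsm", "letter_number_sequencing": "Gsm",
    "coding": "Gs", "symbol_search": "Gs",
}

def subtests_by_ability(mapping):
    """Invert the subtest -> ability map into ability -> [subtests]."""
    grouped = defaultdict(list)
    for subtest, ability in mapping.items():
        grouped[ability].append(subtest)
    return dict(grouped)
```

Keeping the mapping as data rather than prose makes it easy to check, for example, that only block design indicates Gv in the core battery.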
The EFA analyses of the core subtests generally supported the hypothesized model. In the analysis with the total normative sample, the core subtests loaded on the hypothesized factor, and none of the subtests evidenced a substantial secondary loading. Among the 6- to 7-year-olds, however, picture concepts loaded on the VCI and the PRI almost equally, and neither loading was particularly high (.21 and .20, respectively).

In the CFA with core subtests, the WISC-IV developers considered four different models (Wechsler, 2003b). In Model 1, all subtests loaded on one general factor. Model 2 was similar to the V-P IQs from previous Wechsler tests. For Model 2, the first factor consisted of the verbal comprehension and the working memory subtests, and the second factor consisted of the perceptual reasoning and the processing speed subtests. In Model 3, the VCI and the PRI were designated as factors, but the working memory and processing speed subtests were combined for a third factor. Model 4 reflected the four-factor model of the WISC-IV that is consistent with the four index scores. The developers then compared the goodness-of-fit indices across models and ages. At all ages and with the entire normative sample, the goodness-of-fit indices supported the four-factor model as the one that best fit the data (Wechsler, 2003b).

Keith, Fine, Taub, Reynolds, and Kranzler (2006) also conducted a series of CFAs with the normative sample, but Keith and his colleagues analyzed the five supplementary subtests in addition to the core subtests. The fit indices suggested that the four-factor model provided a good fit. However, Keith and his colleagues went on to argue that a five-factor model, which included a specific visual processing (Gv) factor, provided a better fit with the data. Although this model is more consistent with CHC theory, one must administer at least one supplemental subtest (picture completion) or another measure of Gv in order to obtain two visual processing indicators.

Three additional studies have examined the WISC-IV factor structure with samples of referred children (Bodin, Pardini, Burns, & Stevens, 2009; Watkins, 2010; Watkins, Wilson, Kotz, Carbone, & Babula, 2006), and all of these studies provided support for the four-factor model as outlined in the WISC-IV Technical Manual (Wechsler, 2003b).
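The four competing measurement models described in the Technical Manual can be written out as factor-to-subtest specifications before being handed to an SEM package. This is a sketch under our own naming assumptions: the model keys and snake_case subtest labels are illustrative, not the manual's notation.

```python
# Factor -> subtest specifications for the four CFA models compared by the
# WISC-IV developers (Wechsler, 2003b), as described in the text.

VERBAL = ["similarities", "vocabulary", "comprehension"]
PERCEPTUAL = ["block_design", "picture_concepts", "matrix_reasoning"]
MEMORY = ["digit_span", "letter_number_sequencing"]
SPEED = ["coding", "symbol_search"]

MODELS = {
    # Model 1: all ten subtests on a single general factor.
    "one_factor": {"g": VERBAL + PERCEPTUAL + MEMORY + SPEED},
    # Model 2: a Verbal/Performance split like earlier Wechsler scales.
    "verbal_performance": {
        "verbal": VERBAL + MEMORY,
        "performance": PERCEPTUAL + SPEED,
    },
    # Model 3: VCI and PRI separate; WM and PS subtests combined.
    "three_factor": {"vc": VERBAL, "pr": PERCEPTUAL, "wm_ps": MEMORY + SPEED},
    # Model 4: the four index scores (VCI, PRI, WMI, PSI).
    "four_factor": {"vc": VERBAL, "pr": PERCEPTUAL, "wm": MEMORY, "ps": SPEED},
}
```

Because every model is a different partition of the same ten indicators, a quick loop can verify that each specification covers the full core battery exactly once.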
In summary, all studies to date with the WISC-IV core subtests suggest that the four-factor model reflecting the four index composites provides a good fit with the data. At the same time, none of these studies consisted entirely of gifted students, and as stated before, some have questioned if the intelligence of gifted students is fundamentally different from that of nongifted students (Williams et al.,


2010). One way in which the intelligence of gifted students has been shown to be different is characterized by Spearman's Law of Diminishing Returns (SLODR; Spearman, 1927). Research with intelligence test data typically yields substantial positive correlations among variables, and it was these positive correlations, or "positive manifold," that led Spearman (1904) to propose the concept of a general intelligence. Then again, Spearman (1927) also observed that the correlations among tests became smaller among individuals of higher ability, and he labeled this phenomenon the "law of diminishing returns" (Deary & Pagliari, 1991). SLODR was demonstrated with the WISC-R (Detterman & Daniel, 1989; Jensen, 2003) and subsequently with several current versions of intelligence tests (Reynolds, Hajovsky, Niileksela, & Keith, 2011; Reynolds & Keith, 2007). The reasons for SLODR are unclear, but the implication is that among individuals with higher g, g is less important in relation to more individualized cognitive abilities (Jensen, 2003; Reynolds & Keith, 2007). Not all researchers have found evidence of SLODR (e.g., Saklofske, Yang, Zhu, & Austin, 2008), but following their broad review of the literature on the topic, Hartmann and Reuter (2006) concluded that the majority of research supports SLODR. Therefore, SLODR may be a way in which the intelligence of high-ability students is different from that of students with more average abilities.

Another reason for examining the factor structure of the WISC-IV among gifted students is found in the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999). According to the standards of our profession, the construct validity of a measure should be established with the individuals for whom the test is to be used. Given that the WISC-IV is the most popular measure for the identification of intellectually gifted children (NAGC, 2010; Rimm et al., 2008; Sparrow et al., 2005; Volker & Smerbeck, 2009), the factor structure of the WISC-IV should be examined with a sample of gifted students. Additionally, as noted previously, debate continues as to the score best suited for the identification of gifted students. The construct validity of the measure has direct implications for this debate. The goal of the current study, then, is to examine the underlying factors of the WISC-IV using both exploratory and confirmatory factor analyses with two independent samples of students participating in gifted programming in their schools.

General Method

Overview

Because this is the first factor analytic study of the WISC-IV with gifted students, we began with EFA (Study 1) in order to explore potential factor models. Subsequently, we used CFA with an independent sample of gifted students to compare the resulting models (Study 2). The data for both studies consist of WISC-IV scores from participants who were tested for and accepted into gifted programs in their schools or from students already participating in school-based gifted programs. Students in Study 1 were tested in 2007, and those in Study 2 were tested in 2008.

Instrument

The WISC-IV is an individually administered measure for assessing children's cognitive ability or abilities (Wechsler, 2003a). The core battery consists of 10 subtests that combine to yield four composite index scores and one composite FSIQ score. The WISC-IV Technical Manual (Wechsler, 2003b) presents substantial information regarding the development, reliability, and validity of the instrument. The normative data consist of test scores for 2,200 children ages 6 through 16 years 11 months. The internal consistency of the FSIQ score with the normative data was .97. The internal consistency values were .94 for the VCI, .92 for the PRI, .92 for the WMI, and .88 for the PSI. As mentioned previously, the CFA fit indices supported the four-factor measurement model (RMSEA = .04, TLI = .98). The correlations with other cognitive measures were high and provided further corroboration of the validity of the instrument (Wechsler, 2003b).

Procedure

All participating children were referred for and received an individual cognitive assessment at a university training clinic as part of the participants' application for gifted and talented (GT) programming in the local schools. The
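Fit statistics of the kind quoted for the normative CFA (RMSEA and TLI) can be computed directly from model and null-model chi-square values. The helper names below are our own, and the example inputs in the test are made up for illustration; only the formulas themselves are standard.

```python
# Standard formulas for two common CFA fit indices:
#   RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1)))
#   TLI   = ((chi2_null/df_null) - (chi2_model/df_model))
#           / ((chi2_null/df_null) - 1)
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (0 when chi2 <= df)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def tli(chi2_model, df_model, chi2_null, df_null):
    """Tucker-Lewis index relative to the independence (null) model."""
    null_ratio = chi2_null / df_null
    return (null_ratio - chi2_model / df_model) / (null_ratio - 1.0)
```

Values of RMSEA near .04 and TLI near .98, as reported for the normative sample, both indicate close fit by conventional cutoffs.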


clinic provides training for graduate students in school and clinical psychology and conducts individual cognitive assessments as part of a contract with local schools. Graduate students in school and clinical psychology with previous graduate-level training in assessment administered the tests. Standardized administration and scoring procedures as outlined in the WISC-IV Administration and Scoring Manual (Wechsler, 2003a) were followed. Psychology faculty or licensed supervisors provided supervision of the assessments.

Zhu, Cayton, Weiss, and Gabel (2008) have provided extended norms for the WISC-IV in WISC-IV Technical Report #7. These authors propose that the extended norms can be useful in distinguishing highly gifted students (cognitive score > 150) from gifted students (cognitive scores between 130 and 150). It is worth noting that the highest FSIQ across all of our scores was 148, and only 4% of the students across both samples had FSIQ scores over 140. We did not use the WISC-IV extended norms, then, as very few of our participants qualified for these norms, and they would likely change the scores only slightly.

At the cognitive evaluation, parents were offered the option of signing a consent form for possible inclusion in future research. Only student scores with parental consent for participation were included in the analyses, and the study was conducted in compliance with the university's Institutional Review Board.

Study 1 (Exploratory Factor Analyses)

Participants

The participants for the EFA were 225 elementary-age children who were referred for and received a cognitive evaluation in 2007 as part of the application process for school-based gifted services. Participants were selected in one of two ways. The majority (73%) were included in the study because we contacted their parents by phone subsequent to the evaluation, and a parent confirmed that the student had been selected for and was participating in gifted programming.
Decisions about gifted inclusion were made by the local school systems, and most schools located near the university use a multisource, multimethod, individualized approach for identifying gifted students. In other words, they consider teacher recommendations, rating scales, and achievement in addition to standardized test scores. The remaining 27% of the sample had documentation in their file that they were already receiving school-based services for gifted students. Some of these students were moving into or out of the area and were seeking an updated evaluation. Others were interested in participating in regional or national programs for gifted students that required current test scores.

The average age of the students in Study 1 was 8 years 10 months, with a range of 6 to 12 years old. There were 114 girls (51%) and 111 boys (49%). Parents were asked to complete a questionnaire about their child's race and ethnicity, and home language. Sixty-two percent of parents identified their child as White, 21% Asian, 10% Other, and 2% Hispanic. The remaining 5% of parents did not respond to the question regarding race and ethnicity. English was the primary language for most participants (84%), followed by Chinese (3%) and French (3%). Additional primary languages included Korean, Telugu, and Hindi. Second home languages included Chinese, Korean, Spanish, Arabic, and Russian.

We did not collect data on household income in 2007, but we have no reason to believe that it would differ from subsequent years, when we did collect that information. For participants in Study 2, 54% of parents indicated a household income of $100,000 and above, and 31% did not respond to the question regarding income. Seven percent of parents marked the income range of $80,000 to $99,999, and 4% marked $60,000 to $79,999. Although a majority of participants are from high-income families, the community from which the sample was drawn is a suburban, metropolitan area with a high cost of living. The median income of the surrounding county is approximately $103,000.

Data Analyses

We began the analyses with descriptive statistics for the 10 core WISC-IV subtests and the composite scores.
Because the only composite scores for which normative data are available are those derived from the core battery, we elected to analyze only core subtests. We used principal axis factor analysis with the correlation matrix to identify underlying factors. Because cognitive subtest scores tend to be correlated with one another, we rotated the results with an oblique rotation (direct oblimin with delta = 0). Our use of principal axis factor extraction and an oblique rotation is consistent with the analyses in the WISC-IV Technical Manual (Wechsler, 2003b). As Henson and Roberts (2006) recommend, we used multiple methods to determine the number of factors to retain, including the eigenvalue > 1 rule (EV > 1; Kaiser, 1960), the scree test (Cattell, 1966), parallel analysis (PA; Horn, 1965), the minimum average partial procedure (MAP; Velicer, 1976), and previous research. All EFA analyses were conducted in SPSS 18.

Results and Discussion

As indicated in Table 1, the mean index and subtest scores for participants in Study 1 were significantly higher than average. The mean VCI, PRI, and FSIQ scores were in the superior range, and mean scores for the WMI and PSI were in the high average range. The standard deviations were lower than average, which is likely due to the restricted nature of scores for gifted students. The scores for some subtests were slightly skewed, but the highest skew value was -.35 for similarities (standard error = .16). None of the values for kurtosis were more than twice the standard error (.32).
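Of the retention procedures listed in the Data Analyses section, Horn's (1965) parallel analysis can be sketched compactly: compare the eigenvalues of the observed correlation matrix against the average eigenvalues obtained from random data of the same dimensions, and retain factors whose observed eigenvalue exceeds the random mean. The function names here are ours, and SPSS (not Python) was used for the actual analyses.

```python
# Minimal sketch of Horn's parallel analysis for factor retention.
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Return (observed eigenvalues, mean eigenvalues of random data)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    return obs, rand / n_iter

def n_factors(obs, rand):
    """Count leading factors whose observed eigenvalue beats the random mean."""
    keep = 0
    for o, r in zip(obs, rand):
        if o <= r:
            break
        keep += 1
    return keep
```

Applied to data generated from two strong latent factors, the procedure recovers two factors, which illustrates why it is generally preferred to the EV > 1 rule.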


These scores are consistent with other samples of gifted students (NAGC, 2010; Rimm et al., 2008), and the composite scores are higher than those of the sample of gifted students presented in the WISC-IV Technical Manual (Wechsler, 2003b), where the FSIQ score was 123.5. The scores also follow a common pattern among gifted students, with higher scores for verbal comprehension and perceptual reasoning and the lowest score for processing speed (e.g., Rimm et al., 2008; Wechsler, 2003b).

The various indicators for the number of factors to retain yielded different results. The EV > 1 rule indicated three factors, but the value for the fourth factor was .999. The scree test suggested three to five factors. The PA suggested a minimum of three factors, but the eigenvalue for the fourth factor from our data was only slightly lower than that from the average random fourth factor. Thus, the PA results suggested either three or four factors. The MAP procedure indicated two factors. Consequently, we elected to examine factor solutions two through five.

Our criterion for tenable factors was at least two variables with loadings greater than or equal to .30. Based on this criterion, we eliminated the five-factor solution because the fifth factor had only one variable/subtest (similarities) with a loading above .30. Subtest loadings

Table 1
Descriptive Statistics for WISC-IV Composites and Subtests, Study 1 (n = 225)

Composite/Subtest            Mean    Median     SD     Skew   Kurtosis
WISC VCI                   127.15     128     11.43    -.25     -.41
WISC PRI                   122.14     123      9.93    -.04     -.23
WISC WMI                   116.56     116     11.82     .16     -.21
WISC PSI                   111.72     112     12.03     .29     -.15
WISC FSIQ                  126.15     126      8.01     .16      .11
Similarities                15.01      15      2.22    -.35     -.24
Vocabulary                  14.60      15      2.08    -.01     -.23
Comprehension               14.22      14      2.56    -.05     -.32
Block design                12.45      13      2.34    -.14     -.02
Picture concepts            13.64      14      2.24    -.08     -.35
Matrix reasoning            14.56      15      2.53     .02     -.56
Digit span                  12.51      13      2.68    -.08      .02
Letter-number sequencing    13.49      13      1.91     .22      .11
Coding                      11.38      11      2.84     .28     -.22
Symbol search               12.64      13      2.28     .31      .20

Note. WISC = Wechsler Intelligence Scale for Children-Fourth Edition; VCI = Verbal Comprehension Index; PRI = Perceptual Reasoning Index; WMI = Working Memory Index; PSI = Processing Speed Index; FSIQ = Full Scale IQ.


Table 2
Pattern Matrices for EFA Solutions²

                            Two factors     Three factors          Four factors
Subtest                      I      II      I      II     III      I      II     III    IV
Similarities               .71*    .15    .75*    .25    -.10    .37*   -.60*    .09   -.17
Vocabulary                 .66*    .05    .64*    .03     .09    .04    -.69*    .05    .06
Comprehension              .63*   -.14    .61*   -.17     .09   -.16    -.69*   -.05    .10
Block design               .11     .19    .08    -.09     .57*  -.07    -.06     .53*   .00
Picture concepts           .09     .27    .08     .19     .16    .19     .02     .18    .00
Matrix reasoning          -.02     .37*  -.06     .12     .56*   .08     .02     .63*   .03
Digit span                 .13     .54*   .16     .59*   -.03    .54*   -.07     .02    .08
Letter-number sequencing   .02     .65*   .05     .72*   -.03    .74*   -.03     .03    .06
Coding                    -.17     .45*  -.15     .42*    .04    .22     .11    -.03    .38*
Symbol search             -.23     .42*  -.23     .30*    .18   -.07    -.07     .04    .84*

Note. Factor loadings greater than .30 are marked with an asterisk.

from the pattern matrices for the remaining factor solutions are presented in Table 2.

The first factor to emerge in the two-factor solution was a verbal comprehension factor. On the second factor, all remaining subtests except picture concepts and block design had loadings above .30. Picture concepts had a loading that approached .30, but the loading for block design was only .19. Together, these two factors accounted for approximately 41% of the variance. Given that the GAI composite consists of subtest scores from the VCI and the PRI only, the two-factor solution from the WISC-IV with this sample of gifted students does not reflect the GAI. In fact, letter-number sequencing (LNS) had the highest loading on the second factor, and two of the perceptual reasoning subtests, block design and picture concepts, did not have loadings above .30.

In the three-factor solution, the verbal comprehension factor again emerged as the first factor. The second factor was a combination of the working memory and processing speed subtests, and the third factor was a modified perceptual reasoning factor. In this solution, picture concepts did not load on any factor above .20, but its highest loading (.19) was on the working memory/processing speed factor. This factor solution accounted for approximately 53% of the variance.

In the four-factor solution, a processing speed factor emerged in addition to verbal comprehension, working memory, and perceptual reasoning factors. Together, the four factors accounted for 63.18% of the variance. In this solution, the working memory factor emerged

first and accounted for the largest percentage of the variance. Also of note was the fact that picture concepts did not load on any factor above .20, and its loadings on the working memory and perceptual reasoning factors were roughly equivalent. Similarities had a secondary loading on the working memory factor, but its highest loading was clearly on the verbal comprehension factor.

Overall, the factors that emerged across the three solutions (two, three, and four factors) were relatively clear-cut, and the variables (subtests) tended to have only one primary loading. With one exception, the subtests also loaded on the factor composite to which they contribute in the measurement model. The exception was picture concepts. After examining the first three factor solutions, but before examining the five-factor solution, we hypothesized that picture concepts might emerge as the fifth factor. Although the highest loading in the five-factor solution for picture concepts (.24) was on the fifth factor, similarities actually had the highest loading on that factor (.31).

In previous research with the WISC, the first factor to emerge was consistently a Gc or verbal comprehension factor (e.g., Karnes & Brown, 1980; Macmann et al., 1991; Watkins et al., 2002; Watkins et al., 2006). In the four-factor solution, however, a working memory, Gsm, factor emerged first and accounted for the largest amount of variance. Of the four indices, the

2 Copies of the structure matrix are available from the first author.


WMI also had the second highest correlation with the FSIQ (see Table 3).

Overall, the four-factor solution is similar to the index score composites and provides relatively clear-cut factors. Moreover, all methods used to indicate the number of factors to retain except the MAP procedure included a four-factor option. Henson and Roberts (2006) noted that parallel analysis is the most accurate procedure, and this procedure suggested three to four factors. At the same time, it is not clear that the four-factor model, or any model, provides a good fit to test scores from gifted students. To address this issue, we conducted CFA with test scores from gifted students.

Before moving to the next study, however, we should point out that the correlations among subtest scores and among the composite scores (see Table 3) in our sample were much lower than those found in the WISC-IV Technical Manual (Wechsler, 2003b). The mean correlation among core subtests with the normative data was .42, yet ours was .15. These lower correlations may be the result of the restricted range of scores often found among samples of gifted students. In such cases, researchers sometimes recommend correcting the subtest correlations with a formula such as that suggested by Alexander, Carson, Alliger, and Carr (1987) and conducting the factor analyses on the adjusted, higher correlations. We considered this approach but elected to present and utilize results with the actual correlations, as the goal for the EFA was to explore possible factor solutions for the CFA. Besides, as Kline (2010) notes, correcting for restricted range implies that one wishes to generalize the results to a population with unrestricted range. That is not the case for this research, where the results are intended to apply to gifted or high-ability students. The lower correlations could also be the result of Spearman's Law of Diminishing Returns (SLODR), which would be consistent with findings among higher ability students using other standardized measures of intelligence.

Table 3
Correlations Among WISC-IV Composite Scores, Sample 1 (n = 225)

Variables    VCI      PRI      WMI      PSI     FSIQ
VCI         1.00
PRI          .10     1.00
WMI          .11      .21**   1.00
PSI         -.13      .15*     .28**   1.00
FSIQ         .54**    .64**    .63**    .50**  1.00

Note. VCI = Verbal Comprehension Index; PRI = Perceptual Reasoning Index; WMI = Working Memory Index; PSI = Processing Speed Index; FSIQ = Full Scale IQ.
* p < .05. ** p < .01.

Study 2 (Confirmatory Factor Analyses)

Participants

The participants were 181 elementary-age students who were referred for cognitive testing as part of the application for gifted programs in 2008. As was the case in Study 1, only students who were selected for gifted services through their schools or who were already in school-based gifted programs were included in the analyses. The students ranged in age from 6 to 12 years old at the time of testing, with a mean age of 8 years 7 months. There were almost equal numbers of boys (52%) and girls (48%). As in Study 1, parents completed a questionnaire about their child's race and ethnicity, home language, as well as household income. Sixty-three percent of parents identified their child as White, 26% Asian, 5% Other, and 2% Hispanic. The remaining parents did not respond to the questions regarding race and ethnicity. The majority of parents (82%) indicated that English was the primary language at home. Other primary languages included Chinese (5%), Korean (4%), and Spanish (1%). At the same time, 17% of the sample also spoke a second language other than English at home. As mentioned previously, a majority of parents (54%) indicated a household income of $100,000 or above.

Data Analyses

We again began the analyses with basic descriptive information for the variables among the participants in Study 2. We used SPSS for the univariate descriptive analyses and correlations among the observed variables.
We assessed multivariate normality with AMOS 19, and the CFA models were run on the variance/covariance matrix of the subtest variables in AMOS 19 using the maximum likelihood (ML) estimation method.
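The multivariate normality check reported by AMOS is based on Mardia's multivariate kurtosis, and the same intermediate quantity (Mahalanobis D²) is what the later outlier screen uses. A minimal NumPy sketch of these computations, using simulated data rather than the study's scores (our own illustration, not the authors' code):

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b2,p for an (n, p) data matrix.

    Under multivariate normality its expected value is close to p(p + 2);
    software such as AMOS reports the deviation from that value along
    with a critical ratio.
    """
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    # Squared Mahalanobis distance of each case from the centroid
    d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)
    return d2, (d2 ** 2).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((181, 10))   # 10 subtests, n = 181, normal by construction
d2, b2p = mardia_kurtosis(X)
print(round(b2p, 1))                 # should fall near p(p + 2) = 120 for normal data
```

Large D² values relative to the rest of the sample flag potential multivariate outliers, which is the screen described in the Results below.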


We then delineated two-, three-, and four-factor models. Our EFA results did not suggest a traditional V-P IQ model (see Table 2 for EFA factor loadings from pattern matrices). Nonetheless, this model has considerable support from previous versions of the WISC (Karnes & Brown, 1980; Macmann et al., 1991), so we examined the Verbal and Performance model from previous research in addition to the two-factor model that emerged from our EFA. In the V-P IQ model, the verbal comprehension and working memory subtests were constrained to one factor, and the perceptual reasoning and processing speed subtests were constrained to the second factor. In the two-factor model from our EFA, two subtests (block design and picture concepts) did not have significant loadings on either factor, but both had higher loadings on the nonverbal factor. Thus, this model consisted of a verbal comprehension factor, with all remaining subtests constrained to the second factor. The three-factor model consisted of verbal comprehension, perceptual reasoning, and a combined working memory/processing speed factor. The hypothesized four-factor model reflected the structure of the four index scores. Although picture concepts did not have a loading greater than .27 in any EFA solution, it was constrained to the perceptual reasoning factor. After running the first-order solutions, we analyzed a second-order solution that included a higher-order factor representing general intelligence (g).
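Each of these models fixes which subtests load on which factors and is then fit by maximum likelihood, which minimizes the discrepancy between the model-implied and observed covariance matrices. As a schematic illustration of what ML estimation does, here is a toy one-factor model fit with SciPy on a contrived covariance matrix; this is our own sketch, not the authors' AMOS setup:

```python
import numpy as np
from scipy.optimize import minimize

def fit_one_factor(S, n_obs):
    """ML fit of a one-factor model: Sigma = lam lam' + diag(psi)."""
    p = S.shape[0]

    def f_ml(theta):
        lam, psi = theta[:p], theta[p:]
        Sigma = np.outer(lam, lam) + np.diag(psi)
        sign, logdet = np.linalg.slogdet(Sigma)
        if sign <= 0:
            return 1e6  # penalize non-positive-definite candidates
        # ML discrepancy: ln|Sigma| + tr(S Sigma^-1) - ln|S| - p
        return (logdet + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)

    start = np.concatenate([np.full(p, .5), np.full(p, .5)])
    res = minimize(f_ml, start, method="L-BFGS-B",
                   bounds=[(None, None)] * p + [(1e-3, None)] * p)
    chi2 = (n_obs - 1) * res.fun           # model chi-square
    df = p * (p + 1) // 2 - 2 * p          # observed moments minus free parameters
    return res.x[:p], chi2, df

# Toy covariance with an exact single-factor structure (loadings .7)
S = np.array([[1.0, .49, .49, .49],
              [.49, 1.0, .49, .49],
              [.49, .49, 1.0, .49],
              [.49, .49, .49, 1.0]])
lam, chi2, df = fit_one_factor(S, n_obs=181)
```

Because the toy matrix fits the model exactly, the recovered loadings are near .7 and the chi-square is near zero; with real data the minimized discrepancy yields the chi-square values compared across models below.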

Following Kline's (2011) recommendations, we used several measures of fit, including the χ² test, the root mean square error of approximation (RMSEA), and the comparative fit index (CFI). According to Hu and Bentler (1999), RMSEA values approaching .06 or below and CFI values approaching .95 or above indicate a good fit, and CFI values above .90 indicate an adequate fit. Because we wished to compare models, we also included the Akaike information criterion (AIC). According to Kline, the model with the smallest AIC is the model most likely to replicate.

Results and Discussion

As Table 4 reveals, the mean composite scores for this sample of gifted students ranged from the superior range for the VCI and PRI to the high average range for the WMI and PSI. The mean FSIQ score was in the superior range. Other researchers have demonstrated that differences in intelligence score composites for gifted students can be considerable (Lohman, Gambrell, & Lakin, 2008; Rowe, Miller, Ebenstein, & Thompson, 2012; Winner, 2000), and this is evident in the mean scores for our data sets. Again, the mean scores followed a pattern of higher VCI and PRI with substantially lower PSI scores. Because this pattern appeared in both of our samples and is mentioned frequently in the literature and in other studies of gifted assessment (NAGC, 2010; Rimm et al., 2008; Volker & Smerbeck, 2009; Wechsler, 2003b), we do not think it is unique to our data. In fact, scores for the gifted sample presented in the WISC-IV Technical Manual (Wechsler, 2003b) also followed this pattern.

Table 4
Descriptive Statistics for WISC-IV Composites and Subtests, Study 2 (n = 181)

Composite/subtest            Mean    Median    SD     Skew   Kurtosis
WISC VCI                    126.20    128     11.07   -.41    -.18
WISC PRI                    124.14    125      9.57   -.20    -.23
WISC WMI                    116.66    116      9.45   -.12    -.44
WISC PSI                    112.26    112     12.51   -.31    -.17
WISC FSIQ                   126.71    126      7.86   -.46     .59
Similarities                 15.18     15      2.14   -.56     .13
Vocabulary                   14.25     14      2.19   -.30    -.18
Comprehension                13.93     14      2.57   -.22     .36
Block design                 13.29     13      2.08   -.17    -.09
Picture concepts             13.47     14      2.27   -.42     .73
Matrix reasoning             14.83     15      2.38    .00    -.70
Digit span                   12.31     12      2.14   -.19    -.13
Letter-number sequencing     13.69     14      1.79    .03    -.13
Coding                       11.74     12      2.93   -.11    -.44
Symbol search                12.48     13      2.23   -.21     .21

Note. WISC = Wechsler Intelligence Scale for Children-Fourth Edition; VCI = Verbal Comprehension Index; PRI = Perceptual Reasoning Index; WMI = Working Memory Index; PSI = Processing Speed Index; FSIQ = Full Scale IQ.

Because one consistently interprets the four index composites on the WISC-IV, it may be that the discrepancies among composite scores with gifted students, particularly those for the VCI and PSI, are more evident. However, questions about speededness among gifted students on standardized tests have been discussed for decades. Kaufman (1992), for example, raised the issue in his review of the WISC-III with gifted children. Kaufman noted that the coding subtest, primarily a test of motor speed, was historically the lowest subtest score for gifted children. Such was the case in our data, where the mean coding scores were 11.38 and 11.74. As Kaufman pointed out, lower scores for processing speed may be the result of personality or behavioral factors, such as a tendency to work carefully and check one's work. Lower coding scores in our study, as well as those mentioned by Kaufman, could also be related to motor skills or motor development. Whatever the reason, processing speed scores that are considerably lower than the VCI appear to be common among gifted students.

As was the case in Study 1, the standard deviations in Study 2 were smaller than average. Although scores for the similarities and picture concepts subtests were somewhat negatively skewed (see Table 4), the values are less than ±1. The only kurtosis value that exceeded ± twice the standard error was that for picture concepts, which is likely due to two low scores on this subtest. The kurtosis value for LNS was initially 2.74; however, there was an outlying score on this subtest. On the test protocol, the examiner had written that the child did not appear to understand the directions, so we deleted the score. This resulted in an LNS kurtosis value of -.13.
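The "twice the standard error" screen described above can be checked with the conventional large-sample approximation SE ≈ √(24/N) for excess kurtosis (a rough rule of thumb; exact small-sample formulas differ slightly). A sketch with the Table 4 subtest values, our own illustration rather than the authors' code:

```python
import math

def kurtosis_se(n):
    """Approximate large-sample standard error of excess kurtosis."""
    return math.sqrt(24 / n)

threshold = 2 * kurtosis_se(181)   # roughly 0.73 for n = 181

# Kurtosis values for the Study 2 subtests (Table 4)
kurtosis = {
    "Similarities": .13, "Vocabulary": -.18, "Comprehension": .36,
    "Block design": -.09, "Picture concepts": .73, "Matrix reasoning": -.70,
    "Digit span": -.13, "Letter-number sequencing": -.13,
    "Coding": -.44, "Symbol search": .21,
}
flagged = [name for name, k in kurtosis.items() if abs(k) > threshold]
print(round(threshold, 2), flagged)
```

Only picture concepts (.73) clears the ± 2 SE threshold, matching the screen reported in the text; matrix reasoning (-.70) falls just inside it.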
The critical ratio for the index of multivariate kurtosis was 1.05, which indicates that the data are relatively multivariate normal. The D² values were all within a relatively small range, suggesting that there were no serious multivariate outliers.

The correlations for the WISC-IV index and FSIQ scores for Study 2 are found in Table 5. The correlations between the index scores and the FSIQ were relatively high, but the correlations among the index scores were lower than those for the normative data. In fact, with our Study 2 data, none of the correlations between the VCI and the other three index scores was significant, whereas the correlations among the other three indices were significant. The same patterns of relationships existed for the scores in Study 1 and may be a function of SLODR, the restricted range of scores in our samples, regression to the mean, or a combination of all three. In terms of regression to the mean, the higher the VCI score, the more likely the remaining index scores will regress to the mean and demonstrate less of a relationship with the VCI.

Table 5
Correlations Among WISC-IV Composite Scores, Sample 2 (n = 181)

Variables    VCI      PRI      WMI      PSI     FSIQ
VCI         1.00
PRI          .12     1.00
WMI          .10      .36**   1.00
PSI          .01      .21**    .21**   1.00
FSIQ         .58**    .70**    .58**    .57**  1.00

Note. VCI = Verbal Comprehension Index; PRI = Perceptual Reasoning Index; WMI = Working Memory Index; PSI = Processing Speed Index; FSIQ = Full Scale IQ.
** p < .01.

The fit index values for the two-, three-, and four-factor models are found in Table 6. As indicated in Table 6, the χ² statistics for all first-order models were significant at the .05 level, but the four-factor model had the smallest χ² value among the first-order models. Because of problems associated with the χ² (e.g., sensitivity to sample size), however, researchers often look to additional fit indices in evaluating their models. Among the first-order models, the only model with fit index values suggesting a good or acceptable fit was the four-factor model (CFI = .92, RMSEA = .05). Additionally, the AIC value for the four-factor model was lower than those for the two-factor and three-factor models. The fit values for the four-factor, second-order model that includes a g factor were very similar to those for the four-factor, first-order model, but its χ² value was not significant at the .05 level. Additionally, the lowest AIC value was for the second-order model. Based on fit values, then, the four-factor, second-order model followed by the four-factor, first-order model provided the best fit for the data.


Table 6
CFA Fit Values for Four Alternative WISC-IV Models (n = 181)

Models                                            df     χ²      p   RMSEA   CFI     AIC
Two-factor (reflecting verbal and performance IQ) 34   195.32   .00   .11    .62   167.32
Two-factor (EFA results)                          34    73.24   .00   .08    .79   135.24
Three-factor                                      32    64.35   .00   .08    .83   130.35
Four-factor, first-order                          29    44.00   .04   .05    .92   116.00
Four-factor, second-order                         31    44.91   .05   .05    .93   112.91

Note. RMSEA = root mean square error of approximation; CFI = comparative fit index; AIC = Akaike information criterion.
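The RMSEA point estimate follows directly from a model's chi-square, degrees of freedom, and sample size via RMSEA = √(max(χ² − df, 0) / (df·(N − 1))). As a check on the two best-fitting rows of Table 6 (our own illustration):

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# Four-factor models from Table 6, n = 181
first_order = rmsea(44.00, 29, 181)
second_order = rmsea(44.91, 31, 181)
print(round(first_order, 2), round(second_order, 2))
```

Both values round to .05, matching the tabled RMSEA for the four-factor, first-order and second-order models.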

After considering the fit indices, we examined the modification indices and standardized residuals to determine whether there were possible parameter modifications that were meaningful and could improve the fit of the model. The only modifications that would improve the model fit were between error terms; consequently, they did not seem meaningful. Examination of the residuals also did not suggest alterations to the model.

Consideration of the fit indices is a crucial first step, but as Kline (2011) cautions, examination of the parameter estimates is equally important. Figure 1 contains the four-factor, first-order model with standardized regression weights and factor correlations, and Table 7 contains the R² values. Several parameter estimates are noteworthy. To begin, the R² value for picture concepts was .16. The R² value is the amount of variance in picture concepts that is explained by the latent perceptual reasoning factor and indicates whether or not an observed variable (subtest) is a good measure of the underlying construct. This lower R² value, together with the finding from the EFA that the picture concepts subtest did not load on the perceptual reasoning factor, suggests that picture concepts was not a strong indicator of perceptual reasoning in these two samples of gifted students. As noted in Table 7, though, the R² value for matrix reasoning was .52. Thus, matrix reasoning was a sound measure of perceptual reasoning in this sample.

Matrix reasoning is considered a quintessential measure of fluid reasoning, and picture concepts was designed to measure Gf. Therefore, one would expect a factor that explains a large amount of variance in matrix reasoning to also explain significant variance in picture concepts. It may be that age is an issue with regard to the findings for picture concepts. As mentioned previously, in the EFA analyses with the youngest age group in the WISC-IV Technical Manual (Wechsler, 2003b), picture concepts loaded on both the VCI and PRI, but neither loading was particularly high (.21 and .20, respectively). At the same time, the EFA loading for the 8- to 10-year-old normative group in the WISC-IV Technical Manual (Wechsler, 2003b) was primarily on the PRI (.39), with a secondary loading of .21 on working memory. The PRI loading increased to .57 in the 11- to 13-year-old normative group. Picture concepts, then, may become a stronger measure of perceptual reasoning as students approach adolescence. The average age in the current sample was 8 years old, and the majority of students (66%, Study 1; 72%, Study 2) were less than 9 years old. It may be that picture concepts is not yet a strong measure of perceptual reasoning at this age.

In addition to the factor loadings, the correlations among the factors are presented in Figure 1. Of note are the correlations between the verbal comprehension factor and the other three factors. The association between the verbal factor and the processing speed factor was not significant, and the associations with the other two factors were small. Again, these lower factor correlations may be a manifestation of SLODR. Reynolds, Hajovsky, Niileksela, and Keith (2011) also obtained lower latent factor correlations among students with higher overall cognitive ability on the Differential Ability Scales-2nd edition (DAS-II; Elliott, 2007). However, in a recent study of the Stanford-Binet Intelligence Scale-5th edition (SB5; Roid, 2003) with high-achieving students, the latent factors were highly correlated (Williams et al., 2010). In the SB5 study, the correlations between the Gc and Gf factors were so high that



Figure 1. Four-factor, first-order model for the Wechsler Intelligence Scale for Children-4th edition with gifted students. VC = verbal comprehension; PR = perceptual reasoning; WM = working memory; PS = processing speed.

the researchers combined those factors. Then again, the mean SB5 composite scores ranged from 104 to 114, and most of the mean subtest scores were in the average range. This may be a reason for the differences from our results. Furthermore, Williams, McIntosh, Dixon, Newton, and Youman (2010) were not comparing the correlations among students of different levels of ability.

The factor correlations in this study also have ramifications for the parameter estimates of the second-order model (see Figure 2). The higher-order factor in this model represents g, or general intelligence, and many researchers have documented high associations between g and verbal comprehension or Gc factors, with lower associations between g and Gs, or processing speed, factors (e.g., Carroll, 1993). Bodin, Pardini, Burns, and Stevens (2009) and Watkins (2010), for instance, obtained estimates of .87 and .89, respectively, between g and the verbal comprehension factors with the WISC-IV among samples of referred children. The estimate between g and verbal comprehension with the WISC-IV normative data was .88 (Keith et al., 2006). In the current study, however, the estimate from g to the verbal factor was .16. Because the remaining three factors in this study had moderate to high correlations with one another, the parameter estimate between g and processing speed is higher (.45). Thus, although the fit indices for the CFA second-order model in our study suggest a good fit between the model and the data, the parameter estimates, particularly the one from g to the verbal comprehension factor, are not consistent with extant research on the WISC-IV (Bodin et al., 2009; Keith et al., 2006; Watkins, 2010) or CHC research (e.g., Carroll, 1993) regarding the relationship between broad, stratum II factors and the hierarchical, stratum III (g) factor. Once more, a possible explanation for these findings is SLODR, and these findings are consistent with those of Reynolds et al. (2011) with the DAS-II. In fact, the parameter estimates in this study for the hierarchical, second-order model suggest that combining the stratum II factors (VC, PR, WM, PS) into an overarching factor (g) can have the impact of minimizing Gc, or the verbal comprehension factor. As a result, this hierarchical model may not be the best representation of cognitive abilities for gifted students. In terms of measurement, this suggests that combining index scores into a FSIQ could reduce the salience of very high VCI scores. Moreover, this would be the case for individual students with one or more large index score differences.

Table 7
R² Values for Observed Variables on Respective Latent Factor

Observed subtest variable (respective latent factor)    R²
Similarities (VC)                                       .46
Vocabulary (VC)                                         .41
Comprehension (VC)                                      .25
Block design (PR)                                       .20
Picture concepts (PR)                                   .16
Matrix reasoning (PR)                                   .52
Letter-number sequencing (WM)                           .28
Digit span (WM)                                         .24
Coding (PS)                                             .28
Symbol search (PS)                                      .69

Note. VC = verbal comprehension; PR = perceptual reasoning; WM = working memory; PS = processing speed.

Summary and Concluding Discussion

Together these analyses provide support for the four-factor, first-order WISC-IV model among gifted students. In general, the four-factor model emerged from the EFA analyses as suggested in the WISC-IV Technical Manual (Wechsler, 2003b). The fit indices from the CFA also suggested that the four-factor, first-order model provided an acceptable fit with the data and was a better fit than the two- or three-factor models. At the same time, parameter estimates from the second-order, hierarchical model suggested that combining subtest or composite scores into a single, overarching WISC-IV score may not be the best representation of performance for many gifted students. Instead of combining the scores, the best representation for many gifted students may be the four index scores considered independently. Following CHC theory, this implies that the focus for gifted students should be on the CHC broad, stratum II abilities instead of g. This recommendation echoes those of McIntosh et al. (2012). Moreover, as Reynolds et al. (2011) propose, given findings of SLODR with higher overall cognitive ability, it could be that the degree to which psychologists place interpretive weight on the overall, stratum III global score depends upon ability level.

Other scoring options discussed frequently for use among gifted students include the GAI or the VCI or PRI used individually (NAGC, 2010; Rimm et al., 2008; Sparrow et al., 2005; Volker & Smerbeck, 2009; Watkins et al., 2002). In this study, a model reflecting the GAI did not emerge from the EFA, and the two-factor model reflective of the old V-P IQs did not fit the data well in the CFA. Additionally, while all three subtests contributing to the VCI appeared to be strong measures of the underlying construct, the analyses suggested that picture concepts is not a strong measure of perceptual reasoning with these data sets. Given this finding, interpretation and placement decisions based solely upon the PRI are not supported.
Even though the structure of the VCI is supported, placement decisions made on this index score alone could be limited in the degree of variance explained in cognitive abilities. In the EFA four-factor solution, it was the working memory factor that first emerged and accounted for the largest percentage of the variance.
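The averaging effect behind the concern about overarching scores is simple arithmetic. The sketch below uses hypothetical index scores (not values from the study) and an equally weighted mean; the actual FSIQ is derived from summed subtest scaled scores via norm tables, so this is only an illustration of how aggregation pulls an exceptional VCI toward the profile mean:

```python
# Hypothetical index profile (mean 100, SD 15); illustrative values only
indexes = {"VCI": 145, "PRI": 125, "WMI": 118, "PSI": 105}

# An equal-weight composite sits well below the VCI peak,
# masking the strength the profile was referred for.
composite = sum(indexes.values()) / len(indexes)
print(composite, composite - indexes["VCI"])
```

For this profile the composite is 123.25, nearly 22 points below the VCI, which is the kind of salience loss the parameter estimates above suggest.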



Figure 2. Four-factor, second-order model for the Wechsler Intelligence Scale for Children-4th edition with gifted students. VC = verbal comprehension; PR = perceptual reasoning; WM = working memory; PS = processing speed; g = general intelligence.

Overall, these findings support a more nuanced process for considering cognitive test scores in the identification of gifted students. In not supporting a single, overall score, the results also do not support the use of a single cut-off value for identifying gifted students. Others have also cautioned against the use of cut scores in the identification of gifted students, given the measurement error inherent in any test score (McIntosh, Dixon, & Pierson, 2012; Worrell & Erwin, 2011). In fact, multimodal assessment, which includes different types of information from different sources, represents best practices in

assessment (NAGC, 2008). As noted by McIntosh et al. (2012) as well as by Worrell and Erwin (2011), this standard also applies to the identification of gifted students, where teacher recommendations, rating scales, performance assessments, day-to-day achievement, and test scores reflecting different aspects of cognitive abilities are all potentially important.

Although this research does provide construct validity support for a four-factor model of the WISC-IV among gifted students, it is not without limitations. To begin, most of the students in our study were in second or third grade


and from a relatively affluent metropolitan suburb. As a result, our findings may not generalize to all gifted students. Additional research on the factor structure of the WISC-IV with gifted students of different ages and from diverse cultural and economic backgrounds is needed. One potential question related to age and development is whether picture concepts becomes a stronger measure of perceptual reasoning among older students. We also recognize that even though cognitive scores are frequently a criterion for the identification of intellectually gifted students, cognitive test scores often do not predict even a majority of the variance in academic outcomes. Future research should explore the degree to which other variables, such as measures of child motivation and teacher ratings of academic potential, compare with scores from standardized cognitive tests.

Finally, these results are limited to the construct validity of the instrument and do not address criterion-related validity. In other words, our results support interpretation of the four-factor model of the WISC-IV, but they do not provide information about the degree to which the individual factors predict academic or other forms of achievement. At the same time, research by Rowe, Kingsley, and Thompson (2010) as well as by Rowe et al. (2012) revealed that scores for the VCI, PRI, and WMI were predictive of subsequent achievement in reading or mathematics or both. However, scores for processing speed did not account for unique variance in the prediction of reading or math scores (Rowe et al., 2010, 2012). Furthermore, Wai, Lubinski, and Benbow (2009) demonstrated the importance of spatial ability in predicting students' selection of classes and careers in science, technology, engineering, and mathematics. In spite of limitations, this is the first factor analytic study of the WISC-IV among gifted students.
Because the WISC-IV is very different from previous versions of the Wechsler child scales, and it is used frequently for the identification of gifted students, an empirical study of this type provides a unique contribution to the literature regarding the assessment of cognitive abilities with high-achieving students. Moreover, this research should provide guidance to school psychologists in their roles as practitioners and as consultants to parents as well as to their colleagues in education who are involved

in the process of gifted identification. This is particularly relevant in a profession such as school psychology, which adheres to a tenet of data-based decision making.

References

Alexander, R. A., Carson, K. P., Alliger, G. M., & Carr, L. (1987). Correcting doubly truncated correlations: An improved approximation for correcting the bivariate normal correlation when truncation has occurred on both variables. Educational and Psychological Measurement, 47, 309-315. doi:10.1177/0013164487472002

Alfonso, V. C., Flanagan, D. P., & Radwan, S. (2005). The impact of the Cattell-Horn-Carroll theory on test development and interpretation of cognitive and academic abilities. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 185-202). New York, NY: Guilford Press.

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing (3rd ed.). Washington, DC: AERA.

Bodin, D., Pardini, D. A., Burns, T. G., & Stevens, A. B. (2009). Higher order factor structure of the WISC-IV in a clinical neuropsychological sample. Child Neuropsychology, 15, 417-424. doi:10.1080/09297040802603661

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor analytic studies. New York, NY: Cambridge University Press. doi:10.1017/CBO9780511571312

Cattell, R. B. (1943). The measurement of adult intelligence. Psychological Bulletin, 40, 153-193. doi:10.1037/h0059973

Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276. doi:10.1207/s15327906mbr0102_10

Deary, I. J., & Pagliari, C. (1991). The strength of g at different levels of ability: Have Detterman and Daniel rediscovered Spearman's "law of diminishing returns?" Intelligence, 15, 247-250. doi:10.1016/0160-2896(91)90033-A

Detterman, D. K., & Daniel, M. H. (1989). Correlations of mental tests with each other and with cognitive variables are highest for low IQ groups. Intelligence, 13, 349-359. doi:10.1016/S0160-2896(89)80007-8

Elliott, C. D. (2007). Differential Ability Scales (2nd ed.). San Antonio, TX: Pearson.

Flanagan, D. P., Alfonso, V. C., & Ortiz, S. O. (2012). The cross-battery assessment approach: An overview, historical perspective, and current directions. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 459-483). New York, NY: Guilford Press.

Hartmann, P., & Reuter, M. (2006). Spearman's "law of diminishing returns" tested with two methods. Intelligence, 34, 47-62. doi:10.1016/j.intell.2005.06.002

Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comments on improved practice. Educational and Psychological Measurement, 66, 393-416. doi:10.1177/0013164405282485

Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185. doi:10.1007/BF02289447

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized intelligence. Journal of Educational Psychology, 57, 253-270. doi:10.1037/h0023816

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55. doi:10.1080/10705519909540118

Jensen, A. R. (2003). Regularities in Spearman's law of diminishing returns. Intelligence, 31, 95-105. doi:10.1016/S0160-2896(01)00094-0

Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151. doi:10.1177/001316446002000116

Karnes, F. A., & Brown, K. E. (1980). Factor analysis of the WISC-R for the gifted. Journal of Educational Psychology, 72, 197-199. doi:10.1037/0022-0663.72.2.197

Kaufman, A. S. (1992). Evaluation of the WISC-III and WPPSI-R for gifted children. Roeper Review, 14, 154-158. doi:10.1080/02783199209553413

Keith, T. Z., Fine, J. G., Taub, G. E., Reynolds, M. R., & Kranzler, J. H. (2006). Higher order, multisample, confirmatory factor analysis of the Wechsler Intelligence Scale for Children, 4th edition: What does it measure? School Psychology Review, 35, 108-127.

Kline, R. B. (2010). Promise and pitfalls of structural equation modeling in gifted research. In B. Thompson & R. F. Subotnik (Eds.), Methodologies for conducting research on giftedness (pp. 147-169). Washington, DC: American Psychological Association. doi:10.1037/12079-007

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford Press.

Lohman, D. F., Gambrell, J., & Lakin, J. (2008). The commonality of extreme discrepancies in the ability profiles of academically gifted students. Psychology Science Quarterly, 50, 269-282.

Macmann, G. M., Plasket, C. M., Barnett, D. W., & Siler, R. F. (1991). Factor structure of the WISC-R for children of superior intelligence. Journal of School Psychology, 29, 19-36. doi:10.1016/0022-4405(91)90012-G

McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37, 1-10. doi:10.1016/j.intell.2008.08.004

McIntosh, D. E., Dixon, F. A., & Pierson, E. E. (2012). Use of intelligence tests in the identification of giftedness. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 623-642). New York, NY: Guilford Press.

National Association for Gifted Children. (2008, October). The role of assessments in the identification of gifted students. Retrieved from http://nagc.org/index.aspx?id=4022

National Association for Gifted Children. (2010, March). Use of the WISC-IV for gifted identification. Retrieved from http://nagc.org/index.aspx?id=2455

No Child Left Behind Act of 2002, P. L. 107-110, Title IX, Part A, Section 9101 (22), p. 544, 20 U.S.C. 7802.

Prifitera, A., Weiss, L. G., & Saklofske, D. H. (1998). The WISC-III in context. In A. Prifitera & D. H. Saklofske (Eds.), WISC-III clinical use and interpretation: Scientist-practitioner perspectives (pp. 1-38). New York, NY: Academic Press. doi:10.1016/B978-012564930-8/50002-4

Reynolds, M. R., Hajovsky, D. B., Niileksela, C. R., & Keith, T. Z. (2011). Spearman's law of diminishing returns and the DAS-II: Do g effects on subtest scores depend on the level of g? School Psychology Quarterly, 26, 275-289. doi:10.1037/a0026190

Reynolds, M. R., & Keith, T. Z. (2007). Spearman's law of diminishing returns in hierarchical models of intelligence for children and adolescents. Intelligence, 35, 267-281. doi:10.1016/j.intell.2006.08.002

Rimm, S., Gilman, B., & Silverman, L. (2008). Nontraditional applications of traditional testing. In J. L. Van Tassel-Baska (Ed.), Alternative assessments with gifted and talented students (pp. 175-202). Waco, TX: Prufrock Press.

Roid, G. H. (2003). Stanford-Binet Intelligence Scales, 5th edition, examiner's manual. Itasca, IL: Riverside.

Rowe, E. W., Kingsley, J. M., & Thompson, D. F. (2010). Predictive ability of the General Ability Index (GAI) versus the full scale IQ among gifted referrals. School Psychology Quarterly, 25, 119-128. doi:10.1037/a0020148

Rowe, E. W., Miller, C., Ebenstein, L. A., & Thompson, D. F. (2012). Cognitive predictors of reading and math achievement among gifted referrals. School Psychology Quarterly, 27, 144-153. doi:10.1037/a0029941

Saklofske, D. H., Yang, Z., Zhu, J., & Austin, E. J. (2008). Spearman's law of diminishing returns in normative samples for the WISC-IV and WAIS-III. Journal of Individual Differences, 29, 57-69. doi:10.1027/1614-0001.29.2.57

Sparrow, S. S., Pfeiffer, S. I., & Newman, T. M. (2005). Assessment of children who are gifted with the WISC-IV. In A. Prifitera, D. H. Saklofske, & L. G. Weiss (Eds.), WISC-IV clinical use and interpretation: Scientist-practitioner perspectives (pp. 281-298). Waltham, MA: Academic Press. doi:10.1016/B978-012564931-5/50009-8

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15, 201-293. doi:10.2307/1412107

Spearman, C. (1927). The abilities of man: Their nature and measurement. New York, NY: Macmillan.

Velicer, W. F. (1976). The relation between factor score estimates, image scores and principal component scores. Educational and Psychological Measurement, 36, 149-159. doi:10.1177/001316447603600114

Volker, M. A., & Smerbeck, A. M. (2009). Identification of gifted students with the WISC-IV. In D. P. Flanagan & A. S. Kaufman (Eds.), Essentials of WISC-IV assessment (2nd ed., pp. 262-276). Hoboken, NJ: Wiley.

Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101, 817-835. doi:10.1037/a0016127

Watkins, M. W. (2010). Structure of the Wechsler Intelligence Scale for Children, 4th edition among a national sample of referred students. Psychological Assessment, 22, 782-787. doi:10.1037/a0020043

Watkins, M. W., Greenawalt, C. G., & Marcell, C. M. (2002). Factor structure of the Wechsler Intelligence Scale for Children, 3rd edition among gifted students.

Educational and Psychological Measurement, 62, 164-172. doi: 10.1177/0013164402062001011 Watkins, M. W., Wilson, S. M., Kotz, K. M., Car­ bone, M. C., & Babula, T. (2006). Factor structure of the Wechsler Intelligence Scale for Children, 4th edition among referred students. Educational and Psychological Measurement, 66, 975-983. doi: 10.1177/0013164406288168 Wechsler, D. (1974). Manual fo r the Wechsler Intel­ ligence Scale fo r Children, revised (WISC-R). New York, NY: The Psychological Corporation. Wechsler, D. (1991). Manual fo r the Wechsler Intel­ ligence Scale fo r Children, 3rd edition (WISC-III). San Antonio, TX: The Psychological Corporation. Wechsler, D. (2003a). Wechsler Intelligence Scale fo r Children, 4th edition (WISC-TV) administration and scoring manual. San Antonio, TX: The Psy­ chological Corporation. Wechsler, D. (2003b). Wechsler Intelligence Scale fo r Children, 4th edition (WISC-IV) technical and interpretive manual. San Antonio, TX: The Psy­ chological Corporation. Williams, T. H., McIntosh, D. E., Dixon, F., Newton, J. H„ & Youman, E. (2010). A confirmatory factor analysis of the Stanford-Binet Intelligence Scales, 5th edition, with a high achieving sample. Psychol­ ogy in the Schools, 47, 1071-1083. doi:10.1002/ pits.20525 Winner, E. (2000). The origins and ends of gifted­ ness. American Psychologist, 55, 159-169. doi: 10.1037/0003-066X.55.1.159 Worrell, F. C., & Erwin, J. O. (2011). Best practices in identifying students for gifted and talented ed­ ucation programs. Journal o f Applied School Psy­ chology, 27, 319-340. doi: 10.1080/15377903 .2011.615817 Zhu, J., Cayton, T., Weiss, L., & Gabel, A. (2008). Wechsler Intelligence Scale for Children, 4th edition (WISC-TV) Tech. Rep. No. #7: WISC-IV extended norms. Retrieved from http://www.pearsonassessments .com/NR/rdonlyres/C 1C 19227-BC79-46D9-B43C8E4A114F7ElFZ0AVISCIV_TechReport_7.pdf Received July 24, 2012 Revision received October 12, 2012 Accepted October 16, 2012 ■
