School Psychology Quarterly 2015, Vol. 30, No. 4, 534 –552

© 2014 American Psychological Association 1045-3830/15/$12.00 http://dx.doi.org/10.1037/spq0000088

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Assessing Adolescents’ Positive Psychological Functioning at School: Development and Validation of the Student Subjective Wellbeing Questionnaire

Tyler L. Renshaw and Anna C. J. Long
Louisiana State University

Clayton R. Cook
University of Washington

This study reports on the initial development and validation of the Student Subjective Wellbeing Questionnaire (SSWQ) with a sample of 1,002 students in Grades 6–8. The SSWQ is a 16-item self-report instrument for assessing youths’ subjective wellbeing at school, which is operationalized via 4 subscales measuring school connectedness, academic efficacy, joy of learning, and educational purpose. The conceptualization and development of the SSWQ’s subscales and items are described, and results from a series of preliminary psychometric analyses are reported. Findings indicated that the SSWQ was characterized by 4 conceptually sound latent factors, that these 4 first-order factors were robust indicators of a single second-order factor (i.e., student subjective wellbeing), that all subscales and the composite scale demonstrated at least adequate construct reliability and internal consistency, and that the estimated latent means for all first-order and second-order factors were invariant across gender. Moreover, results from bivariate correlations and a latent-variable path analysis provided evidence in support of the construct validity of the SSWQ’s scales and latent factors, showing strong associations with other student wellbeing indicators (i.e., school prosociality and academic perseverance), while findings from binary logistic regressions demonstrated that overall student subjective wellbeing levels, based on composite scores from the SSWQ, were mildly to strongly associated with a variety of self-endorsed risk factors (e.g., aggression and self-harm) and protective factors (e.g., social support and physical exercise). Implications for theory, research, and the practice of school psychology are discussed.

Keywords: subjective wellbeing, positive psychology, protective factors, risk factors, measurement

This article was published Online First September 1, 2014. Tyler L. Renshaw and Anna C. J. Long, Department of Psychology, Louisiana State University; Clayton R. Cook, Department of Educational Psychology, University of Washington. The authors declare no financial interests or conflicts regarding the research presented herein concerning the development of the Student Subjective Wellbeing Questionnaire. Correspondence concerning this article should be addressed to Tyler L. Renshaw, Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA 70803. E-mail: [email protected]

During the past decade, positive psychology has made its way into the schools and onto the pages of school psychology’s scholarly literature (Chafouleas & Bray, 2004; Furlong, Gilman, & Huebner, 2014; Huebner & Gilman, 2003). According to Seligman and Csikszentmihalyi (2000), who coauthored the foundational article introducing the field, positive psychology is the scientific study of positive (or socially desirable) emotions, character traits, and the institutions that enable and cultivate them. Positive psychology has also been defined more broadly as the scientific study of the good life (Peterson & Park, 2003), psychological health (Peterson & Seligman, 2004), “ordinary human strengths and virtues” (Sheldon & King, 2001, p. 216), and “the conditions or processes that contribute to flourishing or optimal functioning” (Gable & Haidt, 2005, p. 103). Synthesizing these definitions, we conceptualize positive psychology as the scientific study of a more basic and common social construct—one that




was used in psychology prior to the birth of positive psychology (cf. Ryff, 1989), is fundamental to the field of public health, and is still used in contemporary school psychology research and practice: wellbeing. Wellbeing is a metaconstruct that encompasses all aspects of healthy and successful living, including psychological, economic, physical, and other domains. For the purposes of this work, we are solely concerned with subjective wellbeing, which has historically been operationalized via self-reports of positive affectivity and life satisfaction (Diener, Oishi, & Lucas, 2009). Yet in recent years, conceptualizations of subjective wellbeing have expanded to include engagement, relationship, meaning, and accomplishment domains, some of which are also measurable by informant-rated or performance-based measures (Seligman, 2011). Such a broad conceptualization of wellbeing is consistent with how school psychologists have conceptualized the construct from the beginning—noting that a school-based positive psychology could embrace traditional subjective indicators (e.g., life satisfaction; Huebner, Suldo, Smith, & McKnight, 2004), while also promoting academic competence, persistence, and success via behavioral skill instruction (Martens & Witt, 2004), relational supports (Sheridan, Warnes, Cowan, Schemm, & Clarke, 2004), and environmental enhancements (Jenson, Olympia, Farley, & Clarke, 2004). Thus, targeting and cultivating student wellbeing is “old hat” for school psychology. However, a thorough review of the empirical literature suggests that school psychologists’ efforts to promote student wellbeing have been rather lopsided over the years, with much more attention paid to informant-rated and performance-based indicators (e.g., teacher ratings or direct observations of positive social behavior and academic skill proficiency), and far less consideration given to subjective indicators (e.g., self-reports of the same).
Assessing Student Wellbeing

Given that the purpose of public education is to help students acquire academic skills that will enable them to successfully take part in and contribute to society, it is no surprise that the primary wellbeing indicators valued in the schools are performance-based scholastic


achievement outcomes (i.e., report-card grades and standardized test scores), and the secondary wellbeing indicators of interest are typically observable behaviors that have been demonstrated to facilitate or hamper academic achievement (Terjesen, Jacofsky, Froh, & DiGiuseppe, 2004). That said, recent reviews have indicated that youths’ subjective wellbeing has concurrent and predictive validity with academic achievement outcomes (Renshaw et al., 2014) as well as with a variety of common correlates of school success (e.g., physical health, substance use, and goal-directed behavior; Proctor, Linley, & Maltby, 2009). The popular press has also provided evidence to suggest that both scholars and caregivers believe youths’ subjective wellbeing should be a primary outcome of public schooling (e.g., Sahlberg, 2011; Tough, 2012). Yet the most compelling evidence in favor of assessing students’ subjective wellbeing is likely derived from state-level research on the effects of school climate—operationalized as “the collective, subjective appraisal of individuals’ experiences within their own local school environments” (O’Malley, Katz, Renshaw, & Furlong, 2013, p. 318)—on school-level academic outcomes. Such research has shown, for example, that greater levels of student subjective wellbeing are predictive of greater likelihoods of schoolwide academic success, especially for higher-risk youth attending disadvantaged schools (Voight, Austin, & Hanson, 2013). Taken together, then, the upshot of the contemporary scholarship warrants greater emphasis on students’ subjective wellbeing indicators in applied research and practice.
And although the sources of such research vary, we propose that the evidence is consistent with and intelligible within Fredrickson’s (2013) broaden-and-build theory, which, applied in the school context, posits that students’ experiences of subjective wellbeing result in broadened awareness of their environment, which, in turn, enables them to more effectively build creativity, problem-solving, and other skills necessary for successful performance. In response to such compelling evidence, a new line of research has arisen to develop and validate more robust measures of students’ subjective wellbeing, moving beyond the traditional assessment of isolated wellbeing indicators (e.g., life satisfaction; Huebner, Hills,




Siddall, & Gilman, 2014) to the measurement of multidimensional and multilevel subjective wellbeing constructs (e.g., covitality; Furlong, You, Renshaw, Smith, & O’Malley, 2014; You et al., 2013). Additionally, a particular thread of this new wave of research has sought to develop domain-specific (as opposed to domain-general) measures that target youths’ school-specific subjective wellbeing. For example, the Positive Experiences at School Scale (PEASS) was developed to measure four school-specific constructs: gratitude, zest, optimism, and persistence (Furlong, You, Renshaw, O’Malley, & Rebelez, 2013). Although school-specific wellbeing measures have been part-and-parcel of school climate research, beyond the PEASS, most assessment of students’ subjective wellbeing has been domain general. That said, a case for privileging school-specific indicators was recently made in a study with college students, which found that undergraduates’ school-specific wellbeing was a substantially stronger predictor of academic achievement and other quality-of-life outcomes than was their domain-general wellbeing (Renshaw & Bolognino, 2014). Yet given the sparse research on the subject, there is currently a lack of consensus regarding the nature and structure of youths’ school-specific subjective wellbeing, which contrasts with the typically well-defined structure of students’ informant-rated and performance-based wellbeing (e.g., schoolwide behavioral expectations and common-core academic proficiency standards). For these reasons, some scholars have noted that, to truly take root and be fruitful in schools, assessment of students’ subjective wellbeing must be more than just psychometrically robust and socially feasible—it must also be coherently conceptualized and operationalized as complementary to current informant-rated and performance-based assessments of student wellbeing (Clonan, Chafouleas, McDougal, & Riley-Tillman, 2004).
Purposes of the Present Study

The proximal purpose of the present study was to develop and establish the technical adequacy of a brief, multidimensional, domain-specific measure of adolescents’ school-specific positive psychological functioning: the Student Subjective Wellbeing Questionnaire (SSWQ). The ultimate purpose of the present study, then,

was to test and refine an instrument that could be used in conjunction with informant-rated or performance-based wellbeing measures, as well as with subjective problem measures (assessing various behavioral risks and psychopathology), to gauge the comprehensive wellbeing of individual students, classes, and schools within multitiered systems of support (MTSS; cf. Renshaw et al., 2014). Although a school-specific measure of this nature is available for primary-age students (i.e., the PEASS; Furlong et al., 2013), prior to this study there was no such instrument available for adolescents—rather, only domain-general, multidimensional subjective wellbeing measures existed (e.g., the Social and Emotional Health Survey; Furlong et al., 2014; You et al., 2013). Given that a domain-general approach to measurement is incongruent with the problem-solving model that undergirds best practices in school psychological and educational service delivery (cf. Ervin, Gimpel Peacock, & Merrell, 2010; Hawkins, Barnett, Morrison, & Musti-Rao, 2010), we posited that a new measure of student subjective wellbeing was needed in order to efficiently and effectively use the construct within school-based assessment-to-intervention practice. Moreover, given that schoolwide prevention and promotion programming intends to both reduce student problems and promote student wellbeing, yet systematic intervention planning and evaluation efforts often fail to measure the latter, we further posited that a new measure was warranted to enable more well-rounded assessments of the effects of these common educational practices (e.g., positive behavior interventions and supports, social–emotional learning, and integrative efforts; cf. Domitrovich et al., 2010). Thus, development and validation of the SSWQ was primarily undertaken as an effort to advance school psychological practice specifically and educational assessment more broadly.
Given these purposes, the aims of the present study were threefold and intended to support the construct validation of the SSWQ. First, we aimed to conceptualize the metaconstruct of student subjective wellbeing and its subconstructs, and then to operationalize these subconstructs by drafting test scales and items. Next, we aimed to investigate the initial psychometric properties of the pilot version of the SSWQ by testing the measure on a target sample of middle school students. And finally, we aimed to simulate the utility of the SSWQ as a classification instrument by investigating the usefulness of overall student subjective wellbeing levels, derived from SSWQ composite scores, for predicting the presence of various risk and protective factors. Given these aims, we hypothesized that the SSWQ would be characterized by a conceptually sound and psychometrically robust multidimensional latent-trait structure, that its subscales and the composite scale would be positively associated with other student subjective wellbeing measures (i.e., school prosociality and academic perseverance), and that the classification statuses yielded by the composite score would be strongly predictive of youths’ self-endorsements of various risk factors (e.g., aggression and self-harm) and protective factors (e.g., social support and physical exercise).

Method

Participants

The target sample consisted of adolescents in Grades 6–8 attending two public middle schools located in a midsize city within the southern region of the United States. At the time of this study, the combined enrollment of School A (SA) and School B (SB) was approximately 1,600 students. Administrators at both schools stated that they were implementing schoolwide positive behavior interventions and supports, yet no intervention fidelity or outcome data were available. Demographic data derived from the school district’s records indicated that students enrolled in both schools had similar racial/ethnic backgrounds, with the majority identifying as Black (SA = 63%, SB = 73%) and the minority identifying as White (SA = 26%, SB = 14%) or as other and multiple ethnicities (SA = 11%, SB = 13%). District data also indicated that the majority of students at both schools received free or reduced lunch (SA = 76%, SB = 70%).
Although all students at both schools were considered eligible to participate in this study, consent and usable self-report surveys (i.e., characterized by few missing responses and plausible response patterns) were received from fewer than two thirds of the sampling pool, resulting in a final sample of 1,002 participants. For data analysis purposes, this initial sample was split into two subsamples using the random-sampling function in SPSS version 20, resulting in Subsample 1 (SS1) and Subsample 2 (SS2), each consisting of approximately 50% of the total participants (SS1 n = 500, SS2 n = 502). About half of the students in both subsamples were female (SS1 = 49.6%, SS2 = 48.6%), and each grade level accounted for roughly one third of each subsample: sixth graders (SS1 = 32.6%, SS2 = 28.9%), seventh graders (SS1 = 35.4%, SS2 = 37%), and eighth graders (SS1 = 32%, SS2 = 34.1%).

SSWQ Development

The process used to develop the Student Subjective Wellbeing Questionnaire was based on Clark and Watson’s (1995) basic principles for quality scale development, which are derived from Loevinger’s (1957) seminal monograph on the subject, informed by a systematic review of measurement studies published in Psychological Assessment, and reflective of the test development guidelines offered in the Standards for Educational and Psychological Testing (Joint Committee on Standards for Educational & Psychological Testing, 1999) and other contemporary references (e.g., DeVellis, 2003). Clark and Watson posited that the primary concern in measure development is construct validity, which encompasses the many subtypes of validity as well as traditional notions of reliability, and that such validity is constructed by establishing substantive, structural, and external validity evidence. Establishing substantive validity evidence typically consists of two subprocesses: determining the nature and scope of the construct of interest, and creating an item structure and pool. Next, structural validity is established through testing the measure on a target sample and evaluating the item distributions, latent structure, internal consistency, and construct boundaries using descriptive, factor analytic, reliability, measurement invariance, and concurrent or predictive correlational analyses.
Lastly, if structural validity evidence is obtained, then external validity evidence is established via testing the generalizability of the measure’s structure with diverse samples, its relations with other convergent and discriminant measures, and its utility in applied contexts.
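The random split-half described above was performed with SPSS version 20’s random-sampling function. As a rough equivalent for readers working in Python, the same step can be sketched with pandas; the data below are simulated and the column names are hypothetical stand-ins, not the study’s actual variables.

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the survey data: one row per participant,
# one column per SSWQ pilot item (responses on the 1-4 scale).
rng = np.random.default_rng(2014)
data = pd.DataFrame(
    rng.integers(1, 5, size=(1002, 16)),
    columns=[f"item_{i:02d}" for i in range(1, 17)],
)

# Random split-half, mirroring the SPSS step: SS1 receives 500 cases
# and SS2 the remaining 502, matching the study's subsample sizes.
ss1 = data.sample(n=500, random_state=2014)
ss2 = data.drop(ss1.index)

print(len(ss1), len(ss2))  # 500 502
```

SS1 would then feed the exploratory (first-phase) analyses and SS2 the confirmatory (second-phase) analyses.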




Substantive validity. As the initial step in the measure development process, we outlined the nature and scope of the type of measure we intended to develop: a brief (i.e., 20 items or fewer), multidimensional measure of adolescents’ positive psychological functioning at school. These length and content determinations were primarily grounded in feasibility and incremental validity considerations, as the scale was envisioned to be a stand-alone representation of students’ subjective wellbeing that might be used for schoolwide screening, progress monitoring, and decision-making purposes within MTSS. Next, we conceptualized the nature of the metaconstruct to be assessed by the measure (i.e., student subjective wellbeing), which we ultimately defined as youths’ self-perceptions of healthy and successful living at school. We then reviewed the relevant literature regarding this metaconstruct and generated a list of previously researched subconstructs that fell within the scope of this metaconstruct or, alternatively, seemed closely related to it. As noted above, student subjective wellbeing indicators have been researched using both single-scale and multidimensional measures, and thus this review intentionally focused on item content—above and beyond the actual constructs noted to underlie the measures—for the purposes of identifying unique and shared variance among relevant subconstructs.
Findings from this literature review yielded 13 conceptually distinct school-specific subjective wellbeing subconstructs: connectedness (e.g., Furlong, O’Brennan, & You, 2011), satisfaction (e.g., Huebner & McCullough, 2000), gratitude (Furlong et al., 2013), optimism (Furlong et al., 2013), zest (Furlong et al., 2013), meaningful participation (Jennings, 2003), prosocial behavior (Furlong et al., 2013), persistence (Furlong et al., 2013), peer relationships (e.g., Hazel, Vazirabadi, Albanes, & Gallagher, 2014), teacher–student relationships (e.g., Hazel et al., 2014), self-efficacy (e.g., Høigaard, Kovac, Øverby, & Haugen, 2014), goal orientation (Hazel et al., 2014), and educational purpose (Hazel et al., 2014). Three closely related constructs, which were not measured using school-specific items but could easily be generalized to school settings, were also identified: curiosity, love of learning, and creativity (e.g., Park & Peterson, 2006). Of these 16 potential subconstructs, the majority had been researched using testable scales, and only one subconstruct—educational purpose—was identified via single items from disparate scales. Given that our purpose was to develop a brief, multidimensional instrument that could be used for various purposes within MTSS in schools, we determined that an optimal measure structure would consist of three to five scales (representing first-order subconstructs) that were each composed of four to five items and that, when taken together, would indicate a single composite scale (representing a second-order or metaconstruct) of student subjective wellbeing. With this aim in mind, the list of 16 school-specific subconstructs and their associated measures was subjected to a qualitative theme analysis. Findings from this analysis suggested that four constructs tapped into aspects of school-specific relationships (i.e., connectedness, prosocial behavior, peer relationships, student–teacher relationships), five touched on educational performance and learning behaviors (i.e., self-efficacy, meaningful participation, persistence, love of learning, creativity), four tapped into affective experiences at or about school (i.e., gratitude, zest, optimism, and curiosity), and three represented holistic evaluations of or orientations toward one’s school experience (i.e., satisfaction, goal orientation, and educational purpose). Next, using face-validity and representativeness considerations, one subconstruct was selected to represent each of the four thematic domains: school connectedness, academic efficacy, joy of learning, and educational purpose. Importantly, in our view, each of these four selected subconstructs tapped into and somewhat accounted for the other subconstructs included within its thematic domain, which were not selected for inclusion in the pilot measure.
School connectedness was defined as feeling cared for by and relating well to others at school; academic efficacy was defined as appraising one’s academic behaviors as effectively meeting environmental demands; academic zest was renamed joy of learning (to enhance its face validity for educators and practitioners) and was defined as experiencing positive emotions and cognitions when engaged in academic tasks; and educational purpose was defined as appraising school and academic tasks as important and meaningful. After selecting and defining the constructs of interest, an item structure and pool were developed. We determined that all items would be phrased positively to directly represent the subconstructs of interest (necessitating no reverse coding) and that a 4-point, frequency-based response scale was most appropriate for our purposes (1 = almost never, 2 = rarely, 3 = sometimes, 4 = almost always). This response scale format was selected because we conceptualized the subconstructs as representing classes of wellbeing behaviors (i.e., things students do—such as feeling, appraising, and experiencing—that are located in time and space; cf. Törneke, 2010), and we reasoned that four response options were optimal for youths to make meaningful distinctions regarding behavioral frequency. Furthermore, we selected a frequency-based response scale because we intended to assess the prevalence of students’ wellbeing behaviors, rather than the intensity of, or students’ self-identification with, such experiences (which would be measured using other types of response scales); we are primarily concerned with gauging and promoting basic wellbeing behaviors—hoping that more students will be more well, more often—and not necessarily the degree to which such behaviors are experienced, which is likely to fluctuate regardless of prevalence rates. Pilot scales were then created by drafting a pool of eight straightforward, developmentally appropriate items for each of the four subconstructs of interest, resulting in 32 total test items.
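Because all items are positively phrased and share the same 4-point frequency scale, scoring reduces to simple sums. A minimal illustration of that scoring convention follows; the helper function and its name are hypothetical, introduced only for clarity.

```python
# Hypothetical representation of the SSWQ's 4-point frequency scale
# and of a subscale total under the four-items-per-scale structure.
RESPONSE_ANCHORS = {1: "almost never", 2: "rarely", 3: "sometimes", 4: "almost always"}

def subscale_total(responses: list[int]) -> int:
    """Sum of a subscale's item responses (range 4-16 for four items)."""
    assert all(r in RESPONSE_ANCHORS for r in responses), "responses must be 1-4"
    return sum(responses)

print(subscale_total([3, 4, 2, 4]))  # 13
```

With no reverse-coded items, higher totals uniformly indicate more frequent wellbeing behaviors.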
Items for the School Connectedness Scale (SCS) of the SSWQ were modeled after preexisting items in the Psychological Sense of School Membership Scale (Goodenow, 1993) and the Adolescent Health School Connectedness Scale (Resnick, Harris, & Blum, 1993); Academic Efficacy Scale (AES) items were modeled after items in Roeser, Midgley, and Urdan’s (1996) Academic Self-Efficacy Scale; Joy of Learning Scale (JLS) items were modeled after items in the zest subscale of the PEASS (Furlong et al., 2013); and Educational Purpose Scale (EPS) items were modeled after a couple of items in the cognitive engagement subscale of the Student Engagement Measure (Fredricks, Blumenfeld, Friedel, & Paris, 2005) and the aspirations subscale of the Student School Engagement Measure (Hazel, Vazirabadi, & Gallagher, 2013). Following this drafting process, the 32-item SSWQ was administered to a group of eight undergraduate research assistants working with youth in the schools, who critically reviewed item structure and wording for clarity and developmental appropriateness. Feedback from this content review resulted in several minor changes to item wording, while retaining all original pilot scales and test items. Next, a target sample of adolescents attending two local middle schools was identified for piloting the SSWQ, and a research partnership was initiated with the administrators of both schools. As a precondition for participation, the administrative teams at both schools determined that the most feasible setting for administering the SSWQ—reaching all students while limiting intrusions into academic instruction time—was during the homeroom or advisory period, which lasted only 20 minutes each morning. Thus, as a final step, and for pragmatic purposes, we pared down the 32-item SSWQ by selecting the four pilot items from each subscale that appeared to be the most face-valid and representative of the constructs of interest (see Table 1), resulting in an abbreviated 16-item measure that could feasibly be overviewed, distributed, completed, and collected, along with a few concurrent validity scales, within the duration of a single homeroom period.

Structural validity. After establishing the substantive validity of the SSWQ, the structural validity of the measure was tested in two phases using a target sample of middle school students (described in the Participants subsection above). Using responses from a random split-half of this target sample, initial structural validity evidence was established via the first phase of analyses by evaluating the distribution of the SSWQ pilot items, conducting exploratory factor analyses (EFA) to identify the pilot measure’s latent structure, carrying out internal consistency analyses on the resulting subscales, and then conducting correlational analyses with two hypothesized convergent validity scales, one assessing student prosociality and the other academic perseverance (described in the Concurrent Validity Measures subsection below).
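The internal consistency analyses in both phases rely on Cronbach’s alpha. As a minimal sketch of that computation (on simulated responses, since the study data are not reproduced here; the function is a generic implementation, not the study’s software):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated four-item subscale on the SSWQ's 1-4 frequency scale:
# each respondent's answers share a common latent level plus noise,
# so the items intercorrelate and alpha is well above zero.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
responses = np.clip(
    np.round(2.5 + latent[:, None] + rng.normal(scale=0.8, size=(500, 4))),
    1, 4,
)
print(round(cronbach_alpha(responses), 2))
```

Alpha approaches 1.0 as items become more strongly intercorrelated; the .70 benchmark cited in applied work is the conventional floor for “adequate” consistency.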
Given the promising findings from these initial analyses, a second phase of analyses was carried out to confirm the SSWQ’s structural coherence using responses from the other random split-half of the target sample. Specifically, item distributions were reevaluated, the latent structure was verified by conducting a confirmatory factor analysis (CFA), internal consistency was replicated at the latent level using construct reliability analyses, measurement invariance was examined across gender (given that previous research found that females have greater school-specific subjective wellbeing than males; Furlong et al., 2013), and the CFA model was extended to conduct a latent variable path analysis (LVPA) that predicted student prosociality and academic perseverance.

Table 1
Proposed Structure of the Student Subjective Wellbeing Questionnaire

School Connectedness Scale
SCS1. I feel like I belong at this school.
SCS2. I can really be myself at this school.
SCS3. I feel like people at this school care about me.
SCS4. I am treated with respect at this school.

Joy of Learning Scale
JLS1. I get excited about learning new things in class.
JLS2. I am really interested in the things I am doing at school.
JLS3. I enjoy working on class projects and assignments.
JLS4. I feel happy when I am working and learning at school.

Educational Purpose Scale
EPS1. I feel like the things I do at school are important.
EPS2. I think school matters and should be taken seriously.
EPS3. I feel it is important to do well in my classes.
EPS4. I believe the things I learn at school will help me in my life.

Academic Efficacy Scale
AES1. I am a successful student.
AES2. I do good work at school.
AES3. I do well on my class assignments.
AES4. I get good grades in my classes.

External validity. Following this series of structural validity analyses, a series of initial external validity analyses was conducted (again using the second random split-half of the target sample) to simulate the utility of the SSWQ as a classification instrument for predicting the presence of various risk and protective factors. As the first step in carrying out these analog analyses, participants’ overall student subjective wellbeing levels (SSWL) were computed by transforming the SSWQ composite scores (i.e., the sum of all subscale total scores) into z-scores. Next, participants were classified into SSWL groups according to these z-scores, using a four-group categorization schema common to standardized testing, assessment, and screening: below average (composite z-score < -1 SD), low average (-1 SD < composite z-score < 0 SD), high average (0 SD < composite z-score < 1 SD), and above average (composite z-score > 1 SD). SSWL was then used as the sole predictor variable within a series of binary logistic regression (BLR) analyses that modeled students’ endorsements of various risk factors (e.g., aggression and substance use) and protective factors (e.g., social support and physical exercise). Taken together, findings from these external validity analyses, as well as those from the structural validity analyses and substantive validity procedures described above, were considered to provide initial construct validity evidence in favor of the SSWQ as a measure of student subjective wellbeing.

Concurrent Validity Measures

Student Prosociality Scale (SPS). The SPS is a four-item scale assessing adolescents’ perceptions of their prosocial behavior toward others within the school environment (Renshaw, 2014). All SPS items are positively phrased (e.g., “I help other kids who seem to be having a hard time” and “I do nice things for people at this school”) and are arranged along a 4-point response scale (1 = almost never, 2 = sometimes, 3 = often, 4 = almost always). The SPS has been shown to be relatively normally distributed, to have a unidimensional factor structure, and to be invariant across male and female students. In the present study, the internal consistency of the SPS was observed to be


adequate in both subsamples (Cronbach's alpha SS1 = .72, SS2 = .71).

Academic Perseverance Scale (APS). The APS is a four-item scale for assessing students' perceptions of their perseverance during academic tasks at school (Renshaw, 2014). All APS items are positively phrased (e.g., "I try my best to learn things I don't understand" and "I finish my assignments even when they are really hard for me") and are arranged along a 4-point response scale (1 = almost never, 2 = sometimes, 3 = often, 4 = almost always). The APS has been shown to be relatively normally distributed, to have a unidimensional factor structure, and to be invariant across male and female students. In the present study, the internal consistency of the APS was observed to be adequate in both subsamples (Cronbach's alpha SS1 and SS2 = .72).

Youth Risks and Assets Survey (YRAS). The YRAS is a 10-item survey for assessing youths' risks and assets that are predictive of educational outcomes (Renshaw, 2014). Unlike the SPS and APS, which use a scale of items to assess a single latent construct, each YRAS item is intended to function as a stand-alone indicator of a particular domain of youths' functioning. The risks assessed by the YRAS include (a) aggression reception, (b) aggression perpetration, (c) substance use, (d) self-harm, and (e) languishing affect, while the assets include (f) social support reception, (g) social support provision, (h) physical exercise, (i) enjoyable activity engagement, and (j) thriving affect.
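The Cronbach's alpha coefficients reported for the SPS and APS (and later for the SSWQ scales) can be reproduced from a raw item-response matrix. A minimal sketch, using illustrative 4-point responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative responses to a hypothetical four-item, 4-point scale
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 2, 3],
])
alpha = cronbach_alpha(responses)
```

In this study, values of approximately .70 or higher were treated as at least adequate internal consistency.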
All YRAS items are arranged along a 4-point response scale (1 = 0 times/days, 2 = 1 time/day, 3 = 2–4 times/days, 4 = 5 or more times/days) gauging the relative frequency of experiencing risks within the past month (e.g., "In the past month, how many times have you been bullied, picked on, or harassed by another student at school?") or assets during the last week (e.g., "During the last week, how many days did someone at school do something nice for you or help you when you really needed it?"). Given that the YRAS items have been shown to be significantly non-normally distributed and to lack substantive response variability, risk items are recoded to represent the presence of the behavior of interest by transforming original responses into dichotomous variables representing either no endorsement (i.e., 0 times) or some endorsement (i.e., at least 1 time)


within the past month, while asset items are recoded by transforming original responses into dichotomous variables representing either rare endorsement (i.e., 0 to 1 days) or repeated endorsement (i.e., at least 2 to 4 days) during the last week.

Data Collection and Processing

The SSWQ, SPS, APS, and YRAS were combined into a single paper-and-pencil survey, which was administered by homeroom teachers during regular school hours over the course of approximately 2 weeks and then retrieved from the schools by a research assistant. Prior to beginning the study, all measures, data collection procedures, and consent methods were approved by both the primary author's Institutional Review Board and the school district's Accountability and Assessment Department. Following data collection, two research assistants screened the data for plausible response patterns and processed all usable surveys into a secure electronic database, while the lead author verified data entry and accuracy.

Data Analyses

Following data collection and processing, data analyses were conducted in step with the SSWQ development process described above. The structural validity of the 16-item pilot version of the SSWQ was explored via two phases of analyses. In Phase 1, EFA, internal consistency, and correlational analyses were conducted with SS1 using SPSS version 20. In Phase 2, the findings from Phase 1 were confirmed and extended using CFA, latent construct reliability, measurement invariance, and LVPA analyses with SS2 using Amos version 20. Given the promising findings from these two phases of structural validity analyses, additional analyses were conducted to explore the external validity of the SSWQ by estimating a series of BLR models with SS2 using SPSS version 20. Response frequency analyses of the SSWQ items indicated missing data ranges of 0.2%–3.6% and 0.6%–6.4% for SS1 and SS2, respectively.
Similar analyses for the SPS, APS, and YRAS items yielded missing data ranges of 0.6%–3% and 0.8%–5.6%; 0.6%–3% and 0.2%–5.4%; and 4%–6.2% and 5%–7% for SS1 and SS2, respectively. Given that the structure


of the study did not involve opportunities for participant attrition, and that data processing and entry were systematically verified, all missing data were considered to be the result of participant nonresponses to survey questions. Furthermore, given the large subsample sizes in the present study, the percentage of missing data per variable was deemed workable, as preliminary analyses indicated that all intended primary analyses would have adequate statistical power to detect the effects of interest (cf. Schlomer, Bauman, & Card, 2010). The pattern of missing data was explored by computing a dummy variable representing missingness (0 = missing value, 1 = present value) for each of the SSWQ, SPS, APS, and YRAS items, and then analyzing the associations between these dummy variables and all other variables of interest in the study. Findings from these analyses indicated negligible effect sizes (r < |.1|), suggesting that missing data were most likely missing completely at random. Thus, to handle missing data, the listwise deletion method was used for all analyses conducted with SPSS, while the full information maximum likelihood method was used for all analyses conducted with Amos (cf. Schlomer et al., 2010).

Results

Structural Validity

Phase 1, SS1. Findings from the item distribution evaluation indicated that the omnibus test of multivariate normality was significant (χ² = 821.01, df = 32, p < .001), indicating overall non-normality, yet examination of the distributions of the individual SSWQ items suggested only mild departures from normality (g1, g2 < |1|). Considering these findings in conjunction with the recommendations of Costello and Osborne (2005), the maximum likelihood extraction method with a promax rotation was chosen as the preferred analytic approach.
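Two of the factor-retention checks used in the EFA reported below (the eigenvalue > 1 criterion and Horn's parallel analysis) can be sketched on simulated data. All dimensions, loadings, and seeds here are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated responses: 16 items driven by 4 independent latent factors,
# loosely mimicking the SSWQ's hypothesized 4 x 4 structure
n, k, m = 500, 16, 4
loadings = np.zeros((k, m))
for j in range(m):
    loadings[j * 4:(j + 1) * 4, j] = 0.7
factors = rng.normal(size=(n, m))
data = factors @ loadings.T + rng.normal(scale=0.6, size=(n, k))

# Kaiser criterion: retain factors whose correlation-matrix eigenvalues exceed 1
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
n_kaiser = int((eigvals > 1).sum())

# Horn's parallel analysis: compare against mean eigenvalues of random data
rand_eigs = np.mean([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, k)), rowvar=False))[::-1]
    for _ in range(100)
], axis=0)
n_parallel = int((eigvals > rand_eigs).sum())
```

As in the study, the two rules need not agree (the authors retained four factors after the three-factor parallel-analysis solution proved conceptually untenable).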
Results from the initial EFA yielded four factors with eigenvalues > 1, which accounted for approximately 58% of the total variance and were characterized by an excellent data–model fit (χ² = 79.09, df = 62, p = .07). Inspection of the scree plot also supported a four-factor solution, while results from the parallel analysis indicated a three-factor solution. Although the three-factor solution was further explored, it was ultimately

deemed untenable, as it had poor conceptual merit. Therefore, the four-factor solution was selected as the preferred model, with items loading onto latent factors that directly aligned with the constructs underlying the hypothesized SSWQ subscales: school connectedness, joy of learning, educational purpose, and academic efficacy. Findings from the EFA pattern matrix indicated that item loadings were robust across all factors (λ > .50; see Table 2). Furthermore, all of the resulting SSWQ subscales and the composite scale demonstrated at least adequate internal consistency (Cronbach's alpha > .70), and all subscales and the composite scale were characterized by approximately normal distributions (g1, g2 < |1|; see Table 3). Moreover, bivariate correlations conducted among all resulting SSWQ subscales, the composite scale, and the student prosociality and academic perseverance scales indicated moderate positive associations among all subscales (.30 < Pearson r < .60) and strong positive associations between each subscale and the composite scale (r ≥ .65; see Table 4).

Phase 2, SS2. Results from the item distribution evaluation indicated that the omnibus

Table 2
EFA Pattern Matrix for the Student Subjective Wellbeing Questionnaire

        Factor loadings (λ)
Item    ξ1 Academic   ξ2 School         ξ3 Joy of   ξ4 Educational
        efficacy      connectedness     learning    purpose
SCS1    −.14          .60               −.03        .17
SCS2    −.07          .52               .00         .06
SCS3    .07           .79               −.02        −.16
SCS4    .12           .66               −.04        −.05
JLS1    .00           −.13              .74         −.03
JLS2    .00           .12               .54         .13
JLS3    .01           −.03              .58         .01
JLS4    .03           .17               .52         .07
EPS1    −.07          .11               .08         .51
EPS2    .10           −.06              .04         .57
EPS3    .10           .00               −.09        .68
EPS4    −.03          −.06              .10         .59
AES1    .60           .00               −.11        .12
AES2    .72           −.05              .07         −.02
AES3    .61           .05               −.06        .17
AES4    .75           −.01              .10         −.12

Note. All robust factor coefficients (λ ≥ .30) are formatted in bold font.
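The bolding convention in Table 2 flags pattern coefficients at or above .30 in absolute value. A trivial sketch of that screening rule, using the SCS1 row from the table:

```python
# Pattern coefficients for item SCS1 (values from Table 2)
pattern_row = {
    "academic_efficacy": -0.14,
    "school_connectedness": 0.60,
    "joy_of_learning": -0.03,
    "educational_purpose": 0.17,
}

# Retain only salient loadings (|lambda| >= .30)
salient = {factor: value for factor, value in pattern_row.items()
           if abs(value) >= 0.30}
```

As expected for a clean simple structure, each item is salient on exactly one factor.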


Table 3
Descriptive Statistics of the Student Subjective Wellbeing Questionnaire

Scale                                          Code    Items   SS   Min., Max.   M       SD     g1      g2      α
School Connectedness Scale                     SCS     4       1    4, 16        11.33   2.92   −.33    −.60    .72
                                                               2    4, 16        11.11   3.04   −.35    −.68    .73
Joy of Learning Scale                          JLS     4       1    4, 16        11.61   2.72   −.35    −.38    .74
                                                               2    4, 16        11.67   2.74   −.34    −.45    .76
Educational Purpose Scale                      EPS     4       1    5, 16        13.49   2.36   −.94    .35     .72
                                                               2    4, 16        13.66   2.37   −1.15   1.23    .73
Academic Efficacy Scale                        AES     4       1    5, 16        12.98   2.30   −.54    −.12    .78
                                                               2    5, 16        13.06   2.31   −.65    −.06    .78
Student Subjective Wellbeing Composite Scale   SSWCS   16      1    24, 64       49.48   7.85   −.46    .04     .86
                                                               2    16, 64       49.54   8.26   −.52    −.26    .88

Note. SS = Subsample; Min., Max. = Minimum and maximum observed scale scores; g1 = Skewness; g2 = Kurtosis.

test of multivariate normality was significant (χ² = 1,129.25, df = 32, p < .001), indicating overall non-normality, yet examination of the distributions of the individual SSWQ items suggested only mild to moderate departures from normality (g1, g2 < |2|). Considering these findings with Harrington's (2008) recommendations for conducting CFA using SPSS Amos with missing data, the maximum likelihood estimation method was chosen over other potential estimation methods. To evaluate model validity, data–model fit statistics were considered in conjunction with factor loadings and other parameter estimates (Mueller & Hancock, 2008). To determine the goodness of data–model fit, a combination of absolute, parsimonious, and incremental fit indices was used.

Table 4
Intercorrelations Among the Student Subjective Wellbeing Questionnaire Scales and the Concurrent Validity Scales

                     Correlation (r)
Scale        1      2      3      4      5      6      7
1. SCS       1
2. JLS       .43    1
3. EPS       .38    .56    1
4. AES       .37    .44    .47    1
5. SSWCS     .75    .81    .77    .73    1
6. SPS       .45    .53    .52    .49    .65    1
7. APS       .38    .59    .58    .65    .72    .61    1

Note. All correlations significant at the p ≤ .01 level. SCS = School Connectedness Scale; JLS = Joy of Learning Scale; EPS = Educational Purpose Scale; AES = Academic Efficacy Scale; SSWCS = Student Subjective Wellbeing Composite Scale; SPS = Student Prosociality Scale; APS = Academic Perseverance Scale.
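The g1 and g2 columns in Table 3, and the normality screens applied to items and scales throughout, are conventional moment-based skewness and excess kurtosis. A minimal sketch with illustrative scores (not the study's data):

```python
import numpy as np

def skewness(scores) -> float:
    """Moment-based skewness (g1); 0 for a symmetric distribution."""
    d = np.asarray(scores, dtype=float)
    d = d - d.mean()
    return float((d**3).mean() / d.std()**3)

def excess_kurtosis(scores) -> float:
    """Moment-based excess kurtosis (g2); 0 for a normal distribution."""
    d = np.asarray(scores, dtype=float)
    d = d - d.mean()
    return float((d**4).mean() / d.std()**4 - 3.0)

# Symmetric illustrative scores, so skewness is exactly 0
scores = [4, 8, 10, 12, 16]
g1 = skewness(scores)
g2 = excess_kurtosis(scores)
```

In the study, absolute values below 1 (Phase 1 items) or 2 (Phase 2 items) were read as mild to moderate departures from normality.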

Tucker–Lewis index (TLI) and comparative fit index (CFI) values between .90 and .95 and root mean square error of approximation (RMSEA) values (with an accompanying 90% confidence interval) between .05 and .08 were taken to indicate adequate data–model fit, while TLI and CFI values > .95 and RMSEA values < .05 were considered indicative of good data–model fit (Kenny, 2014). Regarding factor loadings, λ ≥ .50 were taken to be strong loadings, as they accounted for ≥ 25% of the variance extracted from each item by the latent factor. And for latent construct reliability, H ≥ .70 was considered desirable, as it estimates a strong intrafactor correlation over repeated administrations (Mueller & Hancock, 2008). For CFA Model 1, which tested a fully correlated four-factor latent structure for the SSWQ based on the EFA findings, validity was also evaluated via the strength and directionality of the interfactor correlations, with significantly positive, moderate-to-strong correlations (φ range = .30–.80) expected among the latent constructs. Using these model validity standards, results from Model 1 yielded an adequate data–model fit (χ² = 231.52, df = 98, p < .001, CFI = .943, TLI = .921, RMSEA [90% CI] = .052 [.043, .061]), which was characterized by robust factor loadings for each latent construct (λ range = .59–.75, p < .001), moderate-to-strong positive interfactor correlations (φ range = .49–.73, p < .001), and adequate maximal reliability for all factors (H range = .73–.78). Findings from Model 2, which extended Model 1 by structuring the four first-order latent factors as effect indicators of a single second-order latent factor (i.e., student subjective wellbeing), also yielded an adequate data–model fit (χ² = 236.75, df = 100, p < .001, CFI = .941, TLI = .920, RMSEA [90% CI] = .052 [.044, .061]), which was characterized by robust factor loadings for each latent construct (λ range = .59–.75, p < .001), adequate maximal reliability for all first-order factors (H range = .73–.78), and strong maximal reliability for the second-order factor (H = .92). Given that Model 2 was more conceptually and psychometrically parsimonious, it was selected as the preferred measurement structure for the SSWQ (see Figure 1). Further analyses of this preferred measurement model indicated that all of the SSWQ subscales and the composite scale demonstrated at least adequate internal consistency (α > .70) and were characterized by relatively normal distributions (g1, g2 < |1.5|; see Table 3). Moreover, results from measurement invariance testing of this model—which was conducted via a series of multigroup CFAs testing configural invariance (i.e., equality of the latent factor structure across groups), metric invariance (i.e., equality of the factor loadings across groups), and scalar invariance (i.e., equality of cross-group differences in latent and observed means)—indicated adequate data–model fit for all levels of invariance across male and female respondents (see Table 5), and findings from the follow-up latent mean analyses (using males as the referent group) estimated approximately equivalent scores across gender for all first-order factors as well as for the second-order factor (see Table 6).
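The RMSEA values reported for these models follow the standard point-estimate formula. A sketch using Model 2's statistics; the subsample size is an assumption (roughly half of the 1,002-student sample), not a figure reported in this passage:

```python
import math

def rmsea(chi_square: float, df: int, n: int) -> float:
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Model 2: chi-square = 236.75, df = 100; n = 501 assumed for SS2
value = rmsea(236.75, 100, 501)
```

With the assumed n, this reproduces the reported point estimate of .052.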
Finally, results from the LVPA, which extended this second-order model to predict student prosociality and academic perseverance, also yielded an adequate data–model fit (χ² = 588.64, df = 246, p < .001, CFI = .912, TLI = .893, RMSEA [90% CI] = .053 [.047, .058]), which was characterized by robust, positive standardized path coefficients (β > .80; see Figure 2) and R² values indicating that student subjective wellbeing accounted for approximately 69% and 98% of the variance in the concurrent validity factors, respectively.

External Validity

Findings from the series of BLR models, again using SS2, were interpreted in light of the magnitude and trends of the resulting odds ratios (OR). With the below average SSWL group set

as the referent, results for the risk items indicated that large effect sizes (OR ≤ .24) were observed for the above average SSWL group across the majority of indicators (excepting aggression reception, which showed a medium effect size) and for the high average group on the self-harm indicator. The high average SSWL group demonstrated medium effect sizes (.70 > OR > .24) across the remainder of the risk factors, as did the low average group across the majority of indicators (excepting aggression perpetration and languishing affect, which indicated negligible effect sizes; see Table 7). Findings for the asset items indicated similar, although attenuated, trends in the opposite direction. Specifically, using the below average SSWL group as the referent, large effect sizes (OR ≥ 4.25) were observed for the above average group on the social support provision and thriving affect indicators. Medium effect sizes (2.48 < OR < 4.25) were demonstrated for the above average SSWL group on the social support reception and physical exercise protective factors, for the high average group across the majority of indicators (excepting enjoyable activities, which indicated a small effect size), and for the low average group across the majority of protective factors (again excepting enjoyable activities, which indicated a negligible effect size; see Table 7).

Discussion

Interpretation of Results

The overarching purpose of the present study was to develop and establish the technical adequacy of a brief self-report instrument for assessing adolescents' school-specific positive psychological functioning—the SSWQ.
The SSWQ was developed to function as a standalone measure of students' subjective wellbeing as well as a complementary measure to informant-rated wellbeing and problem measures (e.g., teacher reports of positive or negative social behavior), performance-based wellbeing or problem measures (e.g., test scores or office discipline referrals), and subjective problem measures (e.g., self-reports of internalizing and externalizing symptoms), all of which are commonly used within MTSS in schools. To this end, the first subpurpose of this study was to conceptualize the metaconstruct of student sub-

[Figure 1 appears here.]

Figure 1. Preferred second-order CFA measurement model for the SSWQ. First-order latent construct reliabilities: School Connectedness H = .74, Joy of Learning H = .77, Educational Purpose H = .73, Academic Efficacy H = .78; second-order Student Subjective Wellbeing H = .92. † = Unstandardized loading fixed to 1.0; thus, no significance level was computed. * = Standardized factor loading (λ) significant at the p < .001 level. H = Latent construct reliability coefficient.
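The H coefficients annotated in Figure 1 are maximal (construct) reliability estimates computed from standardized loadings (Mueller & Hancock, 2008). A sketch using the rounded School Connectedness loadings shown in the figure; because the figure reports two-decimal loadings, the result matches the reported H = .74 only to rounding error:

```python
def coefficient_h(loadings) -> float:
    """Maximal reliability (coefficient H) from standardized factor loadings."""
    s = sum(l**2 / (1 - l**2) for l in loadings)
    return 1 / (1 + 1 / s)

# School Connectedness loadings as printed in Figure 1 (rounded)
h_sc = coefficient_h([0.66, 0.61, 0.64, 0.64])  # ~.73 vs. the reported .74
```

Unlike alpha, H weights stronger indicators more heavily, so it never decreases when an indicator is added.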

jective wellbeing and its subconstructs, and then to operationalize these subconstructs via drafting test scales and items. This initial step in measure development, which intended to establish the substantive validity of the SSWQ, resulted in four hypothesized subscales—school connectedness, academic efficacy, joy of learning, and educational purpose—that were each


Table 5
Multigroup Measurement Invariance Testing of the Student Subjective Wellbeing Questionnaire

Invariance level                       χ²       df    TLI    CFI    RMSEA [90% CI]
Configural                             345.78   200   .910   .934   .039 [.032, .046]
First-order metric                     366.99   212   .910   .930   .039 [.033, .046]
First-order metric and scalar          398.65   228   .908   .923   .040 [.033, .046]
Full metric and first-order scalar     405.97   231   .907   .921   .040 [.034, .046]

composed of eight pilot items. Following the identification of a target sample of adolescents and the initiation of a research partnership with administrators from two middle schools, feasibility considerations led to further refinement of the test items, resulting in a 16-item pilot measure that was optimized according to face-validity and representativeness considerations and then subjected to empirical testing. The product of this conceptualization and operationalization stage thus suggested that the SSWQ had promise as a substantively valid, multidimensional measure of student subjective wellbeing. The second subpurpose of this study was to investigate the initial psychometric properties of the pilot version of the SSWQ by testing the measure on the target sample of middle school students. To accomplish this, we conducted two phases of analyses intended to establish the structural validity of the measure. Findings from the Phase 1, SS1 analyses indicated that the 16 pilot items were all relatively normally distributed, that the latent structure underlying these items was characterized by a four-factor solution matching our four hypothesized subscales—school connectedness, academic efficacy, joy of learning, and educational purpose—that the resulting subscales and

Table 6
Latent Means Analysis by Gender of the Student Subjective Wellbeing Questionnaire

Latent factor                    Estimate   SE     CR      p
School connectedness             .008       .069   .114    .909
Joy of learning                  −.009      .071   −.124   .902
Educational purpose              .090       .057   1.586   .113
Academic efficacy                .021       .055   .372    .710
Student subjective wellbeing     .024       .054   .445    .656

Note. Males were used as the referent (means set at 0); thus, estimates represent females' mean difference from males. CR = Critical ratio.
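Table 6's critical ratios are simply estimate/SE, evaluated against the standard normal distribution. A sketch using the second-order factor's row; small discrepancies from the tabled values reflect rounding of the printed inputs:

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z statistic under the standard normal."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Student subjective wellbeing row of Table 6: estimate = .024, SE = .054
cr = 0.024 / 0.054   # ~.445 (critical ratio)
p = two_sided_p(cr)  # ~.656, matching the tabled p
```

The large p-values across rows are what supports the "approximately equivalent latent means across gender" conclusion drawn in the text.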

composite scale of the SSWQ were all relatively normally distributed and demonstrated at least adequate internal consistency, and that all scales had moderate-to-strong positive intercorrelations with each other as well as with two concurrent validity measures (i.e., student prosociality and academic perseverance). Furthermore, findings from the Phase 2, SS2 analyses confirmed the relatively normal distribution of the pilot items, the hypothesized latent structure of the SSWQ, the relatively normal distribution of the subscales and composite scale as well as the adequate construct reliability of both the first-order factors and the single second-order factor (i.e., student subjective wellbeing), and the strong positive association between the second-order factor and the two concurrent validity measures. Additionally, multigroup measurement invariance and latent means analyses from the second phase of analyses indicated that the SSWQ was invariant across gender and that similar latent means were observed for males and females across all first-order factors as well as the second-order factor, which is, notably, contrary to previous findings showing that elementary-age females have greater student subjective wellbeing than males (Furlong et al., 2013). Taken together, these findings provided evidence in favor of the SSWQ as a structurally valid measure of student subjective wellbeing, showing that its scales and the latent constructs they represent are psychometrically sound across gender and convergent with other student wellbeing measures. Thus, our hypothesis that the SSWQ would demonstrate a statistically robust multidimensional latent-trait structure was supported, suggesting that the instrument is technically adequate for potential use as a schoolwide screener, progress monitoring tool, or other general outcome measure—and suggesting that future research is warranted to explore the classification accuracy


[Figure 2 appears here.]

Figure 2. LVPA of Student Subjective Wellbeing as a predictor of School Prosociality and Academic Perseverance. First-order loadings on Student Subjective Wellbeing: School Connectedness .71 (fixed), Joy of Learning .89, Educational Purpose .88, Academic Efficacy .79; structural paths: Student Subjective Wellbeing → School Prosociality β = .83, Student Subjective Wellbeing → Academic Perseverance β = .99. † = Unstandardized loading fixed to 1.0; thus, no significance level was computed. * = Factor loading (λ) or standardized path coefficient (β) significant at the p < .001 level.
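With a single latent predictor, each outcome's R² is the squared standardized path coefficient, which is how the variance-explained figures in the LVPA paragraph follow from Figure 2's paths:

```python
# Squared standardized paths from Figure 2 yield the reported R-squared values
r2_prosociality = 0.83 ** 2   # school prosociality: ~.69 (69% of variance)
r2_perseverance = 0.99 ** 2   # academic perseverance: ~.98 (98% of variance)
```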

and change-sensitivity of the SSWQ when used for such purposes. The final subpurpose of this study was to simulate the utility of the SSWQ as a classification instrument in schools by investigating the usefulness of student subjective wellbeing levels, derived from SSWQ composite scores, for predicting the presence of various risks and assets. Findings from these analyses indicated that student subjective wellbeing levels were

strongly positively predictive of participants' asset endorsements and strongly negatively predictive of their risk endorsements. The general effect size trend, which was stronger for some outcomes and more attenuated for others, indicated that, compared with participants in the below average SSWL group, those in increasingly higher groups (i.e., low average, high average, and above average) had increasingly greater odds of endorsing assets and progres-

Table 7
Odds Ratios for Risks and Assets Predicted by Student Subjective Wellbeing Level

                             OR [95% CI] by student subjective wellbeing level
Risk/Asset                   Below average     Low average           High average          Above average
Aggression reception         1.00 (referent)   .45 [.35, .81]b       .30 [.17, .54]a       .25 [.12, .52]a
Aggression perpetration      1.00 (referent)   .92 [.47, 1.82]       .58 [.29, 1.15]       .23 [.08, .66]b
Substance use                1.00 (referent)   .35 [.14, .88]c       .26 [.10, .66]b       .07 [.01, .52]b
Self-harm                    1.00 (referent)   .31 [.16, .57]a       .19 [.10, .36]a       .09 [.03, .23]a
Languishing affect           1.00 (referent)   .71 [.40, 1.27]       .50 [.29, .88]c       .24 [.12, .47]a
Social support reception     1.00 (referent)   2.32 [1.20, 4.49]b    2.83 [1.50, 5.35]a    3.96 [1.94, 8.10]a
Social support provision     1.00 (referent)   2.81 [1.53, 5.15]a    3.73 [2.08, 6.71]a    5.21 [2.61, 10.42]a
Physical exercise            1.00 (referent)   2.95 [1.58, 5.51]a    2.28 [1.28, 4.04]b    2.69 [1.34, 5.45]b
Enjoyable activities         1.00 (referent)   1.26 [.69, 2.29]      1.57 [.88, 2.52]      1.75 [.86, 3.55]
Thriving affect              1.00 (referent)   2.03 [1.11, 3.72]c    3.78 [2.04, 7.01]a    7.75 [3.15, 19.07]a

Note. OR = odds ratio. a = Significant at the p ≤ .001 level. b = p ≤ .01 level. c = p ≤ .05 level.


sively diminishing odds of endorsing risks. For example, compared with the below average group, participants in the low average, high average, and above average SSWL groups were approximately three, five, and 11 times less likely, respectively, to endorse self-harm behavior in the past 30 days, and they were also about three, four, and five times more likely, respectively, to endorse social support behavior toward others at school within the past week. Such findings suggest that, in addition to being a theoretically and psychometrically sound instrument, the SSWQ has potential utility as a classification instrument for complementing and enhancing data-based decision-making processes within MTSS in schools. Moreover, these findings supported our general hypothesis regarding the relatedness of student subjective wellbeing to various risks and assets, suggesting that the SSWQ measures constructs that are practically significant and interrelated with other key areas of youth development.

Limitations and Future Research

Despite the promising findings noted above, results from the present study warrant further consideration in light of a few methodological limitations. First, as in most survey studies, participants were drawn from a convenience sample of middle school students, and therefore the findings cannot be deemed representative of all secondary students in the United States. Thus, the generalizability of these findings is limited in scope to demographically similar youth (i.e., enrolled in Grades 6–8, majority Black, receiving free or reduced-price lunch, living in midsized urban cities). To remedy these sampling limitations, we suggest that future research developing the SSWQ use random and weighted sampling techniques with more diverse samples of adolescents, and that further investigations of measurement invariance be conducted across gender, grade level, ethnic background, and geographic location.
Furthermore, given that all of the measures in the present study were self-reported, these findings may be biased by common-method variance (i.e., variance attributable to the measurement method rather than to the constructs represented by the measures; Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). The potential for this bias is likely strongest for the

LVPA findings, as the SSWQ, SPS, and APS items were all arranged along the same 4-point response scale. To guard against this potential confound in future studies, we recommend expanding the repertoire of validation measures to include predictive self-report measures (assessed at later time points), informant-report measures (completed by teachers, parents, or peers), and performance-based measures (e.g., recent report-card grades, total grade point average, standardized-test scores, direct observations of social behaviors, and discipline infractions). Finally, the concurrent validity findings might also be limited by the measure used for screening students' risks and assets, as single-item assessments are prone to construct underrepresentation; therefore, we recommend that future research employ more robust, multi-item measures to assess similar indicators.

Implications for Theory and Practice

Considering the findings reviewed above, the present study has a few noteworthy implications for the theory and practice of school psychology. Regarding theory, results from this study lend further empirical support to the viability of a school-grounded positive psychology for youth, wherein school-specific subjective wellbeing indicators (e.g., school connectedness), as opposed to domain-general wellbeing indicators (e.g., social connectedness), are taken as the subject of assessment and intervention (cf. Furlong et al., 2013). Findings from this study also expand the available repertoire of school-specific subjective wellbeing indicators—further fleshing out the joy of learning and educational purpose indicators, while providing additional scales for school connectedness and academic efficacy—and, most importantly, verify the latent structure underlying these constructs.
Furthermore, results from this study also offer empirical support in favor of students’ cumulative subjective wellbeing, or covitality (e.g., Renshaw et al., 2014), as a second-order construct that has concurrent validity as a predictor variable at both the latent level (demonstrated by the LVPA findings) and the applied classification level (demonstrated by the BLR findings). Finally, taken as a whole, this study offers a larger theoretical contribution to the conceptualization and operationalization of student outcome research in school psychology—


suggesting that assessing youths’ schoolspecific subjective wellbeing is a promising endeavor that warrants further attention in both research and practice. Regarding the practice of school psychology, findings from the present study suggest that the SSWQ is promising instrument for assessing youths’ school-specific positive psychological functioning within wellness-oriented MTSS in schools (cf. Miller, Nickerson, & Jimerson, 2014). More specifically, the SSWQ seems particularly well-suited to function as the positive counterpart measure to traditional schoolwide assessments of student problem behaviors, risk, and psychopathology—rounding out a bidimensional approach to universal mental health screening and service delivery (cf. Renshaw & Cohen, 2014). This bidimensional approach to assessment is crucial for evaluating and validating the effectiveness of schoolwide prevention programming—such as positive behavior interventions and supports, social– emotional learning, and integrative efforts (Domitrovich et al., 2010)—that explicitly aim to both reduce student problems and promote student wellbeing, although they typically only assess the former. Given that the SSWQ is multidimensional, brief, face-valid, and simple to administer, we posit that educators and other school-based practitioners will deem it to be a feasible and useful outcome measure for evaluating such interventions in schools. That said, although the SSWQ appears to assess promising wellbeing indicators, it is noteworthy that, to date, there is little direct empirical support for tailored interventions that might be used to directly target and cultivate some of its subconstructs. 
Specifically, although there is a substantial body of empirical literature regarding strategies for enhancing school connectedness (Zullig & Matthews-Ewald, 2014) and academic self-efficacy (Schunk & DiBenedetto, 2014), there is currently a lack of empirically supported practices for targeting students' joy of learning and educational purpose. Thus, future research is warranted using the SSWQ as an intervention-planning and progress-monitoring tool, for both previously validated and emerging intervention strategies.

In closing, we wish to reiterate the conceptual and psychometric tentativeness of the measurement model underlying the SSWQ. Although the construct validity of student subjective wellbeing as measured via the SSWQ has been established in this study by garnering initial substantive, structural, and external validity evidence, we recognize that such data are provisional and warrant further investigation. From the beginning, our intention was never to create an exhaustive measure, one assessing all possible student subjective wellbeing indicators, but rather to develop a brief, multidimensional, parsimonious measure that assessed several core components of adolescents' positive psychological functioning at school. Indeed, our primary motivation underlying the development of the SSWQ was to produce a face-valid and feasible outcome measure that could be used to complement the informant-rated and performance-based outcome measures commonly used within MTSS in schools, for the purpose of determining the extent to which students are living well according to both objective and subjective standards. That said, we acknowledge that the SSWQ may currently omit some school-specific subjective wellbeing indicators that other scholars or practitioners deem important to the substantive validity of student subjective wellbeing (e.g., school satisfaction), and we expect that other promising indicators will emerge in the future that will warrant reconsideration of the substantive validity of the SSWQ. Moreover, to further establish the structural and external validity of the SSWQ, much more research is needed: testing the measure with diverse samples of youth as well as with other convergent and discriminant measures of wellbeing. Thus, we hope other scholars interested in this line of work will rigorously rethink, revise, and retest the SSWQ, further advancing our understanding of what it means for youth to live well at school.

References

Chafouleas, S. M., & Bray, M. A. (2004). Introducing positive psychology: Finding a place within school psychology. Psychology in the Schools, 41, 1–5. doi:10.1002/pits.10133

Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309–319. doi:10.1037/1040-3590.7.3.309

Clonan, S. M., Chafouleas, S. M., McDougal, J. L., & Riley-Tillman, T. C. (2004). Positive psychology goes to school: Are we there yet? Psychology in the Schools, 41, 101–110. doi:10.1002/pits.10142

Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10, 1–9.

DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks, CA: Sage.

Diener, E., Oishi, S., & Lucas, R. E. (2009). Subjective well-being: The science of happiness and life satisfaction. In S. J. Lopez & C. R. Snyder (Eds.), Oxford handbook of positive psychology (2nd ed., pp. 187–194). New York, NY: Oxford University Press.

Domitrovich, C. E., Bradshaw, C. P., Greenberg, M. T., Embry, D., Poduska, J. M., & Ialongo, N. S. (2010). Integrated models of school-based prevention: Logic and theory. Psychology in the Schools, 47, 71–88. doi:10.1002/pits

Ervin, R. A., Gimpel Peacock, G., & Merrell, K. W. (2010). The school psychologist as a problem solver in the 21st century. In G. Gimpel Peacock, R. A. Ervin, E. J. Daly, & K. W. Merrell (Eds.), Practical handbook of school psychology (pp. 3–12). New York, NY: Guilford Press.

Fredricks, J. A., Blumenfeld, P., Friedel, J., & Paris, A. (2005). School engagement. In K. A. Moore & L. H. Lippman (Eds.), What do children need to flourish? Conceptualizing and measuring indicators of positive development (pp. 305–321). New York, NY: Springer. doi:10.1007/0-387-23823-9_19

Fredrickson, B. L. (2013). Updated thinking on positivity ratios. American Psychologist, 68, 814–822. doi:10.1037/a0033584

Furlong, M. J., Gilman, R., & Huebner, E. S. (Eds.). (2014). Handbook of positive psychology in the schools (2nd ed.). New York, NY: Routledge.

Furlong, M. J., O'Brennan, L. M., & You, S. (2011). Psychometric properties of the ADD Health School Connectedness Scale for 18 sociocultural groups. Psychology in the Schools, 48, 986–997. doi:10.1002/pits.20609

Furlong, M. J., You, S., Renshaw, T. L., O'Malley, M. D., & Rebelez, J. (2013). Preliminary development of the Positive Experiences at School Scale for elementary school children. Child Indicators Research, 6, 753–775. doi:10.1007/s12187-013-9193-7

Furlong, M. J., You, S., Renshaw, T. L., Smith, D. C., & O'Malley, M. D. (2014). Preliminary development and validation of the Social and Emotional Health Survey for secondary students. Social Indicators Research, 117, 1011–1032. doi:10.1007/s11205-013-0373-0

Gable, S. L., & Haidt, J. (2005). What (and why) is positive psychology? Review of General Psychology, 9, 103–110. doi:10.1037/1089-2680.9.2.103

Goodenow, C. (1993). The psychological sense of school membership among adolescents: Scale development and educational correlates. Psychology in the Schools, 30, 79–90. doi:10.1002/1520-6807(199301)30:1<79::AID-PITS2310300113>3.0.CO;2-X

Harrington, D. (2008). Confirmatory factor analysis. New York, NY: Oxford University Press.

Hawkins, R. O., Barnett, D. W., Morrison, J. Q., & Musti-Rao, S. (2010). Choosing targets for assessment and intervention. In G. Gimpel Peacock, R. A. Ervin, E. J. Daly, & K. W. Merrell (Eds.), Practical handbook of school psychology (pp. 13–31). New York, NY: Guilford Press.

Hazel, C. E., Vazirabadi, G. E., Albanes, J., & Gallagher, J. (2014). Evidence of convergent and discriminant validity of the Student School Engagement Measure. Psychological Assessment. Advance online publication. doi:10.1037/a0036277

Hazel, C. E., Vazirabadi, G. E., & Gallagher, J. (2013). Measuring aspirations, belonging, and productivity in secondary students: Validation of the Student School Engagement Measure. Psychology in the Schools, 50, 689–704. doi:10.1002/pits.21703

Høigaard, R., Kovac, V. B., Øverby, N. C., & Haugen, T. (2014). Academic self-efficacy mediates the effects of school psychological climate on academic achievement. School Psychology Quarterly. Advance online publication. doi:10.1037/spq0000056

Huebner, E. S., & Gilman, R. (2003). Toward a focus on positive psychology in school psychology. School Psychology Quarterly, 18, 99–102. doi:10.1521/scpq.18.2.99.21862

Huebner, E. S., Hills, K. J., Siddall, J., & Gilman, R. (2014). Life satisfaction and schooling. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in the schools (2nd ed., pp. 192–207). New York, NY: Routledge.

Huebner, E. S., & McCullough, G. (2000). Correlates of school satisfaction among adolescents. The Journal of Educational Research, 93, 331–335. doi:10.1080/00220670009598725

Huebner, E. S., Suldo, S. M., Smith, L. C., & McKnight, C. G. (2004). Life satisfaction in children and youth: Empirical foundations and implications for school psychologists. Psychology in the Schools, 41, 81–93. doi:10.1002/pits.10140

Jennings, G. (2003). An exploration of meaningful participation and caring relationships as contexts for school engagement. California School Psychologist, 8, 43–52. doi:10.1007/BF03340895

Jenson, W. R., Olympia, D., Farley, M., & Clarke, E. (2004). Positive psychology and externalizing students in a sea of negativity. Psychology in the Schools, 41, 67–79. doi:10.1002/pits.10139

Joint Committee on Standards for Educational and Psychological Testing. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association, American Psychological Association, and National Council on Measurement in Education.

Kenny, D. A. (2014). Measuring model fit in structural equation modeling. Retrieved from www.davidakenny.net/cm/fit.htm

Loevinger, J. (1957). Objective tests as instruments of psychological theory. Psychological Reports, 3, 635–694. doi:10.2466/PR0.3.7.635-694

Martens, B. K., & Witt, J. C. (2004). Competence, persistence, and success: The positive psychology of behavioral skill instruction. Psychology in the Schools, 41, 19–30. doi:10.1002/pits.10135

Miller, D. N., Nickerson, A. B., & Jimerson, S. R. (2014). Positive psychological interventions in U.S. schools: A public health approach to internalizing and externalizing problems. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in the schools (2nd ed., pp. 478–494). New York, NY: Routledge.

Mueller, R. O., & Hancock, G. R. (2008). Best practices in structural equation modeling. In J. Osborne (Ed.), Best practices in quantitative methods (pp. 488–508). Thousand Oaks, CA: Sage. doi:10.4135/9781412995627.d38

O'Malley, M. D., Katz, K., Renshaw, T. L., & Furlong, M. J. (2013). Gauging the system: Trends in school climate measurement and intervention. In S. R. Jimerson, A. B. Nickerson, M. J. Mayer, & M. J. Furlong (Eds.), Handbook of school violence and school safety (2nd ed., pp. 317–329). New York, NY: Routledge.

Park, N., & Peterson, C. (2006). Moral competence and character strengths among adolescents: The development and validation of the Values in Action Inventory of Strengths for Youth. Journal of Adolescence, 29, 891–909. doi:10.1016/j.adolescence.2006.04.011

Peterson, C., & Park, N. (2003). Positive psychology as the evenhanded positive psychologist views it. Psychological Inquiry, 14, 143–147.

Peterson, C., & Seligman, M. E. P. (2004). Character strengths and virtues: A handbook and classification. Washington, DC: American Psychological Association.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903. doi:10.1037/0021-9010.88.5.879

Proctor, C. L., Linley, P. A., & Maltby, J. (2009). Youth life satisfaction: A review of the literature. Journal of Happiness Studies, 10, 583–630. doi:10.1007/s10902-008-9110-9

Renshaw, T. L. (2014). Preliminary psychometric properties of the Student Prosociality Scale, Academic Perseverance Scale, and Youth Risks and Assets Survey. Retrieved from Social Science Research Network (SSRN 2439539): http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2439439

Renshaw, T. L., & Bolognino, S. J. (2014). The College Student Subjective Wellbeing Questionnaire: A brief measure of undergraduates' covitality. Manuscript submitted for publication.

Renshaw, T. L., & Cohen, A. C. (2014). Life satisfaction as a distinguishing indicator of college student functioning: Further validation of the two-continua model of mental health. Social Indicators Research, 117, 319–334. doi:10.1007/s11205-013-0342-7

Renshaw, T. L., Furlong, M. J., Dowdy, E., Rebelez, J., Smith, D. C., O'Malley, M. D., . . . Frugård Strøm, I. (2014). Covitality: A synergistic conception of youths' mental health. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in the schools (2nd ed., pp. 12–32). New York, NY: Routledge.

Resnick, M. D., Harris, J. L., & Blum, R. W. (1993). The impact of caring and connectedness on adolescent health and well-being. Journal of Paediatrics and Child Health, 29, S3–S9. doi:10.1111/j.1440-1754.1993.tb02257.x

Roeser, R. W., Midgley, C., & Urdan, T. C. (1996). Perceptions of the school psychological environment and early adolescents' psychological and behavioral functioning in school: The mediating role of goals and belonging. Journal of Educational Psychology, 88, 408–422. doi:10.1037/0022-0663.88.3.408

Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57, 1069–1081. doi:10.1037/0022-3514.57.6.1069

Sahlberg, P. (2011). Finnish lessons: What can the world learn from educational change in Finland? New York, NY: Teachers College Press.

Schlomer, G. L., Bauman, S., & Card, N. A. (2010). Best practices for missing data management in counseling psychology. Journal of Counseling Psychology, 57, 1–10. doi:10.1037/a0018082

Schunk, D. H., & DiBenedetto, M. K. (2014). Academic self-efficacy. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in the schools (2nd ed., pp. 115–130). New York, NY: Routledge.

Seligman, M. E. P. (2011). Flourish: A visionary new understanding of happiness and well-being. New York, NY: Free Press.

Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5–14. doi:10.1037/0003-066X.55.1.5

Sheldon, K. M., & King, K. L. (2001). Why positive psychology is necessary. American Psychologist, 56, 216–217. doi:10.1037/0003-066X.56.3.216

Sheridan, S. M., Warnes, E. D., Cowan, R. J., Schemm, A. V., & Clarke, B. L. (2004). Family-centered positive psychology: Focusing on strengths to build student success. Psychology in the Schools, 41, 7–17. doi:10.1002/pits.10134

Terjesen, M. D., Jacofsky, M., Froh, J., & DiGiuseppe, R. (2004). Integrating positive psychology into schools: Implications for practice. Psychology in the Schools, 41, 163–172. doi:10.1002/pits.10148

Törneke, N. (2010). Learning RFT: An introduction to Relational Frame Theory and its clinical application. Oakland, CA: New Harbinger.

Tough, P. (2012). How children succeed: Grit, curiosity, and the hidden power of character. New York, NY: Houghton Mifflin Harcourt.

Voight, A., Austin, G., & Hanson, T. (2013). A climate for academic success: How school climate distinguishes schools that are beating the achievement odds. San Francisco, CA: WestEd. Retrieved from http://www.wested.org/

You, S., Furlong, M. J., Dowdy, E., Renshaw, T. L., Smith, D. C., & O'Malley, M. D. (2013). Further validation of the Social and Emotional Health Survey for high school students. Applied Research in Quality of Life. Advance online publication. doi:10.1007/s11482-013-9282-2

Zullig, K. J., & Matthews-Ewald, M. R. (2014). School climate: Definition, measurement, and application. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in the schools (2nd ed., pp. 313–328). New York, NY: Routledge.

Received April 8, 2014
Revision received July 25, 2014
Accepted July 28, 2014
