INTELLECTUAL AND DEVELOPMENTAL DISABILITIES

© AAIDD

2013, Vol. 51, No. 5, 333–348

DOI: 10.1352/1934-9556-51.5.333

Issues Concerning Self-Report Data and Population-Based Data Sets Involving People With Intellectual Disabilities

Eric Emerson, David Felce, and Roger J. Stancliffe

Abstract: This article examines two methodological issues regarding ways of obtaining and analyzing outcome data for people with intellectual disabilities: (a) self-report and proxy-report data and (b) analysis of population-based data sets. Some people with intellectual disabilities have difficulties with self-reporting due to problems of understanding and communication. However, there are serious doubts about the validity of proxy data for subjective issues. One important challenge with secondary analysis of population-based data sets is the difficulty of accurately identifying survey participants with intellectual disabilities. In both areas examined, it is important to recognize these constraints when interpreting research based on such data.

Key Words: intellectual disabilities; self-report; proxy; response bias; population-based data; secondary analysis

Methodological issues in research on outcomes experienced by people with intellectual disabilities encompass a wide variety of measurement, research-design, and data-analysis issues. For example, Kratochwill et al. (2010) have provided a very useful guide to single-case research design. Likewise, issues related to group-matching designs have recently been discussed by Kover and Atwood (2013). However, the discussion of methodological issues that took place at the 2012 State of the Science Conference focused strongly on issues related to two key topics: (a) self-report and proxy-report data and (b) analysis of population-based data sets. Because of this strong focus, we have chosen to structure this article around these two topics rather than attempt to examine the full range of methodological issues in research on intellectual disability. Unlike some of the other reviews in this special issue, we have not attempted to organize this article in terms of established findings, big debates, and emerging/unanswered questions across all content areas. Rather, we begin by examining the topic of self-report data and follow this by considering issues related to population-based data sets. Within these two major sections, we address questions of established findings and emerging issues as appropriate to the content in each section. Finally, it should be noted that we confine our focus to quantitative research. We acknowledge qualitative inquiry as a valuable research methodology, but it is beyond the scope of this article.

Obtaining and Interpreting Data on Outcomes From Informants: Self-Report and Proxy Report

A concern with personal outcomes or quality of life as a consequence of some intervention or environmental arrangement lies at the heart of much health and social science. An important perspective is that of the person who experiences the intervention under evaluation. Self-report refers to gaining information directly from such a person. It is perhaps necessary to state that self-report is not the only valid perspective, but we assume that gaining self-report data will be an important component in some research. Indeed, its desirability is strengthened in relation to people with disabilities by contemporary emphases on empowerment, self-determination, and personal choice. The extent to which self-report data can be obtained from people with intellectual disabilities is a matter for careful determination (Fujiura, 2012; Stancliffe, 2000).

Limitations among some people with intellectual disabilities in relation to understanding (e.g., determining the meaning of questions), cognitive processing (e.g., recalling information, ordering information, or making comparisons), and expression (e.g., articulating a response) make the gaining of self-reported information an unrealistic objective. Simply put, it would overlook the heterogeneity of the population to propose that gaining self-report from everyone is possible; as Finlay and Lyons (2001) more cautiously state in their review of interviewing and self-report, people with intellectual disability "may be too heterogeneous in terms of personal history and linguistic and cognitive abilities for any single questionnaire to be valid for the whole population" (p. 329).

Where it is thought that acquiring self-report data is not feasible, researchers have frequently used other informants who are asked to represent the person in question. This is referred to as proxy reporting. Proxies do not give their own perspectives but either respond as they think the person they represent would respond or provide an accurate account of the person's situation. However, the extent to which such responses reflect what the actual person would say is also a matter for careful determination. For example, accuracy may depend on the length and closeness of the relationship between the proxy and the person,
on the degree to which self-interest may conflict with an unbiased appraisal of the other person’s feelings, on the extent to which the proxy has a full or only partial knowledge of the person, or on whether the information being sought is observable by another party or is instead an inner state. Certainly, it cannot be assumed that proxy reporting is a valid substitute for self-reporting. This section explores four issues: (a) how the quality-of-life construct is defined and the implications of this for gaining relevant data, (b) the limits of self-reporting, (c) the limits of proxy reporting, and (d) possible ways forward when such limits are reached.

The Quality-of-Life Construct and Measurement Principles

There is reasonable agreement that personal outcomes relevant to quality of life or the rights of people with disabilities span a variety of domains (Cummins, 2005; Schalock et al., 2002; United Nations, 2006; Verdugo, Schalock, Keith, & Stancliffe, 2005; see Table 1).

Table 1. Contrasting a Quality-of-Life Framework and Articles From the United Nations Convention on the Rights of Persons with Disabilities. Each row pairs a quality-of-life domain (from Felce, 1997) with the corresponding UN convention articles (article numbers in parentheses).

Physical well-being (health, safety, fitness): Life (10); Health (25); Freedom from torture, degradation, exploitation, violence, abuse (15–16)
Material well-being (wealth, housing, tenure, privacy, neighborhood, transport): Access to physical environment, transport, info, communications, & services (9); Privacy (22); Adequate standard of living & social protection (28)
Social well-being (relationships, community involvement): Respect for home & the family (23); Being included in the community (19)
Productive well-being (personal development, independence, self-determination, occupation): Education (24); (Re)Habilitation (26); Living independently (19); Personal mobility (20); Work & employment (27); Participation in cultural life, recreation, leisure, & sport (30)
Emotional well-being: Protecting the integrity of the person (17)
Civic well-being/rights (protection under the law, participation in political and public life, state of the nation): Equality/nondiscrimination (5–7); Equal recognition before the law (12); Access to justice (13); Liberty & security of person, nationality (14, 18); Freedom of expression (21); Participation in political & public life (29)

Quality of life is generally regarded as having objective and subjective components, the importance of which will differ between individuals (Cummins, 2005; Felce, 1997; Schalock et al., 2002; Verdugo et al., 2005). Objective indicators refer to observable lifestyle or environmental life conditions, such as standard of living (income per month), housing quality (nature of property, heating, absence of damp), neighborhood quality (safety, noise, pollution, amenities), variety of leisure activities, size of friendship network, or nature of occupation. Subjective indicators focus on a person's feelings about life, such as psychological well-being, personal satisfaction, loneliness, or happiness. Subjective indicators are not directly observable, although they may be inferred from observable behavior or demeanor. Both objective and subjective components are valid indicators of quality of life, but their different meanings and properties, together with their normally weak intercorrelation (Cummins, 2000; Emerson et al., 2000), mean that they cannot easily be combined. By way of illustration, Perry and Felce (2005) have explored the association between objective and subjective appraisals within similar quality-of-life domains. All objective measures were significantly correlated with adaptive behavior, but only one subjective measure was. With level of adaptive behavior controlled, six of seven correlations between pairs of objective measures were significant. Of the 16 correlations between objective and subjective measures, 15 were nonsignificant. Moreover, little association was found between subjective indicators of well-being and environmental variables. In contrast, environmental variables predicted the level of the objective indicators of choice, occupation, and integration.

As the previous example illustrates, subjective and objective data appear to "behave" in different ways. Whereas variation on objective outcomes may be related to personal characteristics and environmental conditions, subjective outcomes tend to be independent of both. This observation is broadly consistent with the proposition that subjective well-being is under homeostatic internal regulation (Cummins, 2005). Through this hypothesized mechanism, an individual maintains a state of positive well-being given normal variation in life's circumstances. Each individual has a set point for his or her subjective well-being. It would not be expected to vary significantly unless changes in objective circumstances are extreme. Hence, it is important that subjective and objective assessment not be combined. In particular, subjective and objective items should not be mixed within the same instrument. Measures should be subject to scrutiny in this respect.
For example, the Quality of Life Questionnaire (Schalock & Keith, 1993) has four domains. One of them, Satisfaction, has subjective items (e.g., "How satisfied are you with your current home or living arrangement?"), while another, Competence/Productivity, has a mixture of subjective and objective items (e.g., "Do you feel you receive fair pay for your work?"; "How closely supervised are you at work?"). The remaining two, Empowerment/Independence and Social Belonging/Community Integration, have mainly objective items (e.g., "Do you have a key to your home?"; "How many times per week do you talk to neighbors?"). Furthermore, Cummins (2005) has questioned the implicit weighting of importance when subscales such as independence, relationships, or health are combined. In the absence of a basis for rating how important they are to individuals, combination in practice often accords equal value to constructs. This is ironic, given the strong emphasis on personalization inherent in subjective measurement.

Not only should objective and subjective indicators be kept distinct, but they also raise different measurement concerns. Note that the distinction is defined by the nature of the datum, not the source of the data. Objective indicators are potentially verifiable by a third party, provided the third party knows the person sufficiently well. Objective data may, therefore, be self-reported or reported by proxies. Subjective indicators focus on a person's feelings about life in terms of psychological well-being, personal satisfaction, or happiness. Internal states cannot be observed by a third party directly, so there is greater doubt that proxy reports will agree with self-reports (Cummins, 2002; Rose et al., 2013).
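To make the distinction concrete, the brief Python sketch below illustrates the kind of analysis reported by Perry and Felce (2005): correlating an objective and a subjective indicator drawn from similar quality-of-life domains, with adaptive behavior controlled. It is offered only as an illustration under stated assumptions; the data are simulated, and the variable names (adaptive_behavior, objective_choice, subjective_satisfaction) are hypothetical rather than taken from the original study.

# A minimal sketch (not the authors' code) of correlating an objective and a
# subjective quality-of-life indicator while controlling for adaptive behavior.
# All data are simulated; variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120

adaptive_behavior = rng.normal(50, 10, n)          # e.g., an ABS total score
# Objective indicator tracks adaptive behavior; subjective indicator does not.
objective_choice = 0.6 * adaptive_behavior + rng.normal(0, 8, n)
subjective_satisfaction = rng.normal(70, 12, n)    # homeostatically regulated

def partial_r(x, y, control):
    """Correlation between x and y after removing the linear effect of control."""
    rx = x - np.polyval(np.polyfit(control, x, 1), control)
    ry = y - np.polyval(np.polyfit(control, y, 1), control)
    return stats.pearsonr(rx, ry)

r_raw, p_raw = stats.pearsonr(objective_choice, subjective_satisfaction)
r_par, p_par = partial_r(objective_choice, subjective_satisfaction, adaptive_behavior)
print(f"zero-order r = {r_raw:.2f} (p = {p_raw:.3f})")
print(f"partial r (adaptive behavior controlled) = {r_par:.2f} (p = {p_par:.3f})")

On data with the structure described above, both correlations would be expected to remain small, mirroring the weak objective-subjective intercorrelation noted earlier and underlining why the two kinds of score are best reported separately.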

The Limits of Self-Report

It is now well established that many people with intellectual disabilities are able to provide reliable, unbiased, and valid accounts of their feelings, as the research on subjective quality of life, self-esteem, and mental health can testify. However, it is equally established that people with intellectual disabilities vary greatly in their cognitive processing and their abilities to understand and use language. Many subjective constructs are abstract, and this presents a complexity-of-language demand that not everyone will be able to meet.

The cognitive and linguistic demands of self-report questions and response formats present challenges to reliable and valid self-reporting. Questions phrased negatively or using the passive voice are more difficult. Likewise, questions involving responses about frequency, judgments of time, or generalized evaluation are problematic (Finlay & Lyons, 2001; Fujiura, 2012; Stancliffe, Wilson, Bigby, Balandin, & Craig, 2013). More complex, fine-grained response formats (e.g., rating scales) are often unsuitable, with simpler response scales (e.g., yes, sometimes, no) proving more appropriate (Fang et al., 2011). Moreover, various response biases may be more common among people with intellectual disabilities (Finlay & Lyons, 2001; Stancliffe, 2000), specifically acquiescence (i.e., the tendency to say yes to questions regardless of content) and recency bias (i.e., the tendency to select the last alternative mentioned in either/or or multiple-choice questions, irrespective of one's true opinion).

Frequently, cognitive and linguistic difficulties are minimized by developing self-report scales designed specifically for people with intellectual disabilities, using simplified question wording and response formats. Examples include assessments of loneliness (Stancliffe et al., 2013) and depression (Cuthill, Espie, & Cooper, 2003). Stancliffe et al. (2013) compared two self-report assessments of loneliness: one used for the general community, which has more difficult question wording with a more complex response scale, and the other designed for respondents with intellectual disabilities. Among adults with mild to moderate intellectual disabilities, the researchers found a much higher rate of responding to the instrument that is specific to intellectual disabilities (83%) than to the general-population scale (25%). This finding demonstrates the benefits of scales that are specific to intellectual disabilities in enabling many more people with intellectual disabilities to self-report. However, as the researchers note, these benefits are gained at the cost of ready comparison to data from the general community that are available for widely used scales.

Deficits in expressive ability are often accommodated in response formats by removing the need for complex language (e.g., by having respondents point at icons or pictures). Questioning is more difficult to simplify, especially if it is about abstract (e.g., satisfaction) or precise (e.g., friends) constructs.
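One practical implication is that response bias can be screened for directly. The short Python sketch below shows one simple way of indexing the acquiescence and recency biases just described, using contradictory item pairs and either/or questions asked in both orders. It is a hedged illustration only: the item names, data, and screening threshold are invented, and published studies (e.g., Perry & Felce, 2002) used their own procedures.

# A hedged sketch, not an established scoring procedure: one simple way to
# screen interview data for acquiescence and recency response biases.
# Item names and thresholds are hypothetical choices for illustration.

def acquiescence_rate(responses, contradictory_pairs):
    """Proportion of contradictory item pairs answered 'yes' to both members.

    responses: dict mapping item id -> 'yes'/'no'
    contradictory_pairs: list of (item_a, item_b) that cannot both be true.
    """
    both_yes = sum(
        1 for a, b in contradictory_pairs
        if responses.get(a) == "yes" and responses.get(b) == "yes"
    )
    return both_yes / len(contradictory_pairs)

def recency_rate(forward_choices, reversed_choices):
    """Proportion of either/or items where the respondent picked the
    last-mentioned option under both the original and the reversed order."""
    last_both_times = sum(
        1 for f, r in zip(forward_choices, reversed_choices)
        if f == "second_option" and r == "second_option"
    )
    return last_both_times / len(forward_choices)

# Example respondent (invented data)
responses = {"likes_home": "yes", "wants_to_move": "yes",
             "has_friends": "yes", "feels_lonely": "no"}
pairs = [("likes_home", "wants_to_move"), ("has_friends", "feels_lonely")]
forward = ["second_option", "second_option", "first_option"]
reversed_order = ["second_option", "second_option", "second_option"]

acq = acquiescence_rate(responses, pairs)
rec = recency_rate(forward, reversed_order)
# Flag for follow-up if either screening index is high
# (the 0.5 cut-off here is arbitrary, for illustration only).
print(f"acquiescence index = {acq:.2f}, recency index = {rec:.2f}",
      "-> review" if max(acq, rec) >= 0.5 else "-> ok")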
Questioning that fails to recognize that a person's understanding of language may be similar to his or her expressive ability should be viewed as dubious (even though people may respond). While there are methodologically more sound ways of approaching the issue of gaining the views of people with intellectual disabilities (Finlay & Lyons, 2001; Fujiura, 2012), an uncritical enthusiasm that all people have views that can be measured reliably if only we find the right prosthetic means approaches a denial of disability. That said, where the limits lie for gaining valid subjective information in this population has not been adequately determined. The fact that many studies that successfully use self-report measures have samples of adults with mild to moderate intellectual disabilities (Finlay & Lyons, 2001) provides an indication that obtaining valid self-reports from the majority of people with severe or profound intellectual disabilities remains a challenge for future research to address.

Perry and Felce (2002) investigated responsiveness and the level of response bias among respondents to two self-report measures, the Choice Questionnaire (Stancliffe & Parmenter, 1999) and the ComQol-ID (Cummins, 1997). The level of nonresponse fell progressively as a function of higher Adaptive Behavior Scale (ABS) score, expressed as a percentile rank. However, it was not until the seventh decile that nonresponse, as well as acquiescence and recency response biases, fell below 25%. Those who did not show response bias had a mean ABS rank of 82% (range: 52%–97%), a mean ABS language-development domain rank of 90% (range: 50%–99%), and a mean British Picture Vocabulary Scale (Dunn, Dunn, Whetton, & Burley, 1997) standardized score of 62 (range: 39–118), which converts to an equivalent language age of 9 years and 4 months.

The proportion of people with intellectual disabilities who are able to communicate their views accurately will depend on a range of issues, such as how complex the perspective sought is, how close the issue is to the respondent's experience, whether the opinion sought is about concrete or abstract phenomena, how well wordings can be simplified, how readily the essence of questions can be represented pictorially, and how effectively alternative response formats can be validated. There is still much development and evaluation to be done. However, as things stand, it would appear to be good practice for researchers to use preadministration screening of understanding and responsiveness (including response bias) to indicate the limits of applicability of their measure(s).
One example of such a pre-testing procedure is embedded within the ComQol-ID (Cummins, 1997), whereby a series of necessary skills involved in using a Likert scale are assessed and the complexity of the Likert scale to be used in practice is determined for those who demonstrate prerequisite competence. This involves placing bars in order of size, relating bar size to a printed scale of importance, and placing something of known high importance on such a printed scale. These tasks are assessed using binary, 3-point, and 5-point choices, depending on performance. In addition, it would also appear to be good practice for researchers to systematically investigate and report the cognitive and language skills of their research participants, distinguishing those who appear able to respond reliably and validly from those who appear not to be able to do so.

The field needs to build an evidence base. It is remarkable how few concrete data are currently available. There is a tacit acceptance that there is a limit to self-reporting among this population and much discussion of ways by which questionnaire design or administration might be adapted to maximize coverage; however, authors are generally silent about the precise applicability of various measurement approaches and about the proportion of people with intellectual disabilities who might be effectively disenfranchised from being able to express their opinions. Conversely, it would appear to be poor practice to report the use of self-report measures without taking care to measure and report the characteristics of research participants and the effort given to assessing that the responses reported are genuinely what respondents mean to communicate. Although a rigorous approach may mean that some people's voices are silent, misrepresenting people's views seems at least as troubling.
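The decision logic of such a pretest can be summarized schematically. The sketch below is not the published ComQol-ID protocol; it simply illustrates, with placeholder task results, how performance on practice tasks of the kind described above might map onto the response format actually administered.

# A schematic sketch of a pre-administration screening ladder: the respondent
# works through increasingly fine-grained practice tasks, and the response
# scale actually administered is the most complex one passed. The task names
# and fallback rules are illustrative assumptions, not a published protocol.

def choose_response_scale(can_order_bars: bool,
                          can_map_bar_to_scale: bool,
                          can_place_known_item: bool) -> int:
    """Return the number of response options to use (0 = do not self-report)."""
    if not can_order_bars:
        return 0            # prerequisite seriation skill not demonstrated
    if not can_map_bar_to_scale:
        return 2            # fall back to a binary (yes/no) format
    if not can_place_known_item:
        return 3            # simple 3-point scale (e.g., yes / sometimes / no)
    return 5                # full 5-point Likert-type scale

# Example: a respondent who orders the bars and maps them to the printed
# scale, but cannot place an item of known high importance on it.
print(choose_response_scale(True, True, False))   # -> 3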

The Limits of Proxy Reporting

There is a considerable tradition of putting questions about individuals with intellectual disabilities to people who know them well. In relation to adaptive behavior, it is clear that proxies can accurately report what a person can and cannot do. Similarly, they may be able to name the important people in a person's social network, say approximately how many times per month a person participates in various leisure activities, or indicate those things the person chooses for him- or herself and those that are determined by other people.
However, the common denominator of these examples is that they are observable and objectively verifiable. How much confidence can we have that proxies can reflect nonobservable internal states (i.e., feelings) as accurately, particularly in relation to people whose language limitations mean that they have not been able to tell even close proxies what they think? The accuracy of proxy assessment may depend on the topic.

Perry and Felce's (2002) study illustrates the matter. Having established a subgroup within their sample who were able to respond without response bias, they went on to compare self-reported and proxy-reported data on the two measures investigated: the Choice Questionnaire (Stancliffe & Parmenter, 1999) and the ComQol-ID (Cummins, 1997). They found high agreement between proxy and self-reports on the measure of the extent of choice (r = .74) but low agreement on the measure of satisfaction with life (r = .16; see Figure 1). They suggested that this difference might be explained by the objective, observable nature of the items in the Choice Questionnaire and the subjective, nonobservable nature of the items in the ComQol-ID Satisfaction domain.

Figure 1. Agreement between self-report and staff report on the Choice Questionnaire (Stancliffe & Parmenter, 1999) and the ComQol-ID (Cummins, 1997). Data from Perry & Felce, 2002.

Such an interpretation is supported by Rose et al. (2013), who found that self- and proxy-reported anger among adults with mild to moderate intellectual disabilities were associated with different variables. A measure of felt anger, the Provocation Index (PI; Novaco, 1994; Taylor, Novaco, Gillmer, Robertson, & Thorne, 2005), was completed by 181 adults with intellectual disabilities identified as having difficulties in managing their anger. Their key workers and/or home carers also completed the proxy version of the assessment. Three hierarchical linear-regression analyses were conducted to predict service-user, key-worker, and home-carer PI scores. Variables were entered in three blocks: demographic variables, mental-health measures (depression, anxiety, and self-esteem), and challenging-behavior measures (hyperactivity, irritability, and aggression). Self-reported anger was associated with the three self-reported mental-health measures. However, proxy-reported anger ratings were associated with outwardly observable challenging behavior (see Figure 2). Hence, the individuals themselves appeared to rate their anger according to how they felt, whereas proxies seemed to use indirect signs, specifically the behavior that they could see.

Figure 2. Self-reported and proxy-reported anger ratings reflect different variables. The figures within the circles are intercorrelations between the various measures of mental health or challenging behavior.

Notwithstanding studies that show reasonable agreement between self-reports and proxy reports (e.g., McVilly, Burton-Smith, & Davidson, 2000), the evidence suggests that proxy reports are not a valid substitute for self-reports (Cummins, 2002; Perry & Felce, 2002; Stancliffe, 2000). Thus, it would seem to be good practice for researchers who wish to use proxy reports in the place of self-reports to provide empirical evidence of agreement between the two among those sample members capable of self-reporting. In other words, the substitutability of proxy reporting for self-reporting should be demonstrated rather than asserted. Moreover, there is a need for further research like that of McVilly et al. (2000) that investigates whether there are characteristics of proxy informants which might make their ratings more likely to agree with self-reports.
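For readers wishing to apply the same analytic strategy to their own data, the following Python sketch shows the blockwise (hierarchical) regression approach described above, run on simulated data. The column names and simulated effects are hypothetical assumptions; this is not the analysis code used by Rose et al. (2013).

# A minimal sketch of a blockwise (hierarchical) regression on simulated data:
# predictors are entered in three blocks and the change in R-squared is
# reported at each step. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 181
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "male": rng.integers(0, 2, n),
    "depression": rng.normal(0, 1, n),
    "anxiety": rng.normal(0, 1, n),
    "self_esteem": rng.normal(0, 1, n),
    "hyperactivity": rng.normal(0, 1, n),
    "irritability": rng.normal(0, 1, n),
    "aggression": rng.normal(0, 1, n),
})
# Simulated outcome: self-reported anger driven mainly by mental-health scores
df["self_reported_anger"] = (0.5 * df["depression"] + 0.4 * df["anxiety"]
                             - 0.3 * df["self_esteem"] + rng.normal(0, 1, n))

blocks = [
    ["age", "male"],                                   # block 1: demographics
    ["depression", "anxiety", "self_esteem"],          # block 2: mental health
    ["hyperactivity", "irritability", "aggression"],   # block 3: challenging behavior
]

predictors, prev_r2 = [], 0.0
for i, block in enumerate(blocks, start=1):
    predictors += block
    X = sm.add_constant(df[predictors])
    fit = sm.OLS(df["self_reported_anger"], X).fit()
    print(f"block {i}: R^2 = {fit.rsquared:.3f} "
          f"(change = {fit.rsquared - prev_r2:.3f})")
    prev_r2 = fit.rsquared

Running the same three-block model on self-reported and proxy-reported outcome columns, and comparing which block produces the largest change in R-squared, is the kind of contrast that distinguished the two informant perspectives in the study described above.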

Ways Forward Where Cognitive and Language Limitations Make Self-Reporting Unrealistic

Interest will continue in understanding the cognitive and language demands of self-reporting to reduce the proportion of people who are effectively disenfranchised from commenting on important aspects of their lives. In many ways, research is in its infancy in this area. For example, Fujiura (2012) has noted how little attention has been directed towards understanding and modeling the cognitive processes involved in responding to questions. He suggests that it is possible that a more nuanced portrait of self-report may also yield new opportunities for enhancing valid self-report. More research is needed on what proportion of the population can reliably describe abstract internal states. In the meantime, strategies are required to assess quality of life among people with more severe intellectual disabilities who appear currently not to be able to report their satisfaction.
One approach is to observe demeanor that is reliably associated with pleasure or displeasure. Petry and Maes (2006) describe a video-based procedure for empirically determining individualized profiles of how people with profound and multiple disabilities express pleasure and displeasure through sounds and facial expressions, given parental and support-worker interpretation of their meanings. The results might be used to frame individualized observational measures to evaluate the extent to which various living and support characteristics affect well-being.

Another approach is to restrict investigation to objective data. However, analysis of such data
needs to reflect the fact that variation in individual preferences will mean that the interpretation of person–environment fit cannot be undertaken at the individual level. For example, one person may have a limited friendship network and few social engagements, while another may engage in a narrow range of repetitive activities. In neither case, without information on personal preferences, can one conclude that there is a lifestyle problem. However, the interpretation of group data could be more illuminating given a number of conditions, namely that (a) groups were representative cross sections of the population in a defined situation, (b) normative data were available on the general population, and (c) there was no reason to believe that the distribution of preference or lifestyle ambitions among the defined subgroup would be
different from that among the general population (e.g., the desire for friendship among people with intellectual disabilities would be similar to the remainder of the population). Under these circumstances, it would be possible to compare how the distribution of important objective lifestyle indicators among people with intellectual disabilities compared to normative levels and to identify significant difference as evidence of disadvantage. An illustration can be provided in relation to size of social network. Based on a community sample of 281 adults with intellectual disabilities, Robertson et al. (2001) showed that the median size of social network was six if disability staff were included and three if they were not. For the general population, Hill and Dunbar (2003) found that the average size of a Christmas-card list was 125. One might reasonably conclude that the concern about the generally poor social integration of people with intellectual disabilities within our society is well founded.
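Analytically, such a comparison amounts to contrasting the distribution of an objective indicator in a representative intellectual-disability sample with normative data. The sketch below illustrates the general form of such a test on simulated social-network data; the numbers are invented and are not those reported by Robertson et al. (2001) or Hill and Dunbar (2003).

# A hedged sketch of a group-level comparison of an objective lifestyle
# indicator (here, social network size) against a comparison sample.
# The data are simulated; only the shape of the analysis is illustrated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
id_network_size = rng.poisson(3, 200)      # small networks in the ID sample
gen_network_size = rng.poisson(20, 500)    # larger general-population networks

u, p = stats.mannwhitneyu(id_network_size, gen_network_size,
                          alternative="two-sided")
print(f"median (ID sample) = {np.median(id_network_size):.0f}, "
      f"median (comparison) = {np.median(gen_network_size):.0f}")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")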

Conclusion About Self-Report and Proxy-Report Data

In summary, we need to be clear about the evaluative purpose. We should assess subjective well-being to find out whether people with intellectual disabilities are as satisfied with life as are other population subgroups. We should assess objective outcomes to find out whether people with intellectual disabilities are socially equal to other population subgroups or are disadvantaged. Clearly, we should follow methodological guidelines and best practices in both cases. In relation to self-report measures, this would include systematically investigating and reporting the cognitive and language skills of research participants and distinguishing those who appear able to respond reliably and validly from those who appear not to be able to do so. In relation to proxy reporting, this would include demonstrating its validity, particularly where the data are subjective in nature.

Secondary Analysis of Population-Based Data Sets

Research based on the analysis of existing large-scale survey and administrative data sets (typically referred to as secondary analysis) is commonplace in the fields of sociology, economics, and public health. It is also increasingly being used in child-development studies and psychology (Brooks-Gunn, Berlin, Leventhal, & Fuligini, 2000; Trzesniewski, Donnellan, & Lucas, 2011). The potential benefits of secondary analysis stem from four characteristics of the types of data that are commonly available: (a) the speed and ease of access to data (they have already been collected); (b) the quality of sampling procedures, fieldwork, and data management; (c) the breadth of the information available; and (d) the longitudinal nature of much of the data (Boslaugh, 2007; Bulmer, Sturgis, & Allum, 2009; Hofferth, 2005; Hussein, 2011; Smith, 2006; Trzesniewski et al., 2011; Vartanian, 2011). For research in intellectual and developmental disabilities, these data sets offer the additional advantage of enabling comparisons with the general population.

The opportunities of research based on secondary analysis in our field arise from two distinct developments. First, many countries have made and continue to make significant investments in developing large-scale surveys in order to better monitor and understand the determinants of the health and well-being of the population and the development of children (Brooks-Gunn et al., 2000). To give a few examples, the United Kingdom (http://www.esrc.ac.uk/funding-and-guidance/tools-andresources/research-resources/surveys/bcf.aspx) and United States (http://www.nationalchildrensstudy.gov/Pages/default.aspx) are both currently embarking on ambitious birth-cohort/child-development studies that aim to follow up cohorts of approximately 100,000 children. The UK has also initiated a new annual panel study involving the participation of 100,000 people in ordinary households (https://www.understandingsociety.ac.uk). UNICEF is currently developing a new child-disability module for inclusion in the fifth round of their Multiple Indicator Cluster Surveys (http://www.childinfo.org/mics5.html), one of the main vehicles for monitoring progress toward the realization of the Millennium Development Goals. The potential contribution of these types of data sets to our field is obvious as long as it is possible to identify participants with intellectual and/or developmental disabilities.

Secondary analysis of data from birth-cohort/child-development studies has been used in our field to investigate such issues as factors associated
with the emotional and behavioral development of young children with intellectual disabilities (Emerson, 2012b; Emerson & Einfeld, 2010; Emerson, Einfeld, & Stancliffe, 2011; Totsika & Hastings, 2012), the well-being of families supporting young children with intellectual disabilities (Emerson et al., 2010; Hatton, Emerson, Graham, Blacher, & Llewellyn, 2010), and the interrelationships between child emotional and behavioral development and family well-being (Totsika, Hastings, Emerson, Berridge, & Lancaster, 2011; Totsika et al., in press; Totsika, Hastings, Emerson, Lancaster, & Berridge, 2011). Secondary analysis of adult survey data has been used to explore the prevalence of intellectual and developmental disabilities (Larson et al., 2001; Larson, Lakin, & Anderson, 2003).

An increasing range of administrative databases, including e-health records, contain individual-level information on such matters as the educational attainment and experiences of schoolchildren, the use of primary and secondary health-care services, the receipt of welfare benefits, and causes of death. Again, access to (and linkage between) administrative data sets could provide a powerful resource for our field as long as it is possible to identify participants with intellectual and/or developmental disabilities. Secondary analysis of data from administrative data sets has been used in our field to investigate such issues as variation in the prevalence of intellectual and developmental disabilities (Chapman, Scott, & Stanton-Chapman, 2008; Emerson, 2012a; Leonard et al., 2008; Leonard, Petterson, Bower, & Sanders, 2003; Leonard et al., 2005), the health of and health care received by people with intellectual disabilities (Balogh, Brownell, Ouellette-Kuntz, & Colantonio, 2010; Balogh, Hunter, & Ouellette-Kuntz, 2005; Glover & Evison, 2013; Thomas et al., 2011), mortality (Glover & Ayub, 2010), and divorce rates among families supporting children with Down syndrome (Urbano & Hodapp, 2007).

Challenges and Future Directions

In the following sections, we will briefly discuss two key challenges involved in secondary analysis as they pertain to the study of intellectual and developmental disabilities: (a) identification of people with intellectual or developmental disabilities in data sets and (b) the limitations of general-population sampling strategies.

Identifying Participants With Intellectual or Developmental Disabilities in Surveys

It is very rare indeed for major surveys to be designed with the explicit intention of identifying whether participants have an intellectual or developmental disability. As a result, the first task when examining a potential data source is to determine whether it will be possible to design a method that can credibly identify participants who may have an intellectual or developmental disability. While this is commonly possible in birth-cohort and child-development surveys, it has proved much more problematic for surveys of adult populations. There are three general strategies for identifying participants with intellectual or developmental disabilities: (a) data linkage to administrative data sets, (b) classification based on cognitive and/or psychological testing, and (c) self- or informant report.

Data linkage. While rare at present, there is a growing trend to link (with consent) survey data to administrative data held by government departments. In the UK, for example, three national surveys (the Longitudinal Study of Young People in England, the Millennium Cohort Study, and child data from Understanding Society) have been linked to information held by the English Department for Education on whether the child has been identified through educational systems as having a Special Educational Need associated with intellectual or developmental disability (Emerson & Halpin, 2013). The possibility of identification through data linkage is likely to become increasingly common (though see later).

Cognitive testing. Given that child cognitive development is a key outcome of interest in most large-scale birth-cohort and child-development studies, they typically involve repeated cognitive testing of child participants. As these are population-based studies, it therefore becomes a relatively simple task to use test scores to identify children scoring two standard deviations below the weighted sample mean on these tests (Emerson et al., 2010). While such an approach is sufficiently robust to identify children with intellectual disability according to ICD-10 criteria, it is insufficient to identify children with intellectual disability by DSM-IV or AAIDD criteria (Einfeld & Emerson, 2008).
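The classification rule itself is straightforward to implement. The Python sketch below flags cases falling two or more standard deviations below a survey-weighted mean; the data and variable names are simulated and hypothetical, and a real analysis would additionally handle missing scores and the survey's full design information.

# A minimal sketch of flagging children whose cognitive test score falls two
# or more standard deviations below the survey-weighted sample mean.
# Scores and weights are simulated; variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
score = rng.normal(100, 15, 10_000)          # cognitive test score
weight = rng.uniform(0.5, 2.0, 10_000)       # survey design weight

w_mean = np.average(score, weights=weight)
w_var = np.average((score - w_mean) ** 2, weights=weight)
w_sd = np.sqrt(w_var)

flagged = score <= w_mean - 2 * w_sd
print(f"weighted mean = {w_mean:.1f}, weighted SD = {w_sd:.1f}")
print(f"flagged as 2 or more SDs below the weighted mean: {flagged.mean():.1%}")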
Self- or informant report. Child-focused surveys that do not involve either data linkage or cognitive testing will often collect parental report of the presence of child impairments or disabilities (e.g., "Compared with children of the same age, does … have difficulty learning to do new things?") or receipt of professional diagnosis or classification (e.g., "Has … ever been diagnosed by a doctor as having autism or Asperger's syndrome?"; "Has … been identified as having special educational needs? If so, for ASD?") associated with intellectual or developmental disabilities. Relatively little is currently known about the validity of informant responses to these types of questions and about the extent to which accuracy of informant reports may be moderated by potentially important contextual factors.

To illustrate the potential importance of the latter, Figure 3 uses data from the UK Millennium Cohort Study to show the association between breadth of exposure to material deprivation at age 3 and (a) prevalence of maternal report of concerns about the child's cognitive development; (b) prevalence of the child's scoring two or more standard deviations below the population means on a test of cognitive ability at ages 3, 5, and 7; and (c) false negative rates for maternal judgment at age 3 (percentage of children for whom no parental concern was expressed who scored two or more standard deviations below the population means on a test of cognitive ability at ages 3, 5, or 7).

Figure 3. Association between breadth of exposure to material deprivation at age 3 and (a) prevalence of maternal report of concerns about the child's cognitive development; (b) prevalence of the child scoring two or more standard deviations below the population means on a test of cognitive ability at ages 3, 5, and 7; and (c) false negative rates for maternal judgment at age 3. Data from the UK Millennium Cohort Study (http://www.cls.ioe.ac.uk/page.aspx?&sitesectionid=851&sitesectiontitle=Welcome+to+the+Millennium+Cohort+Study).

The overall sensitivity rate for maternal concerns was just 2%. The close correspondence between false negative rates and tested prevalence reflects the very low sensitivity rates and the lack of variation in sensitivity or false positive rates across levels of exposure to material disadvantage. These data do suggest that in this instance, general maternal report should be treated with considerable caution, given its low sensitivity and the strong association between false negative reporting and social context. To rely on maternal report would significantly underestimate the association between exposure to material disadvantage and the prevalence of developmental delay.
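The screening statistics referred to above can be illustrated with a small worked example. The counts below are invented (chosen only so that sensitivity works out to roughly the 2% reported for maternal concerns) and do not reproduce the Millennium Cohort Study data.

# A small worked sketch of sensitivity, false negative, and false positive
# rates from a 2 x 2 cross-classification of maternal concern against the
# test-based criterion (scoring two or more SDs below the population mean).
# All counts are invented for illustration.
concern_and_delay = 4        # concern expressed, child met test criterion
no_concern_and_delay = 196   # no concern expressed, child met test criterion
concern_no_delay = 50
no_concern_no_delay = 9750

sensitivity = concern_and_delay / (concern_and_delay + no_concern_and_delay)
false_negative_rate = no_concern_and_delay / (concern_and_delay + no_concern_and_delay)
false_positive_rate = concern_no_delay / (concern_no_delay + no_concern_no_delay)

print(f"sensitivity of maternal concern = {sensitivity:.1%}")        # -> 2.0%
print(f"false negative rate             = {false_negative_rate:.1%}")
print(f"false positive rate             = {false_positive_rate:.1%}")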
The identification of adults with intellectual disability in large-scale surveys is much more problematic, though not impossible (see, e.g., Larson et al., 2001). An increasing number of surveys collect information on whether the adult respondent has a disability and, if so, on the type(s) of impairment(s) associated with the disability. The list of impairments from which the participant can select often contains an item such as "difficulty learning or understanding." It is sometimes possible to combine responses to this item with other information contained within the survey (e.g., very low educational attainment, difficulties with literacy and numeracy) to operationally identify a subgroup of people who may have intellectual disabilities. It is important to test the face validity of such approaches to identification by examining the extent to which the observed prevalence of intellectual disability varies in the expected manner in relation to gender (higher among men), age (steady decline with age), and socioeconomic position (higher among participants with lower socioeconomic position); a sketch of such a check is given below.

Even when it is possible to identify participants with intellectual disabilities, it is rare (at present) to be able to identify specific subgroups of people with intellectual disabilities (e.g., people with specific syndromes, people with more severe intellectual disabilities). This problem results from two factors. First, information about specific syndromes is simply not collected. Second, while the sample sizes of available data sets are large when compared to those attainable in primary research, they are currently insufficient to identify a viable sample of children with severe intellectual disabilities. This problem is likely to be resolved as data become available from the new UK and U.S. child-cohort studies.
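As a concrete, hedged illustration of the face-validity check mentioned above, the sketch below tabulates the prevalence of an operationally identified subgroup by gender, age band, and socioeconomic position in a simulated survey file; all variable names are hypothetical.

# A hedged sketch of a face-validity check: tabulate the prevalence of an
# operationally identified "possible intellectual disability" flag by gender,
# age band, and socioeconomic position, and inspect whether the gradients run
# in the expected directions. Data are simulated; names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 50_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.integers(16, 90, n),
    "low_ses": rng.integers(0, 2, n),
})
# Simulated identification flag with the expected gradients built in
p = 0.01 + 0.005 * df["male"] + 0.01 * df["low_ses"] - 0.00005 * (df["age"] - 16)
df["possible_id"] = rng.random(n) < p

df["age_band"] = pd.cut(df["age"], bins=[15, 29, 44, 59, 74, 89],
                        labels=["16-29", "30-44", "45-59", "60-74", "75-89"])

for var in ["male", "low_ses", "age_band"]:
    prevalence = df.groupby(var, observed=True)["possible_id"].mean() * 100
    print(f"\nprevalence (%) by {var}:\n{prevalence.round(2)}")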

Identifying Participants With Intellectual or Developmental Disabilities in Administrative Data Sets

The identification of people with intellectual or developmental disabilities in administrative data sets (e.g., ones generated by health or education agencies) can also be problematic. The validity of the extracted data will obviously depend on the accuracy of the coding system employed to identify individuals with intellectual or developmental disabilities and on the wide variety of etiological conditions associated with intellectual or developmental disabilities (e.g., Down syndrome, autism). Coding accuracy is likely to vary across conditions as a function of the ease of identification of particular conditions, public and professional awareness of conditions, and the extent to which the identification of a condition results in practical benefits (welfare payments, access to services).
All of these factors are likely to change over time and vary across jurisdictions, leading at times to substantial difficulty in untangling the underlying causes of changes in administrative prevalence rates (e.g., Elsabbagh et al., 2012). Issues of coding accuracy are particularly problematic in identifying people with less severe intellectual or developmental disability in generic administrative data sets (i.e., those not collected by developmental-disability agencies). Figure 4, for example, shows estimates of the age-specific prevalence of intellectual disabilities in England drawn from two administrative data sets: the National Pupil Database and data extracted from primary-care health records (Emerson & Glover, 2012). As can be seen, the reported prevalence (from health records) of intellectual disability among adults is comparable to the reported prevalence (from educational records) of severe intellectual disability among children. It is inconceivable that the marked drop in prevalence in early adulthood reflects a change in true prevalence. More likely, this "transition cliff" in administrative prevalence reflects the marked underidentification of adults with less severe intellectual disabilities in primary-care health records (Kiely, 1987; National Center on Birth Defects and Developmental Disabilities, 2010). Given that adults with less severe intellectual disabilities may be ineligible for disability services or may not wish to identify as being disabled, underidentification of adults with less severe intellectual disabilities is also likely in administrative data sets collected by developmental-disability agencies.

Figure 4. Reported age-specific prevalence of intellectual disabilities in England, 2010. SEN = Special Educational Needs.
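The calculation underlying an administrative prevalence curve of this kind is simple, as the sketch below illustrates with invented counts; a sharp fall between the school-age and adult bands is what would appear as the transition cliff discussed above.

# A minimal sketch of an age-specific administrative prevalence calculation:
# identified cases on a register divided by population denominators for each
# age band. The counts below are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "age_band":   ["5-9", "10-14", "15-19", "20-24", "25-29"],
    "identified": [24_000, 26_000, 22_000, 6_500, 6_000],     # cases on register
    "population": [800_000, 820_000, 810_000, 790_000, 780_000],
})
data["prevalence_per_1000"] = 1000 * data["identified"] / data["population"]
print(data[["age_band", "prevalence_per_1000"]].round(1).to_string(index=False))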

The Limitations of General-Population Sampling Strategies

Most population-based surveys use "general households" as the primary sampling unit. While this ensures coverage of the vast majority of the population, it excludes people who are either homeless or living in some form of institutional arrangement. While there is no reliable information on the extent of homelessness among people with intellectual disabilities, it is clear that a significant proportion of adults with intellectual disabilities who are known to services live in supported accommodation arrangements, some of which are likely to be excluded from samples of general households.
As a result, population-based surveys of adults with intellectual disability may need to combine samples drawn from general households with samples drawn from targeted lists of adults with intellectual disabilities known to organizations providing disability services (see, e.g., Emerson & Hatton, 2008).

A final barrier to identifying participants with intellectual disabilities is that they may have been excluded from the sample due to their perceived inability to either give consent or participate effectively in the survey. It is extremely rare for large-scale population-based surveys to incorporate reasonable accommodations to the survey process (e.g., simplifying and rephrasing questions, using visual aids) that would facilitate the participation of people with intellectual disabilities.

Overall Conclusions

The methodological issues discussed in this article have identified a variety of constraints on the ways in which we can answer research questions about intellectual disabilities. Where data are obtained solely by self-report, some people with intellectual disabilities are disenfranchised because of difficulties with understanding and communication. Where proxy-report data are acceptable, there are constraints on the topics for which such data appear valid, with serious doubts about the validity of proxy data for subjective issues. Moreover, it seems
unwise to mix self-report and proxy data unless there is clear evidence that the two are equivalent. The potential benefits of secondary analysis of population-based data sets for research on intellectual disabilities are manifest. So too are the challenges. Prominent among these is the difficulty of accurately identifying survey participants with intellectual disabilities. A complementary difficulty is that population-sampling strategies can result in the exclusion of certain individuals with intellectual disabilities, such as those living in formal disability services or those who are perceived to be unable to consent to participation or to respond to survey questions. In both of the areas examined, it is evident that research on intellectual disabilities is characterized by a number of methodological compromises. The constraints underlying these compromises cannot easily be resolved, so it is important to recognize these constraints when interpreting research based on such data.

References

Balogh, R., Brownell, M., Ouellette-Kuntz, H., & Colantonio, A. (2010). Hospitalisation rates
for ambulatory care sensitive conditions for persons with and without an intellectual disability—A population perspective. Journal of Intellectual Disability Research, 54, 820–832. Balogh, R., Hunter, D., & Ouellette-Kuntz, H. (2005). Hospital utilization among persons with an intellectual disability, Ontario, Canada, 1995–2001. Journal of Applied Research in Intellectual Disabilities, 18, 181–190. Boslaugh, S. (2007). Secondary data sources for public health. Cambridge, United Kingdom: Cambridge University Press. Brooks-Gunn, J., Berlin, L. J., Leventhal, T., & Fuligini, A. S. (2000). Depending on the kindness of strangers: Current national data initiatives and developmental research. Child Development, 71, 257–268. Bulmer, M., Sturgis, P., & Allum, N. (Eds.). (2009). The secondary analysis of survey data. London, United Kingdom: Sage. Chapman, D., Scott, K., & Stanton-Chapman, T. (2008). Public health approach to the study of mental retardation. American Journal on Mental Retardation, 113(2), 102–116. Cummins, R. (1997). The comprehensive quality of life scale: Intellectual disability (5th ed.). Melbourne, Australia: Deakin University. Cummins, R. A. (2000). Objective and subjective quality of life: An interactive model. Social Indicators Research, 52, 55–72. Cummins, R. A. (2002). Proxy responding for subjective well-being: A review. International Review of Research in Mental Retardation, 25, 183–207. Cummins, R. A. (2005). Moving from the quality of life concept to a theory. Journal of Intellectual Disability Research, 49, 699–706. Cuthill, F. M., Espie, C. A., & Cooper, S.-A. (2003). Development and psychometric properties of the Glasgow Depression Scale for people with a learning disability: Individual and carer supplement versions. The British Journal of Psychiatry, 182, 347–353. Dunn, L., Dunn, L., Whetton, C., & Burley, J. (1997). The British picture vocabulary scale (2nd ed.). Windsor, United Kingdom: NFER-Nelson. Einfeld, S., & Emerson, E. (2008). Intellectual disability. In M. Rutter, D. Bishop, D. Pine, S. Scott, J. Stevenson, E. Taylor, & A. Thapar (Eds.), Rutter’s child and adolescent psychiatry (5th ed.; pp. 820–840). Oxford, United Kingdom: Blackwell.

Elsabbagh, M., Divan, G., Koh, Y., Kim, Y. S., Kauchali, S., Marcín, C., … Fombonne, E. (2012). Global prevalence of autism and other pervasive developmental disorders. Autism Research, 5, 160–179. Emerson, E. (2012a). Household deprivation, neighbourhood deprivation, ethnicity and the prevalence of intellectual and developmental disabilities. Journal of Epidemiology and Community Health, 66, 218–224. Emerson, E. (2012b). Understanding disabled childhoods: What can we learn from population-based studies? Children & Society, 26, 214–222. Emerson, E., & Einfeld, S. (2010). Emotional and behavioural difficulties in young children with and without developmental delay: A bi-national perspective. Journal of Child Psychology and Psychiatry, 51, 583–593. Emerson, E., Einfeld, S., & Stancliffe, R. J. (2011). Predictors of the persistence of conduct difficulties in children with cognitive delay. Journal of Child Psychology and Psychiatry and Allied Disciplines, 52, 1184–1194. Emerson, E., & Glover, G. (2012). The ‘transition cliff’ in the administrative prevalence of learning disabilities in England. Tizard Learning Disability Review, 17, 139–143. Emerson, E., & Halpin, S. (in press). Anti-social behaviour and police contact among 13–15 year English adolescents with and without mild/moderate intellectual disability. Journal of Applied Research in Intellectual Disabilities. Emerson, E., & Hatton, C. (2008). The self-reported well-being of women and men with intellectual disabilities in England. American Journal on Mental Retardation, 113, 143–155. Emerson, E., McCulloch, A., Graham, H., Blacher, J., Llewellyn, G., & Hatton, C. (2010). The mental health of parents of young children with and without developmental delays. American Journal on Intellectual and Developmental Disability, 115, 30–42. Emerson, E., Robertson, J., Gregory, N., Hatton, C., Kessissoglou, S., Hallam, A., … Netten, A. (2000). Quality and costs of community-based residential supports, village communities, and residential campuses in the UK. American Journal of Mental Retardation, 105, 81–102.

Fang, J., Fleck, M. P., Green, A., McVilly, K., Hao, Y., Tan, W., … Power, M. (2011). The response scale for the intellectual disability module of the WHOQOL: 5-point or 3-point. Journal of Intellectual Disability Research, 55, 537–549. Felce, D. (1997). Defining and applying the concept of quality of life. Journal of Intellectual Disability Research, 41, 126–143. Finlay, W. M. L., & Lyons, E. (2001). Methodological issues in interviewing and using self-report questionnaires with people with mental retardation. Psychological Assessment, 13, 319–335. Fujiura, G. T. (2012). Self-reported health of people with intellectual disability. Intellectual and Developmental Disabilities, 50, 352–369. Glover, G., & Ayub, M. (2010). How people with learning disabilities die. Durham, United Kingdom: Improving Health & Lives: Learning Disabilities Observatory. Glover, G., & Evison, F. (2013). Hospital admissions that should not happen: Admissions for ambulatory care sensitive conditions for people with learning disabilities in England. Stockton-on-Tees, United Kingdom: Learning Disabilities Public Health Observatory. Hatton, C., Emerson, E., Graham, H., Blacher, J., & Llewellyn, G. (2010). Changes in family composition and marital status in families with a young child with cognitive delay. Journal of Applied Research in Intellectual Disabilities, 23, 14–26. Hill, R. A., & Dunbar, R. I. M. (2003). Social network size in humans. Human Nature, 14, 53–72. Hofferth, S. L. (2005). Secondary data analysis in family research. Journal of Marriage & the Family, 67, 891–907. Hussein, S. (2011). The use of ‘large scale datasets’ in UK social care research. London, United Kingdom: NIHR School for Social Care Research. Kiely, M. (1987). The prevalence of mental retardation. Epidemiologic Reviews, 9, 194–218. Kover, S. T., & Atwood, A. K. (2013). Establishing equivalence: Methodological progress in group-matching design and analysis. American Journal on Intellectual and Developmental Disabilities, 118, 3–15. Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Larson, S., Lakin, K. C., & Anderson, L. L. (2003). Definitions and findings on intellectual and developmental disabilities within the NHIS-D. In B. M. Altman, S. N. Barnartt, G. E. Hendershot, & S. A. Larson (Eds.), Using survey data to study disability: Results from the National Health Interview Survey on Disability (pp. 229–255). Boston, MA: Elsevier. Larson, S., Lakin, K. C., Anderson, L., Kwak, N., Lee, J. H., & Anderson, D. (2001). Prevalence of mental retardation and developmental disabilities: Estimates from the 1994/1995 National Health Interview Survey Disability Supplements. American Journal on Mental Retardation, 106, 231–252. Leonard, H., Nassar, N., Bourke, J., Blair, E., Mulroy, S., De Klerk, N., & Bower, C. (2008). Relation between intrauterine growth and subsequent intellectual disability in a ten-year population cohort of children in Western Australia. American Journal of Epidemiology, 167, 103–111. Leonard, H., Petterson, B., Bower, C., & Sanders, R. (2003). Prevalence of intellectual disability in Western Australia. Paediatric and Perinatal Epidemiology, 17, 58–67. Leonard, H., Petterson, B., De Klerk, N., Zubrick, S. R., Glasson, E., Sanders, R., & Bower, C. (2005). Association of sociodemographic characteristics of children with intellectual disability in Western Australia. Social Science & Medicine, 60, 1499–1513. McVilly, K. R., Burton-Smith, R. M., & Davidson, J. A. (2000). Concurrence between subject and proxy ratings of quality of life for people with and without intellectual disabilities. Journal of Intellectual and Developmental Disability, 25, 19–39. National Center on Birth Defects and Developmental Disabilities. (2010). U.S. surveillance of health of people with intellectual disabilities: A white paper. Atlanta, GA: Centers for Disease Control and Prevention. Novaco, R. W. (1994). Anger as a risk factor for violence among the mentally disordered. In J. Monahan & H. J. Streadman (Eds.), Violence and disorder: Developments in risk assessment (pp. 21–59). Chicago, IL: University of Chicago Press. Perry, J., & Felce, D. (2002). Subjective and objective quality of life assessment: Responsiveness,

response bias and resident:proxy concordance. Mental Retardation, 40, 445–456. Perry, J., & Felce, D. (2005). Correlation between subjective and objective measures of outcome in staffed community housing. Journal of Intellectual Disability Research, 49, 278–287. Petry, K., & Maes, B. (2006). Identifying expressions of pleasure and displeasure by persons with profound and multiple disabilities. Journal of Intellectual and Developmental Disability, 31, 28–38. Robertson, J., Emerson, E., Gregory, N., Hatton, C., Kessissoglou, S., Hallam, A., & Linehan, C. (2001). Social networks of people with mental retardation in residential settings. Mental Retardation, 39, 201–214. Rose, J., Willner, P., Shead, J., Jahoda, A., Gillespie, D., Townson, J., … Hood, K. (2013). Different factors influence self-reports and third-party reports of anger by adults with intellectual disabilities. Journal of Applied Research in Intellectual Disabilities, 26, 410–419. Schalock, R., & Keith, K. (1993). Quality of Life Questionnaire. Worthington, OH: IDS. Schalock, R. L., Brown, I., Brown, R., Cummins, R. A., Felce, D., Matikka, L., … Parmenter, T. (2002). Conceptualization, measurement, and application of quality of life for persons with intellectual disabilities: Report of an international panel of experts. Mental Retardation, 40, 457–470. Smith, E. (2006). Using secondary data in educational and social research. Maidenhead, United Kingdom: McGraw-Hill. Stancliffe, R. J. (2000). Proxy respondents and quality of life. Evaluation and Program Planning, 23, 89–93. Stancliffe, R., & Parmenter, T. (1999). The Choice Questionnaire: A scale to assess choices exercised by adults with intellectual disability. Journal of Intellectual and Developmental Disability, 24, 107–132. Stancliffe, R. J., Wilson, N. J., Bigby, C., Balandin, S., & Craig, D. (2013). Responsiveness to self-report questions about loneliness: A comparison of mainstream and intellectual-disability-specific instruments. Journal of Intellectual Disability Research. Advance online publication. doi:10.1111/jir.12024 Taylor, J. L., Novaco, R. W., Gillmer, B. T., Robertson, A., & Thorne, I. (2005). Individual cognitive-behavioural anger treatment for people with mild-borderline intellectual disabilities

and histories of aggression: A controlled trial. British Journal of Clinical Psychology, 44, 367– 382. Thomas, K., Bourke, J., Girdler, S., Bebbington, A., Jacoby, P., & Leonard, H. (2011). Variation over time in medical conditions and health service utilisation of children with Down syndrome. Journal of Pediatrics, 158, 194–200. Totsika, V., & Hastings, R. P. (2012). How can population cohort studies contribute to our understanding of low prevalence clinical disorders? The case of autism spectrum disorders Neuropsychiatry, 2, 87–91. Totsika, V., Hastings, R. P., Emerson, E., Berridge, D. M., & Lancaster, G. A. (2011). Behavior problems at five years of age and maternal mental health in autism and intellectual disability. Journal of Abnormal Child Psychology, 39, 1137–1147. Totsika, V., Hastings, R. P., Emerson, E., Berridge, D. M., Lancaster, G. A., & Vagenas, D. (in press). Is there a bidirectional relationship between maternal well-being and child problem behaviors? Longitudinal evidence in autism spectrum disorders. Autism Research. Totsika, V., Hastings, R. P., Emerson, E., Lancaster, G. A., & Berridge, D. M. (2011). A population-based investigation of behavioural and emotional problems and maternal mental health: Associations with autism spectrum disorder and intellectual disability. Journal of Child Psychology & Psychiatry and Allied Disciplines, 52, 91–99. Trzesniewski, K. H., Donnellan, M. B., & Lucas, R. E. (2011). Secondary data analysis: An introduction for psychologists. Washington, DC: American Psychological Association. United Nations. (2006). Convention on the rights of persons with disabilities. New York, NY: United Nations. Urbano, R. C., & Hodapp, R. M. (2007). Divorce in families of children with Down syndrome: A population-based study. American Journal on Mental Retardation, 112(4), 261–274. Vartanian, T. P. (2011). Secondary data analysis. Oxford, United Kingdom: Oxford University Press. Verdugo, M. A., Schalock, R. L., Keith, K. D., & Stancliffe, R. J. (2005). Quality of life and its measurement: Important principles and guidelines. Journal of Intellectual Disability Research, 49, 707–717.

Roger Stancliffe's contribution to this article was funded as part of a funding agreement between the University of Minnesota's Research and Training Center on Community Living and the University of Sydney.

Authors: Roger J. Stancliffe (e-mail: roger.stancliffe@sydney.edu.au), Centre for Disability Research and
Policy, University of Sydney, PO Box 170, Lidcombe, NSW 1825, Australia; Eric Emerson, Centre for Disability Research, Lancaster University, England; Centre for Disability Research and Policy, University of Sydney, Australia; and David Felce, Welsh Centre for Learning Disabilities, School of Medicine, Cardiff University, Wales.
