The Journal of Pain, Vol 16, No 4 (April), 2015: pp 299-305 Available online at www.jpain.org and www.sciencedirect.com

Critical Reviews

Quality of Pain Intensity Assessment Reporting: ACTTION Systematic Review and Recommendations

Shannon M. Smith,* Matthew Hunsinger,† Andrew McKeown,* Melissa Parkhurst,‡ Robert Allen,§ Stephen Kopko,¶ Yun Lu,‖ Hilary D. Wilson,** Laurie B. Burke,†† Paul Desjardins,‡‡ Michael P. McDermott,§§,¶¶,‖‖ Bob A. Rappaport,*** Dennis C. Turk,††† and Robert H. Dworkin*,¶¶,‖‖

Departments of *Anesthesiology, ‡Psychiatry, §§Biostatistics and Computational Biology, ¶¶Neurology, and ‖‖Center for Human Experimental Therapeutics, University of Rochester School of Medicine and Dentistry, Rochester, New York. †School of Professional Psychology, Pacific University, Hillsboro, Oregon. §Centrexion Corporation, Baltimore, Maryland. ¶Clinical Data Interchange Standards Consortium, Austin, Texas. ‖KAI Research, Inc, Rockville, Maryland. **Evidera, Seattle, Washington. ††LORA Group, LLC, Royal Oak, Maryland. ‡‡Desjardins Associates, Maplewood, New Jersey. ***Center for Drug Evaluation and Research, United States Food and Drug Administration, Silver Spring, Maryland. †††Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington.

Abstract: Pain intensity assessments are used widely in human pain research, and their transparent reporting is crucial to interpreting study results. In this systematic review, we examined reporting of human pain intensity assessments and related elements (eg, administration frequency, time period assessed, type of pain) in all empirical pain studies with adult participants in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) between January 2011 and July 2012. Of the 262 articles identified, close to one-quarter (24%) ambiguously reported the pain intensity assessment. Elements related to the pain intensity assessment were frequently not reported: 31% did not identify the time period participants were asked to rate, 43% failed to report the type of pain intensity rated, and 58% did not report the specific location or pain condition rated. No differences in reporting quality were observed between randomized clinical trials and experimental (eg, studies involving experimental manipulation without random group assignment and blinding) or observational studies. The ability to understand study results, and to compare results between studies, is compromised when pain intensity assessments are not fully reported. Recommendations are presented regarding key details for investigators to consider when conducting and reporting pain intensity assessments in human adults.

Perspective: This systematic review demonstrates that publications of pain research often incompletely report pain intensity assessments and their details (eg, administration frequency, type of pain). Failure to fully report details of pain intensity assessments creates ambiguity in interpreting research results. Recommendations are proposed to increase transparent reporting. © 2015 by the American Pain Society

Key words: Pain intensity, pain measurement, pain research.

Supplementary data accompanying this article are available online at www.jpain.org and www.sciencedirect.com. The views expressed in this article are those of the authors, and no official endorsement by the Food and Drug Administration (FDA) or the pharmaceutical and device companies that provided unrestricted grants to support the activities of the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership should be inferred. Financial support for this project was provided by the ACTTION public-private partnership, which has received research contracts, grants, or other revenue from the FDA, multiple pharmaceutical and device companies, and other sources.

Address reprint requests to Shannon M. Smith, PhD, Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, 601 Elmwood Ave, Box 604, Rochester, NY 14642. E-mail: shannon1_smith@urmc.rochester.edu

1526-5900/$36.00 © 2015 by the American Pain Society http://dx.doi.org/10.1016/j.jpain.2015.01.004

The Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) recommendations for 6 core outcome domains for chronic pain clinical trials19 and the measures to assess these domains3 have facilitated the standardization of outcome assessment in pain research. Although the vast majority of analgesic clinical research in humans assesses pain intensity, the methods of assessment vary among studies, which can affect the interpretation and meaningfulness of study results. For example, although there is good evidence that both current and recalled pain intensity ratings are valid,1,12,13 other research suggests that recalling pain over time involves cognitive processes that may affect the validity of such ratings.5,9,14,17 There is also debate about whether a single pain intensity assessment can provide adequate assay sensitivity compared to the average of frequently reported pain intensity scores,10,11,18 despite the greater reliability of averaged assessments.13 The assay sensitivity of pain intensity assessments also may be affected by the way in which the assessments are implemented.4 Given that there is no universal approach to assessing pain intensity in human adults, the onus is on investigators to be fully transparent about all aspects of their assessment method so that others can understand what elements of pain intensity are being measured and under what circumstances. This issue has been previously addressed in research with both adults and children,15,16 although it is unclear whether these efforts have led to improved reporting of pain intensity assessments since their publication almost a decade ago. When the method of assessment is not fully described, there is ambiguity regarding how key data in the research were collected, making it unclear to the reader what instructions were given to participants and what, if anything, participants were advised to consider when rating their pain intensity.
Additionally, failing to identify the pain intensity assessment used and elements such as the endpoints and anchors used, the type of pain intensity assessed (ie, average, worst, least, current), the specific pain location or condition participants considered, the time period rated, and the frequency of pain intensity assessments could reflect a lack of standardization in the assessment method. In order to appropriately interpret study results, whether positive or negative, the reader must understand the outcome variable(s) assessed. In an effort to simplify harmonization of clinical trial data and facilitate regulatory review, fundamental data elements for pain intensity assessments in analgesic clinical trials (ie, type of pain, location, time period assessed, assessment frequency) have been identified.2 The Standardized Analgesic Database for Research, Discovery, and Submissions (STANDARDS) working group from the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership with the U.S. Food and Drug Administration (http://www.acttion.org) undertook this systematic review to evaluate the extent to which these standard data elements and other elements of human adult pain intensity assessments (ie, the specific assessment method used, whether endpoints and anchors were defined) were reported by authors of more recent research, including clinical trials and nontrial studies, published in 3 major English-language pain journals (ie, European Journal of Pain, Journal of Pain, and Pain). We also tested the hypothesis that more complete reporting of the fundamental elements of pain intensity assessments would occur in clinical trial articles than in articles describing nontrial research, given that many clinical trials are subject to review by regulatory authorities.

Method

Study Selection

We selected articles reporting empirical research (ie, clinical trials, observational studies, and experimental studies involving manipulation without random group assignment and blinding) in noncognitively impaired human adults where ≥1 patient-reported measure of pain intensity was used (ie, visual analog scale [VAS]; numeric rating scale [NRS]; verbal response scale [VRS]; verbal descriptor scale [VDS]; any or all of the Brief Pain Inventory [BPI] intensity questions assessing average, least, worst, or current pain; short-form McGill Pain Questionnaire [SF-MPQ] VAS; or the SF-MPQ Present Pain Inventory [PPI] VRS). The second author (M.H.) searched all issues of 3 major English-language pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) published between January 2011 and July 2012 to identify articles. In order to ensure that all qualifying articles were identified, a second search was completed by a medical librarian using an electronic database (http://www.pubmed.gov; see Supplementary Appendix 1 for a description of the search strategy), and the second author (M.H.) carefully reviewed the results. Using these 2 methods, 262 articles that fulfilled the criteria were identified (see Supplementary Appendix 2).

Data Extraction

The first and last authors (S.M.S., R.H.D.) created an initial coding manual to evaluate descriptions of the type of pain intensity assessment, definitions for the anchors, frequency of assessment administration, time period to be rated, type of pain intensity rated (ie, average, least, worst, current), and specific location or pain condition to be rated (see Supplementary Appendix 3 for the coding manual). Nine training articles selected from issues of the 3 journals published in 2001 and 2002 were then coded by the first and second authors (S.M.S., M.H.), with modifications made to the coding manual as needed. When the coding manual was finalized, the 262 articles were randomized using a random number generator, and each article was coded twice (the first author [S.M.S.] coded all articles; the third and fourth authors [A.M., M.P.] each coded half of the articles). Coders were instructed to carefully read the Abstract, Methods, and Results sections of each article. In the event that an article contained more than 1 pain intensity assessment (eg, VAS and NRS), we determined which assessment to evaluate in the following way: 1) if the article described 1 pain intensity assessment as a primary outcome, reporting of that assessment was evaluated (if all 4 intensity items from the BPI were identified as the primary efficacy outcome, the average pain item was evaluated); 2) if no primary outcome was identified and all 4 intensity items from the BPI were used without any additional pain intensity assessments, reporting of the BPI average pain intensity item was evaluated; and 3) if multiple pain intensity assessments were used apart from those in step 1 or 2, we randomly selected a pain intensity assessment to be evaluated by the coders using a random number generator.
When an article reported assessments of more than 1 type of pain (eg, experimentally evoked and spontaneous pain) or painful area (eg, knee and hip pain), we selected the pain type/area using this approach: 1) if the article described a primary outcome as assessing 1 specific pain type/area, reporting of that type/area was evaluated (if the primary outcome was a composite of 2 or more pain types/areas, the reporting of all pain types/areas used to create the composite was evaluated), or 2) if no primary outcome including a pain type/area was identified, we randomly selected a pain type/area to be evaluated by the coders using a random number generator. When coding articles regarding reporting of pain intensity assessment endpoints or anchors, time period rated, and type of pain intensity rated, "not applicable (N/A)" was used when this information was contained within the original assessment. For example, the endpoints for the SF-MPQ PPI are specified within the assessment. Similarly, the least and worst pain intensity items from the BPI ask participants to rate these pains over the past 24 hours. Although it is possible that investigators altered the endpoints or anchors, time period rated, or type of pain intensity rated in these assessments, we assumed that the assessment was used as originally developed.

Statistical Analysis

Prespecified comparisons of pain intensity assessment reporting were made between clinical trials (ie, chronic pain clinical trials, acute pain clinical trials, experimental pain model clinical trials) and observational and experimental studies using Fisher's exact tests. The overall significance level was set at .05, with the Holm correction used to adjust for the 6 statistical analyses.7
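To illustrate how the Holm step-down correction operates on a family of 6 tests, the sketch below applies it to the 6 unadjusted P values ultimately reported in Table 3. This is an illustration only, not the authors' analysis code.

```python
# Holm step-down adjustment (a minimal stdlib sketch).
def holm_adjust(pvals):
    """Return Holm-adjusted P values (capped at 1.0), in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending ranks
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

raw = [0.51, 0.84, 0.73, 0.36, 0.12, 1.00]  # Table 3 comparisons, top to bottom
adj = holm_adjust(raw)
print(adj)
print(any(p < 0.05 for p in adj))  # whether any comparison survives adjustment
```

With these inputs the smallest adjusted value is 6 x .12 = .72, so none of the 6 comparisons approaches the .05 level after correction, consistent with the Results.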

Results

Coder Discrepancies

Out of a possible 1,834 (7 coding items × 262 articles) items coded, 347 discrepancies were observed, resulting in 81% agreement between the 2 coders. Of the 347 discrepancies, 52 were due to different interpretations of the coding item and 295 were due to oversights. Discrepancies due to oversights were resolved by the first author (S.M.S.), who reexamined the articles, whereas discrepancies due to differences in interpretation were resolved by discussion with the last author (R.H.D.).
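The agreement figure follows directly from the counts reported above:

```python
# Percent agreement between the two coders, from the counts in the Results.
items_coded = 7 * 262        # 7 coding items x 262 articles = 1,834
discrepancies = 52 + 295     # interpretation differences + oversights = 347
agreement = (items_coded - discrepancies) / items_coded
print(f"{agreement:.0%}")    # -> 81%
```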

Study Characteristics

Approximately half of the articles were published in Pain (52%), with the remaining articles distributed nearly equally between European Journal of Pain (26%) and Journal of Pain (23%; Table 1). There were more observational and experimental studies (73%: 34% observational, 15% experimental in people with pain, 23% experimental in pain-free volunteers) than clinical trials (27%: 17% chronic pain, 8% experimental pain model, 2% acute pain).

Pain Intensity Assessment Reporting

Nearly one-quarter of the articles ambiguously described the type of pain intensity assessment (eg, a "0-10 VAS" or an "11-point VAS," which fails to identify whether the assessment is a 10-cm/100-mm VAS or a 0-10 NRS; a "computerized VAS" without information regarding the length of the line; a "Likert scale"; Table 2). Anchors for the response options were not reported in 13% of the articles. The frequency of administration of the pain intensity assessment was generally well reported, with only 4% of the selected articles neglecting to provide this information. In approximately one-third of the articles (31%), the time period to be rated was not described (eg, past 24 hours), whereas 43% lacked information about the type of pain intensity participants were asked to rate (ie, average, least, worst, or current). More than half of the studies (58%) failed to report the specific pain condition or area of the body participants were asked to consider when making their pain intensity ratings. Comparisons between clinical trials and observational and experimental studies revealed no differences in the quality of pain intensity assessment reporting (Ps ≥ .12; Table 3).
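Each of the comparisons above is a Fisher's exact test on a 2x2 table of counts. As a sketch (in practice one would use a library routine such as scipy.stats.fisher_exact), the stdlib implementation below reproduces the test for the first comparison in Table 3 (type of assessment reported, clinical trials vs other studies), for which the paper reports P = .51.

```python
# Two-sided Fisher's exact test for a 2x2 table, via the hypergeometric
# distribution; standard library only. Illustration, not the authors' code.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """P value for the 2x2 table [[a, b], [c, d]] with fixed margins."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def prob(x):  # hypergeometric probability that the top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # Two-sided P: sum over all tables no more probable than the observed one
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Table 3, first row: Yes / Ambiguous for clinical trials (55, 14)
# and for other studies (145, 48).
p = fisher_exact_two_sided(55, 14, 145, 48)
print(round(p, 2))
```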

Table 1. Study Characteristics

Characteristic                                          Studies (N = 262), n (%)
Journal
  European Journal of Pain                              67 (26)
  Journal of Pain                                       60 (23)
  Pain                                                  135 (52)
Study type
  Chronic pain clinical trial                           47 (17)
  Acute pain clinical trial                             4 (2)
  Experimental pain model clinical trial                20 (8)
  Longitudinal or cross-sectional observational study   90 (34)
  Experimental study in people with pain                40 (15)
  Experimental study in pain-free volunteers            61 (23)
  Other                                                 2 (1)

Discussion

In this systematic review of recent pain research, the specific pain intensity assessments used in human adults, and the details of their implementation, were frequently described incompletely. In 24% of the selected articles, inadequate information was provided to accurately identify the specific pain intensity measure that was used. When authors describe the pain intensity assessment as a "0-10 VAS," the reader cannot determine whether the measure was a 10-cm/100-mm VAS or whether it was a 0 to 10 NRS that the author erroneously described as a VAS. Further, an ambiguous description such as "0-10 VAS" raises questions about whether the assessment is a modified version of the 10-cm/100-mm

Table 2. Pain Intensity Assessment Reporting

Pain Intensity Assessment Element                                 Studies (N = 262), n (%)
Type of pain intensity assessment
  NRS                                                             112 (43)
  VAS                                                             53 (20)
  VRS or VDS                                                      6 (2)
  BPI: 1 of 4 intensity items                                     19 (7)
  BPI: mean of 4 intensity items                                  2 (1)
  SF-MPQ VAS                                                      5 (2)
  SF-MPQ PPI VRS                                                  3 (1)
  Ambiguous                                                       62 (24)
Define VAS or NRS endpoints or all VRS anchors
  Yes                                                             199 (76)
  No                                                              34 (13)
  N/A (ie, BPI; SF-MPQ VAS or PPI)                                29 (11)
Report frequency of administration
  Yes                                                             251 (96)
  No                                                              11 (4)
Report time period to be rated (eg, past 12 h, past 24 h)
  Yes                                                             172 (66)
  No                                                              82 (31)
  N/A (ie, BPI worst, least, right now; SF-MPQ PPI)               8 (3)
Report type of pain intensity participants rated (ie, average, least, worst, current)
  Yes                                                             145 (55)
  No                                                              112 (43)
  N/A (ie, SF-MPQ PPI)                                            5 (2)
Report whether participants rated pain in a specific location or associated with a specific pain diagnosis or condition
  Yes                                                             109 (42)
  No                                                              153 (58)

Abbreviations: VDS, verbal descriptor scale; N/A, not applicable.

Table 3. Comparisons of Pain Intensity Assessment Reporting Between Clinical Trials and Other Study Types

Pain Intensity Assessment Element             Clinical Trials (N = 69), n (%)   Other Studies* (N = 193), n (%)   P Value
Report type of pain intensity assessment                                                                          .51
  Yes                                         55 (80)                           145 (75)
  Ambiguous                                   14 (20)                           48 (25)
Define VAS or NRS endpoints or all VRS anchors                                                                    .84
  Yes or N/A (ie, BPI; SF-MPQ VAS or PPI)     61 (88)                           167 (87)
  No                                          8 (12)                            26 (14)
Report frequency of administration                                                                                .73
  Yes                                         67 (97)                           184 (95)
  No                                          2 (3)                             9 (5)
Report time period to be rated (eg, past 12 h, past 24 h)                                                         .36
  Yes or N/A (ie, BPI worst, least, right now; SF-MPQ PPI)   44 (64)            136 (70)
  No                                          25 (36)                           57 (30)
Report type of pain intensity participants rated (ie, average, least, worst, current)                             .12
  Yes or N/A (ie, SF-MPQ PPI)                 34 (49)                           116 (60)
  No                                          35 (51)                           77 (40)
Report whether participants rated pain in a specific location or associated with a specific pain diagnosis or condition   1.00
  Yes                                         29 (42)                           80 (42)
  No                                          40 (58)                           113 (58)

Abbreviation: N/A, not applicable.
*Observational and experimental studies.

VAS that includes numeric labels of 0 and 10 at the endpoints. If such a modification was made, authors should clearly state both the line length and the labels that were added (eg, a 10-cm VAS with anchors of 0 and 10). Although this may appear to be a minor modification, prior research suggests that label variations change participants’ interpretation of the endpoints,22 which might require further study to evaluate the psychometric properties of the revised assessment.20 Most articles selected for this review reported the anchors for the pain intensity assessment used and the frequency of assessment administration. However, authors frequently failed to report the time period to be rated, the type of pain intensity assessed, and what pain condition or bodily area participants were asked to rate. Such ambiguity may simply indicate imprecise reporting. However, these ambiguities also may reflect poor methodological rigor in that no instructions were given to study participants regarding the time period, type of pain, or pain condition or bodily area to consider when rating pain intensity. When details of the pain intensity assessment are not specified, study participants must determine the circumstances under which to rate their pain intensity for themselves.22 This may introduce additional between-participant variation in the scores

Table 4. Recommendations for Reporting Pain Intensity Assessments

Report details regarding:
- Type of pain intensity assessment used (eg, VAS, NRS), with a description that clearly distinguishes the assessment from others (eg, for a VAS, describing the length of the line; for an NRS, reporting the range of possible ratings)
- Definitions of anchors, except in cases where a well-known assessment is used verbatim and the anchors are easily referenced (eg, anchors for the 4 BPI pain intensity NRS items)
- Frequency of administering the pain intensity assessment
- Time period to be rated, except in cases where a well-known assessment is used verbatim and the time period is easily referenced (eg, the SF-MPQ PPI assesses present pain)
- Type of pain intensity rated by participants (eg, average, usual, least, worst, current, present)
- The specific bodily area or pain condition to be rated; if none was specified, this should be stated

and additional within-participant variation due to fluctuations in a given participant’s interpretation over time, thereby complicating the ability to properly interpret treatment effects and other research results. Investigators, as well as clinicians, need to specify exactly what research participants and patients should consider when providing pain intensity ratings.1 It is surprising that there were no statistically significant differences between clinical trials and observational and experimental studies in reporting the elements of pain intensity assessments. Many clinical trial organizations (eg, Consolidated Standards of Reporting Trials [CONSORT], Outcome Measures in Rheumatology [OMERACT], IMMPACT) and regulatory agencies provide guidance on best practices in clinical trials, and thus clinical trials might be expected to exhibit better reporting of pain intensity assessments. However, other than 3 recent publications of which we are aware,2,15,16 little attention has been given to the need to transparently report all details associated with pain intensity assessment. In order to improve transparency and consistency regarding the collection and reporting of pain intensity data, researchers designing empirical studies could review the standardized database format proposed by ACTTION and the international Clinical Data Interchange Standards Consortium (CDISC)2 as it provides some guidelines that could improve reporting of relevant information. We further recommend that journal editors require authors of pain studies to clearly identify the pain intensity assessment used, along with all specific details pertaining to the assessment (Table 4). Requiring more transparent reporting of pain intensity assessments will allow the wider community of pain researchers, clinicians, and consumers to more easily interpret study results and to determine whether pain intensity results from different studies are directly comparable. 
Further, our recommendations can assist investigators in more carefully administering pain intensity assessments, in turn potentially increasing the validity and reproducibility of their results.

There are some limitations of this review. First, pain intensity reporting was explored across all research articles recently published in 3 major pain journals, and therefore the results may not apply to pain research published in other pain journals or in journals in other disciplines. Second, we gave authors the benefit of the doubt in cases where there was some ambiguity (eg, reporting that participants were asked to rate pain "overall" was coded as "average" pain intensity), and as such, our results may overestimate the completeness of reporting of pain intensity assessments and the related elements. In addition, we only evaluated what authors reported regarding the pain intensity assessment used (ie, we did not contact corresponding authors to determine whether additional details of the pain intensity assessment were left out of the article), and so this review may not fully represent the actual methodology used by investigators to assess pain intensity. Finally, there are additional details of the pain intensity assessments that are important to consider, but those details were not reviewed here. For example, we did not examine whether investigators discussed the reliability of the measures, which is also important in describing a study's pain intensity assessment. Detailed investigation of the quality of reporting of pain assessment reliability was beyond the scope of the present analysis. Further, we did not review the social context within which the assessments were administered. Studies of social contexts have demonstrated their effects on pain intensity ratings, depending on the relationship between the interaction partners, the sexes of the 2 individuals, and the level of empathy and concern expressed for the person with pain (eg, 6,8,21,23). Our focus, however, was on the fundamental characteristics of pain intensity assessments that are needed to fully describe the assessment that was used in the study.

Conclusions

Assessments of pain intensity in human adults are ubiquitous in pain research, and additional effort is needed to ensure they are described with enough information for a reader to know exactly what was done and to precisely replicate the methods. In order to help participants better understand the pain intensity assessment, investigators should specify in detail the contextual factors (ie, pain intensity scale, endpoints and anchors, type of pain, location, time period assessed, assessment frequency) to be considered when rating pain intensity. Additionally, journal editors and reviewers are encouraged to carefully review descriptions of pain intensity assessments in manuscripts of human pain research submitted for publication. Efforts to this end will strengthen pain research not only within studies but also when comparing methods and results between studies.


Acknowledgments

The preparation of this article was undertaken by the Standardized Analgesic Database for Research, Discovery, and Submissions (STANDARDS) Working Group, and the manuscript was reviewed and approved by the Executive Committee of the ACTTION public-private partnership with the U.S. Food and Drug Administration. We thank Sharon Hertz, MD, and Allison H. Lin, PharmD, PhD, from the U.S. Food and Drug Administration for their numerous contributions to ACTTION; Michele Shipley, MLS, from the University of Rochester Miner Library, for her assistance in conducting the PubMed search for relevant articles; and Rachel Kitt, BA/BS, from the University of Rochester Department of Anesthesiology, for assisting with the creation of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram.

Supplementary Data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.jpain.2015.01.004.

References

1. Broderick JE, Stone AA, Calvanese P, Schwartz JE, Turk DC: Recalled pain ratings: A complex and poorly defined task. J Pain 7:142-149, 2006

2. Dworkin RH, Allen R, Kopko S, Lu Y, Turk DC, Burke LB, Desjardins P, Etropolski M, Hewitt DJ, Jayawardena S, Lin AH, Malamut R, Michel D, Ottinger J, Peloso P, Pucino F, Rappaport BA, Skljarevski V, St. Peter D, Timinski S, West CR, Wilson HD: A standard database format for clinical trials of pain treatments: An ACTTION-CDISC initiative. Pain 154:11-14, 2013

3. Dworkin RH, Turk DC, Farrar JT, Haythornthwaite JA, Jensen MP, Katz NP, Kerns RD, Stucki G, Allen RR, Bellamy N, Carr DB, Chandler J, Cowan P, Dionne R, Galer BS, Hertz S, Jadad AR, Kramer LD, Manning DC, Martin S, McCormick CG, McDermott MP, McGrath P, Quessy S, Rappaport BA, Robbins W, Robinson JP, Rothman M, Royal MA, Simon L, Stauffer JW, Stein W, Tollett J, Wernicke J, Witter J: Core outcome measures for chronic pain clinical trials: IMMPACT recommendations. Pain 113:9-19, 2005

4. Dworkin RH, Turk DC, Peirce-Sandner S, Burke LB, Farrar JT, Gilron I, Jensen MP, Katz NP, Raja SN, Rappaport BA, Rowbotham MC, Backonja MM, Baron R, Bellamy N, Bhagwagar Z, Costello A, Cowan P, Fang WC, Hertz S, Jay GW, Junor R, Kerns RD, Kerwin R, Kopecky EA, Lissin D, Malamut R, Markman JD, McDermott MP, Munera C, Porter L, Rauschkolb C, Rice AS, Sampaio C, Skljarevski V, Sommerville K, Stacey BR, Steigerwald I, Tobias J, Trentacosti AM, Wasan AD, Wells GA, Williams J, Witter J, Ziegler D: Considerations for improving assay sensitivity in chronic pain clinical trials: IMMPACT recommendations. Pain 153:1148-1158, 2012

5. Feine JS, Lavigne GJ, Dao TTT, Morin C, Lund JP: Memories of chronic pain and perceptions of relief. Pain 77:137-141, 1998

6. Fillingim RB, Doleys DM, Edwards RR, Lowery D: Spousal responses are differentially associated with clinical variables in women and men with chronic pain. Clin J Pain 19:217-227, 2003

7. Holm S: A simple sequentially rejective multiple test procedure. Scand J Stat 6:65-70, 1979

8. Hurter S, Paloyelis Y, Williams AC de C, Fotopoulou A: Partners' empathy increases pain ratings: Effects of perceived empathy and attachment style on pain report and display. J Pain 15:934-944, 2014

9. Jamison RN, Sbrocco T, Parris WCV: The influence of physical and psychosocial factors on accuracy of memory for pain in chronic pain patients. Pain 37:289-294, 1989

10. Jensen MP, Hu X, Potts SL, Gould EM: Measuring outcomes in pain clinical trials: The importance of empirical support for measure selection. Clin J Pain 30:744-748, 2014

11. Jensen MP, Hu X, Potts SL, Gould EM: Single vs composite measures of pain intensity: Relative sensitivity for detecting treatment effects. Pain 154:534-538, 2013

12. Jensen MP, Karoly P: Self-report scales and procedures for assessing pain in adults, in Turk DC, Melzack R (eds): Handbook of Pain Assessment, 3rd ed. New York, NY, Guilford Press, 2013, pp 19-44

13. Jensen MP, McFarland CA: Increasing the reliability and validity of pain intensity measurement in chronic pain patients. Pain 55:195-203, 1993

14. Jensen MP, Turner LR, Turner JA, Romano JM: The use of multiple-item scales for pain intensity measurement in chronic pain patients. Pain 67:35-40, 1996

15. Litcher-Kelly L, Martino SA, Broderick JE, Stone AA: A systematic review of measures used to assess chronic musculoskeletal pain in clinical and randomized controlled clinical trials. J Pain 8:906-913, 2007

16. Stinson JN, Kavanagh T, Yamada J, Gill N, Stevens B: Systematic review of the psychometric properties, interpretability and feasibility of self-report pain intensity measures for use in clinical trials in children and adolescents. Pain 125:143-157, 2006

17. Stone AA, Broderick JE, Shiffman SS, Schwartz JE: Understanding recall of weekly pain from a momentary assessment perspective: Absolute agreement, between- and within-person consistency, and judged change in weekly pain. Pain 107:61-69, 2004

18. Stone AA, Schneider S, Broderick J, Schwartz J: Single-day pain assessments as clinical outcomes: Not so fast. Clin J Pain 30:739-743, 2014

19. Turk DC, Dworkin RH, Allen RR, Bellamy N, Brandenburg N, Carr DB, Cleeland C, Dionne R, Farrar JT, Galer BS, Hewitt DJ, Jadad AR, Katz NP, Kramer LD, Manning DC, McCormick CG, McDermott MP, McGrath P, Quessy S, Rappaport BA, Robinson JP, Royal MA, Simon L, Stauffer JW, Stein W, Tollett J, Witter J: Core outcome domains for chronic pain clinical trials: IMMPACT recommendations. Pain 106:337-345, 2003

20. US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health: Guidance for industry: patient-reported outcome measures: use in medical product development to support labeling claims, draft guidance. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM193282.pdf. Accessed November 1, 2013

21. Vigil JM, Pendleton P, Coulombe P, Vowles KE, Alcock J, Smith BW: Pain patients and who they live with: A correlational study of coresidence patterns and pain interference. Pain Res Manag 19:e109-e114, 2014

22. Williams AC de C, Davies HT, Chadury Y: Simple pain rating scales hide complex idiosyncratic meanings. Pain 85:457-463, 2000

23. Wilson SJ, Martire LM, Keefe FJ, Mogle JA, Stephens MA, Schulz R: Daily verbal and nonverbal expression of osteoarthritis pain and spouse responses. Pain 154:2045-2053, 2013
