When you can't tell when it hurts: a preliminary algorithm to assess pain in patients who can't communicate

Shuang Wang, PhD1,*; Xiaoqian Jiang, PhD1,*; Zhanglong Ji1; Robert El-Kareh, MD, MS, MPH1,2; Jeeyae Choi, RN, DNSc3; Hyeoneui Kim, RN, MPH, PhD1

1 Division of Biomedical Informatics, University of California, San Diego, La Jolla, CA
2 Division of Hospital Medicine, University of California, San Diego, La Jolla, CA
3 College of Nursing, University of Wisconsin-Milwaukee, Milwaukee, WI
* These authors contributed equally to this work.

Abstract

Pain is a common and significant problem that is considered a high-priority area of care. Although many pain assessment scales can be applied to patients who can communicate, either verbally or non-verbally, pain assessment for minimally responsive patients is limited. In this preliminary work, we developed a novel approach for assessing pain in such patients using a principal component analysis (PCA)-based local detector. Our algorithm produces a single index indicating an increase in pain level from the unsynchronized, sparse, and noisy time series data collected in electronic flowsheets. Of the 8,032 patient cases collected, 53 cases that satisfied the data requirements for PCA were used in this experiment. Our preliminary results indicate the potential of this approach, yielding an average AUC of 0.76 across the 53 cases.

Introduction

Pain is common and is reported by more than half of hospitalized patients (1–3). In addition to increasing patient discomfort, unresolved pain has a significant financial impact. It can result in longer hospital stays, re-hospitalization and/or outpatient visits, and a decreased ability to function fully in daily life. The Institute of Medicine estimates the societal cost of pain at $560-$635 billion annually (4). As such, effective management of pain is a high-priority area of care (5) and one of the key factors that determine patient satisfaction (6,7).

Pain management starts with accurately assessing the severity, timing, and characteristics of the pain a patient is experiencing. Pain severity in communicative patients is usually assessed from the patients' self-report using standardized pain assessment scales such as the Numerical Rating Scale, Wong-Baker Faces Scale, Verbal Rating Scale, and Visual Analogue Scale (8–11). Sedated or non-verbal patients require different approaches to pain assessment. The Behavioral Pain Scale (BPS), Critical-Care Pain Observation Tool (CPOT), and Non-Verbal Pain Scale (NVPS) are examples of pain assessment tools frequently applied to this patient group. However, studies report mixed findings on the validity and reliability of behavioral pain rating scales in this population (12–14). Finally, all of these scales require additional time from busy nurses to perform assessments beyond the physiologic signals that are already routinely collected (e.g., heart rate, blood pressure).

Family and friends of patients with severe brain injuries frequently ask whether the patients are in pain (or can feel pain) (15). However, assessing pain in minimally conscious or severely brain-injured patients is especially challenging (16) because no reliable assessment method is available. Patients in a minimally conscious state may show reproducible but inconsistent and hard-to-detect signs of consciousness (17).
Although they cannot reliably communicate pain either verbally or non-verbally, they may show brain activity in response to unpleasant stimulation that is similar to that of healthy people, suggesting a potential ability to perceive pain (18,19). The Nociception Coma Scale (NCS) was developed to assess nociception in non-communicative patients recovering from coma (14). Although this scale showed a high level of concurrent validity when compared with other behavioral observational pain scales, its authors emphasize that it is not designed to assess pain level (14). To address these limitations in minimally conscious patients, we explored the possibility of assessing pain using physical signals routinely captured and documented in electronic flowsheets.

Methods

Our task is to use electronic data to predict the likelihood that a patient will suffer from increased pain. The inputs are unsynchronized time series observations (e.g., lab tests) and readings from sensors (e.g., blood pressure and heart rate).


This is a very challenging task because: 1) the observed time series are noisy, sparse, high-dimensional, and unsynchronized; 2) patients perceive pain levels differently; and 3) patients undergo different treatments for pain. All of these factors influence individual patients' reports of their pain scores (i.e., 0-10), which makes it very difficult to apply traditional supervised learning tools, such as logistic regression (21) or support vector machines, that are designed to learn patterns that hold consistently across a cohort. We therefore developed a novel mechanism for personalized pain estimation that looks for state changes in a reduced-dimension space of all observations.

Background

We begin with an introduction to the basics of Singular Value Decomposition (SVD), a technique that we use to synthesize high-dimensional time series data into low-dimensional representations (20).

Singular Value Decomposition: SVD is a factorization of a real-valued matrix. Any real-valued matrix $A_{n \times p}$ (suppose $p < n$) can be expressed as the product of three matrices, $A_{n \times p} = U_{n \times p} D_{p \times p} V_{p \times p}'$, where $D_{p \times p}$ is a positive diagonal matrix, $V_{p \times p}$ is a unitary matrix (i.e., $VV' = I$), and $U_{n \times p}$ is part of an $n \times n$ unitary matrix (all columns of $U$ have length 1 and any two columns are orthogonal to each other). There can be many SVD solutions $(U, D, V)$ for a matrix $A$ because the corresponding rows and columns of $U$, $D$, and $V$ can be reordered. To avoid ambiguity, we assume the diagonal components of $D$ are ranked by their value (the largest one is on the upper left).

Low-rank Approximation: SVD can be used to find a low-rank approximation to a matrix. Using the notation from the previous paragraph, the approximation problem is to find a rank-$k$ matrix $A_k$ that is closest to the original matrix $A$:

$$A_k = \arg\min_{B:\ \mathrm{rank}(B) \le k} \| A - B \|.$$

One can use any norm function $\|\cdot\|$. In practice, the F-norm and the Manhattan norm are the most commonly used ones¹, and we use the latter in this paper. Given the SVD, the $k$-rank approximation $A_k$ to matrix $A$ is $A_k = U D_k V'$, where $D_k$ is a diagonal matrix whose first $k$ diagonal components are the same as those of $D$ and whose remaining diagonal components are 0.

Model building

Each patient is measured by multiple sensors. Intuitively, if a patient is in a normal state, we expect a smooth pattern in the readings of these sensors; otherwise, there might be substantial fluctuations. Because the raw feature space is high-dimensional and noisy, we decided to look at "state" transitions in a subspace, and our goal is to detect "outliers" (in terms of projected residual errors). We hypothesized that these "outliers" are highly correlated with the physical condition of a patient and can be used to infer whether a patient might be suffering from increased pain. Given a data matrix $A_{n \times p}$ containing $p$-dimensional sensor readings at $n$ time ticks, we use a moving-window approach to identify "state" transitions (by evaluating the projected residual errors) in the $k$-rank subspace, where $k$ is a positive integer. In simple terms, we construct the local subspace for a short period of time and check whether it fits the observations of the consecutive time window well.

How to find the local subspace: We use the low-rank approximation (induced by SVD) to find the local subspace. We divide the entire data matrix $A_{n \times p}$ into continuous pieces $A^{(t)}_{n/T \times p}$ ($t = 1, \ldots, T$) over $T$ consecutive time periods. If the reading matrix is $A^{(t)}$ and its SVD matrices are $(U^{(t)}, D^{(t)}, V^{(t)})$, the $k$-rank approximation is

$$A^{(t)}_k = A^{(t)} V^{(t)} \begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix} V^{(t)\prime}.$$

Because $\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix} V^{(t)\prime}$ retains only the first $k$ rows of $V^{(t)\prime}$ (i.e., the first $k$ right singular vectors of $A^{(t)}$), each row of $A^{(t)}_k$ is a linear combination of these vectors, which generate the subspace for the local period $t$ (Figure 1).

¹ The F-norm and Manhattan norm of a matrix $C$ are defined as $\| C_{m \times n} \|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij}^2}$ and $\| C_{m \times n} \|_1 = \sum_{i=1}^{m} \sum_{j=1}^{n} |c_{ij}|$, respectively. Here $c_{ij}$ is the $(i, j)$-th entry of $C$.
Figure 1. Singular Value Decomposition (SVD) for the observations in a given period $A^{(t)}$.

How to measure state changes: Since the local subspace is generated by the first $k$ right singular vectors of $V^{(t)}$, the last $p - k$ right singular vectors (for convenience, we call them $v^{(t)}_{k+1}, \ldots, v^{(t)}_p$) form an orthogonal basis of the complementary space. Given a row vector $\alpha$, $\sum_{i=k+1}^{p} |\alpha\, v^{(t)}_i|$ is the Manhattan-norm residual of projecting $\alpha$ onto the space generated by $v^{(t)}_{k+1}, \ldots, v^{(t)}_p$, which measures the distance between $\alpha$ and the subspace generated by the first $k$ right singular vectors of $V^{(t)}$. To measure how well the subspace (induced by the first $k$ right singular vectors of $V^{(t)}$) fits the observations $A^{(t+1)}$ of the consecutive period, we only need to compute the residual error $\| A^{(t+1)} V^{(t)}_{k+1, \ldots, p} \|_1$, where $V^{(t)}_{k+1, \ldots, p}$ denotes the matrix formed by the last $p - k$ right singular vectors. If the error is large, the "state" from the previous time period does not model the current observations well, and therefore there might be a substantial change in the patient's physical condition (Figure 2).
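The moving-window computation described above can be sketched as follows (a minimal MATLAB sketch reflecting our reading of the method, not the authors' implementation; it assumes the flowsheet readings have already been aligned into a fully observed, z-scored n-by-p matrix, which in practice requires the preprocessing described in the Experiment section). A time tick whose score exceeds the physiologic transition threshold would be flagged as a predicted pain increase.

    function score = transition_scores(A, w, k)
    % A: n-by-p matrix of z-scored readings (n time ticks, p assessment items)
    % w: moving window size; k: dimension of the local subspace
    % score(t): Manhattan-norm residual of the window ending at tick t, projected
    %           onto the complement of the subspace learned from the previous window
    [n, ~] = size(A);
    score = nan(n, 1);
    for t = 2*w : w : n                        % consecutive, non-overlapping windows
        prev = A(t-2*w+1 : t-w, :);            % window used to learn the local subspace
        curr = A(t-w+1   : t,   :);            % next window to be explained
        [~, ~, V] = svd(prev);                 % full SVD: V is p-by-p
        Vres = V(:, k+1:end);                  % last p-k right singular vectors
        score(t) = sum(sum(abs(curr * Vres))); % projected residual error (Manhattan norm)
    end
    end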

Figure 2. Calculated projection residual.

Experiment

In this section, the study site and data collection are introduced first, followed by our experimental results. We ran our experiments on an Intel 2.7 GHz machine with 16 GB of RAM. The patient records were organized in a MySQL database, and the source code was developed in MATLAB and SQL. Before applying PCA, we normalized the data by subtracting the mean and dividing by the standard deviation (a short sketch of this step follows at the end of this subsection).

Study site and data collection

This study used inpatient data collected at the University of California, San Diego Medical Center (UCSD-MC). Most of the documentation at UCSD-MC is done in a commercial electronic health record (EHR) system, but a few in-house electronic documentation modules currently co-exist with it. Upon approval by the Institutional Review Board (IRB), we retrieved the inpatient EHR data of adult patients diagnosed with cancer and hospitalized before 2013. For patients who had been hospitalized multiple times, we used the data from the latest hospitalization, retrieving the entire medication administration and flowsheet data for that hospitalization. In total, we collected data on 8,032 patients, comprising 56,155 medication administration records and 946,149 flowsheet entries covering 2,145 assessment items. To meet the data requirements of PCA, we included only assessment items documented as continuous numerical values (N=78) with sufficient frequency (i.e., covering at least 90% of the pain period). The process of selecting patient cases for this study is illustrated in Figure 3.
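The z-score normalization mentioned at the start of this section can be written as follows (a sketch with a hypothetical n-by-p matrix A of flowsheet readings; the element-wise operations assume MATLAB R2016b or later for implicit expansion).

    % Column-wise z-scoring: subtract each item's mean and divide by its standard
    % deviation, ignoring missing entries (illustrative, not the study's code).
    mu = mean(A, 1, 'omitnan');
    sd = std(A, 0, 1, 'omitnan');
    Z  = (A - mu) ./ sd;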


[Figure 3 flowchart: 8,032 patients in the database; 5,768 patients have pain records; 5,447 patients have records of the 78 candidate assessment items; 61 patients have more than 21 pain records among the 78 candidate assessment items. For each patient, the time series of each assessment item covering at least 90% of the pain period was selected.]
Figure 3. Workflow of the record selection and feature selection procedures.

After this filtering, 61 patients with at least 21 pain level observations remained for the analysis. Table 1 shows the characteristics of all patients sampled and of the patients included in the analysis. Of the 61 patients, 43 were treated in an ICU; their length of stay in the ICU varied from less than 1 day to 73 days. The average Glasgow Coma Scale (GCS) score of these patients was 12 (s.d. = 3). None of the 61 patients had all 73 assessment items available; the number of available data items per patient varied from p = 3 to p = 28, and the number of documented data points varied from n = 21 to n = 1,573.

Table 1. Demographics and length of stay of the patients sampled for this study.

                                   All sampled (N=8,032)     Included in the analysis (N=61)
Mean age                           61.7                      66.33
Percent female                     54.68%                    51.67%
Average length of stay (s.d.)      1.3 days (s.d. 8.65)      9.91 days (s.d. 14.73)
Results

After excluding an additional 7 patient cases that did not have a sufficient level of variability in the data, we modeled the residual errors of the physiologic data in relation to pain levels for 53 patients. The proposed method has three main tunable parameters: the size of the reduced dimension k after PCA, the moving window size, and the physiologic transition threshold. The reduced dimension was selected so that the cumulative energy of the retained dimensions covered at least 90% of the energy of the original data; k = 2 satisfied this criterion. In this experiment, the moving window sizes vary from 3 to 8. Since the records of different patients have different time scales, we may select different window sizes for different patients to best capture their physiologic transitions. The physiologic transition threshold and its selection are discussed briefly with the experimental results below.²

The goal of this study is to investigate whether the physiologic transitions obtained through the proposed moving-window PCA method can be used to predict changes in pain level for each individual patient. We first examined the Receiver Operating Characteristic (ROC) curve for each model and calculated the area under the curve (AUC) by varying the physiologic transition threshold. Forty-seven of the 53 cases (89%) showed an AUC greater than 0.5, and the average AUC of the 53 cases was 0.76 (s.d. = 0.20). As examples, Figure 4 presents 9 ROC curves along with a box plot of the distribution of AUC values over the 53 cases. Figure 4 (a)-(h) illustrate cases with AUC over 0.5, among which patient 21 shows the best AUC of 0.875; in Figure 4 (i), patient 10 shows a poor AUC of 0.488 compared with the other cases. Possible reasons for cases with poor AUC performance are discussed in the next section.
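The two evaluation steps described above, choosing k from the cumulative energy of the singular values and tracing an ROC curve by sweeping the physiologic transition threshold, can be sketched as follows (our illustration with hypothetical variable names: Awin is one local window of the data, score is the output of the detector above, and label is a logical vector marking the time ticks at which the recorded pain level increased).

    % Choose the reduced dimension: smallest k whose cumulative energy reaches 90%.
    s = svd(Awin);                                 % singular values of one local window
    energy = cumsum(s.^2) / sum(s.^2);
    k = find(energy >= 0.90, 1);

    % Trace an ROC curve by sweeping the physiologic transition threshold.
    % score and label are assumed to be aligned vectors with no missing entries.
    thr = sort(unique(score), 'descend');
    tpr = zeros(numel(thr), 1);
    fpr = zeros(numel(thr), 1);
    for i = 1:numel(thr)
        pred   = score >= thr(i);                  % predicted pain increase at this threshold
        tpr(i) = sum(pred & label)  / sum(label);
        fpr(i) = sum(pred & ~label) / sum(~label);
    end
    auc = trapz([0; fpr; 1], [0; tpr; 1]);         % area under the ROC curve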

² The complete set of result graphs and the list of the physiologic data items are available at http://dbmi-engine.ucsd.edu/pain/

[Figure 4 panels: ROC curves (true positive rate vs. false positive rate) for patient 2 (AUC = 0.572508, window size = 4), patient 3 (AUC = 0.568173, window size = 8), patient 4 (AUC = 0.587385, window size = 4), patient 5 (AUC = 0.663580, window size = 8), patient 16 (AUC = 0.743278, window size = 5), patient 21 (AUC = 0.875062, window size = 6), patient 22 (AUC = 0.746500, window size = 3), patient 25 (AUC = 0.764938, window size = 3), and patient 10 (AUC = 0.488455, window size = 3), plus a box plot of AUC across the 53 study cases (panel (j)).]

Figure 4. Sample ROC curves and the box plot of the 53 AUC values.
[Figure 5 panels: time series of the physiologic transition score, reported pain level, and pain medication administration for three study cases; recovered panel titles include "Physiologic Transition vs. Pain Level with window size = 8 and threshold = 0.013469 for patient 5" and "Physiologic Transition vs. Pain Level with window size = 3 and threshold = 0.007305 for patient 10".]

Figure 5. Transition of physiologic status vs. reported pain level (artificial timestamps are used for patient privacy).

Figure 5 shows the co-occurrence of changes in physiologic status (with a given physiologic threshold and moving window size) and changes in pain level. Note that we have not calibrated the size of the residual errors in the physiologic data to correspond to the pain level; the magnitude of the physiologic transition score is therefore not an indication of the severity of pain. In this paper, a physiologic transition score larger than a given threshold only indicates a prediction of pain. Calibrating the residual errors warrants future study. We also included the pain medication administrations in the graphs to see how pain treatment may have affected the reported pain scores.

Figure 5 (a) shows the overlaid time series of physiologic transition, pain level, and medication administration for patient 21, who had the best AUC performance among the experimental cases shown in Figure 4. In Figure 5 (a), the proposed method correctly predicts all recorded incidents of pain increase with the exception of the first two pain instances. Figure 5 (a) also shows that patient 21 had a regular pattern of taking pain relief medication. Figure 5 (b) illustrates the case of patient 5, in which physiologic transition scores were observed without a subsequent increase in pain score. In this study, we counted these predictions as false positives. It is worth mentioning, however, that although no increased pain score was reported between 2012-03-16 16:48 and 2012-03-21 19:12, a large number of medication administrations were observed during this period; the patient may therefore have been in pain, and the absence of pain scores may represent missing data. Finally, Figure 5 (c) shows the case of patient 10, for which the proposed algorithm produced poor AUC performance because of a large proportion of false positives during a period without medication administration or pain scores.

Discussion

In this exploratory work, we investigated whether changes in physiologic status, determined by calculating residual errors of physiologic time series data with a PCA-based local detector, can indicate changes in pain level. Our experiment produced mixed results, although many cases were predicted well: the average AUC was 0.76, and 47 of 53 cases had an AUC above 0.5. Upon closer examination of the results, we observed the following associations between the data and the results. First, cases with more frequently documented physiologic data seemed to show more consistent co-occurrence of changes in residual errors and changes in pain level. This is not surprising, because predictive models tend to generalize better with more observations. Second, extensive pain medication might mask changes in the physiologic data: the physiologic transition scores tended to be lower during periods in which pain medication administration was documented frequently, even when the medication did not effectively manage the pain.

We analyzed the 6 cases with an AUC below 0.5. Two of them had very sparse data (fewer than 5 pain and physiologic status change records in the graph), so their results were deemed less reliable. We reviewed the flowsheet data for the other 4 cases but were not able to find reasonable explanations for their less desirable results. However, we noticed an unusually high central venous pressure reading in one case, an erroneous record that contributed to the high residual error calculated at that time point. This observation implies that an additional data cleaning step is needed before applying our PCA-based local detector.

This work has several limitations that warrant careful interpretation of the findings. First, the generalizability of our results is limited because this work was performed on a limited set of patient samples collected at a single medical center. Second, our PCA-based local detector currently indicates only changes in pain level; to be clinically useful, the algorithm needs to be refined to indicate the severity of pain. Finally, our ultimate goal is to develop a pain assessment method that can be applied to patients who cannot effectively react to pain either verbally or behaviorally, and this very motivation also makes the goal challenging to achieve. It remains unclear whether an algorithm created for predicting pain in communicative patients would generalize to non-communicative patients, and the absence of feasible ways to accurately assess pain in minimally responsive patients makes it difficult to validate the pain levels predicted through this approach. Validation may require more advanced neurologic or physiologic monitoring techniques.

Despite these limitations, we feel that this work holds promise for clinical settings. Potential benefits include improving both the efficiency and accuracy of pain assessment for non-verbal patients. Many behavioral observational scales are used to assess pain in non-verbal patients, but the increased nursing time required to perform these assessments, along with the mixed results on their reliability (12–14), limits their utility. A faster, more accurate, and objective method for pain assessment would clearly be welcome in this patient population.


Conclusion

We developed a novel approach to assess pain in patients using a PCA-based local detector. The algorithm takes unsynchronized, sparse, and noisy time series data and produces a single index to indicate the pain level. Our algorithm can produce a personalized ROC curve (prediction vs. self-reported pain score) for each patient under monitoring, which provides a quantified way to assess the efficacy of this approach. Although our study is still preliminary, the results show promise (average AUC of 0.76), justifying further investigation along this line.

Acknowledgment

This work is supported in part by grants U54HL108460 (NIH/NHLBI), K99LM011392 (NIH), and R01HS19913 (AHRQ).

References

1. Abbott FV, Gray-Donald K, Sewitch MJ, Johnston CC, Edgar L, Jeans ME. The prevalence of pain in hospitalized patients and resolution over six months. Pain [Internet]. 1992 Jul [cited 2013 Mar 13];50(1):15–28. Available from: http://www.ncbi.nlm.nih.gov/pubmed/1513602
2. Whelan CT, Jin L, Meltzer D. Pain and satisfaction with pain control in hospitalized medical patients: no such thing as low risk. Archives of internal medicine [Internet]. 2004 Jan 26 [cited 2013 Mar 13];164(2):175–80. Available from: http://www.ncbi.nlm.nih.gov/pubmed/14744841
3. Strohbuecker B, Mayer H, Evers GCM, Sabatowski R. Pain prevalence in hospitalized patients in a German university teaching hospital. Journal of pain and symptom management [Internet]. 2005 May [cited 2013 Mar 13];29(5):498–506. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15904752
4. Committee on Advancing Pain Research, Care, and Education. Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research [Internet]. Washington DC; 2011. Available from: http://books.nap.edu/openbook.php?record_id=13172&page=1
5. The fact about pain management. The Joint Commission; 2001.
6. Innis J, Bikaunieks N, Petryshen P, Zellermeyer V, Ciccarelli L. Patient satisfaction and pain management: an educational approach. Journal of nursing care quality [Internet]. [cited 2013 Mar 13];19(4):322–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15535537
7. Hanna MN, González-Fernández M, Barrett AD, Williams KA, Pronovost P. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? American journal of medical quality: the official journal of the American College of Medical Quality [Internet]. [cited 2013 Mar 13];27(5):411–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22345130
8. Huskisson EC. Measurement of pain. The Journal of rheumatology [Internet]. [cited 2013 Mar 14];9(5):768–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/6184474
9. Wong-Baker Faces foundation [Internet]. [cited 2013 Feb 11]. Available from: http://www.wongbakerfaces.org/
10. Hjermstad MJ, Fayers PM, Haugen DF, Caraceni A, Hanks GW, Loge JH, et al. Studies comparing Numerical Rating Scales, Verbal Rating Scales, and Visual Analogue Scales for assessment of pain intensity in adults: a systematic literature review. Journal of pain and symptom management [Internet]. 2011 Jun [cited 2013 Mar 11];41(6):1073–93. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21621130
11. Hartrick CT, Kovan JP, Shapiro S. The numeric rating scale for clinical pain measurement: a ratio measure? Pain practice: the official journal of World Institute of Pain [Internet]. 2003 Dec [cited 2013 Mar 14];3(4):310–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17166126
12. Cade CH. Clinical tools for the assessment of pain in sedated critically ill adults. Nursing in critical care [Internet]. [cited 2013 Mar 14];13(6):288–97. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19128312
13. Persson K, Ostman M. The Swedish version of the PACU-Behavioural Pain Rating Scale: a reliable method of assessing postoperative pain? Scandinavian journal of caring sciences [Internet]. 2004 Sep [cited 2013 Mar 14];18(3):304–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15355525
14. Schnakers C, Chatelle C, Vanhaudenhuyse A, Majerus S, Ledoux D, Boly M, et al. The Nociception Coma Scale: a new tool to assess nociception in disorders of consciousness. Pain [Internet]. 2010 Feb [cited 2013 Feb 27];148(2):215–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19854576
15. Casey KL. Pain and consciousness at the bedside. Pain [Internet]. 2010 Feb [cited 2013 Mar 14];148(2):182–3. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19942350
16. Mazzocato C, Michel-Nemitz J, Anwar D, Michel P. The last days of dying stroke patients referred to a palliative care consult team in an acute hospital. European journal of neurology: the official journal of the European Federation of Neurological Societies [Internet]. 2010 Jan [cited 2013 Mar 14];17(1):73–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19614968
17. Giacino JT, Ashwal S, Childs N, Cranford R, Jennett B, Katz DI, et al. The minimally conscious state: definition and diagnostic criteria. Neurology [Internet]. 2002 Feb 12 [cited 2013 Mar 14];58(3):349–53. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11839831
18. Boly M, Faymonville M-E, Schnakers C, Peigneux P, Lambermont B, Phillips C, et al. Perception of pain in the minimally conscious state with PET activation: an observational study. Lancet neurology [Internet]. 2008 Nov [cited 2013 Mar 4];7(11):1013–20. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18835749
19. Laureys S, Faymonville ME, Peigneux P, Damas P, Lambermont B, Del Fiore G, et al. Cortical processing of noxious somatosensory stimuli in the persistent vegetative state. NeuroImage [Internet]. 2002 Oct [cited 2013 Mar 14];17(2):732–41. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12377148
20. Jolliffe I. Principal Component Analysis. 2005.
21. Wang S, Jiang X, Wu Y, Cui L, Cheng S, Ohno-Machado L. EXpectation Propagation LOgistic REgRession (EXPLORER): Distributed Privacy-Preserving Online Model Learning. Journal of biomedical informatics [Internet]. [cited 2013 Mar 14];46(3):480–496. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23562651
