Patient Education and Counseling 97 (2014) 396–402


Assessment

The health competence measurement tool (HCMT): Developing a new scale to measure self-rated "health competence"

Lawrence Mbuagbaw (a,b,c,*), Renee Cecile Bonono Momnougui (c), Lehana Thabane (a,b,d,e,f), Pierre Ongolo-Zogo (c,g)

a Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
b Biostatistics Unit, Father Sean O'Sullivan Research Centre, St Joseph's Healthcare – Hamilton, ON, Canada
c Centre for the Development of Best Practices in Health, Yaoundé Central Hospital, Yaoundé, Cameroon
d Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
e Centre for Evaluation of Medicines, St Joseph's Healthcare – Hamilton, ON, Canada
f Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
g Faculty of Medicine and Biomedical Sciences, University of Yaoundé 1, Yaoundé, Cameroon

ARTICLE INFO

Article history: Received 12 April 2014; received in revised form 16 September 2014; accepted 22 September 2014.

Keywords: Health competence; Cameroon; Scale; Measurement; Literacy

ABSTRACT

Objectives: To develop and test a tool for measuring health competence.

Methods: To measure this attribute, we used a sequential exploratory mixed methods design in rural and urban communities in Cameroon. In the qualitative phase, 67 clients constituted 10 focus groups to elicit themes related to health competence. In the quantitative phase, self-rated items were tested on 300 participants and on a random selection of 25 participants 2 weeks later.

Results: The internal consistency of the derived subscales ranged from 0.61 to 0.81. Older (F[45, 339.1] = 1.2; p = 0.031) and more educated (F[3, 22.6] = 2.1; p = 0.004) people were more likely to score higher on the scale. Interviewers also contributed to the variance (F[5, 37.6] = 3.6; p < 0.001). Test–retest reliability was 0.66. The final 15-item scale is made up of three subscales: knowledge of disease, how to stay in good health, and health information.

Conclusion: We present a new self-rated scale for health competence with good psychometric properties. It circumvents the need to be literate, but requires well-trained interviewers. We recommend that it be tested in other settings.

Practice implications: This tool should be used to appraise individual and community health education needs, with minor context-specific modifications.

© 2014 Elsevier Ireland Ltd. All rights reserved.

* Corresponding author at: Biostatistics Unit, Father Sean O'Sullivan Research Centre, St Joseph's Healthcare – Hamilton, ON, Canada. Tel.: +1 905 522 1155 x35929. E-mail addresses: [email protected], [email protected] (L. Mbuagbaw), [email protected] (R.C. Bonono Momnougui), [email protected] (L. Thabane), [email protected] (P. Ongolo-Zogo).

1. Introduction

For individuals and communities to make the right decisions about their health, they need to be able to access and use health information adequately [1], and this ability needs to be measurable. However, validated measures of such attributes are unavailable in many resource-limited settings [1]. A more commonly measured attribute is health literacy. Health literacy is "the ability to read, understand and act on health care information" [2]. It describes an individual's capacity to cope within a health system by successfully applying literacy skills in a health context. People with low health literacy tend to have poorer knowledge about the conditions affecting them, use preventive services less often, and have lower medication adherence rates, higher hospitalization rates and poorer self-reported health than those with high health literacy [3]. Poor health literacy is also associated with increased health care costs [4]. Interventions to improve health literacy have been shown to improve health outcomes related to breastfeeding, antiretroviral treatment, smoking, diet and physical activity [1].

Much research has been done on health literacy, but very little has been carried out in sub-Saharan Africa, even though this region has lower literacy rates, a higher disease burden and lower life expectancy than the rest of the world [5–7]. Many tools are used to measure health literacy, notably the Rapid Estimate of Adult Literacy in Medicine (REALM) [8], the Short Assessment of Health Literacy for Spanish Adults (SAHLSA-50) [9], the Test of Functional Health Literacy in Adults (TOFHLA) [10] and variations of these. However, these tools have some limitations. First, they are cumbersome and not adapted to low-resource settings, where literacy rates are lower and the disease burden is different. Second, these tools tend to depend largely on people's ability to read. As such, the validity of such tools is unclear in other regions of the world with different customs, languages and approaches to health care [1]. Many people may perform poorly on these health literacy tests (because they cannot read) even though they may be able to achieve optimal health outcomes by receiving, understanding and using oral, auditory and visual information related to their health.

Yet, given the importance of health literacy, it is of relevance to develop measures that can inform action and be used to measure progress [1]. An adequate measure should go beyond literacy skills to tap into the nature of information, the source of information and how it is used. Instead of focusing on literacy skills, we focus on knowledge and on the actions performed in obtaining and using specific types of health information to improve health outcomes. There is a need for a context-relevant, rapid and accurate tool to measure attributes linked to access, understanding and use of health information in the regions of the world with the highest disease burden and the lowest literacy rates. We therefore propose the term "health competence" as a more encompassing attribute, for which we describe the steps taken to develop a measurement tool.

The primary objective of this study was to develop a scale to measure patient "health competence" and to evaluate its psychometric properties. The psychometric properties of interest were content validity (the extent to which the scale incorporates the domain of interest), construct validity (the extent to which measurements on the scale correspond to theoretical constructs) and test–retest reliability (the degree to which the scores on the scale can be replicated from one test to another) [11].


2. Methods

2.1. Ethics

This study was approved by the Institutional Review Boards of the Yaoundé Central Hospital, Yaoundé, Cameroon (119L/MINSANTE/SG/DHCY/Stages) and the Mfou District Health Service, Center province, Cameroon (42/MSP/SG/DRSPC/SSDM/BAAF).

2.2. Study setting

Yaoundé is the capital city of Cameroon, a central African low-income country. The Yaoundé Central Hospital is one of three main referral centers in the Center province. It has a capacity of 381 beds and is staffed by 95 doctors and 270 nurses [12]. Half of the participants were recruited from the Yaoundé Central Hospital and the surrounding communities. Approximately 2.5 million people live in Yaoundé. The rest of the participants were recruited from the rural village of Mfou, located 25 km from Yaoundé. The approximately 70,000 inhabitants of Mfou receive medical care from the Mfou District Hospital.

2.3. Study design

We conducted an exploratory sequential mixed-methods study. The exploratory sequential design is a two-phase design that starts with an initial qualitative phase, which is followed by a quantitative phase [13]. It is the method of choice for developing measurement tools in which the variables that will be used are unknown and there is no pre-existing guiding framework [13]. In the qualitative phase we used focus group discussions to elicit themes related to health competence and converted these themes into items for a structured questionnaire. In the quantitative phase we used structured interviews to test participants' responses to items on the scale.

2.4. Sampling and sample size

This study ran from 22 July to 9 August 2013. A different sampling strategy was employed for each phase of the study. In the qualitative phase, a purposeful sample of hospital clients was selected from the waiting rooms of the Yaoundé Central Hospital. Written or verbal consent was a prerequisite for enrolment. Consecutively enrolled participants were invited to a separate room for focus group discussions (FGDs). We targeted 6–10 individuals per group, and sampling was stopped once no new themes emerged from the FGDs.

For the quantitative phase, our sample size and sampling strategy were determined based on recommendations for item analysis and the need to obtain a representative sample. A sample size between 100 and 200 is required to perform a comprehensive item analysis [14,15]. We therefore employed a stratified probabilistic sampling strategy, targeting 300 participants. Our target population was divided into four strata: urban hospital-based, urban community-based, rural hospital-based and rural community-based. During the period of the study, all consenting adults aged above 21 years residing in Yaoundé or Mfou were eligible to participate, provided they belonged to the appropriate stratum. Seventy-five consecutive individuals were interviewed in each of the four strata, to a total of 300. The hospital-based participants (rural and urban) were recruited from the Central Hospital in Yaoundé and the District Hospital in Mfou. For the community-based participants, trained interviewers were sent into the neighborhoods closest to the hospitals, accompanied by community representatives, and approached people on the streets, in their homes and in public places. Participants were interviewed individually.

2.5. Data collection

In the qualitative phase, a plain-clothed, experienced moderator and a scribe (equipped with writing material and an audio recorder) led the discussions. They used a discussion guide to ensure that the discussions did not stray from the phenomenon under study. At the end of each focus group discussion, the notes and audio recordings were transcribed and prepared for analysis. In the quantitative phase, six trained interviewers approached consenting individuals in the four study sites and conducted structured interviews. The interviewers asked the questions and filled in the forms themselves.

2.6. Item identification and selection

Qualitative research techniques were used to identify and select items. Item identification was done using FGDs with hospital clients to identify desirable attributes in competent patients. The list included items such as the ability to understand verbal instructions, radio and TV messages and posters; frequently used sources of health information; and knowledge about specific diseases. Participants were encouraged to suggest additional items they deemed necessary to cope within a health system. See Supplementary Appendix B for the focus group discussion guide.

2.6.1. Content validity

Item selection also involved a panel of 15 experts invited to rate the importance of the items generated from the FGDs, to evaluate their content value and to select those most


likely to measure the attribute in question. This panel of experts comprised physicians. The experts were requested to rank the items as "essential", "useful but not essential" or "not necessary".

2.6.2. Construct validity

Our hypothesis was that this capacity to cope or thrive within one's health system using health information, independent of the ability to read, is associated with self-perceived health, age, gender, level of education, profession and residence. A similar attribute, health literacy, is strongly correlated with self-perceived health and quality of life [16,17]. We hypothesized that we would find higher health competence scores in people with higher self-perceived health, people with higher levels of education, older people and urban dwellers.

2.6.3. Item testing and selection

Quantitative research methods were used to test the items with an orally administered questionnaire designed from the selected items. We assessed the time to complete a questionnaire and the psychometric properties of the scale. Developing the scale involved choosing the best combination of items that adequately discriminated between people in the sample. Item selection was guided by the screening from the expert panel, factor analysis and item-total correlation.

2.6.4. Test re-test reliability

During the initial test, participants were asked if they would be willing to take the test again. Two weeks after the initial test, 25 randomly selected consenting individuals (from those who agreed to take the test again) were interviewed again.
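To illustrate the re-test step above (and the reliability estimate later described in Section 2.7.3), the sketch below draws a random subsample of 25 participants and correlates their mean scale scores from the two administrations. This is only an illustration: the study analyses were run in SPSS, and the data frame, column names and simulated scores used here are assumptions.

```python
# Minimal sketch (hypothetical data): test-retest reliability as Pearson's r
# between mean scale scores from two administrations two weeks apart.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical mean HCMT scores (1-7) for the 300 first-round participants.
first_round = pd.DataFrame({
    "participant_id": np.arange(300),
    "score_t1": rng.uniform(1, 7, size=300),
})

# Randomly select 25 participants for the re-test, as in Section 2.6.4.
retest_ids = rng.choice(first_round["participant_id"].to_numpy(), size=25, replace=False)
retest = first_round[first_round["participant_id"].isin(retest_ids)].copy()

# Simulate a second administration two weeks later (placeholder scores only).
retest["score_t2"] = (retest["score_t1"] + rng.normal(0, 0.8, size=len(retest))).clip(1, 7)

# Test-retest reliability: Pearson correlation between the two administrations.
r, p = pearsonr(retest["score_t1"], retest["score_t2"])
print(f"test-retest r = {r:.2f}, p = {p:.3f}")

# Mean change between administrations (the paper reports a small, non-significant increase).
print(f"mean change = {(retest['score_t2'] - retest['score_t1']).mean():.2f}")
```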

2.7. Data analyses

Qualitative data were analyzed first by member checking (participants of the focus group discussions were presented with the transcribed data to confirm its accuracy) and then by coding the data, grouping the codes into categories and generating themes. The coding, categorization and theme generation were done in duplicate and compared. Consensus was achieved by routinely including any theme on which there was disagreement; in this way we ensured that no potentially relevant items were left out in the preliminary stages. Coding was done manually.

2.7.1. Content validity

Thematic variables were generated and presented to the panel of experts for final ranking and selection. Item endorsement by the experts was quantified by estimating a content validity ratio (CVR):

CVR = (n_e − N/2) / (N/2)

where n_e is the number of panelists indicating that the item is essential and N is the total number of panelists. The CVR indicates the items which at least half of the raters ranked as essential. A critical value of 0.506 for 15 raters was set as the threshold for selection of items beyond chance [18]. Only items with a CVR greater than this critical value of 0.506 were considered further.

Quantitative data were first analyzed using descriptive statistics; frequencies (percent) and means (standard deviation) are reported. Factor analyses were performed using varimax rotation to maximize the variance of the squared loadings of each factor across all variables; in this way we confirmed that the items in the identified domains belonged together. Factors were retained based on the elbow of a scree plot and eigenvalues greater than one [19]. Scree plots and explained-variance tables are reported. Domains (subscales) were composed of items with factor loadings greater than 0.5.

Internal consistency within each subscale was estimated using Cronbach's alpha [20]. Low values of alpha (<0.7) suggest that the items do not consistently measure the same construct, whereas very high values (>0.8) imply that the items on the scale are highly correlated with each other and may not all be necessary [21]. We also estimated the item-total correlation to identify items which did not correlate with the scale overall; items with values of 0.2 or less were removed from the final version [21,22].

2.7.2. Construct validity

Ratings on the scale were compared to ratings of self-perceived health to assess construct validity [23]. We used mixed-model analysis of covariance (ANCOVA) to determine the combined effect of covariates on the score. Age, gender, level of education and profession were entered as fixed effects, while residence and interviewer were entered as random factors.

2.7.3. Test re-test reliability

In a smaller sample of participants (n = 25) we repeated the assessment after 2 weeks to determine test–retest reliability. The correlation coefficient between the scores from both tests was estimated; r and p-values are reported. The score on the scale was determined by calculating the average score across all items. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) Version 20.0 (SPSS, Inc., 2009, Chicago, IL, USA).
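To make the item-selection computations above concrete, the sketch below implements Lawshe's CVR, Cronbach's alpha and the corrected item-total correlation. It is a minimal illustration under assumed inputs (the analyses in the paper were run in SPSS); the response matrix is simulated, and only the CVR formula and the 0.506 critical value for 15 raters come from the text.

```python
# Minimal sketch of the item-selection statistics described in Section 2.7
# (illustration only; the study's analyses were performed in SPSS).
import numpy as np

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Example: 12 of 15 experts rate an item "essential".
cvr = content_validity_ratio(12, 15)
keep_item = cvr > 0.506  # critical value for 15 raters (Wilson et al. [18])
print(f"CVR = {cvr:.3f}, retained: {keep_item}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical responses: 300 participants x 18 items scored 1-7.
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(300, 18))

alpha = cronbach_alpha(responses)
item_total = corrected_item_total(responses)
flagged = np.where(item_total <= 0.2)[0]  # items that would be removed from the final version
print(f"alpha = {alpha:.2f}; items flagged for removal: {flagged}")
# With real (correlated) responses, only poorly fitting items fall below the 0.2 threshold.
```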

3. Results

3.1. Item generation

Ten focus groups were constituted, with an average of 7 participants per group. Sixty-seven participants were included overall, of whom 61.2% were female. The mean age was 41.3 years (SD = 13.9). Thirty-four (50.7%) had a secondary education and 19 (28.4%) had a university education; the rest had either primary or no formal education. A total of 64 items were generated and grouped into the following domains: (1) how to stay in good health, (2) appropriate sources of health information, (3) frequency of use of various sources of health information, (4) frequency of understanding of various sources of information, (5) relevant types of health information, (6) sources of information within medical facilities, (7) barriers to health information, (8) important diseases to know about and (9) level of knowledge of these diseases. A full list of items is reported in Supplementary Appendix C.

3.2. Item selection

After expert consultation, the list was reduced to 24 items by removing items with a CVR below the critical value of 0.506. Data from the CVR ratings are reported in Supplementary Appendix D. Data from 300 participants on the 24-item scale were used for factor analysis. The baseline characteristics of the 300 participants are reported in Table 1. After factor analysis, 11 components were identified; they explained 67.5% of the total variance (see Table 2). Fig. 1 is a scree plot showing the elbow at 11 components. After selecting components with factor loadings of 0.5 or higher, five domains emerged. The number of items and reliability (Cronbach's alpha) were: knowledge of diseases (9 items; 0.61; 95% CI 0.34–0.81), how to stay in good health (5 items; 0.81; 95% CI 0.77–0.84), sources of information (2 items; 0.81; 95% CI 0.77–0.84), sources of information within medical facilities (1 item) and relevant types of information (1 item).


Table 1
Socio-demographic characteristics of 300 participants involved in item testing.

Variable | Urban hospital | Rural hospital | Urban community | Rural community | Total
Age (years): mean (SD) (a) | 37.7 (12.79) | 34.9 (10.92) | 34.38 (10.13) | 38.2 (11.52) | 36.3 (11.45)
Gender: n (%) (a)
  Male | 36 (24.7) | 37 (25.3) | 29 (19.9) | 44 (30.1) | 146 (100.0)
  Female | 38 (24.8) | 39 (25.5) | 46 (30.1) | 30 (19.6) | 153 (100.0)
Level of education: n (%)
  None | 1 (100.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 1 (100.0)
  Primary | 9 (31.0) | 4 (13.8) | 5 (17.2) | 11 (37.9) | 29 (100.0)
  Secondary | 42 (32.6) | 30 (23.3) | 16 (12.4) | 41 (31.8) | 129 (100.0)
  University | 23 (16.3) | 42 (29.8) | 54 (38.3) | 22 (15.6) | 141 (100.0)
Occupation: n (%)
  Working | 35 (20.6) | 37 (21.8) | 43 (25.3) | 55 (32.4) | 170 (100.0)
  Unemployed | 21 (52.2) | 7 (17.1) | 8 (19.5) | 5 (12.2) | 41 (100.0)
  Retired | 6 (18.8) | 10 (31.2) | 8 (25.0) | 8 (25.0) | 32 (100.0)
  Student | 13 (22.8) | 22 (38.6) | 16 (28.1) | 6 (10.5) | 57 (100.0)
Interviewer: n (%)
  MN (c) | 0 (0.0) | 0 (0.0) | 25 (100.0) | 0 (0.0) | 25 (100.0)
  MV (a) | 0 (0.0) | 39 (52.0) | 0 (0.0) | 36 (48.0) | 75 (100.0)
  OF | 0 (0.0) | 0 (0.0) | 25 (100.0) | 0 (0.0) | 25 (100.0)
  PP (b) | 75 (100.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 75 (100.0)
  TN (c) | 0 (0.0) | 37 (49.3) | 0 (0.0) | 38 (50.7) | 75 (100.0)
  VT | 0 (0.0) | 0 (0.0) | 25 (100.0) | 0 (0.0) | 25 (100.0)

Note: SD = standard deviation. (a) 1 missing; (b) 2 missing; (c) 5 missing. Percentages are row percentages across the four sampling groups.

The last three subscales were all related to health information, so we decided to group them together; the corresponding reliability was 0.53 (4 items; 95% CI 0.44–0.61). The variance explained by each component is reported in Table 2. Overall reliability for the 18 items was 0.81 (95% CI 0.77–0.84). Reliability and variance could not be modified greatly by removing any of the items. Three items had an item-total correlation of less than 0.2 and were removed from the final version. The item-total statistics are reported in Table 3.

Table 2
Total variance explained for each component.

Component | Initial eigenvalue (total) | % of variance explained | Cumulative %
1 | 6.829 | 17.972 | 17.972
2 | 4.416 | 11.622 | 29.594
3 | 2.898 | 7.628 | 37.221
4 | 2.241 | 5.897 | 43.118
5 | 1.777 | 4.676 | 47.795
6 | 1.533 | 4.033 | 51.828
7 | 1.381 | 3.635 | 55.463
8 | 1.262 | 3.322 | 58.785
9 | 1.151 | 3.029 | 61.814
10 | 1.125 | 2.961 | 64.775
11 | 1.044 | 2.747 | 67.522

[Fig. 1. Scree plot showing eleven components.]
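As an illustration of the component-retention rules described in Sections 2.7.1 and 3.2 (eigenvalues greater than one, the elbow of the scree plot, varimax rotation, and factor loadings of 0.5 or more), here is a rough sketch using scikit-learn's varimax-rotated factor analysis. The response matrix is simulated and the procedure is only an approximation of the SPSS analysis reported in the paper.

```python
# Rough sketch of the factor-analytic item selection (Sections 2.7.1 and 3.2):
# Kaiser criterion (eigenvalues > 1), varimax rotation, and grouping items
# into domains by loadings >= 0.5. Illustration only, with hypothetical data.
import numpy as np
from sklearn.decomposition import FactorAnalysis  # rotation="varimax" needs scikit-learn >= 0.24

rng = np.random.default_rng(2)
responses = rng.integers(1, 8, size=(300, 24)).astype(float)  # hypothetical 24-item data

# 1. Eigenvalues of the correlation matrix (Kaiser criterion / scree inspection).
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_components = int((eigenvalues > 1.0).sum())  # the study retained 11 components
print("retained components:", n_components)
print("leading eigenvalues:", np.round(eigenvalues[:n_components], 3))

# 2. Factor analysis with varimax rotation.
fa = FactorAnalysis(n_components=n_components, rotation="varimax", random_state=0)
fa.fit(responses)
loadings = fa.components_.T  # shape: (n_items, n_components)

# 3. Assign each item to the component on which it loads at >= 0.5, if any.
#    (With real, correlated data the items cluster into the domains of Section 3.2;
#    with random data most loadings stay below the threshold.)
for item, row in enumerate(loadings, start=1):
    strong = np.where(np.abs(row) >= 0.5)[0]
    if strong.size:
        print(f"item {item} -> component {strong[0] + 1} (loading {row[strong[0]]:.2f})")
```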

3.3. Construct validity

The correlation between scores on this scale (15 items) and self-perceived health was r(287) = 0.12 (p = 0.037). The results of our mixed-model ANCOVA suggested that the variances of the differences between pairs were unequal (sphericity assumption violated; Mauchly's test: χ²(104) = 202.6; p < 0.001), so we applied the Greenhouse–Geisser correction. We found that age (F[45, 339.1] = 1.2; p = 0.031), level of education (F[3, 22.6] = 2.1; p = 0.004) and interviewer (F[5, 37.6] = 3.6; p < 0.001) played a significant role in the variability in scores. Older people (r(289) = 0.15; p = 0.009) and more educated people (r(289) = 0.23; p < 0.001) had higher scores.

Table 3
Item-total statistics.

Item | Scale mean if item deleted | Scale variance if item deleted | Corrected item-total correlation | Cronbach's alpha if item deleted
1 | 5.02 | 183.312 | 0.243 | 0.806
2 | 5.03 | 181.222 | 0.273 | 0.805
3 | 5.03 | 179.845 | 0.371 | 0.801
4 | 5.03 | 179.851 | 0.360 | 0.801
5 | 5.02 | 185.834 | 0.198 (a) | 0.807
6 | 5.10 | 170.300 | 0.440 | 0.795
7 | 5.10 | 173.961 | 0.362 | 0.800
8 | 5.03 | 185.760 | 0.168 (a) | 0.809
9 | 5.17 | 181.685 | 0.110 (a) | 0.821
10 | 5.05 | 173.693 | 0.463 | 0.795
11 | 5.13 | 159.680 | 0.550 | 0.787
12 | 5.13 | 158.432 | 0.589 | 0.784
13 | 5.06 | 173.265 | 0.432 | 0.796
14 | 5.11 | 173.549 | 0.359 | 0.800
15 | 5.14 | 167.048 | 0.477 | 0.793
16 | 5.17 | 160.858 | 0.508 | 0.790
17 | 5.13 | 161.864 | 0.533 | 0.788
18 | 5.08 | 167.821 | 0.547 | 0.789

(a) Removed from scale.
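The mixed-model ANCOVA reported above (fixed effects for age, gender, education and profession; random effects for residence and interviewer) was run in SPSS. A loosely analogous model could be sketched with statsmodels, here simplified to a single random intercept for interviewer; the data frame, column names and values are assumptions for illustration only.

```python
# Simplified sketch of the construct-validity model (Sections 2.7.2 and 3.3).
# The study used a mixed-model ANCOVA in SPSS with two random factors
# (residence and interviewer); here only interviewer is modelled as a
# random intercept, which is an approximation. Data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "score": rng.uniform(1, 7, size=n),  # mean HCMT score (1-7)
    "age": rng.integers(21, 80, size=n),
    "gender": rng.choice(["male", "female"], size=n),
    "education": rng.choice(["none", "primary", "secondary", "university"], size=n),
    "profession": rng.choice(["working", "unemployed", "retired", "student"], size=n),
    "interviewer": rng.choice(["MN", "MV", "OF", "PP", "TN", "VT"], size=n),
})

model = smf.mixedlm(
    "score ~ age + C(gender) + C(education) + C(profession)",
    data=df,
    groups=df["interviewer"],  # random intercept per interviewer
)
result = model.fit()
print(result.summary())
```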


Table 4
Summary table of psychometric properties.

Measure | Statistic | Value
Reliability
  Internal consistency for subscales
    Knowledge of diseases | Cronbach's alpha (95% CI) | 0.61 (0.34–0.81)
    How to stay in good health | Cronbach's alpha (95% CI) | 0.81 (0.77–0.84)
    Health information | Cronbach's alpha (95% CI) | 0.53 (0.44–0.61)
  Test–retest reliability | Pearson's correlation coefficient; p-value | 0.66; p < 0.001***
Validity
  Construct validity for various constructs
    Self-perceived health | Pearson's correlation coefficient; p-value | 0.12; p = 0.037*
    Age | ANCOVA; F test, df; p-value | F[45, 339.1] = 1.2; p = 0.031*
    Gender | ANCOVA; F test, df; p-value | F[1, 7.5] = 0.4; p = 0.863
    Level of education | ANCOVA; F test, df; p-value | F[3, 22.6] = 2.1; p = 0.004**
    Interviewer | ANCOVA; F test, df; p-value | F[5, 37.6] = 3.6; p < 0.001***
    Residence | ANCOVA; F test, df; p-value | F[1, 7.5] = 0.7; p = 0.611
    Profession | ANCOVA; F test, df; p-value | F[3, 22.6] = 1.3; p = 0.174

* p < 0.05; ** p < 0.01; *** p < 0.001.

Table 5
The health competence measurement tool (HCMT).

Subscale 1: knowledge of disease
(Each item is rated 1–7: 1 = very low; 2 = low; 3 = lower than average; 4 = average; 5 = higher than average; 6 = high; 7 = very high)
1. How would you rate your knowledge of malaria
2. How would you rate your knowledge of high blood pressure
3. How would you rate your knowledge of diabetes
4. How would you rate your knowledge of HIV/AIDS
5. How would you rate your knowledge of other STIs
6. How would you rate your knowledge of diarrhoeal diseases
7. How would you rate your knowledge of viral hepatitis
8. How would you rate your knowledge of typhoid fever
9. How would you rate your knowledge of cholera

Subscale 2: how to stay in good health
(Each statement is rated 1–7: 1 = strongly disagree; 2 = disagree; 3 = somewhat disagree; 4 = neither agree nor disagree; 5 = somewhat agree; 6 = agree; 7 = strongly agree)
To what extent do you agree/disagree with the following statements:
10. In order to stay in good health I should maintain personal hygiene
11. In order to stay in good health I should exercise
12. In order to stay in good health I should sleep well
13. In order to stay in good health I should maintain a healthy lifestyle

Subscale 3: health information
(Each item is rated 1–7: 1 = never; 2 = rarely; 3 = occasionally; 4 = sometimes; 5 = frequently; 6 = usually; 7 = every time)
Please respond to the following questions:
14. How often do you get health information from the radio
15. How often do you understand health information from the radio
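Scoring, as described in Section 2.7.3 and in the Discussion, is the unweighted mean of the 15 item responses, giving a value between 1 and 7. A minimal scoring sketch follows; the function name and example responses are hypothetical and only illustrate the averaging rule.

```python
# Minimal sketch: overall HCMT score as the unweighted mean of the 15 item
# responses (each rated 1-7), as recommended by the authors.
def hcmt_score(responses: list[float]) -> float:
    """Return the mean of 15 item responses; each response must be between 1 and 7."""
    if len(responses) != 15:
        raise ValueError("the HCMT has 15 items")
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("each item is rated on a 1-7 scale")
    return sum(responses) / len(responses)

# Example: a respondent giving mostly mid-range answers.
example = [4, 5, 3, 6, 4, 4, 5, 3, 4, 6, 6, 5, 6, 4, 3]
print(f"HCMT score = {hcmt_score(example):.2f}")  # value between 1 and 7
```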

3.4. Test re-test reliability

Twenty-five participants from Yaoundé took part in the re-test. Their mean age was 34.7 years (SD = 9.99) and 56% were male. More than half (57.7%) had a university education. Test–retest reliability for the 15 items was r(24) = 0.66 (p < 0.001) after 2 weeks. There was a mean increase of 0.25 (0.06–0.56; p = 0.11) points on the scale. The psychometric properties are summarized in Table 4. The first iteration of the Health Competence Measurement Tool (HCMT) is presented in Table 5 and Supplementary Appendix A.

4. Discussion and conclusions

4.1. Discussion

We have developed a new scale for measuring an important determinant of health. It is made up of three domains: knowledge of diseases, how to stay in good health and health information.


Apart from the robust qualitative and quantitative methods applied in deriving these items, there is some empirical evidence supporting the importance of these domains as determinants of health. For example, a good knowledge of HIV is associated with earlier initiation of antiretroviral therapy, which is beneficial [24]. Exercise, sleep and personal hygiene are some of the items identified in the second domain (how to stay in good health) and are important factors related to many chronic diseases [25–27]. Mass media have been shown to improve the use of health care services [28], and it is therefore important to know who can access and use the relevant forms of media to improve their health. Only radio communication stood out as an important source of health information. This may be because it is easier to access and use than other mass media (TV or print); in addition, its use does not require high levels of literacy. Given that the content and source of health information used may reflect unmet information needs or patient disempowerment [29], it may be worthwhile to explore why radio communication is a discriminating factor for health competence.

Given the above, this tool can be used for large-scale surveys or in clinical settings to identify populations and individuals that require more help in obtaining and using health information.


Health competence fulfills the recommended criteria for measuring the public's health [30] by being measurable (a scale was developed), valid (validation checks were used for both internal and external validity in the study design), sensitive to change (the score will change if access to, understanding and use of health information change) and reliable across populations in the study context (selection bias was avoided in the sampling procedure).

The complete scale (see Supplementary Appendix) comprises 15 items grouped into three domains. The last three domains on information sources were merged for simplicity. In order to maintain the psychometric properties of this scale, we recommend that it be administered orally by a trained interviewer and that each item carry the same weight. The overall score is the average score across items, and can take any value between 1 and 7.

Our hypothesis that scores on this scale would be correlated with self-perceived health was not proven. This low correlation may indicate that health competence taps into something quite different from health literacy, and warrants further investigation.

This study is not without limitations. First, our measure of content validity, Lawshe's CVR, has been criticized for errors in the estimation of the critical values; further analysis of these critical values has shown that the errors lead to a more conservative selection of items [18]. Second, accepted levels of test re-test reliability are usually no less than 0.7, but we found a reliability of 0.66 after 2 weeks. This may be because 2 weeks was too long a wait time. However, a small non-significant increase in mean score of 0.25 (95% CI 0.06 to 0.56; p = 0.112) was noted, probably due to the awareness raised by the previous test [21]. Further testing of this tool is warranted to determine after what period of time stable estimates of reliability can be obtained. Third, in our attempt to circumvent the need for literacy to complete this questionnaire, we administered the questions orally. As could be expected, interviewers contributed to the variability in the responses. Other factors such as age and level of education also contributed to the variability, confirming our hypotheses. We therefore recommend enhanced training of interviewers to ensure uniform wording of questions or, if at all possible, using only one interviewer. Finally, self-rated knowledge is not without problems of its own, such as social desirability bias [21]. However, other self-rated knowledge-of-disease scales have been developed and used for specific diseases [31–33]. The challenge for future iterations of this scale will be to develop single questions that tap into relevant knowledge about each disease.

Depending on the population in which this tool is employed, some diseases in the first domain (knowledge of diseases) can be removed (and perhaps replaced with context-relevant diseases) without significantly affecting the properties of the scale, assuming the diseases hold the same importance in the populations in which the tool is used. However, disease burden may vary greatly across regions, and it would be wise to investigate how best this tool can be used in other settings.

The strengths of this tool include the large sample size (n = 300) and the estimation of the effects of baseline covariates. It is also an orally administered tool and therefore bypasses limitations of literacy that may affect understanding and response. It is relatively short (15 items) and covers knowledge of many diseases.

4.2. Conclusion

We have developed a tool for measuring an attribute which we call "health competence", and encourage other researchers to continue the validation process by testing it on different populations in different settings.


4.3. Practice implications

This tool should be used to determine individual and community health education needs, especially in places with low levels of literacy. Based on the context and setting, we recommend modifying the disease types to include locally relevant conditions.

Acknowledgements

This research was funded by a Pears Foundation IMPH Alumni Seed-Grant Program to Promote Public Health Research, Hebrew University of Jerusalem-Hadassah Braun School of Public Health. Mary Lou Schmuck of the Program for Educational Research and Development (PERD), McMaster University, provided guidance with data analysis.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.pec.2014.09.013.

References

[1] Health literacy and the millennium development goals: United Nations Economic and Social Council (ECOSOC) regional meeting background paper (abstracted). J Health Commun 2010;15(Suppl. 2):211–23.
[2] Kickbusch IS. Health literacy: addressing the health and education divide. Health Promot Int 2001;16:289–97.
[3] Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004;36:588–94.
[4] Clark B. Using law to fight a silent epidemic: the role of health literacy in health care access, quality, and cost. Ann Health Law 2011;20:253–327. 5 p preceding i.
[5] UIS. Adult and youth literacy. UIS fact sheet No. 20; 2012.
[6] Salomon JA, Wang H, Freeman MK, Vos T, Flaxman AD, Lopez AD, et al. Healthy life expectancy for 187 countries, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012;380:2144–62.
[7] Wang H, Dwyer-Lindgren L, Lofgren KT, Rajaratnam JK, Marcus JR, Levin-Rector A, et al. Age-specific and sex-specific mortality in 187 countries, 1970–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012;380:2071–94.
[8] Davis TC, Long SW, Jackson RH, Mayeaux EJ, George RB, Murphy PW, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med 1993;25:391–5.
[9] Lee SY, Stucky BD, Lee JY, Rozier RG, Bender DE. Short Assessment of Health Literacy-Spanish and English: a comparable test of health literacy for Spanish and English speakers. Health Serv Res 2010;45:1105–20.
[10] Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: a new instrument for measuring patients' literacy skills. J Gen Intern Med 1995;10:537–41.
[11] Porta M. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.
[12] WHO. Yaoundé, Cameroon – HUG, Switzerland; 2010.
[13] Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Thousand Oaks: Sage; 2011.
[14] Crocker L, Algina J. Classical and modern test theory. New York: Holt, Rinehart & Winston; 1986.
[15] Guadagnoli E, Velicer WF. Relation of sample size to the stability of component patterns. Psychol Bull 1988;103:265–75.
[16] Song L, Mishel M, Bensen JT, Chen RC, Knafl GJ, Blackard B, et al. How does health literacy affect quality of life among men with newly diagnosed clinically localized prostate cancer? Findings from the North Carolina-Louisiana Prostate Cancer Project (PCaP). Cancer 2012;118:3842–51.
[17] Bailey SC, Wolf MS, Bennett CL. Health literacy and quality of life among prostate cancer patients. In: Genitourinary Cancers Symposium; 2008.
[18] Wilson FR, Pan W, Schumsky DA. Recalculation of the critical values for Lawshe's content validity ratio. Meas Eval Couns Dev 2012;45:197–210.
[19] Streiner DL. Figuring out factors: the use and misuse of factor analysis. Can J Psychiatry 1994;39:135–40.
[20] Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika 1951;16:297–334.
[21] Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. Oxford: Oxford University Press; 2008.


[22] Kline P. A handbook of test construction: introduction to psychometric design. London: Methuen; 1986.
[23] Lorig K, Stewart A, Ritter P, Gonzalez V, Laurent D, Lynch J. Outcome measures for health education and other health care interventions. Thousand Oaks, CA: SAGE Publications; 1996.
[24] Ndawinz JD, Chaix B, Koulla-Shiro S, Delaporte E, Okouda B, Abanda A, et al. Factors associated with late antiretroviral therapy initiation in Cameroon: a representative multilevel analysis. J Antimicrob Chemother 2013;68:1388–99.
[25] Sofi F, Abbate R, Gensini GF, Casini A. Accruing evidence on benefits of adherence to the Mediterranean diet on health: an updated systematic review and meta-analysis. Am J Clin Nutr 2010;92:1189–96.
[26] WHO. Global strategy on diet, physical activity and health; 2013.
[27] Hillman DR, Lack LC. Public health implications of sleep loss: the community burden. Med J Aust 2013;199:7–10.
[28] Grilli R, Ramsay C, Minozzi S. Mass media interventions: effects on health services utilisation. Cochrane Database Syst Rev 2002;1:CD000389.
[29] Patel S, Dowse R. Understanding the medicines information-seeking behaviour and information needs of South African long-term patients with limited literacy skills. Health Expect 2013. http://dx.doi.org/10.1111/hex.12131 [Epub ahead of print].
[30] Thacker SB, Stroup DF, Carande-Kulis V, Marks JS, Roy K, Gerberding JL. Measuring the public's health. Public Health Rep 2006;121:14–22.
[31] Jaworski BC, Carey MP. Development and psychometric evaluation of a self-administered questionnaire to measure knowledge of sexually transmitted diseases. AIDS Behav 2007;11:557–74.
[32] Lainscak M, Keber I. Validation of self assessment patient knowledge questionnaire for heart failure patients. Eur J Cardiovasc Nurs 2005;4:269–72.
[33] Swift JA, Glazebrook C, Macdonald I. Validation of a brief, reliable scale to measure knowledge about the health risks associated with obesity. Int J Obes (Lond) 2006;30:661–8.
