This article was downloaded by: [University of Massachusetts, Amherst] On: 06 October 2014, At: 04:20 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of American College Health Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/vach20

Development of a Health Literacy Assessment for Young Adult College Students: A Pilot Study
Raquel Harper, PhD


School of English and Media Studies, Massey University–Albany Campus, Auckland, New Zealand. Accepted author version posted online: 21 Nov 2013. Published online: 23 Jan 2014.

To cite this article: Raquel Harper PhD (2014) Development of a Health Literacy Assessment for Young Adult College Students: A Pilot Study, Journal of American College Health, 62:2, 125-134, DOI: 10.1080/07448481.2013.865625 To link to this article: http://dx.doi.org/10.1080/07448481.2013.865625


JOURNAL OF AMERICAN COLLEGE HEALTH, VOL. 62, NO. 2

Major Article


Development of a Health Literacy Assessment for Young Adult College Students: A Pilot Study Raquel Harper, PhD

Abstract. Objective: The purpose of this study was to develop a comprehensive health literacy assessment tool for young adult college students. Participants: Participants were 144 undergraduate students. Methods: Two hundred and twenty-nine questions were developed, which were based on concepts identified by the US Department of Health and Human Services, the World Health Organization, and health communication scholars. Four health education experts reviewed this pool of items and helped select 87 questions for testing. Students completed an online assessment consisting of these 87 questions in June and October of 2012. Item response theory and goodness-of-fit values were used to help eliminate nonperforming questions. Results: Fifty-one questions were selected based on good item response theory discrimination parameter values. Conclusions: The instrument has 51 questions that look promising for measuring health literacy in college students, but needs additional testing with a larger student population to see how these questions continue to perform.

Although there are several variations in the definition of health literacy, there seems to be general agreement among health literacy scholars and organizations that health literacy includes the following important skills: comprehension, numeracy, media literacy, and computer literacy.4–7 Studies show that college students are presently graduating without the skills needed for understanding and utilizing medical information. Health numeracy is of particular concern with college graduates.8 The American Institutes for Research found that 20% of US college students who completed 4-year degrees and 30% of those with 2-year degrees had only the most basic quantitative literacy skills.9 Researchers at a southeastern university assessed the basic math skills of 411 students in 3 different classes—production, statistics, and quantitative analysis—and found that, regardless of major and year in college, the students had difficulty with about 30% of the basic math problems.10 Another study compared 595 US students with 231 Hungarian students and found that the US university students' math skills were significantly lower than those of Hungarian students taking similar courses.11 Comprehension skills may be problematic for some college graduates as well. The American Institutes for Research found that more than 75% of those with 2-year degrees and 50% of those with 4-year degrees scored below a proficient level of literacy (meaning they lack the skills to perform complex literacy tasks, such as summarizing arguments in newspaper articles)—which, if related to health material, might lead to misunderstandings about health care risks, prevention, and treatment opportunities.9 Computing and Internet skills are also of concern.
College students may be technologically literate in today's digitally advanced environment, but there is increasing evidence that students are less information savvy than earlier generations because they do not use technology effectively when conducting research.12–15 Young adults often have poor online research skills and little patience, which

Keywords: digital literacy, health information, health literacy, media literacy, numeracy

Most health literacy tools are focused specifically on the general adult population and have only been validated in the general adult population.1 No known assessment tools have been created specifically for measuring health literacy in young adults, and none specifically for young adult college students. This population likely has access to several resources and is in a better position than the general population to improve its health literacy. Limited health literacy is strongly associated with several socioeconomic indicators, including race, ethnicity, age, and education.2,3 Although most socioeconomic indicators are static, education level can change. College students have perhaps the best access to helpful resources for improving their skills related to health literacy.

Dr Harper is with the School of English and Media Studies at Massey University–Albany Campus in Auckland, New Zealand. Copyright © 2014 Taylor & Francis Group, LLC


are likely related. In fact, some researchers suggest that access and ability to effectively navigate are 2 very different issues.16 An assessment specifically created for measuring health literacy in 18- to 24-year-old college students could help inform both students and university staff of the areas in health literacy that need attention for optimal navigation and utilization of today's health care system. The current leading health literacy assessment tools are not appropriate for use with the college student population and do not assess the concept of health literacy in all its dimensions. This article describes the initial development of a comprehensive health literacy assessment tool for young adult college students, including question creation, expert review, and a pilot test of the instrument. The main goal was to develop an updated tool that includes questions with good item response theory (IRT) discrimination parameters.

METHODS

Development of Health Literacy Test Questions

Based on the main concepts identified by the US Department of Health and Human Services (HHS), the World Health Organization, and other health communication scholars,4–7 a new quantitative assessment tool should include comprehension of health materials, health numeracy, media literacy, and computer literacy. Comprehension is generally defined in other health literacy instruments as the ability to understand health-related texts in terms of reading ability. The comprehension section included 47 questions using the sentence verification technique (SVT) and 53 questions using the Cloze technique. Because education scholars recommend using more than 1 measurement technique to accurately measure reading comprehension,17,18 2 measurement techniques were used in the new instrument.
The Cloze technique has already been widely used in standard health literacy assessments, including the Test of Functional Health Literacy in Adults (TOFHLA), and research shows that the instrument has generated valid and reliable data.19–22 The technique involves having participants read a passage with every xth word deleted and then fill in each blank from a multiple-choice set of answers.23 The existing health literacy tools use plain language materials and therefore assess for a very basic level of reading comprehension. The new tool includes more advanced materials that reflect the information one might find on Web sites such as WebMD or the National Cancer Institute. The SVT is based on the idea that when people read passages, they form memory representations of those passages.24 The test measures whether participants have comprehended a passage by checking their accuracy in determining whether sentences mean or do not mean the same thing as sentences in the original passage. It has been demonstrated that this test can generate valid and reliable data, with internal consistency measures of .5 to .9.24 Research on the SVT suggests that it is a good measure of passage comprehension.24–27

Health numeracy is defined in this article as being able to understand numerical information presented in a health care context. A lack of numeracy is associated with the inability to make informed comparisons using numbers28; a lack of trust in information that contains numbers28,29; and being more influenced by a trusted source than by numerical information.30 The numeracy section included 30 initial questions. Four groups of questions were created based on the functional categories of numeracy developed by Golbeck and colleagues: basic, computational, analytical, and statistical.31 Numeracy has been measured in other instruments,32–34 but the TOFHLA is the only known test that includes numeracy in addition to other health literacy components. However, researchers believe that the TOFHLA was not designed to accurately measure numeracy in its entirety and is more focused on reading ability.35 Other known tests that measure health numeracy seem to focus on only 1 or 2 of the 4 functional categories. The framework developed by the National Association for Media Literacy Education (NAMLE) was used to create the 52 initial media literacy questions. No known health literacy assessment has incorporated media literacy. Studies show that increasing media literacy can reduce current smoking habits and reduce susceptibility to future smoking,36,37 reduce students' beliefs that most peers use tobacco,38 help reduce harmful health behaviors related to alcohol use,39 and help curb unhealthy behaviors related to obesity and eating disorders.40 NAMLE's framework consists of questions that are important indicators of critical media literacy (the ability to analyze information for credibility and quality)41 and has been used by several media literacy scholars.36,42 The questions cover 3 principal performance areas when analyzing media messages: (1) audience and authorship, (2) messages and meaning, and (3) representations and reality.

The questions included in the instrument were based on a range of health Web sites that included varying levels of credibility, quality, authorship, and accuracy. Digital literacy is also included. The HHS believes computer literacy is an essential component of health literacy.4 However, because most college students are expected to possess basic computer skills, results from such a test would likely be negligible. One aspect of computer literacy—digital literacy—still seemed appropriate for this population, however. Digital literacy is the ability to appropriately use digital tools to identify, access, manage, analyze, and synthesize digital resources, which ultimately helps construct new knowledge, create new media, and communicate with others.43,44 The first draft of digital literacy questions included 48 question items, divided into 3 categories: (1) a personal section inquiring about the participants' health information–seeking habits; (2) a general digital literacy section with correct/incorrect answers; and (3) a set of questions based on a specific scenario, such as helping a relative with stage 1 prostate cancer learn about his/her treatment options. The scenario-based digital literacy questions are based on the


Educational Testing Services' conceptual framework for assessing information literacy in digital environments.13 Media literacy and digital literacy can have some overlapping characteristics (such as evaluating and synthesizing resources), but each is often thought to have a dominant emphasis: media literacy traditionally emphasizes analyzing information for credibility and quality, whereas digital literacy emphasizes searching for relevant information in a digital environment. The 2 concepts were therefore assessed separately as part of the measurement tool. However, they were evaluated to see if the 2 concepts were conceptually different enough to be considered separate sections for the instrument.

Expert Review

After approval by the Institutional Review Board (IRB), 4 health education experts reviewed the initial item pool of 229 questions and provided confidential feedback via an evaluation questionnaire. The experts all hold clinical or health-related university faculty appointments, have doctoral degrees, are directly involved in clinical or health education research, and were interested in health literacy research. They ranked each question for its appropriateness in a health literacy instrument on a 5-point Likert scale. According to Lawshe, if more than half of the subject matter experts agree that an item works well in a construct, the item has some content validity.45 Therefore, if at least 3 of the 4 experts agreed that an item would work well, it was generally kept. Based on their feedback, 142 questions were eliminated and 87 questions were selected for the first round of testing with the instrument. These 87 questions consisted of 32 comprehension items, 19 numeracy items, 17 media literacy items, and 19 digital literacy items.
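The Lawshe rule applied here can be written as a content validity ratio (CVR): with N experts, of whom n_e endorse an item, CVR = (n_e − N/2)/(N/2), which is positive exactly when more than half the panel agrees. The snippet below is an illustrative sketch of that retention rule; the item names and endorsement counts are hypothetical, not the study's data.

```python
def content_validity_ratio(n_endorsing, n_experts):
    """Lawshe's CVR: (n_e - N/2) / (N/2); positive when a majority endorses."""
    half = n_experts / 2
    return (n_endorsing - half) / half

# Hypothetical panel of 4 experts; keep items endorsed by at least 3 of 4,
# i.e., items with CVR > 0, mirroring the retention rule described above.
endorsements = {"item_a": 4, "item_b": 3, "item_c": 2, "item_d": 1}
kept = [item for item, n in endorsements.items()
        if content_validity_ratio(n, 4) > 0]
```

With a 4-person panel, 3 endorsements give CVR = 0.5 and an even 2–2 split gives CVR = 0, so the "at least 3 of 4" threshold and "CVR > 0" are the same rule.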
Participants

After the questions were chosen through expert review—and after additional IRB approval—the assessment was presented as an extra credit activity to 4 sections of a junior- to senior-level summer online communication class, and 1 fall semester large lecture class. All of the classes were from the same course: a 300-level technical communication course that serves an academically diverse cross-section of students on campus. An alternate extra credit activity was provided, which no students undertook. Of the 209 students registered for the summer and fall classes combined, 144 completed the assessment (69%).

Analysis

The assessment was administered online via the QuestionPro Web site. Participants were given no time limit, although they were warned that completion of the assessment would likely take 30 to 45 minutes. They were informed that they would only be able to move forward through the assessment and would not be able to use the back button to change answers. The main task with this pilot study was to reduce the size of the instrument and choose the best-performing question items. Two methods were implemented in the elimination of

question items. First, IRT-based item characteristics were obtained for each question in the assessment using the MPLUS version 6 statistical modeling program. IRT is a psychometric approach that takes into account both a question item's parameters, such as its difficulty and discrimination values, and the test-taker's skill level.46–48 IRT is based on the idea that the probability of a correct answer to a question is a mathematical function of the combination of a person's skill level and the question item's parameters.47,49 The question items were evaluated with regard to their relationship to their respective conceptual domains (comprehension, numeracy, media literacy, and digital literacy). This means that each of the 4 potential concepts was analyzed and modeled as a separate factor. However, media literacy and digital literacy were evaluated to see if they worked better as 1 subconcept of health literacy or as 2 separate concepts. Otherwise, the 4 subcomponents seemed to clearly include different sets of questions. The items' discriminations and difficulties were inspected in order to determine which items offered the least information with regard to their respective domains. Items with discrimination values below 0.3 were, for the most part, eliminated, and those above 0.3 were examined further in a phased approach. The phased approach involved conducting additional IRT analyses until all of the items performed well with good discrimination values. Research suggests that a discrimination of 0.3 and above indicates a good item, and 0.6 and above is very good.50 Higher numbers indicate more difficult or discriminating items,51 and negative numbers indicate that test takers have a higher probability of obtaining the correct response if their ability is lower (therefore, all negative items were also eliminated).
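One common IRT parameterization, the two-parameter logistic (2PL) model, makes this probability explicit. The sketch below assumes the 2PL form rather than MPLUS's exact parameterization, and also applies the screening rule of dropping items whose discrimination is below 0.3 or negative; the item names and parameter values are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that a test-taker with
    ability theta answers correctly, given item discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Screening rule from the text: eliminate items whose discrimination is
# below 0.3 or negative; examine the rest further.
discriminations = {"svt1": 1.00, "cloze3": 0.25, "num2": -0.10, "ml4": 0.77}
retained = [item for item, a in discriminations.items() if a >= 0.3]
```

Under this form, a test-taker whose ability equals an item's difficulty has a 50% chance of answering correctly, and larger discrimination values make that probability rise more steeply with ability.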
Second, the items were also inspected for how well they fit into the 4 separate conceptual domains—comprehension, numeracy, media literacy, and digital literacy—by assessing goodness-of-fit indices for various models. The Categorical Variable Methodology approach available in MPLUS was used to fit the models under consideration. This approach, usually referred to in the literature as a robust weighted least squares approach,52,53 or WLSMV, provided 3 good indices that were used for evaluating model fit: the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA), which have all been shown to be useful in assessing model goodness-of-fit for data with categorical outcomes.54 The models with the best fit indices and corresponding best item discrimination values were kept as measurement constructs for the final version of the assessment in this study.

RESULTS

The participants included 59 males and 85 females, and were mostly juniors (41.7%) and seniors (44.4%), with some sophomores (13.9%). Participants identified as 86.1% white, 9% Asian, 6% Hispanic, and 5% other (with 0% black/African American). These characteristics are similar


to the university student population with 86% white and 14% ethnic minorities. Forty-four question items were identified for deletion based on both IRT statistics and review of goodness-of-fit indices of the various tested models for the 4 areas of health literacy. The results for each subcomponent—comprehension, numeracy, media literacy, and digital literacy—are provided separately. See the Appendix for a sample of some of these health literacy question items from each section.

Comprehension

Twenty total comprehension items—which consisted of 8 SVT questions and 12 Cloze questions—were retained, and 12 questions were identified for deletion. Seven question items had poor IRT discrimination values (below 0.3) and were therefore eliminated (5 SVT items and 2 Cloze question items). Testing the new scale of items, with the 7 aforementioned questions now deleted, revealed that an additional SVT item had a discrimination value below 0.3. This item was therefore also eliminated. After these 2 phases of calculating IRT parameter values, all of the remaining question items had discrimination parameter values above 0.3. However, because the Cloze technique works by eliminating every xth word in a passage (in this case, every 8th to 12th word), the first 6 Cloze questions were eliminated entirely, since 2 questions in the first 6 had poor discrimination values and because the next new paragraph did not start until the seventh Cloze question. This was important in order to keep a consistent flow with the Cloze technique. The final values, all with discrimination parameter values above 0.3, can be viewed in Table 1. Typically IRT difficulty parameters range from −3.0 to 3.0.51,55 As Table 1 illustrates, only a couple of the items in the comprehension section have values above 0, indicating that most of the items (negative) would be considered fairly easy questions for this population.
SVT8 and SVT11 are the most difficult items, with IRT difficulty parameter values of 0.23 and 0.09, respectively. All of the comprehension items—using both the sentence verification and Cloze techniques—were highly discriminating (values above 0.60 are considered very good, and above 0.30 are considered good50). In Table 2, the goodness-of-fit indices of the various tested models for the comprehension section are reported. RMSEA values of 0.06 or below generally indicate good fit. However, some previous studies have considered RMSEA values in the range of 0.05 to 0.10 an indication of fair fit.54 CFI values ≥0.95 are presently recognized as indicative of a very good fit, but values above 0.90 have also been considered good in past research.53,54 TLI values are similar, with values above 0.95 recognized as indicating a very good fit, though many researchers have reported values above 0.90 as indicating good fit.53
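The cutoffs just described can be collected into a small screening helper. The sketch below hard-codes the thresholds cited above (CFI/TLI of 0.95 for very good and 0.90 for good fit; RMSEA of 0.06 for good and 0.05 to 0.10 for fair fit); the function name and the tiering scheme are illustrative only, not an established convention.

```python
def rate_fit(cfi, tli, rmsea):
    """Rough model-fit rating using the cutoffs cited in the text."""
    if cfi >= 0.95 and tli >= 0.95 and rmsea <= 0.06:
        return "very good"
    if cfi > 0.90 and tli > 0.90 and rmsea <= 0.10:
        return "good"
    if 0.05 <= rmsea <= 0.10:
        return "fair"
    return "poor"

# Comprehension Model E (CFI 0.921, TLI 0.909, RMSEA 0.079) lands in the
# "good" tier under these cutoffs.
model_e = rate_fit(0.921, 0.909, 0.079)
```

A helper like this only summarizes the published rules of thumb; in practice fit indices are judged jointly with sample size and model complexity, as the article itself notes when discussing its small pilot sample.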

TABLE 1. Item-Level Analysis of Final Health Literacy Items

Question item                                       Difficulty   Discrimination

Comprehension
  Sentence Verification Technique Question 1           −2.00          1.00
  Sentence Verification Technique Question 3           −1.15          0.77
  Sentence Verification Technique Question 4           −0.74          0.77
  Sentence Verification Technique Question 6           −1.78          0.88
  Sentence Verification Technique Question 8            0.20          0.77
  Sentence Verification Technique Question 9           −1.82          0.74
  Sentence Verification Technique Question 11           0.07          0.94
  Sentence Verification Technique Question 14          −3.79          0.92
  Cloze Technique Question 7                           −1.33          1.55
  Cloze Technique Question 8                           −0.43          1.25
  Cloze Technique Question 9                           −1.03          2.03
  Cloze Technique Question 10                          −1.70          2.22
  Cloze Technique Question 11                          −1.02          2.07
  Cloze Technique Question 12                          −1.34          2.23
  Cloze Technique Question 13                          −0.95          2.98
  Cloze Technique Question 14                          −1.49          2.41
  Cloze Technique Question 15                          −1.04          1.99
  Cloze Technique Question 16                          −1.56          1.64
  Cloze Technique Question 17                          −1.27          1.82
  Cloze Technique Question 18                          −1.22          3.03

Numeracy
  Basic 1: Understanding instructions                  −1.73          1.00
  Basic 2: Understanding missing info                  −1.58          0.75
  Basic 4: Understanding instructions/food             −1.99          1.15
  Computational 4: Missing number information          −2.51          0.92
  Computational 5: Division                            −2.44          1.80
  Analytical 1: Percent to ratio                       −1.34          1.67
  Analytical 2: Making sense of different units        −1.47          1.22
  Analytical 3: Calculating probability                −1.29          1.18
  Analytical 4: Calculating percent                    −2.02          1.18
  Statistical 5: Calculating risk                      −0.33          1.35
  Statistical 6: Calc risk based on previous           −0.44          1.38

Media literacy
  Melanoma skin cancer (National Cancer Institute)
  CNCER1: Purpose of the message                       −2.22          0.59
  CNCER2: Authorship                                   −4.32          0.28
  CNCER3: Authorship credibility                        0.48          0.38
  CNCER5: When info last updated                       −1.94          0.65


TABLE 1. Item-Level Analysis of Final Health Literacy Items (Continued)

Question item                                       Difficulty   Discrimination

Media literacy (continued)
  Gardasil (Advertisement for Gardasil)
  GARDML1: Purpose of message                          −1.04          1.44
  GARDML2: Authorship                                   2.18          0.21
  GARDML3: Authorship credibility                      −0.20          1.06
  GARDML5: When info last updated                      −1.31          4.04

Health-information seeking
  DLSEARCH: Have you searched for health info          −1.91          0.59
  DLDIFFCT: How difficult to find info                 −0.24          0.91
  DLFIND: Able to find answers to health questions     −0.06          7.39
  DLSCORET: Use of digital applications                −0.58          0.42
  WEBDLT: Main health websites used                    −1.65          0.48

General digital literacy
  DLSOURCE: Source of Web site's funding               −0.20          0.59
  DLRUNWEB: Who runs a Web site                        −1.25          0.40
  DLPURPS: How can you tell Web site's purpose         −1.46          0.60
  DLORIGNL: Original source of Web site's info         −0.12          0.71
  DLCURRNT: How current info is on Web site            −0.84          0.53
  SCENUSE2: Usefulness (Not very useful)                1.93          0.68
  SCENUSE3: Usefulness (Very useful)                   −2.78          0.36

Although Model E demonstrates the best model fit overall according to WLSMV statistics, this model eliminates all of the SVT items. Because research suggests that at least 2 methods should be used to measure comprehension, and because several of the SVT items had very good discrimination parameter values (above 0.60), the well-performing SVT items (those represented in Model D) were kept for future evaluation of the health literacy instrument.

Numeracy

Eight numeracy items were identified for elimination, leaving 11 numeracy items that performed well. Five numeracy questions had IRT discrimination values below 0.3 and were therefore eliminated. Two additional phases of IRT analysis yielded 3 additional items that had poor discrimination parameter values, and were therefore eliminated. The remaining items—after 3 phases of analysis—all had discrimination values above 0.6, which can be seen in Table 1.

TABLE 2. Goodness-of-Fit Indices for Health Literacy Subcomponents

Factor model                                                   CFI     TLI    RMSEA

Comprehension
  (A) All comprehension items                                 0.797   0.783   0.055
  (B) Minus 7 items                                           0.797   0.779   0.074
  (C) Minus 7 items above + SVT 7                             0.807   0.790   0.075
  (D) Minus items above + Cloze 1–6                           0.800   0.778   0.090
  (E) Minus items above + All SVT items, Cloze 5              0.921   0.909   0.079

Numeracy
  (A) All numeracy items                                      0.420   0.352   0.07
  (B) Minus 5 items                                           0.580   0.510   0.08
  (C) Minus items above & Computational 2                     0.554   0.473   0.09
  (D) Minus items above & Statistical 1                       0.570   0.475   0.09

Media literacy
  (A) All media literacy items                                0.589   0.534   0.093
  (B) Minus poor items, but including authorship questions    0.880   0.841   0.076

Digital literacy and health information seeking
  All initial digital literacy items                          0.56    0.48    0.09
  Health information seeking                                  0.80    0.66    0.17
  General digital literacy                                    0.33    0.06    0.16

Note. CFI = comparative fit index; TLI = Tucker-Lewis index; RMSEA = root mean square error of approximation; SVT = sentence verification technique.

None of the numeracy items would be considered very difficult items for this population, as they all have negative difficulty parameters. All of the items are highly discriminating (values above 0.6). Goodness-of-fit indices of the various tested models for the numeracy construct are reported in Table 2. RMSEA values are fair for all 4 of the models (below 0.10). CFI and TLI values are not significant for any of the models, but Model D has the highest values and this was the model that was retained for future testing with the instrument.

Media Literacy

Media literacy and digital literacy question items were first compared to see if they performed better as 2 separate scales or as part of a single scale. The items performed better as 2 separate scales, with some improvement in CFI and TLI scores, and a considerable reduction in their chi-square values and degrees of freedom. Media literacy was further analyzed for the best separate model. After eliminating items with poor discrimination parameter values, 8 media literacy items were kept in the final model, which means that 9 questions were


identified for elimination. However, 2 questions relating to authorship of Web sites—an important component of media literacy—were retained for future testing, even though their discrimination values were below 0.3. Authorship is considered one of the key elements of media literacy for evaluating credibility and quality. Upon further inspection of the wording of these questions (“Who is the author of this message?”), it became apparent that they should be reworded to “Who is responsible for this message?” for clarity, as the authorship may be unclear. Therefore, the authorship questions were changed for future testing with the instrument. The parameter values for the final model can be seen in the media literacy section of Table 1. Goodness-of-fit indices for the media literacy models can be seen in Table 2. Model B represents all of the items with discrimination values above 0.3 plus the 2 authorship questions.

Digital Literacy

The digital literacy section was divided into 2 separate subconcepts: general digital literacy and health information seeking. Seven questions were kept in the general digital literacy model, and 5 questions were kept as part of a health-information–seeking model. Five open-ended digital literacy questions were eliminated, and 1 closed-ended question was eliminated. The first phase of IRT analysis resulted in only 5 of the original 13 question items having good discrimination values (above 0.30). These 5 items were all personal health-information–seeking questions. These questions obviously belonged together, and upon further inspection, it seemed that the other questions, which could be scored as either correct or incorrect, should perhaps be part of a different subscale. The correct/incorrect items were combined and evaluated again for their IRT parameter statistics. This second phase yielded 7 of the 8 items as having good discrimination values (above 0.30) when grouped without the health-information–seeking items.
The final discrimination values are shown in Table 1. Goodness-of-fit indices for these models are provided in Table 2. The health-information–seeking model has the best values, with CFI and TLI values at 0.80 and 0.66, respectively. The general digital literacy items have very poor goodness-of-fit values; however, because they had good discrimination values as a result of the IRT analysis, they have been kept for future evaluation of the instrument. But if these items continue to perform poorly, they may need to be eliminated from future use.

COMMENT

This new instrument reflects a range of tasks and skills that are necessary for young adults navigating the current US health care system. There is an emerging consensus in the literature regarding the expanded scope of the health literacy construct beyond the ability to read health information, and this is now widely recognized by organizations such as the World Health Organization and the HHS.4,5

The expanded concept includes comprehension, health numeracy, media literacy, digital literacy, and Internet health information–seeking skills. The current leading tools—such as the TOFHLA and the Rapid Estimate of Adult Literacy in Medicine—mainly measure comprehension, and only do so at a very basic level, akin to assessing basic English language proficiency. More advanced reading skills are needed to understand public health materials available from Web sites such as WebMD or the National Cancer Institute. Health literacy is also now recognized as a concept that goes beyond reading comprehension. The TOFHLA includes numeracy, but like its comprehension measure, it is very basic and does not reflect the more advanced skills adults are likely to need in the context of the US health care system. There are other specialized tools for measuring health numeracy, such as the Newest Vital Sign and the Medical Data Interpretation Test, but these tools are lacking in 1 or more skills-based areas of the entire concept of health numeracy. And there are no known tools for measuring health media literacy, digital health literacy, Internet health-information–seeking skills, or their effective combination. The new instrument provides a more comprehensive assessment of health literacy and may fill an existing gap for measurement with college students. The new assessment has 51 question items, and all but 2 of these items have good discrimination values (above 0.30). Seventy-eight percent of the items have discrimination values above 0.60, which is considered “very good” by some scholars.50 Two media literacy items scored below 0.30, but because they are key authorship questions, which are important to media literacy, they were kept for further evaluation in future testing of the instrument. None of the items had significant goodness-of-fit values, although this may be due to the small sample size used for this pilot study.
Additional research on a larger sample may provide more useful, significant indices. The instrument uses real-world stimuli from current health Web sites, such as the National Institute of Arthritis and Musculoskeletal and Skin Diseases, as opposed to simple plain-language approved documents. The material and question items therefore use terminology that is consistent with online health information.

Limitations

This study has some limitations. Purposeful sampling was conducted on young adult college students enrolled in communication courses at only one university. However, the students represented a variety of majors, as these communication courses were for students in noncommunication disciplines. The study population also included only 18.2% ethnic minorities, which is fairly representative of the university student population of 14% ethnic minorities. This is a weakness in the testing of the instrument, and future research is recommended on other, more ethnically diverse college student populations.



The pilot study population was also rather small, with only 144 student participants. Additional testing is needed on a larger sample.

Conclusions

The goal of this study was to develop an instrument appropriate for measuring health literacy, as it is currently conceptualized, in college students. The new instrument is more comprehensive than the existing standard tools, and its 51 question items look promising based on the discrimination parameter values set forth in IRT, but it still needs additional testing on a larger, more ethnically diverse population. Adequate comprehension, health numeracy, media literacy, digital literacy, and health-information–seeking skills are fundamental requirements for individual decision making about health care, disease prevention, screening, diagnosis, and treatment. The new instrument may be helpful for research use in university settings to help identify specific deficiencies and strengths in the subconcepts of health literacy.

ACKNOWLEDGMENTS

The author would like to express her sincere appreciation to Dr Craig W. Trumbo for his excellent guidance and advice throughout this research. She would also like to thank Dr Kimberly Henry, Dr Garrett O'Keefe, Dr Kirsten Broadfoot, and Dr Donald Zimmerman for their support and constructive feedback on this project.

FUNDING

No funding was used to support this research and/or the preparation of the manuscript.

CONFLICT OF INTEREST DISCLOSURE

The authors have no conflicts of interest to report. The authors confirm that the research presented in this article met the ethical guidelines, including adherence to the legal requirements, of the United States and received approval from the Institutional Review Board of Colorado State University.

NOTE

For comments and further information, address correspondence to Raquel Harper, Massey University, School of English and Media Studies, Level 2, Atrium Building, Albany Campus, Auckland, 0745 New Zealand (e-mail: [email protected]).

REFERENCES

1.
Chisolm DJ, Buchanan L. Measuring adolescent functional health literacy: a pilot validation of the test of functional health literacy in adults. J Adolesc Health. 2007;41:312–314.
2. Paasche-Orlow M, Wolf M. The causal pathways linking health literacy to health outcomes. Am J Health Behav. 2007;31:S19–S31.
3. Pawlak R. Economic considerations of health literacy. Nurs Econ. 2005;23:170–180.
4. US Department of Health and Human Services, Office of Disease Prevention and Health Promotion. National Action Plan to Improve Health Literacy. Washington, DC: US Department of Health and Human Services; 2010.
5. World Health Organization. Track 2: health literacy and health behaviour. Available at: http://www.who.int/healthpromotion/conferences/7gchp/track2/en/index.html. Published 2010. Accessed June 4, 2010.
6. Bernhardt JM, Cameron KA. Accessing, understanding, and applying health communication messages: the challenge of health literacy. In: Thompson TL, Dorsey AM, Miller KI, Parrott R, eds. Handbook of Health Communication. London, UK: Lawrence Erlbaum Associates; 2003:583–605.
7. The Centre for Literacy. Calgary charter on health literacy. Available at: http://www.centreforliteracy.qc.ca/sites/default/files/CFL Calgary Charter 2011.pdf. Published 2008. Accessed March 26, 2011.
8. Schwartz L, Woloshin S, Black W, Welch G. The role of numeracy in understanding the benefit of screening mammography. Ann Intern Med. 1997;127:966–972.
9. American Institutes for Research. Fact sheet: the National Survey of American College Students. Available at: http://www.air.org. Published 2006. Accessed March 25, 2011.
10. Jones TW, Price BA, Randall CH. A comparative study of student math skills: perceptions, validation and recommendations. J Innov Educ. 2011;9:379–394.
11. Price BA, Randall CH, Frederick J, Gall J, Jones TW. Different cultures, different students, same test: comparing math skills of Hungarian and American college students. J Educ Learn. 2012;1:128–142.
12. Breivik PS. 21st century learning and information literacy. Change. 2005;37:20–27.
13. Katz IR. Testing information literacy in digital environments: ETS's iSkills assessment. Inf Technol Libr. 2007;26:3–12.
14. Rockman IF. Introduction: the importance of information literacy. In: Rockman IF and Associates, eds. Integrating Information Literacy Into the Higher Education Curriculum: Practical Models for Transformation. San Francisco, CA: Jossey-Bass; 2004:1–28.
15. Hanik B, Stellefson M. E-health literacy competencies among undergraduate health education students: a preliminary study. Int Electronic J Health Educ. 2011;14:46–58.
16. Hargittai E, Hinnant A. Digital inequality: differences in young adults' use of the Internet. Commun Res. 2008;35:602–621.
17. Learning Disabilities Online. Louise Spear-Swerling assessment of reading comprehension. Available at: http://www.ldonline.org/spearswerling/10820. Accessed October 17, 2010.
18. Marcotte AM, Hintze JM. Incremental and predictive utility of formative reading comprehension. J Sch Psychol. 2009;47:315–335.
19. Bormuth J. The Cloze readability procedure. Elementary English. 1968;45:429–436.
20. Hafner L. Cloze procedure. J Reading. 1966;9:415–421.
21. Taylor WL. Cloze procedure: a new tool for measuring readability. Journalism Q. 1953;30:415–433.
22. Taylor WL. Cloze readability scores as indices of individual differences in comprehension and aptitude. J Appl Psychol. 1957;41:19–26.
23. Wilson FL. Measuring patients' ability to read and comprehend: a first step in patient education. Nurs Connect. 2000;13:19–27.
24. Royer JM. Uses for the sentence verification technique for measuring language comprehension. Available at: http://www.readingsuccesslab.com/publications/Svt%20Review%20PDF%20version.pdf. Accessed December 1, 2012.
25. Durwin CC, Sherman WM. Does choice of college textbook make a difference in students' comprehension? Coll Teach. 2008;56:28–34.
26. Royer JM, Cunningham DJ. On the theory and measurement of reading comprehension. Contemp Educ Psychol. 1981;6:187–216.


27. Royer JM, Hastings CN, Hook C. A sentence verification technique for measuring reading comprehension. J Read Behav. 1979;11:355–363.
28. Reyna VF, Nelson W, Han PK, Dieckmann N. How numeracy influences risk comprehension and medical decision making. Psychol Bull. 2009;135:943–973.
29. Peters E, Hibbard J, Slovic P, Dieckmann N. Numeracy skill and the communication, comprehension, and use of risk-benefit information. Health Affairs. 2007;26:741–748.
30. Lipkus IM, Peters E. Understanding the role of numeracy in health: proposed theoretical framework and practical insights. Health Educ Behav. 2009;36:1065–1081.
31. Golbeck A, Ahlers-Schmidt CR, Paschal AM, Dismuke SE. A definition and operational framework for health numeracy. Am J Prev Med. 2005;29:375–376.
32. Nurss JR, Parker R, Williams MV, Baker DW. TOFHLA: Test of Functional Health Literacy in Adults. Snow Camp, NC: Peppercorn Books and Press; 2003.
33. Lipkus IM, Samsa G, Rimer BK. General performance on a numeracy scale among highly educated samples. Med Decis Making. 2001;21:37–44.
34. Schwartz L, Woloshin S, Welch G. Can patients interpret health information? An assessment of the Medical Data Interpretation Test. Med Decis Making. 2005;25:290–300.
35. Rothman RL, Montori VM, Cherrington A, Pignone MP. Perspective: the role of numeracy in health care. J Health Commun. 2008;13:583–595.
36. Primack BA, Gold MA, Land SR, Fine MJ. Association of cigarette smoking and media literacy about smoking among adolescents. J Adolesc Health. 2006;39:465–472.
37. Primack BA, Sidani J, Carroll MV, Fine MJ. Associations between smoking and media literacy in college. J Health Commun. 2009;14:541–555.
38. Austin EW, Pinkleton BE, Hust SJT, Cohen M. Evaluation of an American Legacy Foundation/Washington State Department of Health media literacy pilot study. Health Commun. 2005;18:75–95.
39. Austin EW, Johnson KK. Effects of general and alcohol-specific media literacy training on children's decision making about alcohol. J Health Commun. 1997;2:17–42.
40. Rosenbaum JE, Beentjes JWJ, Konig RP. Mapping media literacy: key concepts and future directions. Commun Yearbook. 2008;32:313–349.
41. National Association for Media Literacy Education. Core principles of media literacy education in the United States. Available at: http://www.namle.net. Published 2011. Accessed March 6, 2011.
42. Arke ET, Primack BA. Quantifying media literacy: development, reliability, and validity of a new measure. Educ Media Int. 2009;46:53–65.
43. Koltay T. The media and the literacies: media literacy, information literacy, digital literacy. Media Cult Soc. 2011;33:211–221.
44. Martin A. Literacies for the digital age. In: Martin A, Madigan D, eds. Digital Literacies for Learning. London, UK: Facet; 2006:3–25.
45. Lawshe CH. A quantitative approach to content validity. Pers Psychol. 1975;28:563–575.
46. de Ayala RJ. The Theory and Practice of Item Response Theory. New York, NY: The Guilford Press; 2009.
47. Furr RM, Bacharach VR. Item response theory and Rasch models. In: Psychometrics: An Introduction. Thousand Oaks, CA: Sage Publications; 2008:314–334.
48. Partchev I. A Visual Guide to Item Response Theory. Jena, Germany: Friedrich-Schiller-Universität Jena; 2004.
49. Thomas ML. The value of item response theory in clinical assessment: a review. Assessment. 2010;18:291–307.
50. Wise J-M. Item Analysis: Techniques to Improve Test Items and Instruction. Tallahassee, FL: Florida State University; 2012.
51. Schapira MM, Walker CM, Cappaert KJ, et al. The numeracy understanding in medicine instrument: a measure of health numeracy developed using item response theory. Med Decis Making. 2012;32:851–865.
52. Muthén BO, du Toit SHC, Spisic D. Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Available at: http://pages.gseis.ucla.edu/faculty/muthen/articles/Article 075.pdf. Accessed October 15, 2012.
53. Tsai R-C, Ling K-N, Wang H-J, Liu H-C. Evaluating the uses of the total score and the domain scores in the cognitive abilities screening instrument, Chinese version (CASI C-2.0): results of confirmatory factor analysis. Int Psychogeriatr. 2007;19:1051–1063.
54. Hooper D, Coughlan J, Mullen MR. Structural equation modelling: guidelines for determining model fit. Electronic J Business Res Methods. 2008;6:53–60.
55. Embretson SE, Reise SP. Item Response Theory for Psychologists. Mahwah, NJ: Lawrence Erlbaum Publishers; 2000.

Received: 26 April 2013
Revised: 9 October 2013
Accepted: 3 November 2013



APPENDIX. Sample of Survey Questions From the Health Literacy Instrument

Comprehension Section One

Participants read a passage about gout and uric acid from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and then answered the following YES/NO questions to indicate whether the test sentences had the same meaning as the original sentences.

1. (YES/NO) Gout occurs if too much uric acid builds up in the fluid around the joints and/or soft tissues, resulting in the formation of uric acid crystals in the joints, which is very painful.
2. (YES/NO) These crystals make the joint swell up and become inflamed, causing the joint to appear warm and red and feel very tender and stiff.


Comprehension Section Two

Participants read a passage from Johns Hopkins Medicine about high cholesterol and triglyceride levels and provided fill-in-the-blank answers from a multiple-choice list of four possibilities for every 10th–12th missing word.
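The deletion scheme described above is a standard cloze procedure.19–22 As an illustration only (this is not the study's actual materials-generation code, and the function name and parameters are hypothetical), a passage with a gap every 10th–12th word can be produced along these lines:

```python
import random

def make_cloze(text, low=10, high=12, seed=0):
    """Replace every 10th-12th word of `text` with a blank, returning
    the gapped passage and the deleted words (the answer key)."""
    rng = random.Random(seed)
    words = text.split()
    gapped, answers = [], []
    next_gap = rng.randint(low, high) - 1      # index of first deletion
    for i, word in enumerate(words):
        if i == next_gap:
            answers.append(word)
            gapped.append("____")
            next_gap += rng.randint(low, high) # next gap 10-12 words on
        else:
            gapped.append(word)
    return " ".join(gapped), answers

# Paraphrased sample text, used only to exercise the sketch:
passage = ("Because cholesterol and other fats do not dissolve in water "
           "they cannot travel through the blood unaided so lipoproteins "
           "are formed in the liver to transport them")
gapped, key = make_cloze(passage)
```

In the instrument itself, each deleted word is offered back to the participant as one of four multiple-choice options rather than as a free-recall blank.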

Because cholesterol and other fats do not dissolve in (1) alcohol/water/lipids/sugar they cannot travel through the blood unaided. Lipoproteins are (2) food/a disorder/triglycerides/particles formed in the liver to transport cholesterol and other (3) plaques/diseases/fats/acids through the bloodstream.

Numeracy

1. A person affected by an autosomal dominant disorder has a 50 percent chance of passing the mutated gene to each child. If you have an autosomal dominant disorder, what is the chance that your child will NOT inherit the mutated gene?
a. 25 percent
b. 50 percent
c. 100 percent
d. Not enough information to answer the question

2. Two unaffected people who each carry one copy of a mutated gene for an autosomal recessive disorder have a 25 percent chance with each pregnancy of having a child affected by the disorder. If you and your partner carry the mutated gene, but are not affected, and you become pregnant, what is the chance your child will NOT be affected by the disorder?
a. 25 percent
b. 50 percent
c. 75 percent
d. Not enough information to answer the question

3. The chance of passing on a genetic condition applies equally to each pregnancy. If a couple has a child with an autosomal recessive disorder (see question above), the chance of having another child with the disorder is ____.
a. 25 percent
b. 50 percent
c. 75 percent
d. Not enough information to answer this question

Media Literacy

1. What is the purpose of the message on this webpage?
a. Provide treatment information about melanoma skin cancer.
b. Explain the different stages of melanoma skin cancer.
c. Provide general, introductory information about melanoma skin cancer.
d. Explain that melanoma skin cancer is dangerous and common for all adults.

2. What makes this author credible?
a. Government site, authorship is clear, resources for information are cited, webpage states when this resource was last updated.
b. Government site, authorship is clear, purpose of site is clear, webpage states when this resource was last updated.
c. Government site, webpage states when this resource was last updated, tilde (∼) in the URL.
d. Credibility cannot be identified or verified.
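The sample numeracy items above rest on two elementary probability facts: the complement rule, P(not A) = 1 − P(A), and independence across pregnancies. A quick sketch of the underlying arithmetic (the variable names are illustrative, not part of the instrument):

```python
# Q1: autosomal dominant, 50% chance the child inherits the gene,
# so the chance of NOT inheriting is the complement.
p_inherit = 0.50
print(1 - p_inherit)     # 0.5

# Q2: autosomal recessive, 25% chance per pregnancy of an affected
# child, so 75% chance the child is NOT affected.
p_affected = 0.25
print(1 - p_affected)    # 0.75

# Q3: pregnancies are independent events, so an earlier affected
# child does not change the 25% chance for the next pregnancy.
print(p_affected)        # 0.25
```

Item 3 in particular probes the gambler's-fallacy error of assuming a prior outcome changes an independent probability.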





APPENDIX (Continued)

Digital Literacy

1. Have you searched for health information on the Internet?
NEVER / 1–2 times / 3–5 times / 6–10 times / 11+ times

2. It costs money to run a website. Which of the following helps you figure out the source of a website's funding?
a. Right-click on the website homepage and then click on View Source.
b. Scroll to the bottom of the website's homepage and view the disclosure statement.
c. Check the address of the website, such as addresses ending in ".gov," ".edu," ".org," or ".com."
d. Right-click on the URL in the address bar and click Open URL.

