Research Brief

Reliability and Validity of Nutrition Knowledge Questionnaire for Adults

Anna Marie Jones, PhD1; Cathi Lamp, MS, MPH, RD2; Marisa Neelon, MS3; Yvonne Nicholson, MS4; Connie Schneider, PhD, RD5; Patti Wooten Swanson, PhD6; Sheri Zidenberg-Cherr, PhD1

ABSTRACT

Objective: To determine the validity and reliability of a nutrition knowledge questionnaire for adults in California.

Methods: A convenience sample of adults was recruited for cognitive interviews. A mail-based survey of 400 randomly selected addresses was used to assess internal consistency (Cronbach α). Researchers assessed construct validity (Student t test) and test–retest reliability (Pearson correlation) in a convenience sample of university students who had previously taken a college nutrition course, compared with students who had not.

Results: Twenty adults participated in cognitive interviews, 94 adults returned the mailed questionnaire, and 48 university students participated in validity and reliability testing. Cronbach α = .91 and test–retest r = 0.95, demonstrating internal consistency reliability and test–retest reliability. Students who had taken a college-level nutrition course scored significantly higher than students who had not (P < .001), demonstrating construct validity.

Conclusions and Implications: Findings show that the questionnaire is a valid and reliable nutrition knowledge measure for use in California and may be of use in other places.

Key Words: nutrition knowledge, validated instruments, questionnaires (J Nutr Educ Behav. 2014;-:1-6.)

Accepted August 4, 2014.

1 Center for Nutrition in Schools, Department of Nutrition, University of California, Davis, Davis, CA
2 University of California Cooperative Extension Tulare County, Tulare, CA
3 University of California Cooperative Extension Contra Costa County, Pleasant Hill, CA
4 University of California Cooperative Extension Sacramento County, Sacramento, CA
5 University of California Agricultural and Natural Resources, Davis, CA
6 University of California Cooperative Extension San Diego County, San Diego, CA

Address for correspondence: Sheri Zidenberg-Cherr, PhD, Department of Nutrition, University of California, Davis, 1 Shields Ave, Davis, CA 95616; Phone: (530) 752-3817; Fax: (530) 752-8905; E-mail: [email protected]

©2014 SOCIETY FOR NUTRITION EDUCATION AND BEHAVIOR

Journal of Nutrition Education and Behavior, Volume -, Number -, 2014

INTRODUCTION

The high prevalence of overweight and obesity in the US has become one of the most pressing public health concerns in the nation. Over two thirds of adults are overweight or obese, rendering the majority of the adult population at increased risk for a number of chronic diseases.1 The influences on behaviors that contribute to this issue are multifaceted and complex, ranging from the personal, such as knowledge or preferences, to environmental or community factors, to social and cultural norms and values.2 Improving nutrition knowledge and encouraging behavior change are some of many avenues being explored by researchers.

According to Social Cognitive Theory, behavioral capability requires knowledge to perform a behavior.3 Accordingly, inadequate nutrition knowledge may be a barrier to adopting healthful behaviors and maintaining a healthful weight. Nutrition knowledge consists of declarative knowledge, or knowledge of facts and processes, as well as procedural knowledge, the knowledge of how to perform a task.4 Both are needed in making healthful choices. Although not all studies have found a connection between nutrition knowledge and behavior, positive associations have been reported between nutrition knowledge and consumption of fruits and vegetables,5 more frequent use of nutrition facts labels,6 increased likelihood of minimizing salt intake, and choosing foods high in fiber.7 Nutrition knowledge also has been reported to mediate the relationship between socioeconomic status and diet quality.5,8 Nutrition knowledge can also have effects beyond the individual level; studies have reported that maternal nutrition knowledge is associated with child dietary quality.9,10 A recent systematic review concluded that overall there is a weak correlation between nutrition knowledge and dietary intake.11 However, studies using more rigorously validated instruments were more likely to find a significant, positive correlation. The authors concluded that studies using well-designed and well-validated nutrition knowledge and dietary intake instruments were needed.

Many intervention studies have focused on improving nutrition knowledge to lay a foundation for healthful behavior change. However, to establish the baseline level of knowledge and measure changes in knowledge, reliable and valid measurement tools are necessary.12 Questionnaire validation cannot guarantee that conclusions drawn are correct, but determining the validity of a questionnaire before its use makes it more likely that the results it generates are relevant. The purpose of the current study was to develop and validate a nutrition knowledge questionnaire for use in the general population of English-literate adults aged ≥ 18 years in California, with the potential to be used across the US.

METHODS

Preliminary Questionnaire Development

Development of the questionnaire began with a pool of items from previous questionnaires (Figure).13-16 The majority of questions were obtained from the General Nutrition Knowledge Questionnaire.13 The authors chose this questionnaire because it measures general nutrition knowledge in adults across several domains. Because this questionnaire was written for use in a United Kingdom (UK) population, questions were modified to be appropriate for an American audience. New questions were written based on the Dietary Guidelines for Americans, 2005 (DGA)17 and MyPyramid,18 which were current at the time the questionnaire was developed (2009–2011). The initial pool contained 108 items divided into 4 domains of knowledge: Familiarity with MyPyramid and the DGA, Nutrient Content of Foods, Everyday Food Choices, and Diet and Disease Relationships. The pool of questions was intended to be much larger than the final number of items to be included in the questionnaire to allow for the removal of questions less useful in measuring nutrition knowledge.

A committee consisting of nutrition faculty, postdoctoral scholars, and registered dietitians reviewed the questionnaire for content. Based on their suggestions, questions were eliminated or reworded and new questions were added. The Everyday Food Choices section of the questionnaire was eliminated because the committee determined that adequately describing food items to allow respondents to choose between them required the questions to be too long, whereas using shorter descriptions would render the questions too vague. Before the following phases of development, the University of California–Davis Institutional Review Board approved the study as exempt.

A convenience sample of 20 adults in 5 counties in California (Contra Costa, Fresno, Sacramento, San Diego, and Tulare) was recruited to participate in cognitive interviews. A concurrent think-aloud method was used in the cognitive interviews.19 Participants were first acclimated to the cognitive interview process with a warm-up question. After this, the interviewer read each question aloud and participants verbalized their thought process as they answered. All interviews were recorded and notes were taken during the interviews. Study staff listened to the interviews until they no longer yielded new information about the interpretation of questions. Questions were modified as needed to reduce ambiguity. Analysis of cognitive interview data directed subsequent modifications to the questionnaire.

After this, a mail-based pretest was used. Data from the pretest were used to reduce the number of questions before the questionnaire was tested for validity and reliability. A total of 400 addresses were randomly selected from the US Postal Service Delivery Sequence File for the State of California. The US Postal Service Delivery Sequence File is a database containing all deliverable addresses in the US and covers up to 97% of US households.20 Following the method of Dillman et al,21 an advance letter was sent to the sampled addresses, followed by a packet containing the questionnaire and a stamped, return-addressed envelope. A reminder postcard was sent to all addresses 2 weeks later, and a second packet was sent to nonrespondent addresses 2 weeks after that.

Figure. Stages of questionnaire development. Revisions took place after each stage.

Validity and Reliability

The researchers used data from the pretest to determine internal consistency reliability of the revised questionnaire before the next phase of testing. A convenience sample of university students was recruited in fall 2010 and spring 2011 through fliers and classroom announcements to test the questionnaire for construct validity and test–retest reliability. The questionnaire was administered to participants individually or in groups of 2 or 3. Participants completed the questionnaire twice, with approximately 2 weeks between the first and second administrations, to evaluate test–retest reliability. The authors chose the time interval based on the principle that it should be long enough that respondents did not remember previously chosen answers, but short enough that new knowledge was unlikely to be acquired.22 Respondents self-reported whether they had previously taken a college-level nutrition class; these data were used to compare knowledge scores for construct validity. Students who had taken at least 1 college-level nutrition class were expected to have higher nutrition knowledge scores than those who had never taken a college-level nutrition class.

Data Analysis

Cognitive interviews were analyzed using qualitative methodology. The authors recorded instances in which a questionnaire item was misinterpreted or was confusing to participants, and how it was misinterpreted or confusing. These items were revised for clarity or were removed from the questionnaire.

Data from the mail-based pretest and from validity and reliability testing were double-entered into Microsoft Access (Access 2010, Microsoft, Inc, Redmond, WA, 2010). After systematic review of the entered data for accuracy, data were analyzed using SPSS 19.0, 20, and 21 (IBM, Inc, Armonk, NY, 2010–2012).

Pretest data were analyzed for item difficulty and item-to-total score correlation. Items that are too difficult or too easy, or that cannot discriminate between those with differing levels of knowledge, are not considered useful in measuring knowledge. Items that were answered correctly by over 90% of respondents or by fewer than 20% were removed from the questionnaire. Questions that had an item-to-total score correlation below 0.2, indicating poor capability of discriminating between high- and low-scoring individuals, were also removed. After the researchers removed questions based on these criteria, they analyzed each section using Cronbach α to determine internal consistency. The minimum acceptable level was set at α ≥ .7 because this is generally considered the minimum for adequate internal consistency.23

Data from the validity and reliability testing were analyzed for construct validity and test–retest reliability. The construct analyzed was nutrition knowledge learned in a classroom. Students who had taken at least 1 college-level nutrition class were expected to have statistically significantly higher nutrition knowledge scores than those who had never taken a college-level nutrition course. Two-tailed Student t tests were used to determine whether there were statistically significant differences between groups for total score on the knowledge questionnaire as well as scores on each domain of knowledge included in the questionnaire. Scores from the first questionnaire were used for the construct validity analysis. To assess test–retest reliability, the researchers used Pearson correlations to compare total scores and scores for each domain of knowledge between the first and second administrations of the questionnaire. Statistical significance was defined as P < .05.
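The two inferential analyses named above can be sketched with SciPy. This is a minimal illustration of the statistical approach, not the authors' SPSS output; the score vectors below are hypothetical stand-ins for the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical total scores (out of 60), for illustration only; the actual
# study compared 27 students who had taken a college-level nutrition course
# with 21 who had not.
nutrition = np.array([48, 52, 41, 45, 50, 44, 39, 47, 51, 43])
no_nutrition = np.array([30, 35, 28, 38, 33, 31, 36, 29, 34, 32])

# Construct validity: two-tailed independent-samples Student t test
t_stat, p_value = stats.ttest_ind(nutrition, no_nutrition)

# Test-retest reliability: Pearson correlation between the first and
# second administrations (hypothetical paired totals)
first = np.array([40, 55, 33, 47, 29, 51])
second = np.array([42, 53, 35, 45, 30, 50])
r, p_r = stats.pearsonr(first, second)

print(f"t = {t_stat:.2f}, P = {p_value:.4f}; test-retest r = {r:.2f}")
```

A domain-level analysis would simply repeat these calls on each section's subscores.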

RESULTS

Preliminary Questionnaire Development

Twenty cognitive interviews were conducted in March 2010 in 5 counties in California (Table 1). The most common issue observed in the cognitive interviews was a lack of clarity that led to multiple interpretations of select questions. Based on feedback from the interviews, 23 questions were reworded, 2 were deleted, and 2 were added. The questionnaire, which had started with 108 questions before review by the committee, contained 90 items at the end of this phase.

Questionnaires were returned by 24% of the addresses contacted (n = 94). The majority of questionnaire respondents were female (70%) and Caucasian (66%), with at least some college (33%) or an undergraduate degree (25.5%) (Table 1). The mean score on the knowledge questionnaire was 56% correct, with a minimum of 7% and a maximum of 97%. Item difficulty and item discrimination criteria resulted in the removal of 30 items from the questionnaire, which brought the final number of knowledge questions to 60. Exceptions were made for questions that the committee considered important for demonstrating knowledge in particular areas of nutrition. To further reduce the length of the questionnaire, and thereby participant burden, borderline questions were also removed.
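The item-screening criteria described under Data Analysis (difficulty outside the 20%–90% correct range, item-to-total correlation below 0.2, and per-section Cronbach α of at least .7) can be sketched as follows. This is a minimal sketch of the general technique, not the authors' procedure; the function names and example data are hypothetical.

```python
import numpy as np

def item_analysis(responses):
    """Flag items for removal using the screening criteria described above.

    responses: 2-D array of 0/1 item scores, shape (respondents, items).
    Returns sorted indices of items answered correctly by > 90% or < 20%
    of respondents, or with an item-to-total correlation below 0.2.
    """
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)       # proportion correct per item
    total = responses.sum(axis=1)             # each respondent's total score
    flagged = set(np.where((difficulty > 0.90) | (difficulty < 0.20))[0])
    for j in range(responses.shape[1]):
        item = responses[:, j]
        if item.std() == 0:                   # constant item: cannot discriminate
            flagged.add(j)
            continue
        rest = total - item                   # corrected total (item excluded)
        if np.corrcoef(item, rest)[0, 1] < 0.2:
            flagged.add(j)
    return sorted(flagged)

def cronbach_alpha(responses):
    """Cronbach alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1).sum()
    total_variance = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```

Items flagged by `item_analysis` would be candidates for removal before `cronbach_alpha` is computed on each remaining section.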

Validity and Reliability

Using the data from the mail-based survey, each domain of knowledge in the final questionnaire was found to have adequate internal consistency (Cronbach α = .85, .81, and .81, respectively, for Familiarity with MyPyramid, Nutrient Content of Foods, and Diet–Disease Relationships). The overall internal consistency reliability was .91.

All participants in the construct validity and test–retest reliability phase were between the ages of 18 and 29 years. The majority identified themselves as Asian (60%) or Caucasian (31%), whereas 10% identified as Hispanic or Latino, 4% as Native Hawaiian or Pacific Islander, and 2% as Native American or Alaska Native. Most respondents (81%) were female, 17% were male, and 2% were transgender. Knowledge scores were significantly higher in nutrition-educated students than in non-nutrition students on all 3 sections of the questionnaire, as well as overall (Table 2). Test–retest reliability was moderate to high (r = 0.93, 0.92, and 0.84, respectively, for Familiarity with MyPyramid, Nutrient Content of Foods, and Diet–Disease Relationships), with total reliability equal to 0.95 (P < .001 for all correlations).

Table 1. Demographic Characteristics of Participants in Cognitive Interviews and Mail-Based Pretest

                                         Cognitive Interviews, n   Mail-Based Pretest, n
                                         (n = 20)                  (n = 94)
Gender
  Female                                 11                        66
  Male                                   9                         27
  No response                            0                         1
Age, y
  20-29                                  3                         5
  30-39                                  5                         11
  40-49                                  2                         18
  50-59                                  6                         18
  60-69                                  2                         21
  70-79                                  0                         10
  >= 80                                  1                         9
  No response                            0                         2
Income, US$
  0-19,999                               2                         13
  20,000-39,999                          3                         14
  40,000-59,999                          4                         17
  60,000-79,999                          7                         15
  79,000-99,999a                         0                         11
  >= 100,000                             3                         19
  No response                            1                         5
Education
  Less than high school diploma          0                         6
  High school diploma or equivalent      0                         18
  Some college                           9                         31
  College graduate                       4                         24
  Postgraduate                           7                         14
  No response                            0                         1
Race and ethnicity
  Native American or Alaska Native       0                         1
  Asian or Asian American                1                         8
  African American                       4                         2
  Hispanic or Latino                     4                         15
  Native Hawaiian or Pacific Islander    0                         1
  Caucasian                              11                        62
  Other                                  0                         4
  No response                            0                         1
Employment status
  Employed                               14                        43
  Unemployed                             0                         6
  Retired                                3                         27
  Disabled                               0                         5
  Student                                1                         2
  Homemaker                              0                         10
  Other                                  2                         1

a Owing to an error in the questionnaire, there was overlap between 2 income categories. Categories are presented as they were in the questionnaire.

DISCUSSION

The purpose of this study was to systematically develop and validate a questionnaire to measure nutrition knowledge in adults aged ≥ 18 years in California. This questionnaire used items from several questionnaires, but the bulk of the questions were from the General Nutrition Knowledge Questionnaire, which was developed for adults in the UK in the late 1990s.13 The differences in dietary recommendations, consumption patterns, and terminology between the populations necessitated adaptation and validation for an American audience.

The questionnaire began with almost 50 more questions than were included in the final questionnaire. This allowed for the removal of questions that were less useful in measuring knowledge. Although nearly half of the questions were removed and the length of the knowledge questionnaire was reduced to 60 items, the questionnaire was found to be valid and reliable. Results were comparable to questionnaires that were much longer, such as the General Nutrition Knowledge Questionnaire, which contained 110 items. This indicates that respondent burden was reduced significantly without sacrificing reliability or validity. There was, however, a smaller difference in knowledge scores between the higher knowledge and lower knowledge groups compared with the study validating the General Nutrition Knowledge Questionnaire (17% difference between groups vs 35%). This may be because of differences in the samples. Whereas this study included students who had taken ≥ 1 college-level nutrition courses in the higher knowledge group, the study conducted in the UK included only final-year dietetic students in the higher knowledge group, who would be expected to have taken several nutrition courses.

In a similar study, researchers in Australia validated a revised version of the General Nutrition Knowledge Questionnaire containing 113 items in an Australian sample.24 The authors reported that whereas overall internal reliability was high (α = .92), internal reliability for the section about dietary recommendations was low (α = .53) compared with the original study (α = .70) or the present study's section on Familiarity with MyPyramid (α = .85). Similar results were reported for test–retest reliability. This may demonstrate the importance of basing questions on nutrition recommendations in the country in which the questionnaire will be used; the majority of the dietary recommendation questions in the Australian study were modified versions of the questions designed for use in a UK population.

The main limitation of this study is that the sample participants are not representative of the population of California as a whole. Although the mail-based survey used a random sample of residential addresses in California, there was likely self-selection bias among those who returned a questionnaire. Furthermore, the questionnaire was provided only in English, limiting the sample to those who were English literate. The samples in the cognitive interviews and the mail-based pretest skewed somewhat older and contained a higher proportion of those with college education or an advanced degree than is found in the general population of California.
The questionnaire was validated using a homogeneous sample of college students who differed demographically from those in the development phase, so it is possible that although the questionnaire was valid in this sample, this may not extend to the general population, and validation in populations that differ from those involved in the

Journal of Nutrition Education and Behavior  Volume -, Number -, 2014

Jones et al 5

Table 2. Mean and Range of Scores of Participants in Validity and Reliability Phase Who Had Previously Taken a College-Level Nutrition Course and Those Who Had Not Questionnaire Section (maximum score possible)

Nutrition (n ¼ 27) Minimum Maximum

Familiarity with MyPyramid (23) Nutrient Content of Foods (26) Diet–Disease Relationships (11) Overall (60)

Mean (SD)

No Nutrition (n ¼ 21) Minimum Maximum

Mean (SD)

Mean Difference




18.26 (2.97)



13.81 (4.03)


< .001



17.26 (4.66)



14.19 (3.74)





9.15 (2.03)



6.29 (2.17)


< .001



44.67 (8.71)



34.29 (8.43)


< .001

study should be considered. However, this also removes several potential confounders such as age and education. A further limitation is that the first section of the questionnaire was written based on the DGA, 2005 and MyPyramid. The DGA, 201025 and MyPlate26 were released during the validity and reliability testing phase, and as a result the validated questionnaire contained outdated references. However, MyPlate can replace MyPyramid in all but 2 of these questions with no change in question intent, because the recommendations have not changed. Beyond those requiring slight rewording, only 2 questions became irrelevant after the release of MyPlate, owing to the concept of discretionary calories, which was not carried forward into the new guide. This resulted in the removal of 2 questions about discretionary calories and MyPyramid was replaced with MyPlate in several questions. Overall, the changes to the content of the questionnaire were minor and unlikely to affect the validity and reliability significantly. Furthermore, internal consistency and test–retest reliability increased slightly when the analysis was conducted without questions about discretionary calories, and construct validity was not affected (data not shown).

IMPLICATIONS FOR RESEARCH AND PRACTICE

The questionnaire developed in this project has been shown to be valid and reliable by a variety of methodologies in a population of adults living in California. California is a diverse state with large urban centers and vast agricultural and rural regions. Because of the diversity of the state, the questionnaire will likely be appropriate for use in populations outside of California, and it is a useful tool for researchers intent on measuring the nutrition knowledge of general adult populations. The questionnaire likely will have uses beyond determining the nutrition knowledge of populations, because it could potentially be used as a pretest in nutrition classes or interventions, or as a screening tool to determine what, if any, nutrition education is required by participants in a variety of nutrition programs. Because this questionnaire focuses primarily on declarative knowledge, future research directions should include validation of procedural knowledge questionnaires.

ACKNOWLEDGMENTS This project was funded in part by the University of California, Davis, Henry A. Jastro Research Scholarship, and CRIS#CA_D_NTR-2060-H. This research was conducted as part of the dissertation of Anna M. Jones.

REFERENCES

1. Flegal KM, Carroll MD, Kit BK, Ogden CL. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999-2010. JAMA. 2012;307:491-497.
2. McLeroy KR, Bibeau D, Steckler A, Glanz K. An ecological perspective on health promotion programs. Health Educ Q. 1988;15:351-377.
3. National Cancer Institute, US Department of Health and Human Services. Theory at a Glance: A Guide for Health Promotion Practice. 2nd ed. Bethesda, MD: National Cancer Institute, US Dept of Health and Human Services; 2005.
4. Worsley A. Nutrition knowledge and food consumption: can nutrition knowledge change food behaviour? Asia Pac J Clin Nutr. 2002;11(suppl 3):S579-S585.
5. Wardle J, Parmenter K, Waller J. Nutrition knowledge and food intake. Appetite. 2000;34:269-275.
6. Satia JA, Galanko JA, Neuhouser ML. Food nutrition label use is associated with demographic, behavioral, and psychosocial factors and dietary intake among African Americans in North Carolina. J Am Diet Assoc. 2005;105:392-402.
7. Petrovici D, Ritson C. Factors influencing consumer dietary health preventative behaviours. BMC Public Health. 2006;6:222.
8. McLeod ER, Campbell KJ, Hesketh KD. Nutrition knowledge: a mediator between socioeconomic position and diet quality in Australian first-time mothers. J Am Diet Assoc. 2011;111:696-704.
9. Variyam JN, Blaylock J, Lin B-H, Ralston K, Smallwood D. Mother's nutrition knowledge and children's dietary intakes. Am J Agric Econ. 1999;81:373-384.
10. Vereecken C, Maes L. Young children's dietary habits and associations with the mothers' nutrition knowledge and attitudes. Appetite. 2010;54:44-51.
11. Spronk I, Kullen C, Burdon C, O'Connor H. Relationship between nutrition knowledge and dietary intake. Br J Nutr. 2014;111:1713-1726.
12. Contento IR, Randell JS, Basch CE. Review and analysis of evaluation measures used in nutrition education intervention research. J Nutr Educ Behav. 2002;34:2-25.
13. Parmenter K, Wardle J. Development of a general nutrition knowledge questionnaire for adults. Eur J Clin Nutr. 1999;53:298-308.
14. Obayashi S, Bianchi LJ, Song WO. Reliability and validity of nutrition knowledge, social-psychological factors, and food label use scales from the 1995 Diet and Health Knowledge Survey. J Nutr Educ Behav. 2003;35:83-91.
15. Cantor D, Covell J, Davis T, Park I, Rizzo L. Health Information National Trends Survey 2005 (HINTS 2005): Final Report. Bethesda, MD: National Cancer Institute; 2005.
16. Food and Health Survey: Consumer Attitudes Toward Food, Nutrition, and Health. Washington, DC: International Food Information Council Foundation; 2008.
17. US Department of Health and Human Services, US Department of Agriculture. Dietary Guidelines for Americans, 2005. 6th ed. Washington, DC: US Dept of Health and Human Services, US Dept of Agriculture; 2005.
18. US Department of Agriculture. MyPyramid. html. Accessed August 30, 2014.
19. Willis G. Cognitive interviewing: a "how to" guide. http://appliedresearch.cancer.gov/archive/cognitive/interview.pdf. Accessed August 30, 2014.
20. Link MW, Battaglia MP, Frankel MR, Osborn L, Mokdad AH. A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opin Q. 2008;72:6-27.
21. Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. Hoboken, NJ: Wiley & Sons; 2009.
22. Parmenter K, Wardle J. Evaluation and design of nutrition knowledge measures. J Nutr Educ Behav. 2000;32:269-277.
23. Bland JM, Altman DG. Statistics notes: Cronbach's alpha. BMJ. 1997;314:572.
24. Hendrie GA, Cox DN, Coveney J. Validation of the General Nutrition Knowledge Questionnaire in an Australian community sample. Nutr Diet. 2008;65:72-77.
25. US Department of Agriculture, US Department of Health and Human Services. Dietary Guidelines for Americans, 2010. 7th ed. Washington, DC: US Dept of Agriculture and US Dept of Health and Human Services; 2010.
26. US Department of Agriculture. MyPlate. Accessed August 30, 2014.
