Patient. DOI 10.1007/s40271-014-0084-x

LEADING ARTICLE

Improving the Validity of Stated-Preference Data in Health Research: The Potential of the Time-to-Think Approach

Semra Ozdemir

© Springer International Publishing Switzerland 2014

Abstract The objective of this article was to discuss potential benefits and drawbacks of using a time-to-think (TTT) approach in healthcare research. Implementing a TTT approach in a stated-preference survey study gives respondents the opportunity to reflect on their options before answering preference-elicitation questions. This article offers an evaluation of circumstances that are suited for implementing this approach, and highlights several remaining questions and problems that should be explored in future research.

Key Points for Decision Makers

• The time-to-think (TTT) approach, based on giving respondents time to reflect on their preferences, has been shown in the literature to reduce willingness-to-pay estimates and increase data validity.

• The potential drawbacks of the approach include high survey attrition rates, as respondents drop out during the TTT period, and the possibility of strategic behavior, as participants attempt to influence study findings.

• Implementing a TTT approach is most beneficial for studies of new products or topics with which respondents have limited familiarity, studies involving complex medical decisions, studies of topics involving group or joint decision making, and study designs with a higher chance of inherent bias.

S. Ozdemir (✉)
Health Services and Systems Research and Lien Center for Palliative Care, Duke-NUS Graduate Medical School, 8 College Road, Singapore 169857, Singapore
e-mail: [email protected]

1 Introduction

Stated-preference (SP) survey methods have been used extensively in health research applications to understand the preferences of specific patient and physician groups, or healthcare users and providers in general [1–5]. SP methods are traditionally used for quantifying preferences or demand for goods and services that are not exchanged in well-organized markets. As part of the process, respondents state their preferences for hypothetical products or services, which may involve hypothetical out-of-pocket payments. This hypothetical nature of SP scenarios has led these methods to be criticized in the literature [6, 7]. Several methods have been developed to overcome or reduce the bias that may result from this condition, including 'cheap talk' (which attempts to engage respondents in the research problem, and to motivate them to devote more effort and attention than they otherwise would) [8, 9], asking follow-up certainty questions [10, 11], and giving respondents time to reflect on their preferences, also known as a time-to-think (TTT) approach [12].

The objective of this article was to discuss the potential benefits and drawbacks of using a TTT approach in SP healthcare studies. I discuss the TTT approach as it applies to two specific SP methods: contingent valuation (CV) and discrete choice experiments (DCEs). The CV method was developed and widely used by environmental economists prior to its use in health applications [13], while the DCE method has been widely used to understand consumer preferences in marketing, environmental, and transportation research, as well as in health applications [3]. The CV method usually focuses on a single commodity or service, and the focus of the method is to vary hypothetical prices to elicit willingness-to-pay (WTP). The DCE method focuses on multi-attribute goods or services, where price, when included, is one of several attributes that define the product (see Tables 1 and 2 for examples of DCE and CV questions, respectively).
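To make the DCE logic concrete, the following minimal sketch computes the logit probability that a respondent chooses one multi-attribute treatment profile over another under a linear-additive utility model, the workhorse specification behind DCE analysis. The part-worth values and attribute names are invented for illustration only; they are not estimates from any study cited here.

```python
import math

# Hypothetical part-worth utilities, for illustration only; in a real DCE
# these would be estimated from choice data (e.g., by conditional logit).
PART_WORTHS = {
    "symptom_severity": {"mild": 1.2, "moderate": 0.0},
    "flare_free_years": 0.4,      # utility per year between flare-ups
    "mortality_risk_pct": -0.9,   # utility per percentage point of 10-year risk
}

def utility(profile):
    """Linear-additive utility of one treatment profile."""
    pw = PART_WORTHS
    return (pw["symptom_severity"][profile["symptoms"]]
            + pw["flare_free_years"] * profile["years_between_flares"]
            + pw["mortality_risk_pct"] * profile["mortality_risk_pct"])

def choice_prob_a(a, b):
    """Logit probability of choosing A over B in a forced-choice pair."""
    ua, ub = utility(a), utility(b)
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

# Two profiles loosely patterned on the Table 1 example
treatment_a = {"symptoms": "moderate", "years_between_flares": 2.0,
               "mortality_risk_pct": 2.0}
treatment_b = {"symptoms": "mild", "years_between_flares": 0.5,
               "mortality_risk_pct": 0.0}

p = choice_prob_a(treatment_a, treatment_b)
```

The point of the sketch is that, unlike CV, no single price coefficient drives the answer: the choice probability trades off all attributes at once, which is why DCE questions can be cognitively demanding.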

2 Potential Problems with Stated-Preference Methods

2.1 Hypothetical Bias

The difference between what people say they are willing to pay in a survey and what they actually would pay using their own money in a laboratory or field experiment is known as hypothetical bias [14]. Since SP studies obtain data from evaluations of hypothetical scenarios, there is a chance that some respondents answer the questions without sufficient consideration of what they would do in a real-life setting. In particular, the lack of a binding budget constraint can cause some respondents to unrealistically overstate the amount of money they would be willing to pay and/or could afford [15–19]. Although the classic definition of hypothetical bias in CV studies focuses on the hypothetical nature of the payment commitment, in DCE studies attributes other than

Table 1 Example DCE question*

| Attribute | Treatment A | Treatment B |
|---|---|---|
| Severity of daily Crohn's symptoms | Moderate: moderate pain on most days or severe pain on some days; about 8 or more diarrhea stools per day; generally feel poorly; considerable problems with work and leisure activities | Mild: mild pain on most days; about 3 diarrhea stools per day; generally feel below par; some problems with work and leisure activities |
| Effect on serious complications | Prevents all serious complications | Reduces some of the serious complications |
| Time between flare-ups | 2 years | 6 months |
| Treatment requires taking oral steroids | Yes | |
| Chance of dying from a serious infection within 10 years | 20 patients out of 1,000 (2%) would die | None would die |
| Chance of dying or severe disability from PML within 10 years | 5 patients out of 1,000 (0.5%) would die or have severe disability | None would die or have severe disability |
| Chance of dying from lymphoma within 10 years | | None would die |

Which treatment would you choose? (Treatment A / Treatment B)

* Reproduced from Johnson FR, Ozdemir S, Mansfield C, Hass S, Miller DW, Siegel C, Sands BE. Crohn's disease patients' benefit-risk preferences: serious adverse event risks versus treatment efficacy. Gastroenterology. 2007;133(3):769–79.

Table 2 Example CV question*

Suppose that the government will not supply the new vaccine for free. Those who want a vaccine would have to pay a fixed price for it. Everyone would pay the same price. Now I'd like to know whether you would buy the vaccine if it were available at a specified price. Some people say they cannot afford the price of the vaccine or that they are actually not at risk of getting this disease. Other people say that they would buy the vaccine because the protection is really worth it to them. In other studies about vaccines, we have found that people sometimes say they want to buy the vaccine. They think: "I would really like as much protection from this disease as possible." However, they may forget about other things they need to spend their money on. Please try to think carefully about what you would actually do if you had to spend your own money. There are no right or wrong answers. We really want to know what you would do. When you give your answer about whether you would or would not buy the vaccine, please consider the following: your and your family's income and economic status compared with the price of the vaccine, and your risk of getting cholera. Apart from the vaccine, remember that we still have other ways to treat cholera, such as oral rehydration solution. Also, remember that the benefit of the vaccine in preventing cholera is [50% effective for 3 years]. Again, the cholera vaccine cannot be used by children under 1 year and pregnant women.

……………..

Suppose that this cholera vaccine costs (Tk.10, Tk.25, Tk.50, Tk.75, Tk.300, Tk.600) for the two doses needed for one person. Would you buy this vaccine for yourself? (Spontaneous response; one response permitted)

(1) _______ Yes
(2) _______ No
(-98) ______ Don't know/not sure

What is the main reason that you would buy the vaccine? (Do not read choices; record only the most important reason)

(1) _____ Vaccine is useful for me because it is good for prevention and safety
(2) _______ Price is reasonable, can afford easily
(3) _______ I think I have a chance of getting cholera
(4) _______ Cholera is a dangerous disease
(-95) _____ Other, specify
(-98) ______ Don't know/not sure

[If the answer is YES‡] Are you certain of your answer that you would purchase the vaccine for yourself if the price of the vaccine were (Tk.10, Tk.25, Tk.50, Tk.75, Tk.300) for the two doses needed for one person? (Read all responses; one response permitted.)

(1) ______ Very certain of my answer
(2) ______ Somewhat certain
(3) ______ Not certain; unsure

* Only parts of the CV question are reported here due to limited space. Reproduced from Islam Z, Maskery B, Nyamete A, Horowitz MS, Yunus M, Whittington D. Private demand for cholera vaccines in rural Matlab, Bangladesh. Health Policy. 2008;85(2):184–95. doi:10.1016/j.healthpol.2007.07.009.

‡ Added by the author of this paper, as the skip pattern of the original questionnaire is not presented here.

cost could also be affected by hypothetical bias [8]. Respondents may over- or under-estimate their preferences for attributes as they neglect or undervalue the opportunity cost of their commitment. For example, when patients are asked about their willingness to accept side effects or serious adverse event risks in exchange for therapeutic benefits, they may overstate the level of risk they are willing to accept in a survey relative to what they would accept in real life, or they may overreact to very small risks that they would ignore in real-life decision making [20]. It should be noted that, unlike the cost attribute, the direction of the bias for other attributes is not always clear.

While hypothetical bias is a potentially significant problem in all SP surveys, it could be less of a problem in health applications, especially in studies that target private health outcomes, compared with applications that focus on public goods, quasi-public goods, or publicly-provided services [21]. SP surveys in health applications generally target outcomes of a more personal nature than surveys in environmental applications [13], and this may lead to greater engagement on the part of respondents. For example, it is intuitive that cancer patients are likely to be more engaged in a survey about treatment options than general-population respondents are in a survey about protecting wetlands, and that general-population respondents are likely to be more concerned about their health than about environmental goods.

2.2 Enumerator Bias or Yea-Saying Effect

Enumerator or interviewer bias is one of the problems inherent in face-to-face interviews, and is defined as a tendency for some respondents to provide a positive response to please the interviewer [12]. Similar to enumerator bias, the yea-saying effect is a tendency to express agreement when responding to hypothetical questions, regardless of one's actual views [22]. This is a particular problem in typical face-to-face surveys where respondents are aware of the interviewer's anticipation of a positive response. The expected direction of the enumerator bias or yea-saying effect is upwards [12]. In typical CV surveys, where respondents are asked whether they would 'buy' a hypothetical product or service at a given price, a positive response can be seen as an agreeable or desirable response [22]. In DCE surveys, especially when two or more alternatives are offered in each choice set, there is usually no obvious agreeable or desirable response, so respondents may be less susceptible to these biases than in CV surveys [23].

2.3 Focusing Illusion

The focusing illusion is defined as exaggerating the importance of the current focus of one's attention [24]. Encouraging respondents to focus on a particular survey topic can make the topic seem, briefly, very important.
For example, a 20-min survey on the topic of maternal transmission of HIV may convince participants that something needs to be done, and they could experience a fleeting desire to do their part to help children born with HIV. When respondents are then asked to give an answer to a preference-elicitation question related to HIV immediately after reading about the topic, they are likely to experience a focusing illusion and possibly exaggerate their WTP if cost is one of the attributes, or overestimate their commitment to the proposed case. However, after people go back to their daily lives, face other problems or challenges, and have to write checks to pay their bills and utilities, contributing to the campaign against HIV may not seem as

important as before; thus, the focusing illusion can lead to overestimation of WTP or uptake [25].

3 Time-to-Think Approach

SP surveys in health applications can benefit from a TTT approach because they target topics that generally involve consequential or complex decisions related to health outcomes or health care. The TTT approach asks respondents to take a deliberate break from the survey to reflect on a specific complicated question. The objective of the TTT approach is to remind respondents that their responses are important; they should think them over carefully and not feel obliged to provide an answer promptly [26].

A TTT approach requires two sessions, which are implemented over 2 days in face-to-face interviews [27]. On the first day, respondents are introduced to the questionnaire, including the SP preference-elicitation task, and are asked to think about their options. They are asked to provide answers to the SP questions, and any other TTT-related questions, on the second day. Possible follow-up questions include whether participants talked to someone about the survey or looked for more information on the survey topic, and how much time they spent on each of these activities.

It is harder to implement a TTT approach in mail surveys because researchers have no control over when and how respondents answer a mailed questionnaire. On the other hand, web technology gives researchers effective control over the two-stage TTT format. In the first online session, respondents are introduced to the SP scenario and the questions, but are then instructed to reflect on the SP questions and return to the questionnaire later. A common practice to maximize response rates is to specify a maximum time within which respondents must return to the questionnaire, for example 24–72 h [20].

The TTT approach was first implemented in an SP study to measure rural households' willingness to pay for public taps and private connections to improved drinking water systems in Anambra State, Nigeria [26].
Half of the sample was introduced to the SP scenario and given 1 day to think about their responses to the preference-elicitation questions, while the other half were expected to provide immediate WTP estimates but were allowed to revise these estimates after 1–2 days. The Whittington et al. study [26] was followed by a series of other SP studies that implemented a TTT approach through face-to-face interviews in developing countries. Cook and coauthors [27] provided an overview of these studies that compared implementing a TTT approach to not implementing it. The TTT approach has recently been utilized in web-enabled SP surveys [20, 28], but more studies are needed to gain a better understanding of the effect of giving respondents time to think in web-enabled surveys.


3.1 Better Simulation of Real-World Decision Making

In real life, it is common for individuals to take time, or 'sleep on it', before they make a consequential or complicated decision [29]. It generally takes time to acquire information and get advice from experts, family, and/or friends. People also like to reflect when making decisions about unfamiliar health conditions and treatments. A newly diagnosed patient may need time to evaluate the possible benefits and the side effects or risks associated with the available treatments. As observed by Cook et al. [27], individuals may need time to 'discover' their preferences. Giving respondents time to think allows them to engage in processes similar to those in real-life decision making for these types of important decisions.

Furthermore, the shared preferences or decisions of households, in which income and time constraints are shared, may also be of interest to researchers [30]. In the case of health decisions, outcomes can be shared in complicated ways via the time burden of care and the emotional connection of family members to the patient. For example, decisions related to children's health are likely to be a function of the preferences of parents [31, 32]. Interviewing only one of the parents may not represent household preferences [33], and interviewing both parents can be more costly and time-consuming than utilizing a TTT approach.

Previous TTT studies show that the majority of respondents reported utilizing the opportunity to think over the survey task. In a review of studies on WTP for vaccines in four developing countries, Cook et al. [27] showed that respondents reported spending, on average, 37 min thinking about the survey topic, while less than 5 % of respondents reported spending no time thinking about the survey. In terms of shared decision making, 69 % of respondents consulted with their spouse, and 23 % reported consulting with someone outside the household.
In three web-enabled SP studies on government programs that my colleagues and I conducted with representative samples of US adults, respondents reported an average of 36–45 min spent thinking about the survey, and approximately 14–17 % reported discussing the survey with someone else [28]. However, almost 33 % of the respondents in each survey reported not thinking about the survey at all during their TTT period. The difference in the time respondents spent reflecting on these surveys could be explained by the perceived relevance of the study topic. Respondents in the vaccine studies in developing countries might have perceived the topic and the SP questions as more relevant and consequential to themselves than a general US population sample perceived questions about government programs. Although it may not be possible to induce all respondents to take time to think about the survey, analysis of the data, controlling for reported TTT effort, could provide insights into the validity of SP data collected under more realistic decision-making contexts.

3.2 Effect on Willingness-to-Pay Estimates and Survey Validity

Giving respondents time for reflection can mitigate the biases mentioned above and reduce WTP and uptake estimates [27]. When respondents have more time, they can evaluate their budget and expenses more carefully and decide whether they can afford the proposed costs [26]. They have the opportunity to consider the options, and the consequences of each option, more carefully instead of having to provide an immediate answer to please an enumerator [12]. Furthermore, more time can help people view the survey topic and the preference-elicitation question in a way that is more consistent with their fundamental, long-term attitudes and concerns, which reduces the focusing illusion.

The literature shows evidence that giving respondents time to think lowered WTP estimates [34–36]. There is also evidence that time to think helped respondents consider the options and questions more carefully: respondents' certainty about their answers to the preference-elicitation questions increased when they were given time to reflect [27]. In a DCE survey, respondents who were given time to think were less likely to fail internal consistency tests than respondents who were not [36].
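The mechanics of how lower 'yes' rates translate into lower WTP can be illustrated with a nonparametric (Turnbull-style) lower-bound calculation on dichotomous-choice CV data. The acceptance rates below are invented for illustration only, not taken from any of the cited studies; they simply assume a TTT treatment shifts acceptance downward at every bid.

```python
# Hypothetical acceptance rates Pr(yes) at each bid (in Tk.), illustration only.
no_ttt = {10: 0.90, 25: 0.80, 50: 0.60, 75: 0.45, 300: 0.20, 600: 0.05}
ttt    = {10: 0.80, 25: 0.65, 50: 0.45, 75: 0.30, 300: 0.10, 600: 0.02}

def turnbull_lower_bound_wtp(acceptance):
    """Turnbull lower-bound mean WTP from a bid -> Pr(yes) mapping.

    Respondents whose WTP falls between two bids are credited only with
    the lower bid, so the estimate is conservative. Acceptance rates are
    assumed monotonically decreasing in the bid (pool them if not).
    """
    bids = sorted(acceptance)
    surv = [acceptance[b] for b in bids]  # share willing to pay >= each bid
    wtp = 0.0
    for i, b in enumerate(bids):
        upper = surv[i + 1] if i + 1 < len(bids) else 0.0
        wtp += b * (surv[i] - upper)  # probability mass between bid i and i+1
    return wtp

wtp_no_ttt = turnbull_lower_bound_wtp(no_ttt)  # 107.25 with these rates
wtp_ttt = turnbull_lower_bound_wtp(ttt)        # 65.0 with these rates
```

Because every 'yes' retracted at a high bid moves probability mass to a lower bid, even modest reductions in acceptance rates compound into a substantially lower mean WTP, consistent with the direction of the TTT effect reported in [34–36].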

4 Potential Drawbacks of Giving Respondents Time to Think

4.1 Strategic Behavior

If respondents in SP studies answer in a way intended to influence the study findings, they are said to engage in strategic behavior [37]. Giving respondents time to think could allow them the opportunity to devise individual strategies to 'game' the study, increasing the chances of strategic behavior. In addition, respondents may have a chance to talk to other survey respondents and engage in collusion. However, the possibility of being in contact with other respondents is low in most studies by design. A study that plans to utilize time to think should therefore evaluate the opportunities for strategic behavior and minimize its risks. The Whittington et al. [26] study was conducted in small communities where strategic behavior could have emerged; however, the authors found no evidence of strategic behavior attributable to giving respondents time to think.


4.2 Response Rates

One of the problems associated with implementing a TTT approach is possible dropout between the two sessions. Recruiting respondents to participate in a survey is challenging; it is even more challenging to convince them to commit to two sessions. Increased incentive payments can at least partially offset response-rate problems. In face-to-face surveys, dropout can be minimized if enumerators can track the respondents between sessions. In particular, tracking may be possible in studies conducted in small communities or in studies where respondents are intercepted at a location, such as a hospital. Dropping out was not a problem in the studies reviewed by Cook et al. [27], which were all conducted in small communities in developing countries through face-to-face interviews. However, dropout rates can be substantial in a web-enabled implementation of the TTT approach. In the three SP surveys my colleagues and I conducted in the US, the dropout rate in the TTT period was between 17 and 20 % [28]. When the dropout rate is not negligible, it is advisable to compare socioeconomic characteristics and other available statistics between subjects who returned for the second session of the survey and those who did not. In all three of the SP studies we conducted, the proportion of female respondents was higher among those who dropped out than among those who returned after the TTT period (Table 3). Household size was larger, the ratio of non-White to White participants was higher, and the mean age was lower for respondents who dropped out than for those who returned, in two of the three surveys. These findings suggest a time constraint on younger female respondents with larger families in our study.

4.3 Additional Costs

Implementing a TTT protocol requires a greater time commitment from respondents and, in the case of face-to-face interviews, from enumerators. Enumerators will need to be hired for twice as many days as for a study without a TTT protocol. It is also common for online panel companies to charge more for studies requiring panel members to commit to two sessions. Respondents may need extra incentives to return for a second session. Also, because researchers need a certain sample size to ensure statistical power, they may need to target a larger sample at the beginning of the study to account for possible dropout between the two sessions.

Table 3 Socioeconomic characteristics of subjects who returned after the TTT period and those who did not

| Category | Subjects returned | Subjects dropped out | p-Value |
|---|---|---|---|
| Male (%) | | | |
| Survey 1a | 54 | 49 | 0.054 |
| Survey 2 | 51 | 42 | <0.001 |
| Survey 3 | 52 | 41 | 0.002 |
| Age [years; mean (SD)] | | | |
| Survey 1 | 53 (16) | 50 (16) | 0.008 |
| Survey 2 | 52 (16) | 49 (17) | 0.010 |
| Survey 3 | 52 (16) | 50 (16) | 0.154 |
| White (%) | | | |
| Survey 1 | 78 | 70 | 0.002 |
| Survey 2 | 76 | 71 | 0.032 |
| Survey 3 | 77 | 76 | 0.761 |
| Household income [$; mean (SD)] | | | |
| Survey 1 | 69K (49K) | 66K (49K) | 0.272 |
| Survey 2 | 72K (49K) | 71K (52K) | 0.620 |
| Survey 3 | 71K (48K) | 69K (51K) | 0.621 |
| Household size [mean (SD)] | | | |
| Survey 1 | 2.6 (1.4) | 2.8 (1.3) | 0.011 |
| Survey 2 | 2.7 (1.5) | 2.7 (1.3) | 0.304 |
| Survey 3 | 2.6 (1.3) | 2.8 (1.5) | 0.018 |

TTT time-to-think, SD standard deviation, CV contingent valuation, DCE discrete choice experiment
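The kind of returner-versus-dropout comparison reported in Table 3 can be sketched with a standard two-proportion z-test, here for the share of male respondents. The group sizes below are assumptions for illustration (loosely derived from the reported 1,712-respondent sample and a 17 % dropout rate); the resulting p-value is illustrative and does not reproduce the table, which may rest on different tests and exact counts.

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for equality of two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Assumed group sizes: ~1,420 returned and ~292 dropped out of 1,712
p_val = two_proportion_z_test(0.54, 1420, 0.49, 292)
```

For the continuous characteristics (age, income, household size), the analogous comparison would be a two-sample t-test on the group means and standard deviations.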

5 Study Design Considerations

An important issue that has not been discussed in the literature is whether to show the actual valuation questions in the first session. In a CV study, the question is whether to show the exact bid (cost) amount to respondents before giving them time to think. Showing the exact bid gives respondents the opportunity to evaluate exactly how much they are being asked to spend and what that extra cost would mean for their household budget. However, respondents in a CV study are assigned different bid levels from a specified list. If respondents can communicate with one another and learn that their costs differ, disclosing the exact bid could encourage strategic behavior. For example, respondents who realize that they are asked to pay a higher amount than others may answer 'no' when they otherwise would have answered 'yes'. While such behavior is a theoretical possibility, published CV studies that implemented a TTT approach have shown the exact bid in the valuation questions in the first session and reported observing no strategic behavior [27].


In a DCE study, the question is whether to show the actual trade-off questions in the first session or to show only a sample question before giving respondents time to think. Showing the exact trade-off questions could potentially lead to strategic behavior of the kind discussed for CV studies. In addition, DCE respondents may compare alternatives across choice sets if they can see all the questions at the same time, which would influence their decisions. Cook et al. [36] encountered this problem when they left the trade-off questions with the respondents overnight. There was evidence that some respondents compared the alternatives in one choice set with the alternatives in another. Each choice set in this study offered two alternative vaccines and a status quo ('no vaccine') option, and respondents tended to choose the status quo option in the choice sets where both vaccine alternatives were less attractive than the vaccine alternatives in other choice sets. For example, if vaccine A in question 3 was more attractive than both vaccines A and B in question 5, then respondents picked the status quo option in question 5. As a result, the authors found that the rate of choosing the opt-out alternative was much higher when respondents were given time to think than when they were not [36]. In web-enabled surveys, it is possible to either allow or disallow respondents to return to the survey and look at the questions during the TTT period. For example, in Ozdemir et al. [28], we showed the trade-off questions to the respondents before giving them time to think, and asked them to return to the survey at least 1 day later, but no more than 10 days later.
We were not worried about the possibility of strategic behavior because (1) the question format was a forced choice between two alternatives (A or B) without a status quo or opt-out alternative; and (2) respondents did not have a copy of the SP questions unless they had taken a screenshot of every page before the first session ended, which is very unlikely. By design, it was difficult for respondents to remember the exact questions and compare them, as they saw the questions one at a time. Our aim in giving respondents time to think was not to make them think about particular trade-off questions but to encourage them to think about the relative importance of the particular attributes and levels, and the trade-offs between these attributes.

6 Discussion

The decision to implement a TTT protocol depends on the type and topic of the study, as well as resource constraints. Circumstances that are suitable for implementing a TTT protocol for an SP survey include the following:

• Studies of new products or topics with which respondents have limited familiarity: Giving respondents time to think gives them a greater opportunity to discover their preferences for a product with which they have little or no experience.

• Studies of complex products or topics: In the case of complex decisions, individuals need time to think things over before making a decision. An SP scenario involving many features can be complicated and not easy to evaluate. In particular, answering DCE trade-off questions can be overwhelming when respondents need to weigh the importance of a large number of attributes with varying levels. For example, a study of cancer treatments can involve evaluating trade-offs among important attributes such as quality of life, life expectancy, timing and severity of multiple side effects, and costs of treatment. In such cases, having time to think could help respondents better evaluate the possible alternatives and the trade-offs they are willing to make.

• Studies of topics where a person is likely to engage in group or joint decision making: Under circumstances in which individuals would likely make decisions with the input of their family and friends in real life, giving time to think in a survey, to allow such communication, will more accurately reflect actual preferences. For example, decisions on end-of-life care are mostly made by caregivers in Asian societies [38], and surveying only the patients would not be sufficient to understand decisions related to end-of-life care.

• Study designs with a higher chance of bias: Implementing a TTT protocol would also be a good idea when the effects of hypothetical bias, enumerator bias, and/or the focusing illusion are expected to be large. This includes face-to-face interviews, where enumerator bias is likely to occur, and products that target non-private goods, where hypothetical bias is likely to occur.

• Studies in which response rates are less of a concern: Implementing a TTT protocol would be suitable if the incentives for respondents to return to the second session are high or if it is easy to track and reclaim respondents for the second session. When the chance of respondents dropping out after the first session is high, asking them to complete the survey in one session, without time to think, is a better practice.

My colleagues and I decided to implement a TTT protocol in all three SP surveys we conducted in the US with samples of the general population [28]. All respondents were given time to think before they answered the preference-elicitation questions; none of these studies was a controlled experiment on time to think, nor was time to think their focus. The reasons we decided to implement a TTT protocol were that (1) the surveys focused on publicly-provided goods, so hypothetical bias could be a problem; (2) respondents could be familiar with the commodities the surveys targeted but may not have had experience with them in their daily lives; and (3) we were interested in household, not individual, demand. The additional costs of implementing the TTT approach, relative to not implementing it, and the expected dropout rate were not negligible; however, we decided that the benefits of giving time to think would outweigh the costs.

The current findings in the literature show evidence that a TTT treatment lowers the average WTP compared with estimates from respondents who had no time to think [34–36]. The lowering of WTP estimates is likely due to the reduction in the biases described above. However, it is hard to identify and separate the effect of a single bias from the effects of other biases. This is an area for future research. Another important outstanding issue is to understand what happens during the TTT period, and how that affects the way people answer the SP questions. Presently, we can only speculate about how people use the time between sessions and how it influences their answers to preference-elicitation questions. The TTT literature would benefit from a theoretical conceptual framework that explains how giving time to think could affect the specification of utility functions.

Acknowledgments I am very grateful to F. Reed Johnson and Eric Finkelstein for their comments on earlier drafts of this article. I also thank Marcel Bilger for his feedback on the structure of the article and M. Saif Farooqui for his assistance with preparing the article.

Conflict of interest Semra Ozdemir has no conflict of interest and has received no funding for this study.

References

1. Ryan M, Gerard K, Amaya-Amaya M. Using discrete choice experiments to value health and health care. Springer; 2008.
2. Johnson FR, et al. Are gastroenterologists less tolerant of treatment risks than patients? Benefit-risk preferences in Crohn's disease management. J Manag Care Pharm. 2010;16(8):616–28.
3. Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making. Pharmacoeconomics. 2008;26(8):661–77.
4. Diener A, O'Brien B, Gafni A. Health care contingent valuation studies: a review and classification of the literature. Health Econ. 1998;7(4):313–26.
5. Krupnick A, et al. Age, health and the willingness to pay for mortality risk reductions: a contingent valuation survey of Ontario residents. J Risk Uncertain. 2002;24(2):161–86.
6. Hausman J. Contingent valuation: from dubious to hopeless. J Econ Perspect. 2012;26(4):43–56.
7. Diamond PA, Hausman JA. Contingent valuation: is some number better than no number? J Econ Perspect. 1994;8(4):45–64.
8. Özdemir S, Johnson FR, Hauber AB. Hypothetical bias, cheap talk, and stated willingness to pay for health care. J Health Econ. 2009;28(4):894–901.
9. Cummings RG, Taylor LO. Unbiased value estimates for environmental goods: a cheap talk design for the contingent valuation method. Am Econ Rev. 1999;89(3):649–65.
10. Champ PA, et al. Using donation mechanisms to value nonuse benefits from public goods. J Environ Econ Manag. 1997;33(2):151–62.
11. Vossler CA, Kerkvliet J. A criterion validity test of the contingent valuation method: comparing hypothetical and actual voting behavior for a public referendum. J Environ Econ Manag. 2003;45(3):631–49.
12. Whittington D. What have we learned from 20 years of stated preference research in less-developed countries? Annu Rev Resour Econ. 2010;2(1):209–36.
13. Hanley N, Ryan M, Wright R. Estimating the monetary value of health care: lessons from environmental economics. Health Econ. 2003;12(1):3–16.
14. Loomis J. What's to know about hypothetical bias in stated preference valuation studies? J Econ Surv. 2011;25(2):363–70.
15. Harrison GW, Rutström EE. Experimental evidence on the existence of hypothetical bias in value elicitation methods. In: Plott CR, Smith VL, editors. Handbook of experimental economics results. Elsevier; 2008. p. 752–67.
16. Cummings RG, Harrison GW, Rutström EE. Homegrown values and hypothetical surveys: is the dichotomous choice approach incentive-compatible? Am Econ Rev. 1995;85(1):260–6.
17. Jacquemet N, et al. Do people always pay less than they say? Testbed laboratory experiments with IV and HG values. J Public Econ Theory. 2011;13(5):857–82.
18. List J, Gallet C. What experimental protocol influence disparities between actual and hypothetical stated values? Environ Resour Econ. 2001;20(3):241–54.
19. Cummings RG, et al. Are hypothetical referenda incentive compatible? J Polit Econ. 1997;105(3):609–21.
20. Johnson FR, et al. No time-to-think about benefit-risk preferences: an experiment to test the validity of patients' stated preferences. Ithaca, NY: American Society of Health Economics; 2010.
21. Johnston RJ. Is hypothetical bias universal? Validating contingent valuation responses using a binding public referendum. J Environ Econ Manag. 2006;52(1):469–81.
22. Blamey RK, Bennett JW, Morrison MD. Yea-saying in contingent valuation surveys. Land Econ. 1999;75(1):126–41.
23. Viney R, Lancsar E, Louviere J. Discrete choice experiments to measure consumer preferences for health and healthcare. Expert Rev Pharmacoecon Outcomes Res. 2002;2(4):319–26.
24. Kahneman D, Sugden R. Experienced utility as a standard of policy evaluation. Environ Resour Econ. 2005;32(1):161–81.
25. Bateman IJ, et al. Learning design contingent valuation (LDCV): NOAA guidelines, preference learning and coherent arbitrariness. J Environ Econ Manag. 2008;55(2):127–41.
26. Whittington D, et al. Giving respondents time to think in contingent valuation studies: a developing country application. J Environ Econ Manag. 1992;22(3):205–25.
27. Cook J, et al. Giving stated preference respondents "time to think": results from four countries. Environ Resour Econ. 2012;51(4):473–96.
28. Ozdemir S. Measuring the economic value of government programs: an application to early-childhood interventions. Chapel Hill, NC: Environmental Sciences and Engineering, University of North Carolina at Chapel Hill; 2013.
29. Bos MW, Dijksterhuis A, van Baaren RB. The benefits of "sleeping on things": unconscious thought leads to automatic weighting. J Consum Psychol. 2011;21(1):4–8.
30. Dosman D, Adamowicz W. Combining stated and revealed preference data to construct an empirical examination of intrahousehold bargaining. Rev Econ Household. 2006;4(1):15–34.
31. Stewart JL, Pyke-Grimm KA, Kelly KP. Parental treatment decision making in pediatric oncology. Semin Oncol Nurs. 2005;21(2):89–97.
32. Pyke-Grimm KA, et al. Parents of children with cancer: factors influencing their treatment decision making roles. J Pediatr Nurs. 2006;21(5):350–61.
33. Whittington D, et al. Household demand for preventive HIV/AIDS vaccines in Thailand: do husbands' and wives' preferences differ? Value Health. 2008;11(5):965–74.
34. Islam Z, et al. Private demand for cholera vaccines in rural Matlab, Bangladesh. Health Policy. 2008;85(2):184–95.
35. Lucas ME, et al. Private demand for cholera vaccines in Beira, Mozambique. Vaccine. 2007;25(14):2599–609.
36. Cook J, et al. Reliability of stated preferences for cholera and typhoid vaccines with time to think in Hue, Vietnam. Econ Inq. 2007;45(1):100–14.
37. Carson RT, Flores NE, Meade NF. Contingent valuation: controversies and evidence. Environ Resour Econ. 2001;19(2):173–210.
38. Bowman KW, Singer PA. Chinese seniors' perspectives on end-of-life decisions. Soc Sci Med. 2001;53(4):455–64.
