Process Evaluation and Measurement

Evaluation & the Health Professions
2015, Vol. 38(3) 382-403
© The Author(s) 2013
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0163278713513586
ehp.sagepub.com

Adding Postal Follow-Up to a Web-Based Survey of Primary Care and Gastroenterology Clinic Physician Chiefs Improved Response Rates but not Response Quality or Representativeness

Melissa R. Partin1,2, Adam A. Powell1,2, Diana J. Burgess1,2, David A. Haggstrom3, Amy A. Gravely1, Krysten Halek1, Ann Bangerter1, Aasma Shaukat1,2, and David B. Nelson1,2

1 Minneapolis VA Healthcare System, Minneapolis, MN, USA
2 University of Minnesota Medical School, Minneapolis, MN, USA
3 Indianapolis VA Medical Center, Indianapolis, IN, USA

Corresponding Author: Melissa R. Partin, Minneapolis VA Healthcare System, 1 Veterans Drive, Minneapolis, MN 55417, USA. Email: [email protected]

Abstract



This study assessed whether postal follow-up to a web-based physician survey improves response rates, response quality, and representativeness. We recruited primary care and gastroenterology chiefs at 125 Veterans Affairs medical facilities to complete a 10-min web-based survey on colorectal cancer screening and diagnostic practices in 2010. We compared response rates, response errors, and representativeness in the primary care and gastroenterology samples before and after adding postal follow-up. Adding postal follow-up increased response rates by 20–25 percentage points, increases markedly greater than those predicted from a third e-mail reminder. In the gastroenterology sample, the mean number of response errors made by web responders (0.25) was significantly smaller than the mean number made by postal responders (2.18), and web responders provided significantly longer responses to open-ended questions. There were no significant differences in these outcomes in the primary care sample. Adequate representativeness was achieved before postal follow-up in both samples, as indicated by the lack of significant differences between web responders and the recruitment population on facility characteristics. We conclude that adding postal follow-up to this web-based physician leader survey improved response rates but not response quality or representativeness.

Keywords: physicians, surveys, survey methods, respondents, data quality, organizational structure, colorectal neoplasms

Introduction

Previous research has documented that response rates for physician surveys are typically lower than response rates for general population surveys (Asch, Jedrziewski, & Christakis, 1997; Cummings, Savitz, & Konrad, 2001). The highest response rates for physician surveys have been derived from postal and telephone administration methods (McLeod, Klabunde, Willis, & Stark, 2013; VanGeest, Johnson, & Welch, 2007). In both physician and general population samples, however, web-based surveys are increasingly employed due to several advantages they hold for both researchers and respondents over postal and telephone surveys. For researchers, advantages of web surveys include lower cost (Cobanoglu, 2001; Couper, 2000; Dillman, 2000; Dykema, Jones, Piche, & Stevenson, 2013; Raziano, Jayadevappa, Valenzula, Weiner, & Lavizzo-Mourey, 2001; Schleyer & Forrest, 2000), faster response time (Bates, 2001; Beebe, Locke, Barnes, Davern, & Anderson, 2007; Dykema et al., 2013; Raziano et al., 2001; Yun & Trumbo, 2000), immediate storage of data in electronic format (Dykema et al., 2013; Schleyer & Forrest, 2000), and the ability to employ programming to minimize response patterns that can compromise data quality (such as response fatigue, item nonresponse, improper adherence to skip patterns, ambiguous or unclear response selections, and data entry errors; Birnbaum, 2004; Dykema et al., 2013).

For physician respondents, these latter two advantages may help address common barriers to participation such as the time and effort required to complete the survey (Klabunde, Willis, & Casalino, 2013) and concerns about confidentiality (Klabunde et al., 2013). Specifically, the availability of design features such as automated skip patterns and drop-down menus can make the response task simpler and less time consuming for participants than in-person, mail, or telephone surveys, and the lack of an interviewer or need to send responses through the mail may reduce concerns about confidentiality. Prior studies suggest, however, that web-based physician surveys may produce lower response rates than postal surveys (Akl, Maroun, Klocke, Montori, & Schunemann, 2005; Leece et al., 2004; Losch, Thompson, & Lutz, 2004; McMahon et al., 2003; Raziano et al., 2001). The response rate disadvantage associated with web-based surveys may be particularly problematic when surveying physicians, given their already lower response rates relative to the general population.

Maximizing response rates is particularly critical for studies of organizational behavior targeting physician leaders, as statistical power in such studies is highly dependent on the number of organizations represented in the sample. Because the number of eligible organizations in such studies is typically small, the loss of a significant fraction of organizations to nonparticipation can seriously compromise the external and internal validity of study findings. Although achieving a high response rate may be necessary for accruing an adequate sample size, response rates are not in themselves sufficient or reliable indicators of survey quality (Johnson & Wislar, 2012). Indeed, at least two prior studies have documented that higher response rates are not associated with lower nonresponse bias in physician surveys (McFarlane, Olmsted, Murphy, & Hill, 2007; Thomsen, 2000). Therefore, assessments of survey quality must consider not only response rates but also response bias or representativeness. While prior studies of general population surveys suggest web-based administration produces less representative samples than postal administration (Akl et al., 2005; Leece et al., 2004; Losch et al., 2004; McMahon et al., 2003; Raziano et al., 2001), response bias is less pronounced in web-based surveys of physicians (Beebe et al., 2007; Kellerman & Herold, 2001; Scott et al., 2011).


Mixed-mode survey administration (whereby more than one mode of survey administration is offered to study participants) has been employed in several prior survey research studies as a strategy for maximizing survey quality while controlling costs of survey administration (Beebe, Davern, McAlpine, Call, & Rockwood, 2005; Beebe et al., 2007; de Leeuw, 2005; Dillman et al., 2009; Dillman & Smyth, 2007; Dykema et al., 2013; Griffin & Obenski, 2002; Scott et al., 2011). Mixing web and postal administration strategies may represent an opportunity to harness the advantages of web survey administration while still benefiting from the higher response rates typical of postal surveys (Beebe et al., 2007; Dillman, 2000). However, one concern with mixed-mode survey administration is the potential for mode effects, whereby the quality of the data collected varies depending on what mode is used (de Leeuw, 2005, 2010; Dillman, 2000). For survey quality measures such as item nonresponse and the amount of detail provided in open-ended questions, one might expect web administration to perform better due to the availability of interactive features (such as automatic prompts for responses to any items skipped) and conveniences (such as access to a keyboard, which can make it easier to provide detailed responses to open-ended items).

Few prior physician survey research studies have targeted physician leaders or health administrators as responders. Indeed, a recent review of health care provider survey studies published between 2000 and 2010 identified only four prior studies targeting health administrators (McLeod et al., 2013). We identified four prior studies that specifically targeted physician leaders (Duke, 2007; Nelson et al., 2006; Robinson et al., 2009; Seeff et al., 2004). Only one of these studies involved web administration of the survey (Duke, 2007), and this study, which achieved a 19% response rate, did not examine response quality or representativeness. None of the surveys targeting physician (as opposed to nurse) leaders in these prior studies achieved a response rate greater than 32% without use of incentives (Duke, 2007; Nelson et al., 2006; Robinson et al., 2009; Seeff et al., 2004). Those employing incentives achieved response rates of 60% (Nelson et al., 2006) to 74% (Seeff et al., 2004). Additional research is needed to identify effective approaches to improving response rates, data quality, and representativeness in surveys targeting physician leaders.

The current study adds to the literature on physician survey methods by examining whether adding postal follow-up to a web-based survey about colorectal cancer screening and diagnostic practices, administered to a sample of primary care and gastroenterology physician leaders without an incentive, improves response rates, response quality, and representativeness.


Method

Population

The survey was administered to the physician chiefs of primary care and gastroenterology at Veterans Health Administration (VHA) medical facilities. The largest integrated health care system in the United States, the VHA includes 152 medical facilities (U.S. Department of Veterans Affairs, 2013). The primary purpose of the survey was to collect information on primary care and gastroenterology clinic structures and processes related to timely diagnostic follow-up of positive fecal occult blood tests (FOBTs) conducted for colorectal cancer screening purposes. To ensure surveyed facilities were engaged in some FOBT-based colorectal cancer screening, the survey sample was restricted to VHA facilities that conducted at least 1,400 FOBTs in 2009 (n = 125).

Survey Recruitment and Administration

The survey was conducted between August and December 2010. Survey recruitment involved the following steps (summarized in Figure 1). An initial e-mail message was sent to all regional VHA Directors from the VHA Office of the Deputy Under Secretary for Health for Operations and Management, requesting that the Directors forward an attached approval request e-mail to their facility directors and chiefs of staff (COS). The forwarded e-mail asked the COS to indicate whether they approved or declined their facility's participation in the survey. They were instructed to communicate their preference by clicking on one of two links in the e-mail to the survey application (one indicating approval to have their facility participate and another indicating they preferred not to have their facility participate). Once connected to the survey application, the COS were asked to verify their facility and provide the contact information for their chiefs of primary care and gastroenterology, if applicable. A reminder e-mail was sent after 1 week to any COS not responding to the initial COS e-mail. Two weeks after the initial COS e-mail, the project coordinator attempted to reach by phone all COS who had not responded, to determine whether they supported their facility participating in the survey. At that time, the coordinator obtained their approval or refusal, as well as the names of the primary care and gastroenterology chiefs (if approval was received), over the phone. As a final contact, all nonresponding COS were mailed a letter asking them to indicate their site's participation preference, provide the contact information for their chiefs of primary care and gastroenterology (if applicable), and return their response in the self-addressed, stamped envelope provided.


Figure 1. Survey recruitment steps.
- All VA facilities that conducted at least 1,400 FOBTs in 2009 identified (N = 125)
- Chief of Staff (COS) approval request e-mail; total COS responses: approval 107 (85.6%)
- Survey recruitment e-mail and web-based survey link sent to 107 primary care (PC) and 101 gastroenterology (GI) chiefs
- Surveys completed before the first reminder: PC 35 (32.7%), GI 35 (34.7%)
- PC/GI e-mail reminder #1 to nonresponders; surveys completed: PC 15 (46.7%), GI 12 (46.5%)
- PC/GI e-mail reminder #2 to nonresponders; surveys completed: PC 7 (53.3%), GI 10 (56.4%)
- PC/GI postal reminder and self-administered questionnaire to nonresponders; surveys completed: PC 21 (17 postal, 4 web) (72.9%), GI 25 (22 postal, 3 web) (81.2%)
- Total surveys received: PC 78 (72.9%), GI 82 (81.2%)
Note. Numbers of completed surveys at each recruitment stage are non-cumulative; percentages represent cumulative response rates at each stage.

Once approval was received from the local COS, the study team sent recruitment e-mails to primary care and gastroenterology chiefs, which were cosigned by medical leadership.


The recruitment e-mails included a link to the survey website and informed potential participants that their COS had approved their facility's participation in the survey. Reminder e-mails were sent to all primary care and gastroenterology chiefs who had not yet responded 1 and 2 weeks after the initial recruitment e-mail. Approximately 3–5 weeks after the initial recruitment e-mail, all nonresponding chiefs were sent a pen and paper version of the survey by standard U.S. Postal Service mail that could be completed and returned in the self-addressed, stamped envelope provided. The survey materials also included the URL addresses for the online versions of the survey, and potential respondents were instructed that they could still complete the survey online if they preferred. Per VHA policy, participants were not offered an incentive for participating in the survey.

Content and Features to Reduce Response Errors

The survey asked about organizational structures and clinic processes related to colorectal cancer screening and diagnostic follow-up present at the respondent's medical center. Both surveys were pretested on two primary care and two gastroenterology physicians, as well as members of the research team, before finalizing. Features used to minimize response errors on the web survey included: ensuring the survey took less than 10 min to complete, providing instructions on the number of responses requested for each question, automating skip patterns, and providing a prompt about any missing responses (which would display once, before allowing the responder to advance to the next page). Additionally, menus, command buttons, check boxes, radio buttons, and drop-down lists were employed to reduce the need for typing and reduce errors associated with typing skill level. In the postal survey, features used to minimize response errors included: ensuring the survey took less than 10 min to complete, providing instructions on skip patterns and the number of responses requested for each question, and employing check box formatting.

Protection of Human Subjects

The Institutional Review Boards at the Minneapolis and Boston Veterans Affairs Medical Centers approved the study protocol.

Dependent Measures

Response Rates. Survey response rates for the primary care and gastroenterology samples were estimated using the American Association for Public Opinion Research Response Rate 1 calculation method, which divides the number of completed surveys by the total number of individuals in the recruitment sample (The American Association for Public Opinion Research, 2011).


Data Quality. The data quality measures examined included (1) the mean number of missing items for questions asked of all respondents (possible range 0–12 for the primary care sample; 0–22 for the gastroenterology sample); (2) the mean number of items for which more than one response was indicated for a "select one" response instruction, which we refer to hereafter as "response errors" (possible range 0–6 for both samples); (3) the mean number of total errors, calculated by summing the above two types of errors (possible range 0–18 for the primary care sample; 0–28 for the gastroenterology sample); and (4) the mean character count for an open-ended question at the end of the survey, which asked "Do you have any other thoughts or concerns about the follow-up of positive FOBT results at your facility you would like to share with us?"
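To make these definitions concrete, the sketch below shows one way the Response Rate 1 and the error counts could be computed. It assumes a hypothetical flat representation of the returned surveys (a dictionary of answers per respondent and lists naming the relevant items) and is not the authors' scoring code.

```python
# Minimal sketch (not the authors' code): AAPOR Response Rate 1 and the
# data quality counts described above, for a hypothetical flat data layout.

def response_rate_1(n_completed: int, n_recruited: int) -> float:
    """AAPOR RR1: completed surveys divided by all individuals in the recruitment sample."""
    return n_completed / n_recruited

def quality_counts(answers, items_asked_of_all, select_one_items, comment_item):
    """Return (missing items, response errors, total errors, comment character count)
    for one returned survey. `answers` maps item name -> list of selected options
    or a free-text string; unanswered items are absent or empty."""
    missing = sum(1 for item in items_asked_of_all if not answers.get(item))
    # "Response errors": more than one selection on a 'select one' item.
    errors = sum(1 for item in select_one_items
                 if isinstance(answers.get(item), list) and len(answers[item]) > 1)
    comment_chars = len(answers.get(comment_item) or "")
    return missing, errors, missing + errors, comment_chars

# Response rates implied by the final counts reported in the article:
print(round(100 * response_rate_1(78, 107), 1))  # 72.9 (primary care)
print(round(100 * response_rate_1(82, 101), 1))  # 81.2 (gastroenterology)
```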

Independent Measures

Response Status. Our analyses compare three groups of responders, classified according to response status on December 20, 2011: (1) web responders, (2) postal responders, and (3) all responders.

Time of Response. Participants were also classified according to when they responded: (1) before the e-mail reminders (labeled "invitation"), (2) after the first e-mail reminder but before the second reminder (labeled "e-mail 1"), (3) after the second e-mail reminder but before postal administration (labeled "e-mail 2"), and (4) after postal administration (labeled "postal").


Facility characteristics used to assess representativeness included the main outcome of the larger study (the proportion of positive FOBT screening results followed by diagnostic colonoscopy within 6 months) and the following characteristics suspected or demonstrated in prior research to be associated with this outcome: facility region (northeast, midwest, west, south); facility location (urban, rural); the proportion of patients age 50–75 receiving care at the facility who are adherent to colorectal cancer screening recommendations (i.e., received an FOBT in the past year, a colonoscopy in the past 10 years, or a sigmoidoscopy in the past 5 years); the dominant colorectal cancer screening modality employed at the facility (FOBT, mixed, or colonoscopy); the primary care and gastroenterology missed opportunity rates (defined as the proportion of appointments scheduled in primary care and gastroenterology clinics that were either cancelled less than 2 weeks in advance or were not attended); and the facility complexity score (Stefos, LaVellee, & Holden, 1992), a measure summarizing facility workload, patient risk level, complexity of clinical services, and teaching and research activity, which has been found to be associated with colorectal cancer screening program quality in prior studies (Yano, Soban, Parkerton, & Etzioni, 2007). Scores on this measure range from −1.41 to +1.67, with higher scores indicating greater complexity.
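As an illustration of the missed opportunity rate definition, the following sketch computes it from appointment records; the record fields and example values are hypothetical and are not drawn from the VHA data systems used in the study.

```python
from datetime import date

# Hypothetical appointment records; field names and values are illustrative only.
appointments = [
    {"scheduled_for": date(2009, 3, 2), "cancelled_on": date(2009, 2, 25), "attended": False},
    {"scheduled_for": date(2009, 3, 9), "cancelled_on": None, "attended": True},
    {"scheduled_for": date(2009, 4, 1), "cancelled_on": None, "attended": False},
]

def is_missed_opportunity(appt) -> bool:
    """Cancelled less than 2 weeks in advance, or not attended."""
    if appt["cancelled_on"] is not None:
        return (appt["scheduled_for"] - appt["cancelled_on"]).days < 14
    return not appt["attended"]

missed_rate = sum(is_missed_opportunity(a) for a in appointments) / len(appointments)
print(f"Missed opportunity rate: {missed_rate:.2f}")  # 0.67 in this toy example
```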

Analyses

Response Rates. We estimated survey response rates by sample (primary care, gastroenterology) and timing. Further, we used logarithmic regression techniques to examine the incremental impact of switching the final contact to postal administration. Specifically, we estimated a trend line that fit the observed response rates after the first three survey contacts (initial e-mail invitation and two e-mail reminders). We then used this fitted regression equation to estimate the response rate that would be expected from a hypothetical fourth e-mail reminder, and compared this estimate to the observed response rate achieved when the fourth contact involved postal administration.

Representativeness. To examine the impact of adding the final postal administration contact on sample representativeness, we compared the percentage distribution of facility characteristics in the recruitment population (all physician leaders from eligible facilities) to the distribution before (web responders) and after adding the postal follow-up (all responders) in each sample (primary care and gastroenterology). One-sample chi-square and t-tests, treating the reference population as fixed, were used to identify significant differences between each response group and the recruitment population.

Data Quality. To assess the impact on data quality of adding postal administration, we compared the quality measures defined above across web and postal response groups and used negative binomial regression models to identify significant differences.
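To illustrate the extrapolation step, the sketch below fits a logarithmic trend (response rate modeled as a linear function of the log of the contact number, an assumed functional form not spelled out in the article) to the cumulative response rates observed after the three e-mail contacts and projects the rate expected from a hypothetical fourth e-mail reminder. It uses only the published cumulative rates and is not the authors' analysis code.

```python
# Sketch of the trend-line extrapolation (assumed model form):
# cumulative response rate = a + b * ln(contact number), fit to the three
# e-mail contacts, then extrapolated to a hypothetical fourth e-mail contact.
import numpy as np

contacts = np.array([1, 2, 3])  # invitation, e-mail reminder 1, e-mail reminder 2
observed = {
    "primary care":     np.array([32.7, 46.7, 53.3]),  # cumulative response rates (%)
    "gastroenterology": np.array([34.7, 46.5, 56.4]),
}

for sample, rates in observed.items():
    slope, intercept = np.polyfit(np.log(contacts), rates, 1)
    predicted_fourth_email = intercept + slope * np.log(4)
    print(f"{sample}: predicted rate after a 4th e-mail contact = {predicted_fourth_email:.0f}%")

# These projections are close to the approximately 60% (primary care) and 61%
# (gastroenterology) predictions reported in the Results, compared with the
# observed 72.9% and 81.2% after the postal contact.
```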


Results

Response Rates

Figures 2 and 3 provide summaries for the primary care (Figure 2) and gastroenterology (Figure 3) samples of observed response rates by timing (bars), and the predicted response rates had the last contact been an e-mail reminder (last point on the lines). Response rates were 32.7% and 34.7% in the primary care and gastroenterology samples, respectively, before the first reminder; 46.7% and 46.5% after the first reminder; and 53.3% and 56.4% after the second reminder. Adding the postal contact increased response rates by 19.6 percentage points in the primary care sample, and by 24.8 percentage points in the gastroenterology sample, resulting in total response rates of 72.9% in the primary care sample and 81.2% in the gastroenterology sample. The predicted response rate for an additional e-mail contact was 13 percentage points lower than the observed response rate following the postal contact in the primary care sample (60 vs. 73%), and 20 percentage points lower than the observed response rate in the gastroenterology sample (61 vs. 81%).

Representativeness

Tables 1 and 2 present the frequency distributions for facility characteristics by response group in the primary care and gastroenterology samples, respectively. Before postal responders were included, there were small but statistically insignificant differences between respondents and the population on region, location, complexity, and dominant screening mode in both the primary care and the gastroenterology samples. With the exception of region in the primary care sample, any existing small differences were further reduced after adding postal responders. There were no differences between the respondents and the population on screening rates, follow-up rates, or missed opportunity rates in either sample. Because there were no statistically significant differences between responders and the population on any characteristics before postal administration, our analysis did not demonstrate significant improvement in representativeness by adding postal follow-up in either the primary care or the gastroenterology sample.

Data Quality

Table 3 provides data quality comparisons for web and postal responders by physician type. In the primary care sample, the differences in mean number of missing items, mean number of response errors, and total errors across response groups were not statistically significant.


Figure 2. Predicted and observed response rates by response timing, primary care sample.

Figure 3. Predicted and observed response rates by response timing, gastroenterology sample.

In the gastroenterology sample, although the number of missing items was small in both response groups, the mean total error count was significantly smaller among web responders (0.25) than among postal responders (2.18), and web responders provided significantly longer responses to the open-ended question (Table 3).


Table 1. Percentage Distribution of Facility Characteristics in the Recruitment Population and Each Response Group (Web, Postal, All Responders), Primary Care Sample (N = 107).

                                        Recruitment   Web          p Value*     Postal       p Value*      All           p Value*
                                        Population %  Responder %  (Web vs.     Responder %  (Postal vs.   Responders %  (Web + Postal
Characteristic                          (n = 107)     (n = 61)     Population)  (n = 17)     Population)   (n = 78)      vs. Population)
Region                                                             .8178                     .5818                       .8589
  Northeast                             15.9          19.7                      5.9                        16.7
  Midwest                               25.23         26.2                      35.3                       28.2
  South                                 39.25         34.4                      35.3                       34.6
  West                                  19.63         19.7                      23.5                       20.5
Location                                                           .6253                     .3665                       .9918
  Urban                                 79.44         82.0                      70.6                       79.5
  Rural                                 20.56         18.0                      29.4                       20.5
Mean complexity score                   0.17          0.24         .4985        0.02         .3496         0.18          .8924
CRC screening rate                      82.2          81.7         .3596        82.6         .6671         81.9          .4802
Proportion FOBT+ with colonoscopy
  ≤ 6 months                            0.50          0.51         .5531        0.50         .8368         0.50          .6876
Dominant screening mode                                            .8175                     .9911                       .8769
  FOBT                                  16.8          16.4                      17.7                       16.7
  Mixed                                 34.6          31.2                      35.3                       32.1
  Colonoscopy                           48.6          52.5                      47.1                       51.3
PC missed opportunity rate              0.11          0.11         .7599        0.10         .2451         0.11          .5402
Total                                   100           57.0                      15.9                       72.9

Note. CRC = colorectal cancer; FOBT = fecal occult blood test; PC = Primary Care. *p Values calculated using one-sample chi-square tests for categorical variables and one-sample t-tests for continuous variables.


Table 2. Percentage Distribution of Facility Characteristics in the Recruitment Population and Each Response Group (Web, Postal, All Responders), Gastroenterology Sample (N = 101).

                                        Recruitment   Web          p Value*     Postal       p Value*      All           p Value*
                                        Population %  Responder %  (Web vs.     Responder %  (Postal vs.   Responders %  (All vs.
Characteristic                          (n = 101)     (n = 60)     Population)  (n = 22)     Population)   (n = 82)      Population)
Region                                                             .7246                     .8337                       .8972
  Northeast                             16.8          18.3                      18.2                       18.3
  Midwest                               24.8          25.0                      18.2                       23.2
  South                                 37.6          41.7                      36.4                       40.2
  West                                  20.8          15.0                      27.3                       18.3
Location                                                           .2088                     .1572                       .7320
  Urban                                 80.2          86.7                      68.2                       81.7
  Rural                                 19.8          13.3                      31.8                       18.3
Mean complexity score                   0.18          0.33         .1492        0.18         .9805         0.29          .2311
CRC screening rate                      82.2          82.2         .9575        82.0         .6333         82.1          .7466
Proportion FOBT+ with colonoscopy
  ≤ 6 months                            0.50          0.50         .6832        0.50         .8643         0.50          .6586
Dominant screening mode                                            .8476                     .2136                       .9060
  FOBT                                  15.8          18.3                      4.6                        14.6
  Mixed                                 35.6          33.3                      50.0                       37.8
  Colonoscopy                           48.5          48.3                      45.5                       47.6
GI missed opportunity rate              0.10          0.10         .8264        0.10         .8662         0.10          .7853
Total                                   100           59.4                      21.8                       81.2

Note. CRC = colorectal cancer; FOBT = fecal occult blood test; GI = Gastrointestinal Clinic. *p Values calculated using one-sample chi-square tests for categorical variables and one-sample t-tests for continuous variables.


Table 3. Data Quality Measures by Response Group and Sample.

                                        Web Responder    Postal Responder   All Responders
Characteristic                          M (SD)           M (SD)             M (SD)
Primary care sample (n = 107)
  Mean number of missing items          0.36 (2.82)      0.12 (.33)         0.31 (2.49)
  Mean number of improper check all     0.07 (.25)       0.18 (.39)         0.09 (.29)
  Mean total error count                0.43 (2.82)      0.29 (.59)         0.40 (2.50)
  Mean character count for comments     95.5 (162.78)    18.94 (55.31)      78.8 (149.31)
Gastroenterology sample (n = 101)
  Mean number of missing items          0.05 (.22)       1.55 (3.04)        0.45 (1.69)
  Mean number of improper check all     0.20 (.40)       0.63 (.73)         0.32 (.54)
  Mean total error count                0.25 (.47)       2.18 (3.16)        0.77 (1.87)
  Mean character count for comments     178.4 (271.27)   40.41 (81.11)      141.3 (243.08)

Note. *p Values obtained from negative binomial regressions.
