AJPH METHODS

Challenges and Innovations in Surveying the Governmental Public Health Workforce

Surveying governmental public health practitioners is a critical means of collecting data about public health organizations, their staff, and their partners. A greater focus on evidence-based practices, practice-based systems research, and evaluation has resulted in practitioners consistently receiving requests to participate in myriad surveys. This can result in a substantial survey burden for practitioners and declining response rates for researchers. This is potentially damaging to practitioners and researchers as well as the field of public health more broadly. We have examined recent developments in survey research, especially issues highly relevant for public health practice. We have also proposed a process by which researchers can engage with practitioners and practitioner groups on research questions of mutual interest. (Am J Public Health. 2016;106:1967–1974. doi:10.2105/AJPH.2016.303424)


Jonathon P. Leider, PhD, Gulzar Shah, PhD, Nikki Rider, ScD, Angela Beck, PhD, MPH, Brian C. Castrucci, MA, Jenine K. Harris, PhD, Katie Sellers, DrPH, Danielle Varda, PhD, Jiali Ye, PhD, Paul C. Erwin, MD, MPH, and Ross C. Brownson, PhD

Survey data have become one of the important scientific resources for making evidence-based decisions to improve the overall functioning and performance of public health departments.1 Survey data on health departments’ infrastructure, such as finance, workforce, and programs, are similarly valuable for informing public health policy and practice.2–5 In recent years, an increasing number of surveys have been sent to practitioners, particularly those at local health departments (LHDs) and state health agencies (SHAs).4 Although the research may be valuable, those data collection activities collectively impose a burden on public health staff, many of whom are already overextended because of significant health department workforce reductions and turnover as a result of recent budget cuts.6–8

According to the National Association of County and City Health Officials (NACCHO), which represents the nation’s 2800 LHDs, 40% of LHDs reported in 2013 that they had been asked to complete more than 5 organizational-level surveys in the previous 6 months (NACCHO, unpublished data, 2013). Approximately 9% of these LHDs received more than 20 survey requests. Varying methodologies are employed to address barriers related to survey sampling, dissemination, and respondent follow-up, creating limitations when it comes to data comparability and replication of results.

The myriad objectives, methods, successes, and failures observed among public health researchers surveying governmental staff motivated our examination. In mid-2015, the writing team, consisting of public health systems and services research scholars, began meeting regularly to create a consensus list of challenges facing researchers and public health practitioners who participate in survey research. Additionally, the group constructed a set of recommendations for those seeking to engage practitioners in research moving forward. We discuss 3 main themes: (1) what makes survey research with public health practitioners different, (2) challenges, and (3) recommendations.

SURVEY RESEARCH WITH PUBLIC HEALTH PRACTITIONERS

Public health practitioners are different from the general public as survey participants. Differences may exist in demographics and educational attainment as well as in the type of information collected from staff as members of public organizations and whether information is collected directly from the practitioner or from a proxy, such as an agency head or senior deputy.

Major Targets of Survey Research

Surveys in public health tend to gather either information about individuals, collected from the individuals themselves, or information about individuals or the organization overall, collected from a proxy (e.g., regarding budget, policies and practices, or programs). Surveying public health workers directly for individual-level data collection, rather than at the department or program level, is a challenge because there is no existing national list of public health workers employed in health departments or other public health organizations from which to sample.

ABOUT THE AUTHORS

At the time of writing, Jonathon P. Leider was and Brian C. Castrucci is with the de Beaumont Foundation, Bethesda, MD. Gulzar Shah is with the Jiann-Ping Hsu College of Public Health, Georgia Southern University, Statesboro. Nikki Rider was with the National Network of Public Health Institutes, New Orleans, LA. Angela Beck is with the School of Public Health, University of Michigan, Ann Arbor. Jenine Harris and Ross C. Brownson are with the Brown School, Washington University, St. Louis, MO. Katie Sellers was with the Association of State and Territorial Health Officials, Arlington, VA. Danielle Varda is with the University of Colorado, Denver. Jiali Ye is with the National Association of County and City Health Officials, Washington, DC. Paul C. Erwin is with the University of Tennessee, Knoxville.

Correspondence should be sent to Jonathon P. Leider, 7501 Wisconsin Ave #1310e, Bethesda, MD 20814 (e-mail: [email protected]). Reprints can be ordered at http://www.ajph.org by clicking the “Reprints” link.

This article was accepted August 4, 2016.

doi: 10.2105/AJPH.2016.303424


Other health professions, particularly those with licensure requirements or worker registries, have methods for reaching individual workers, but most public health disciplines do not benefit from these types of data sources.9 One exception is the Public Health Foundation’s TRAIN (the TrainingFinder Real-time Affiliate Integrated Network) system, which does not provide national estimates but does provide valuable data on subsets of workers from numerous states, the Medical Reserve Corps, and the Centers for Disease Control and Prevention. In 2012, the system had more than 400 000 registered learners; today TRAIN exceeds 1 million registered accounts.10,11

One recent large-scale survey illustrates the challenges workforce researchers face. In 2014, the Association of State and Territorial Health Officials, with funding from and in partnership with the de Beaumont Foundation, fielded the Public Health Workforce Interests and Needs Survey, the first survey to use a nationally representative sample of state health agency workers as well as a large sample of LHD staff.12 More than 23 000 public health practitioners responded. To select a nationally representative sample, state health agencies had to be willing to provide worker information to the Association of State and Territorial Health Officials to create a sampling frame. It may be feasible for a national professional organization to elicit human resources information from its membership, but this is a challenging feat for other researchers, and it is unlikely that regular access to such a list would be possible.

Many public health workforce surveys are conducted at the organizational level.5,13 A single respondent, or group of respondents, provides data on behalf of the health department, and all data are reported in a single survey for the organization. This survey approach can be an effective way for a researcher to obtain data on organizational characteristics and activities for a large group, but there are persisting challenges with this methodology.

Types of Surveys

Public health practitioners receive 3 major, overlapping types of surveys on a regular basis: (1) membership research surveys, (2) third-party research surveys, and (3) programmatic and evaluation surveys on behalf of funders or partners.

Membership organizations are in frequent contact with their constituents, with surveys as a primary means of data collection. These may be conducted to gauge membership satisfaction, perform priority setting, or conduct topical research on behalf of member interests. For example, the Association of State and Territorial Health Officials and the National Association of County and City Health Officials conduct large, national, organizational-level profiles of their respective memberships every few years. These profiles serve as the research community’s primary sources of data on financing, activity provision, and other organizational characteristics.5,13,14

Another major type of survey sent to public health practitioners comes from third parties, such as academic researchers. Academics conduct these surveys for numerous reasons: to build the evidence base for public health practice, to generate new knowledge about the field, and to ascertain the health and functioning of the governmental public health enterprise.


Many times, third-party surveys—some of which are at the individual level but more frequently are at the organizational level—are fielded through national membership organizations. The primary advantage of this gatekeeping approach is that membership organizations maintain strong relationships with their members, with an explicit, stated aim to protect their members from survey fatigue and from research unlikely to benefit the practice of public health. Membership organizations’ selective endorsement of surveys going to their members may result in more robust response rates.

Key Characteristics of the Survey Population

Textbooks on survey methods are written with a broad spectrum of survey participants in mind, ranging from the general public to workers in a wide array of industries and organizational settings. Educational attainment is 1 primary difference between the public health workforce and the general population. In the general population, for example, approximately 90% of adults have a high school education and less than a third have a bachelor’s degree15,16; health department employees, however, tend to be professionals with higher levels of education and computer literacy. Three quarters of permanently employed staff at state health agencies have at least a bachelor’s degree, 38% at least a master’s degree, and 9% a doctoral or professional degree.17 Although these proportions are lower among LHD staff, they far exceed national averages for educational attainment.18

Individually targeted surveys may be sent to staff at all levels of educational attainment, whereas organizationally targeted surveys are often sent to either the top executives or to the directors or managers of certain programs. Staff in these leadership roles often have higher educational attainment than the average public health worker or member of the general public; this may allow more technical or complex survey designs.17,18

Although educational attainment is a key consideration in the creation and fielding of survey instruments, there are several other important considerations. Public health staff are public employees. As such, they may be legally or politically constrained in whether they can respond to surveys, the types of responses they may give, and the extent to which they can accept financial or nonfinancial incentives to respond. Organizational policy on the extent to which they can represent their organization may also limit their answers.

CHALLENGES

Public health practitioners are faced with performing their roles and responsibilities in an extremely demanding and exceptionally complex landscape.19 Time is a scarce resource in an environment characterized by postrecession budget cuts and staff reductions.6,20 Broadly, although there are many specific challenges in surveying the public health workforce, these can largely be placed into 1 of 3 major thematic areas: the “who,” the “how,” and the “how many” (see the box).

CHALLENGES AND POTENTIAL SOLUTIONS IN SURVEYS OF PUBLIC HEALTH PRACTITIONERS

The who

Survey issue: Identifying and reaching appropriate survey respondents.
Challenges: Sampling frames for individual workers may not be available; reliance on the top executive to distribute the survey to staff may be necessary; approval may be needed for workers to participate in the survey.
Potential solutions: Engage in partnerships with respondent organizations to obtain a sampling frame; create organizational incentives for encouraging and permitting workers to participate in surveys.

Survey issue: Ensuring a high response rate.
Challenges: It is difficult to achieve buy-in from the workforce to be surveyed; survey questions may not be relevant to respondents; respondent burden is high.
Potential solutions: Include practice partners in survey development and implementation plans; create incentives for participation; obtain endorsement from those who will encourage others to participate; tailor the questionnaire to respondents.

Survey issue: Providing feedback to respondents.
Challenges: Providing survey or study results back to the workforce in a manner that is timely, useful, and valuable and can serve as an incentive to future survey participation.
Potential solutions: Determine the most appropriate means for disseminating findings to respondents; communicate a timeline for providing feedback at the beginning of the study; consider developing policy briefs or summary results ahead of peer-reviewed publications.

The how

Survey issue: Choosing an appropriate survey modality.
Challenges: Some workers may not have individual access to computers or phones.
Potential solutions: Consider multiple-mode surveying if measurement differences can be controlled.

Survey issue: Ensuring validity and reliability of surveys.
Challenges: Surveys often lack extensive psychometric testing; respondents providing information on behalf of the organization may not have access to needed information.
Potential solutions: Continue to improve public health workforce survey methods; ensure that survey questions are feasible for respondents to answer by engaging potential respondents before survey launch (e.g., focus groups, pilot testing, input on survey design).

Survey issue: Examining an issue from numerous perspectives.
Challenges: Some questions are better answered by quantitative data and others by qualitative data; skill sets vary for collecting and analyzing both types of data.
Potential solutions: Build transdisciplinary teams to address mixed methods of data collection; match the methods of data collection closely to the questions being answered.

The how many

Survey issue: Determining an adequate sample size.
Challenges: For new areas of research (e.g., administrative practices), rates and effect sizes are not well known.
Potential solutions: Estimate rates and effect sizes on the basis of the best available data; vary parameters in a sensitivity analysis; for qualitative research, employ the practice of saturation.

Survey issue: Estimating rates for smaller health departments.
Challenges: An adequate number of respondents may be lacking in agencies.
Potential solutions: Pool data from numerous agencies; identify the minimal sample in the numerator needed for stable rates.

The Who

In the who arena, there are major challenges for both the surveyor and the potential respondents.

The surveyor must first identify whether organization- or individual-level information is needed and which is feasible to collect. The surveyor’s challenges then include identifying and reaching appropriate respondents and ensuring a high response rate.


Gathering contact information for staff of a particular background or content expertise can prove extremely challenging or logistically infeasible. So instead of surveying people who are implementing a particular set of activities (e.g., obesity prevention or infectious disease control) and who have the most knowledge and relevant experience, researchers often identify an individual who can provide a response for the entire organization—especially in local and state health departments.


Although organizational surveys typically collect information about the department overall, or perhaps characteristics of its leadership or programs, numerous surveys ask respondents to identify particular activities the department performs as well as characteristics about the workforce within the agency. It can be difficult to obtain detailed worker- or project-level characteristics, in part because the respondent may not have ready access to such information for each worker and program, or it may be too time intensive to compile the information to provide a survey response. It is also possible that respondents are dissimilar across health departments or have different levels of access to information.

The How

With regard to how, challenges to surveying the workforce include identifying the most appropriate and efficient survey administration techniques, ensuring the validity and reliability of surveys, and navigating survey approval processes. Numerous issues often arise in the course of survey research among governmental public health staff, including methodological, technological, financial, and political and legal obstacles. For instance, although most health departments have Internet connectivity, individuals who are not doing desk work may not have access to computers, and telephones may be limited21; thus, the most efficient modalities for conducting surveys may be the least feasible.

Additionally, surveys often lack extensive psychometric testing because of budgetary, time, or other constraints. This creates a communications challenge for potential respondents (especially when they ask questions about interpretation) and affects the validity and reliability of results.
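Where resources allow, even a small pilot can support a basic psychometric check. The sketch below computes Cronbach’s alpha for a hypothetical 4-item scale; the pilot responses, item count, and the conventional 0.7 threshold are illustrative assumptions, not results from any actual instrument.

# A minimal sketch of one basic reliability check (internal consistency),
# assuming pilot responses to a hypothetical 4-item Likert-type scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

pilot = np.array([  # 6 invented pilot respondents, 4 items scored 1-5
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")  # ~0.94 for these invented data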

For a respondent providing organizational responses, such as budget or performance metrics, there will often be a disconnect between the need for standardized questionnaires that can be used across jurisdictions and the limitations of the systems used in any particular jurisdiction to generate data that match the standardized categories. Because health departments are structured differently (e.g., different departments or bureaus), surveys attempting to capture staffing, activity, or spending across broad categories such as chronic disease and communicable disease control may lack the standardization needed to make apples-to-apples comparisons.22

Survey challenges may also be technical. Many surveys request information that is not immediately available to the respondent (e.g., expenditure data) or are structured in a way that does not allow respondents to leave and then return to the survey.

Survey respondents may also have political considerations. The workforce is often located in hierarchical organizations, and gaining approval to respond to surveys poses a challenge for those at the lower levels of that structure. Politically, those at lower organizational levels sometimes have the realistic concern that their responses—especially those that may cast a negative light—may be accessible to their supervisors.23 Additionally, staff at agencies may perceive harm from disclosure of data to the research community or general public that might lead to unfavorable comparisons or conclusions. Surveys fielded for advocacy-related reasons may not match the agenda of the jurisdiction’s leadership, leaving respondents uncomfortable responding or declining participation altogether.

There also exist legal challenges; some surveys offer incentives (e.g., a gift card), yet some governmental employees are not allowed to accept these incentives because of ethics rules. Privacy may also be a consideration, both for the individual and for the individual’s organization, because public health agencies fall under open records provisions.

Another data collection challenge relates to the increasing complexity of the data being collected. In a traditional survey, the respondent—whether an individual or an organization—is likely to have a list of questions to respond to and will have a relatively easy time navigating the process for providing answers. Increasingly, however, surveys do not collect traditional knowledge- or opinion-related data but attempt to collect more complex types of data to assess system-level information. For example, social network analysis is increasingly used to collect data on the interrelationships among members of a system (e.g., through the Program to Analyze, Record, and Track Networks to Enhance Relationships [PARTNER] Web site).24 A social network survey, in addition to asking respondents to answer questions about themselves (or their organizations), asks respondents to provide information about other people or organizations to which they are connected, such as the frequency, type, and quality of their interactions. This can result in a more complex survey with a much higher respondent burden than a traditional survey—and one that yields substantially more information about an organization’s collaborations and processes.
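As a minimal sketch of how such network responses become analyzable data (the organizations, ties, and 1–4 contact-frequency scale below are invented for illustration, and networkx is simply one convenient library), consider:

# A minimal sketch: turning hypothetical network-survey responses into a graph.
import networkx as nx

responses = [  # (responding org, named partner, reported contact frequency 1-4)
    ("Health Dept", "Hospital A", 4),
    ("Health Dept", "Community Clinic", 3),
    ("Hospital A", "Community Clinic", 2),
    ("Community Clinic", "Food Bank", 3),
]

g = nx.Graph()
for org, partner, freq in responses:
    g.add_edge(org, partner, weight=freq)

# Simple structural summaries of the reported collaboration network.
print("Density:", round(nx.density(g), 2))
print("Degree centrality:", nx.degree_centrality(g))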

Finally, researchers’ feedback to survey participants remains a significant challenge: how can survey or study results be provided to the workforce or organization in a manner that is useful, is valuable, and can serve as an incentive to future survey participation? Although there is a growing trend toward actively involving participants in research through community-based participatory approaches, researchers are often reluctant to share results before formal presentations or publications. Studies have shown that practitioners infrequently read academic, peer-reviewed journals for new information, and tracking studies from start to final publication can take years.25 In addition to the validity issues these approaches raise, the targets of the surveys are often not involved in survey development.26

The How Many

The power of a study is its ability to detect an effect of a given size, given the number of observations overall or in groups of interest. Group sizes can be difficult to estimate for studies of LHDs and other entities in the public health system, which makes it challenging to accurately estimate the needed sample size for many studies. Surveys of health professionals often have numerous outcomes of interest, making it difficult to decide which effect size to use in the power calculations. Evidence of this difficulty can be found in a 2012 review of methods used in research on public health systems, which reports that just 5.7% (n = 16/282) of empirical articles mention statistical power and only 37.5% (n = 6/16) of those report having adequate power.27 The same review reports that nearly 40% of studies of public health systems and services use only descriptive statistics, so statistical significance and effect size are not available for use in subsequent research.27

Many research findings on LHD performance or impact indicate significant differences by organizational (rather than individual) characteristics (e.g., LHD size and size of the jurisdictional population).6,26,28 Obtaining adequate sample sizes for each major category of LHD size can be problematic. For example, there are approximately 40 LHDs serving populations of more than 1 million residents, whereas there are nearly 300 LHDs serving populations of fewer than 10 000 residents. So, to have 20 very large LHDs and 20 very small LHDs in a sample, half of very large LHDs but fewer than 10% of very small LHDs would have to agree to participate. When limits on sampling make representative samples difficult to obtain, researchers rely on descriptive statistics. Although useful for understanding basic structures and functions, descriptive statistics limit our ability to draw inferences on the basis of patterns across the public health system.
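As a minimal sketch of the kind of power calculation described above (the 55% and 40% activity-adoption rates are hypothetical placeholders, and the statsmodels library is assumed to be available), one might estimate the per-stratum sample size for comparing a proportion across two LHD size strata:

# A minimal sketch: per-group sample size for comparing two proportions
# (e.g., the share of LHDs performing a given activity in two size strata).
# The rates 0.55 and 0.40 are invented placeholders, not study estimates.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.55, 0.40)  # Cohen's h for the assumed rates

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # roughly 86 per stratum, before any finite population correction

With only about 40 very large LHDs nationally, even a census of that stratum could not reach such a target, which is one concrete form of the sampling constraint described above.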

A Balancing Act

Many of the challenges we have outlined relate to balancing research needs with participant burden. Most researchers will devise a method to follow when administering a survey, made up of several steps, including sending an introductory letter or e-mail to invite survey participation and following up with a series of reminders. Researchers face different challenges today than they did when paper surveys were the primary means of survey data collection. Some organizations will not accept unknown or unrecognized e-mail invitations because of Internet security concerns; many organizations, especially in government, will block e-mails coming from systems such as SurveyMonkey and Qualtrics, which batch and send invitations and reminders to respondents. In addition to the typical reminder methods of mailed notices and telephone calls, methods such as e-mailing invitations and reminders directly to respondents (e.g., through a mail merge) are increasingly used as a way to ensure that respondents are reached. Finally, surveys may lack endorsement from a trusted party or organizational leadership and may lack appropriate incentives and rewards for completion.
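As a minimal sketch of the direct mail-merge approach mentioned above (the CSV columns, file name, sender address, and SMTP relay are hypothetical, and any real mailing would require authorization, authentication, and opt-out handling):

# A minimal sketch of a mail-merge survey reminder using only the standard library.
import csv
import smtplib
from email.message import EmailMessage

SENDER = "survey-team@example.org"  # placeholder institutional address
with open("invitees.csv", newline="") as f, smtplib.SMTP("smtp.example.org") as smtp:
    for row in csv.DictReader(f):  # expected columns: name, email, survey_link
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = row["email"]
        msg["Subject"] = "Reminder: public health workforce survey"
        msg.set_content(
            f"Dear {row['name']},\n\n"
            f"This is a reminder to complete the survey at {row['survey_link']}.\n"
            "Participation is voluntary, and responses are confidential."
        )
        smtp.send_message(msg)

Sending from a recognized institutional address, rather than through a bulk survey platform, may also reduce the filtering problems described above.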

POTENTIAL SOLUTIONS

Potential solutions to some of the challenges outlined here are described in the box, Challenges and Potential Solutions in Surveys of Public Health Practitioners. Many of these solutions involve deeper and more deliberate engagement between the academic and practice communities. Academic health departments (collaborative, formal relationships between public health practice and academia) and public health practice–based research networks (ongoing partnerships between researchers and practitioners) provide opportunities for the up-front engagement of practitioners in survey development and implementation to improve buy-in, relevance, and, ultimately, response rates.29–31 Partnerships with members of the public health practice community during data collection (and other phases of the research) may increase relevance, utility, and response rates.32 Additionally, researchers and membership organizations should continue to work with their members to educate them about the potential benefits of participating in these types of studies—as well as whether those potential benefits are realized in the long term.

Another family of solutions involves reducing the number of surveys practitioners are asked to complete and condensing those that remain. This has occurred to some extent with organizational surveys, as numerous, disparate surveys conducted over the past century have largely been absorbed into membership organization surveys in recent decades.13 Recent national efforts through TRAIN and data sharing from the Public Health Workforce Interests and Needs Survey (including the Research to Action project) and the National Network of Public Health Institutes also allow workforce assessments to become more standardized and less frequent. This may be a viable option for several other types of research involving public health practitioners and may help strike a needed balance between the costs and benefits of survey participation.

Working more closely with gatekeeping or membership organizations may reduce redundant surveys and take advantage of the higher response rates that membership organizations tend to achieve in surveys of their members.13,14 The field may also consider methods used in other health professions, such as a public health worker registry and a repository of questions and studies conducted among public health practitioners.

Questions to Ask When Planning a Study

As part of survey design, researchers will need to make a series of choices related to fielding the survey (see the box, Questions to Answer When Planning a Survey of Public Health Practitioners, as well as Boxes A and B, available as supplements to the online version of this article at http://www.ajph.org). First, the researcher should determine to what extent the survey plausibly benefits the field. For instance, federal surveys must pass the Office of Personnel Management’s test of having “practical utility.”33 Further choices include commonly asked questions about study design, such as population, sampling, and fielding, as well as less commonly considered issues, such as metadata collection and long-term use of survey data. Just as researchers ought to carefully consider their expected response rate as it relates to sample size and power, they should also consider whether the data will someday become public (and how confidentiality can be protected), whether they need to work with a membership association to access their study population, and whether the instrument is being designed for 1-time use, multiple cross-sectional use, or longitudinal use.

QUESTIONS TO ANSWER WHEN PLANNING A SURVEY OF PUBLIC HEALTH PRACTITIONERS

Who or what is being studied?
Who are the specific individuals or organizations invited to participate in the study?
Are portions of the intended applicant pool invited to participate (sample), or is the entire population invited to participate (census)?
Does the survey invite a particular respondent type?
Is the survey designed so that an individual or organization can respond to all questions in the survey, or is the content better suited to a collective response supported by multiple individuals and organizations?
Will the survey be successful only if it is administered through a partner organization?
What topics are included in the survey? Does it address a single specific aspect of the public health system, such as how public health agencies set budgets and priorities, or a broad selection of topics to provide a complete picture of an organization, such as the ASTHO and NACCHO profile surveys?
What is the target length of the survey (in minutes or questions)?
What are the dates the survey is expected to be in the field?
Is the survey intended to be used once on the same population, once on multiple populations, or multiple times on the same population (longitudinal)?
What is the needed response rate (is statistical power a consideration)?
Is this survey optional for respondents to participate in, or must they participate because of statute, regulations, grant requirements, or the like?
Will this survey require institutional review board clearance?
Does this survey need to be anonymous or confidential? Will sensitive information be collected?
Over which medium will this survey be administered?
How will this instrument be pretested or validated?
How many reminders will there be, and how will they occur?
What types of incentives can be used in this survey?
What kinds of data validation can be used in this study?
What types of questions will be useful to address the research questions?
What metadata can be collected alongside the survey that might be useful later?
Will the survey data be publicly available?
Who will be included in the development of the survey instrument and the sampling and fielding design?
Will the instrument be publicly available?

Note. ASTHO = Association of State and Territorial Health Officials; NACCHO = National Association of County and City Health Officials.
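One way to operationalize this checklist (a sketch only; the field names and defaults are illustrative, not a standard schema) is to record each planning decision in a structured, shareable plan:

# A minimal sketch of a structured survey plan; fields mirror the checklist above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SurveyPlan:
    population: str                         # who or what is being studied
    design: str                             # "census" or "sample"
    respondent_type: str                    # e.g., "individual worker" or "agency proxy"
    fielded_through_partner: Optional[str]  # membership organization, if any
    topics: List[str] = field(default_factory=list)
    target_length_minutes: int = 15
    uses: str = "one-time"                  # "one-time", "repeated cross-sectional", "longitudinal"
    needed_response_rate: float = 0.6
    irb_review_required: bool = True
    confidential: bool = True
    mode: str = "web"                       # administration medium
    pretesting: str = "pilot test"          # how the instrument will be validated
    reminders: int = 3
    incentives: Optional[str] = None
    metadata_to_collect: List[str] = field(default_factory=list)
    data_publicly_available: bool = False

plan = SurveyPlan(
    population="Local health department program managers",
    design="sample",
    respondent_type="individual worker",
    fielded_through_partner="NACCHO",
    topics=["workforce development needs"],
)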

Dissemination of Findings

Although not unique to surveys, the dissemination and translation of findings remain challenges in the field, because public health researchers often devote inadequate resources to research dissemination.34 Many studies do not have a structured dissemination plan to identify all the audiences that could benefit from the findings and how best to reach those audiences to increase the awareness and impact of the findings. The research community tends to use peer-reviewed journals and professional conferences as primary channels to disseminate study results. However, a survey among LHD practitioners shows that they are more likely to rely on seminars, workshops, and information from professional associations to learn about public health research than on traditional academic sources.1 The gap between information dissemination and use has been a barrier to the effective translation of research into practice. We have learned that researchers should identify dissemination partners and strategies before conducting research so that those who might adopt the findings will be collaboratively engaged in the survey process from the beginning.

Engagement between practitioners in the workforce and researchers at the outset may enhance the identification of the most appropriate means for providing feedback; this will enable researchers to communicate clearly about what they can and cannot provide (and the timeline for doing so) and practitioners to provide clear guidance on how best to communicate results to the workforce. The engagement of practitioners in the initial stages of research can also increase the relevance of the research to practitioners, thereby increasing the likelihood that they will want to access the disseminated results.


Social media may also be a useful means of dissemination. This suggests several areas in which additional attention is warranted: placing a greater emphasis on building strategic partnerships early in the dissemination process; establishing new and more rapid methods for determining when a new program or policy is ready for adoption in a nonresearch setting (e.g., exploratory evaluation35); and ensuring that the research products are developed in ways that match well with adopters’ needs, assets, and time frame. Many of these challenges need particular attention when disseminating data to practitioners serving populations with a high prevalence of health disparities, for whom public health system constraints are greatest and delivery systems are underdeveloped. Several methods for “designing for dissemination” have been described.34 For example, providing issue or policy briefs ahead of or alongside peer-reviewed publications is a possible approach to addressing the lag time between surveying and the publication of results.

CONCLUSIONS

Survey research is an important facet of public health services and systems research, and it is increasingly used by membership organizations, funders, and researchers. The subjects of this research—public health practitioners and the organizations they serve—are seeing benefits from an increased interest in practice. Additional research into the workforce has yielded a more accurate enumeration of staff35–37 as well as an identification of the recession’s impact on the public health workforce.6,38 Surveys of practitioners have identified better ways to translate and disseminate all types of research in support of evidence-based public health as a standard of practice.39 Surveys have also identified major training needs and other gaps, as perceived by workers themselves.

However, the increased use of surveys has a significant cost: for respondents, there is increased burden and survey fatigue; for those conducting surveys, there are potential declines in response rates and the quality of results. We have identified numerous ways in which the public health practitioner population is unique among target populations of survey research. We have offered survey researchers who include practitioners in their study population potential solutions to survey challenges, including a set of concrete questions they may ask while designing the survey to maximize response rates, reduce burden, and maximally engage practitioners during the design, dissemination, and translation phases of research.


We have also proposed greater coordination and comparability in survey efforts. Both researchers and practitioners would benefit from national repositories of validated survey items, accessible data from surveys of practitioners, and efforts from funders and grantees to share their data.

CONTRIBUTORS

J. P. Leider coordinated drafting the article and participated in writing. G. Shah, N. Rider, P. C. Erwin, and R. C. Brownson led section drafting and participated in writing. A. Beck, B. C. Castrucci, J. Harris, K. Sellers, D. Varda, and J. Ye participated in writing the first draft of the article. All authors participated in conference calls and consensus work to outline content in this article and gave final approval of the article.

ACKNOWLEDGMENTS

The authors thank Glen Mays for his valuable contributions to group discussions before this article was written and Ed Hunter for his thoughtful questions and comments, which improved the quality of the article.

HUMAN PARTICIPANT PROTECTION

Human participant protection was not required because this work did not involve human participant research.

REFERENCES

1. Fields RP, Stamatakis KA, Duggan K, Brownson RC. Importance of scientific resources among local public health practitioners. Am J Public Health. 2015;105(suppl 2):S288–S294.
2. Smith KA, Goekler SF, Williams A, Sellers K. ASTHO affiliates find value in PH WINS. J Public Health Manag Pract. 2015;21(suppl 6):S168–S169.
3. Hunter EL. Rebooting our boots on the ground. J Public Health Manag Pract. 2015;21(suppl 6):S1–S2.
4. Beck AJ, Boulton ML. Building an effective workforce: a systematic review of public health workforce literature. Am J Prev Med. 2012;42(5, suppl 1):S6–S16.
5. National Association of County and City Health Officials. 2013 national profile of local health departments. 2014. Available at: http://nacchoprofilestudy.org/wp-content/uploads/2014/02/2013_National_Profile021014.pdf. Accessed August 14, 2016.
6. Willard R, Shah GH, Leep C, Ku L. Impact of the 2008–2010 economic recession on local health departments. J Public Health Manag Pract. 2012;18(2):106–114.
7. Pourshaban D, Basurto-Davila R, Shih M. Building and sustaining strong public health agencies: determinants of workforce turnover. J Public Health Manag Pract. 2015;21(suppl 6):S80–S90.
8. Liss-Levinson R, Bharthapudi K, Leider JP, Sellers K. Loving and leaving public health: predictors of intentions to quit among state health agency workers. J Public Health Manag Pract. 2015;21(suppl 6):S91–S101.
9. Kaminski MM, Meier S, Staebler S. National Association of Neonatal Nursing Workforce Survey. Adv Neonatal Care. 2015;15(3):182–190.
10. Jones JA, Banks L, Plotkin I, Chanthavongsa S, Walker N. Profile of the public health workforce: registered TRAIN learners in the United States. Am J Public Health. 2015;105(suppl 2):e30–e36.
11. Draper S. TRAIN reaches million learners milestone. 2015. Available at: http://www.phf.org/news/Pages/TRAIN_Reaches_Million_Learner_Milestone.aspx. Accessed August 14, 2016.
12. Sellers K, Leider JP, Harper E, et al. The Public Health Workforce Interests and Needs Survey: the first national survey of state health agency employees. J Public Health Manag Pract. 2015;21(suppl 6):S13–S27.
13. Leep CJ, Shah GH. NACCHO’s National Profile of Local Health Departments study: the premier source of data on local health departments for surveillance, research, and policymaking. J Public Health Manag Pract. 2012;18(2):186–189.
14. Association of State and Territorial Health Officials. Profile of state public health. 2014. Available at: http://www.astho.org/profile. Accessed August 14, 2016.
15. Goldberg M, Chastang JF, Leclerc A, et al. Socioeconomic, demographic, occupational, and health factors associated with participation in a long-term epidemiologic survey: a prospective study of the French GAZEL cohort and its target population. Am J Epidemiol. 2001;154(4):373–384.
16. National Center for Education Statistics. IPEDS datacenter. 2015. Available at: https://nces.ed.gov/ipeds/datacenter. Accessed August 1, 2015.
17. Leider JP, Harper E, Bharthapudi K, Castrucci BC. Educational attainment of the public health workforce and its implications for workforce development. J Public Health Manag Pract. 2015;21(suppl 6):S56–S68.
18. US Census Bureau. Educational Attainment in the United States: 2013. Washington, DC; 2013.
19. Shah GH, Madamala K. Knowing where public health is going: levels and determinants of workforce awareness of national public health trends. J Public Health Manag Pract. 2015;21(suppl 6):S102–S110.
20. Leider JP, Shah GH, Castrucci BC, Leep CJ, Sellers K, Sprague JB. Changes in public health workforce composition: proportion of part-time workforce and its correlates, 2008–2013. Am J Prev Med. 2014;47(5, suppl 3):S331–S336.
21. National Association of County and City Health Officials. 2013 National Profile of Local Health Departments. Available at: http://www.naccho.org/topics/infrastructure/profile/upload/2013-National-Profile-of-Local-Health-Departments-report.pdf. Accessed August 14, 2016.
22. Honoré PA, Leider JP, Singletary V, Ross DA. Taking a step forward in public health finance: establishing standards for a uniform chart of accounts crosswalk. J Public Health Manag Pract. 2015;21(5):509–513.
23. Leider JP, Bharthapudi K, Pineau V, Liu L, Harper E. The methods behind PH WINS. J Public Health Manag Pract. 2015;21(suppl 6):S28–S35.
24. Retrum JH, Chapman CL, Varda DM. Implications of network structure on public health collaboratives. Health Educ Behav. 2013;40(1 suppl):13S–23S.
25. Brownson RC, Diez Roux AV, Swartz K. Commentary: generating rigorous evidence for public health: the need for new thinking to improve research and practice. Annu Rev Public Health. 2014;35:1–7.
26. Shah GH, Lovelace K, Mays GP. Diffusion of practice-based research in local public health: what differentiates adopters from nonadopters? J Public Health Manag Pract. 2012;18(6):529–534.
27. Harris JK, Beatty KE, Barbero C, et al. Methods in public health services and systems research: a systematic review. Am J Prev Med. 2012;42(5, suppl 1):S42–S57.
28. Lovelace KA, Aronson RE, Rulison KL, Labban JD, Shah GH, Smith M. Laying the groundwork for evidence-based public health: why some local health departments use more evidence-based decision-making practices than others. Am J Public Health. 2015;105(suppl 2):S189–S197.
29. Erwin PC, Keck CW. The academic health department: the process of maturation. J Public Health Manag Pract. 2014;20(3):270–277.
30. Erwin PC, Barlow P, Brownson RC, Amos K, Keck CW. Characteristics of academic health departments: initial findings from a cross-sectional survey. J Public Health Manag Pract. 2016;22(2):190–193.
31. Mays GP, Hogg RA. Expanding delivery system research in public health settings: lessons from practice-based research networks. J Public Health Manag Pract. 2012;18(6):485–498.
32. Mays GP, Shah G, Lovelace K. Practice-based research in public health. NACCHO Exchange. 2012;11(1):1, 6–8.
33. Office of Personnel Management. Agency Information Collection Activities; Proposals, Submissions, and Approvals: Information and Instructions on Your Reconsideration Rights. Washington, DC; 2014. Federal Register No. 2016–11720.
34. Brownson RC, Jacobs JA, Tabak RG, Hoehner CM, Stamatakis KA. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103(9):1693–1699.
35. Leviton LC, Khan LK, Rog D, Dawkins N, Cotton D. Evaluability assessment to improve public health policies, programs, and practices. Annu Rev Public Health. 2010;31:213–233.
36. Beck AJ, Boulton ML, Coronado F. Enumeration of the governmental public health workforce, 2014. Am J Prev Med. 2014;47(5, suppl 3):S306–S313.
37. Gebbie KM, Raziano A, Elliott S. Public health workforce enumeration. Am J Public Health. 2009;99(5):786–787.
38. Association of State and Territorial Health Officials. Budget Cuts Continue to Affect the Health of Americans: Update March 2012. Available at: http://www.aahd.us/wp-content/uploads/2012/04/ASTHO-Budget-Cuts-Impact-Research-0312.pdf. Accessed August 14, 2016.
39. Jacobs JA, Duggan K, Erwin P, et al. Capacity building for evidence-based decision making in local health departments: scaling up an effective training approach. Implement Sci. 2014;9:124.

