Journal of Clinical Epidemiology 66 (2013) 1317–1318

EDITORIAL

Improving the quality of quality of care research

To improve the quality of care continuously and consistently, we must be able to validly measure and monitor it, taking essential and representative characteristics of content, process, and performance into account [1,2]. Outcome-based measurement of care quality seems ideal, but there is often a tension between the long time it may take to establish health outcomes and the need to implement the results of quality assessment in actual health care as soon as possible. A key question is therefore whether and how indicators of process and performance quality, which generate more direct input for quality improvement, can provide the required information. Accordingly, developing, validating, and appropriately using and evaluating quality indicators is a major issue for quality improvement and for those working to achieve it.

We therefore welcome the two-article series on this topic by Stelfox and Straus. In the first article, they describe how to develop a conceptual measurement framework and how to evaluate the need to develop quality indicators, illustrated with recent work in injury care. Their second paper describes and reviews different conceptual approaches to quality indicator development and addresses the evaluation and maintenance of quality indicators. In a commentary on the work by Stelfox and Straus, Shekelle makes a plea for more agreement on the best methodological approach for developing quality indicators and performance measures, drawing a comparison with what has been achieved in the fields of clinical guideline development and systematic reviews.

The quality of reporting of clinical research also has a (more indirect) effect on aspects of health care quality, and it represents a key responsibility of the community of clinical investigators. In this context, the review and proposal by Abraira et al. is a good example. Based on a systematic review of clinical research articles published in high-impact medical journals, they conclude that the application of survival analysis in medical research is increasing, but that improvement in reporting quality is slow. They therefore propose a list of minimum requirements for improved application and description of survival analysis.

In this connection, the article by Mariotto and her group deserves special attention. To provide more accurate estimates of the survival and life expectancy of cancer patients with respect to non-cancer disorders, they estimated comorbidity-adjusted life tables and health-adjusted age. Using data from the Surveillance, Epidemiology, and End Results (SEER) program linked to Medicare claims, the authors found that health-adjusted age and life tables adjusted for age, race, sex, and comorbidity can provide useful information to facilitate treatment decisions.

The quality of research methods in relation to study outcome is addressed by Jacobs and co-workers. In a meta-epidemiologic review, they evaluated whether the influence of methodological features of studies on observed treatment effects differs between types of intervention. They found that the influence of allocation concealment and double blinding on treatment effect is consistent across studies of surgical, pharmaceutical, and therapeutic interventions, but that the influence of randomization may differ between surgical and nonsurgical studies, and they make recommendations on this issue.

In addition, the quality of systematic reviews continues to keep us busy. Passon and her team analyzed differences in conclusions, statistical significance, and quality among systematic reviews on the preventive effects of blood glucose lowering on macrovascular events in patients with type 2 diabetes. As the results show various relevant discrepancies, the authors conclude that common quality assessment instruments for meta-analyses are necessary but not sufficient, and they discuss the implications of this finding.

Relevant discrepancies between protocol submissions and subsequent journal publications were also found in a study of outcome reporting in randomized drug trials. In a cohort study of trials submitted for ethical review, Redmond and colleagues compared submitted research protocols with the related journal publications. They identified factors associated with discrepancies: statistical significance of results, type of outcome, and specialty area. They therefore emphasize the need for free availability of protocols and for description and justification of any changes made to protocol-defined outcomes.

An important aspect to specify in a research protocol comparing treatments is the smallest worthwhile effect that should be detected. A clinically relevant question is to what extent this effect differs for the same outcome according to the type of treatment. Ferreira and co-authors studied this using the benefit-harm trade-off method in a before-after comparison of the effects of treatment with nonsteroidal anti-inflammatory drugs (NSAIDs) and physiotherapy for nonspecific low back pain (LBP). They conclude that LBP patients need to see larger effects from NSAIDs than from physiotherapy to consider these
interventions worthwhile. The results can be useful for sample size calculations and for interpreting trial findings.

Can physicians' prescription preferences be valid instrumental variables for the prescriptions issued to their patients? This question was addressed by Davies et al., based on data on antidepressant prescriptions by general practitioners. They showed that physicians' prior antidepressant prescriptions were strongly associated with their subsequent prescriptions, and that physicians' prescribing preferences are valid instruments for evaluating the short-term effects of antidepressants.

Clinical research on multimorbidity can be more realistic than research focusing on a single disorder, especially in studies of chronic patients and the elderly, but it is also much more complex. This is demonstrated in the article by Lappenschaar and her group, who analyzed the joint evolutionary course of multiple disorders using multilevel temporal Bayesian networks. Based on clinical data from general practice registries, they found clear synergies between health risks and chronic disease development patterns over time in cases of multimorbidity. The authors recommend their method both for research and for the development of more tailored clinical guidelines.

As participation in epidemiological and clinical studies is always a matter of concern, developing methods to increase response rates remains relevant. Appropriately testing such methods is equally important, not least to avoid inflated expectations based on seemingly plausible assumptions. In a randomized trial, Carey et al. tested the impact of an advance letter on response and cooperation rates in a nationwide telephone survey investigating the prevalence of occupational exposure to carcinogens. They did not find any added value of an advance letter. The randomized trial by Xie and Ho, who evaluated the additional effect of prenotification on the response rate and survey quality in a pilot survey among nurses (collecting data on work status, lifestyle, and reproductive and dietary information), also yielded a negative result.

The discussion on the added value and efficiency of the stepped wedge design continues. Hemming and Girling
criticize the article by Woertman and co-workers on the possible reduction of the required sample size in cluster randomized trials achieved by this design type [3], and Woertman et al. respond to their comments. The debate also continues on the pros and cons of within-person study designs, as demonstrated by the interesting correspondence between Farrington and Nicholas et al., which connects to an earlier article by the latter authors [4].

Many authors have contributed to this issue of the Journal of Clinical Epidemiology. It is therefore especially interesting to read the new one-pager by Cals and Kotz on effective writing and publishing of scientific papers, this time focusing on authorship.

Finally, the editors want to thank all reviewers who have contributed this year to the work of the Journal of Clinical Epidemiology. Without their great expertise, cooperation, and efforts, the Journal would not have been able to maintain and further develop its quality. In this issue, the names of all those who reviewed papers for us are gratefully listed, with a special thank you to the reviewers of the year 2013: Krista Huybrechts and Tim Croudace.

J. Andre Knottnerus
Peter Tugwell
Editors
E-mail address: anneke.germeraad@maastrichtuniversity.nl (J.A. Knottnerus)

References

[1] Grol R, Wensing M, Eccles M, Davis D, editors. Improving patient care: the implementation of change in health care. 2nd ed. Oxford: Wiley-Blackwell; 2013.
[2] Giuffrida A, Gravelle H, Roland M. Measuring quality of care with routine data: avoiding confusion between performance indicators and health outcomes. BMJ 1999;319:94–8.
[3] Woertman W, de Hoop E, Moerbeek M, Zuidema SU, Gerritsen DL, Teerenstra S. Stepped wedge designs could reduce the required sample size in cluster randomized trials. J Clin Epidemiol 2013;66:752–8.
[4] Nicholas JM, Grieve AP, Gulliford MC. Within-person study designs had lower precision and greater susceptibility to bias because of trends in exposure than cohort and nested case-control designs. J Clin Epidemiol 2012;65:384–93.
