Commentary

Bringing cohort studies to the bedside: framework for a ‘green button’ to support clinical decision-making

When providing care, clinicians are expected to take note of clinical practice guidelines, which offer recommendations based on the available evidence. However, guidelines may not apply to individual patients with comorbidities, as such patients are typically excluded from clinical trials. Guidelines also tend not to provide relevant evidence on risks, secondary effects and long-term outcomes. Querying the electronic health records of similar past patients may provide an alternative source of evidence to inform decision-making for many of these patients. It is important to develop methods that support these personalized observational studies at the point-of-care, to understand when such methods can provide valid results, and to validate and integrate their findings with those from clinical trials.

Keywords: cohort studies • comparative effectiveness • electronic health records • personalized prognosis

The demand for information on treatment options, and on what to expect from them in terms of effectiveness and safety, has grown tremendously, driven by an increase in patients with complex comorbidities as well as in the choice and sophistication of treatments. When providing care, clinicians have limited time to diagnose the presenting problem and to offer the patient a full range of treatment options and associated prognoses. Clinicians are expected to take note of clinical practice guidelines, which offer recommendations based on the available evidence. However, guidelines may not apply to individual patients with comorbidities, as such patients are typically excluded from clinical trials [1,2]. Guidelines also tend not to provide relevant evidence on risks, secondary effects and long-term outcomes [3,4]. Additionally, patients have their own preferences, are unfamiliar with guidelines and are uncertain about how the probabilities of benefit and harm for interventions apply to them.


Querying the electronic health records (EHRs) of similar past patients may provide an alternative source of evidence to inform decision-making. Past records are a source of information on the way different treatment choices led to different outcomes and can help in tailoring medical treatment to the individual characteristics of each patient. These virtual cohorts of past patients are more likely to represent a realistic population with similar comorbidities than those assembled for clinical trials. As long as the limitations of observational analysis are acknowledged, such cohorts can serve as an important adjunct tool in clinical decision-making. The idea of systematically searching for 'patients like mine' in datasets of patients from clinical practice was first reported in the 1970s. A 1972 paper by clinicians from Veterans Administration hospitals reported the use of a manually built electronic 'library of clinical experience' of 678 lung cancer patients to provide personalized prognosis information for new patients [5]. "Without delegating any of the prerogatives of clinical judgment to the computer," they state, "a clinician can obtain a quantified account, in as much or as little detail as he wishes, of the experience with previous patients."


Blanca Gallego*,1, Scott R Walter2, Richard O Day3, Adam G Dunn1, Vijay Sivaraman4, Nigam Shah5, Christopher A Longhurst6 & Enrico Coiera1
1 Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, NSW 2109, Australia; 2 Centre for Health Systems & Safety Research, Australian Institute of Health Innovation, Macquarie University, Australia; 3 St Vincent's Clinical School, University of New South Wales, St Vincent's Hospital, Sydney, Australia; 4 Electrical Engineering & Telecommunications, University of New South Wales, Sydney, Australia; 5 Biomedical Informatics Research, Stanford School of Medicine, CA 94305-5479, USA; 6 Department of Pediatrics, Stanford School of Medicine, CA 94305-5208, USA.
*Author for correspondence: Tel.: +61 2 9850 2400; [email protected]


A second study, published in 1975, describes the creation of one of the first electronic patient registries, a dataset of more than 3000 patients with ischemic heart disease [6]. The dataset was used to produce prognostic reports tailored to specific patients, and the researchers claimed that "its use to answer general questions is not sufficient to justify its existence. It must be used to answer specific questions about specific patients." Since then, important advances in computer science and statistics, together with the big data revolution, have made it possible to implement such decision-support tools at the point-of-care. The first reported use of querying the medical records of past patients in near real-time to aid a treatment decision took place in 2011, when clinicians at Stanford University searched for patients similar to a 13-year-old girl with lupus nephritis to decide on anticoagulation therapy [7]. We envision that future EHR systems will bring querying, visualization and decision-support functions together to allow individualized virtual cohorts of similar patients to be assembled and used in real time to support treatment selection and planning (see Table 1). The policy and ethical implications of such a system were recently considered by a group of authors who branded this EHR capability the 'green button' [8]. Operational issues associated with this type of decision-support system have recently begun to be discussed in the literature [9,10]. Here, we outline an approach for the dynamic creation of virtual cohorts in EHR data that combines EHR querying systems with statistical methods from observational studies (see Figure 1).

This approach includes: EHR-based phenotyping, to characterize patients; measuring inter-patient similarity, to form cohorts of similar patients; optimal cohort selection, to tailor clinical decisions to individual patients; cohort visualization, to facilitate face validity and refinement by the decision maker; automated confounder control, to minimize bias; and integration of the results with clinical guidelines and existing evidence from clinical trials, both to validate findings and to fill inferential gaps.

EHR-based phenotyping
In cohort studies, correct identification of patients, their treatments and their outcomes is important for the estimation of treatment effects. EHR data can be inaccurate or incomplete and can contain codes that classify patients for purposes other than clinical care, distorting meaningful interpretation [11]. Important information is often contained in free-text notes requiring text interpretation [12], and missing data are common and require appropriate inference methods [13]. EHR-based phenotyping algorithms are being developed that can transform high-dimensional, noisy, structured and unstructured data from laboratories, medications, diagnosis codes, procedure codes and clinical notes into meaningful clinical concepts using knowledge representation, temporal abstraction algorithms, natural language processing and machine learning techniques. Examples of these algorithms include the identification of patients with diabetes, cancer, heart failure, rheumatoid arthritis, pneumonia, asthma, hypertension, venous thromboembolism and adverse drug events, among others. A review of these applications and methodologies can be found in [14].
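To make the idea concrete, the short Python sketch below shows what a simple rule-based phenotype might look like; the data model, code lists and threshold are hypothetical illustrations, not part of any specific algorithm reviewed in [14].

```python
# Minimal illustrative sketch: a rule-based phenotype for type 2 diabetes that
# combines diagnosis codes, medications and laboratory values. All field names,
# code lists and thresholds here are assumptions made for illustration only.

T2DM_ICD10_PREFIXES = ("E11",)                     # type 2 diabetes codes
T2DM_MEDICATIONS = {"metformin", "glipizide", "insulin glargine"}
HBA1C_THRESHOLD = 6.5                              # %, a commonly used cut-off

def has_t2dm_phenotype(patient: dict) -> bool:
    """Return True when at least two independent data sources support the phenotype."""
    has_code = any(code.startswith(T2DM_ICD10_PREFIXES)
                   for code in patient.get("diagnosis_codes", []))
    has_med = any(med.lower() in T2DM_MEDICATIONS
                  for med in patient.get("medications", []))
    has_lab = any(lab["name"] == "HbA1c" and lab["value"] >= HBA1C_THRESHOLD
                  for lab in patient.get("labs", []))
    # Requiring corroboration across sources reduces the impact of miscoded
    # or incomplete records, one of the data-quality issues noted above.
    return sum([has_code, has_med, has_lab]) >= 2

example = {"diagnosis_codes": ["E11.9"], "medications": ["Metformin"], "labs": []}
print(has_t2dm_phenotype(example))  # True: diagnosis code plus medication
```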

Table 1. Summary of ideal properties of future electronic medical record systems and current value added by electronic medical record querying systems and observational studies.

| Ideal properties of future EHR systems | Current EHR tools | Observational studies |
| --- | --- | --- |
| Represent real world populations and routine care | Yes | Yes |
| Low marginal cost for additional analyses | Yes | Yes |
| Can look at long term effects and rare conditions | Yes | Yes |
| Capability of real-time, interactive querying | Yes | No |
| Appropriate inclusion of sequential nature of healthcare events | Yes | No |
| Includes visualization techniques to help with the study design | Yes | No |
| Appropriately address measured confounders | No | Yes |
| Take into account statistics of multiple testing | No | Yes |
| Appropriately address unmeasured or unknown confounders | No | Sometimes |
| Data are high quality and fit for purpose | No | Sometimes |
| Allows comparison with existing evidence | No | Sometimes |

EHR: Electronic health record.


Figure 1. Proposed process for generating real-time cohort studies at the point of care. Starting from the index patient, the steps are: (1) EHR phenotyping: characterize the index patient; (2) quantifying patient similarity: use phenotypes to measure similarity between the index patient and past patients; (3) optimal cohort selection: choose the cohort that best represents the index patient with sufficient sample size; (4) cohort visualization: visualize the composition of the optimal and neighbouring cohorts; (5) confounder control: use propensity score methods to adjust for confounders; (6) integration with evidence: integrate with findings from clinical trials according to their inclusion criteria; leading to personalized care. EHR: Electronic health record.

While there is still work to be done, particularly around standardization and the use of unsupervised methods to reduce manual intervention, initial results are promising [15].

Quantifying inter-patient similarity
Patient attributes (or phenotypes) generated, when required, by EHR-based phenotyping algorithms can then be used to determine inter-patient similarity. Patient attributes can first be aligned along a temporal dimension using significant temporal events such as admission to hospital, or the date of a first diagnosis or first treatment. Once such sequences of temporal events have been arranged, the task is to identify patients with similar attributes, so that a new patient's likely trajectory in time can be inferred from the trajectories of similar past patients. Measuring the similarity between two longitudinal sets of patient attributes is not a unique problem, and methods can borrow from techniques in information theory [16], time-series comparison [17], association indices in ecology and biological networks [18,19] and metric learning [20]. Given a set of N patient attributes, a simple way to proceed is to create an N-dimensional vector assigning a value of 0 (not present) or 1 (present) to each attribute. The distance, or alternatively the similarity, between two sets of binary values can then be computed using a variety of measures, such as the Jaccard distance (as demonstrated in [19]). A more sophisticated approach might involve, for example, computing a weighted Euclidean distance in a transformed feature space [20], or weighting attributes by their relative effect on patient outcomes.
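As a minimal sketch of the binary-attribute approach just described (the attribute names are hypothetical), the Jaccard distance between two patients can be computed directly from their sets of present attributes:

```python
# Illustrative sketch: Jaccard distance between two patients represented as
# sets of present binary attributes (equivalent to 0/1 vectors over N attributes).

def jaccard_distance(attrs_a: set, attrs_b: set) -> float:
    """1 - |intersection| / |union|; 0 means identical attribute profiles."""
    union = attrs_a | attrs_b
    if not union:
        return 0.0  # treat two empty profiles as identical
    return 1.0 - len(attrs_a & attrs_b) / len(union)

index_patient = {"female", "age_60_69", "type2_diabetes", "hypertension"}
past_patient = {"female", "age_60_69", "type2_diabetes", "ckd_stage3"}
print(jaccard_distance(index_patient, past_patient))  # 0.4
```

A weighted variant would simply replace the set counts with sums of attribute weights, for example weights reflecting each attribute's effect on outcome.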


Optimal cohort selection
For each index patient for whom we wish to make a treatment decision, the records of past patients can be queried to form an ordered set of virtual cohorts of similar patients. At the most stringent similarity level, an 'exact match cohort' includes those patients who most closely match the index patient, while at the weakest level a 'full cohort' includes all patients who simply satisfy the inclusion criteria. Once inter-patient distance is computed, systematically increasing the cut-off distance from the index patient generates an ordered set of cohorts. Different distance measures can be compared by looking at how cohort size and composition vary with the cut-off distance. Methods such as agglomerative clustering [21] can also be used to grow larger cohorts. In this method, a distance between two cohorts is defined, for example, as the average of the distances between all patients in the cohorts and, in each successive iteration, the closest pair of cohorts is merged.
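A minimal sketch of this cut-off sweep, reusing the hypothetical jaccard_distance helper from the previous sketch, might look as follows; patient identifiers and attributes are again purely illustrative.

```python
# Illustrative sketch: generate an ordered set of virtual cohorts by sweeping
# the cut-off on the inter-patient distance from the index patient.

def ordered_cohorts(index_attrs, past_patients, cutoffs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return {cutoff: sorted list of patient ids within that distance}."""
    distances = {pid: jaccard_distance(index_attrs, attrs)
                 for pid, attrs in past_patients.items()}
    return {c: sorted(pid for pid, d in distances.items() if d <= c)
            for c in cutoffs}

past = {
    "p1": {"female", "age_60_69", "type2_diabetes", "hypertension"},  # exact match
    "p2": {"female", "age_60_69", "type2_diabetes", "ckd_stage3"},
    "p3": {"male", "age_70_79", "hypertension"},
}
index_attrs = {"female", "age_60_69", "type2_diabetes", "hypertension"}
cohorts = ordered_cohorts(index_attrs, past)
# cohorts[0.0] is the exact match cohort (['p1']); cohorts[1.0] is the full cohort.
```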


An alternative approach for growing larger cohorts from the index case, which does not require an inter-patient distance measure, is to systematically relax attribute values. An attribute value is relaxed by extending the range of acceptably similar matches to adjacent values. For example, sex = male can be relaxed once to include females, while hypertension = moderate could be relaxed twice: a first time to include all degrees of hypertension, and a second time to also include normal blood pressure. In this approach, relaxing one attribute value in the match cohort (level 0) results in a new cohort. All such cohorts, each corresponding to a different attribute value being relaxed, comprise a first layer (level 1) of cohorts. Relaxing one further attribute from a level 1 cohort results in a level 2 cohort, and so on, until at the final level all attributes have been relaxed to yield the full cohort. To contain computational costs, a possible heuristic is to relax, at each step, the attribute with the least impact on outcome (a minimal sketch of this heuristic appears after the visualization overview below). The main advantage of this approach is that the composition of each virtual cohort is immediately evident to the clinician. One technical challenge in this process is that the optimally similar cohort must be selected with a sample size large enough to draw valid statistical conclusions. Smaller cohorts result in large confidence intervals and in oscillations in measures of model fit among neighboring cohorts, indicating that results may be due to chance. One approach to selecting the most robust cohort is to pick the cohort for which a selected measure of model fit, such as a normalized measure of R², reaches a maximum and then decreases as cohorts become larger.

Cohort visualization
Many EHR systems today provide capabilities to explore and visualize stored information [22]. Lifelines2 [23,24] aligns patient records by chosen events and can group patients by matching exact sequences of events. More sophisticated temporal queries (including temporal constraints and time spans between query elements) have been implemented in PatternFinder [25]. VISITORS [26] supports queries using higher abstractions based on clinical knowledge as well as nontemporal patient variables. Other software tools, such as Outflow [27], visualize the temporal trajectories of patients in preselected virtual cohorts. In particular, Caregiver [28] and CareCruiser [29] focus on visualizing clinical treatments and their effect on patients over time. More complex algorithms cluster patients using similarity measures. Similan [30] relaxes the matching criteria of Lifelines2 by introducing a Match & Mismatch similarity measure that compares differences between temporal categorical sequences. MITHRA [31] measures similarity via locally supervised metric learning, which accounts for the relative clinical relevance of patient measurements. DICON [32] conducts a cluster analysis based on key patient attributes and displays statistics for each cohort of similar patients. Some systems (e.g., Gravi++ [33] and DICON [32]) support visualization of patient cohorts as snapshots in a multidimensional space, helping clinicians understand a cohort's composition. Initially, cohort visualization was used informally to assist decision makers. More recently, however, systems are being developed that combine more sophisticated analytics with cohort visualization and refinement (e.g., MITHRA [31,34]).
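The sketch below illustrates the attribute-relaxation heuristic referred to above. For brevity it relaxes an attribute entirely (rather than extending it to adjacent values one step at a time) and assumes the outcome-impact ordering of attributes has been precomputed; the data model and names are hypothetical.

```python
# Illustrative sketch: level-by-level attribute relaxation around an index
# patient, relaxing the attribute with the least assumed impact on outcome
# first, until the cohort reaches a minimum workable sample size.

def relax_until_large_enough(index_attrs: dict, past_patients: list,
                             relax_order: list, min_size: int = 50):
    """Return (attributes still matched exactly, cohort, relaxation level)."""
    required = dict(index_attrs)                 # attributes that must match
    cohort, level = [], 0
    for level, attr in enumerate([None] + relax_order):
        if attr is not None:
            required.pop(attr, None)             # drop one more attribute
        cohort = [p for p in past_patients
                  if all(p.get(a) == v for a, v in required.items())]
        if len(cohort) >= min_size:
            break                                # smallest sufficiently large cohort
    return required, cohort, level

# relax_order lists attributes from least to most impact on outcome, e.g.:
# relax_until_large_enough({"sex": "male", "hypertension": "moderate", "age_band": "60-69"},
#                          past_patients, relax_order=["sex", "hypertension", "age_band"])
```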


Confounder control
The main challenge faced by cohort studies is treatment selection bias, where factors influencing treatment choice also independently influence patient outcome. In clinical trials, balance among treatment groups is achieved by randomization, and confounders are further minimized by stringent enrolment criteria, often resulting in patients with comorbidities being excluded. If observational studies fail to account for such confounders, they may produce biased estimates. Strategies to deal with measured confounders include restriction, stratification, matching, inverse probability weighting and covariate adjustment. The study population should exclude patients for whom there is no treatment choice (e.g., due to contraindications or ongoing treatment [35]). Subgroup analysis via stratification is useful when there is suspicion that specific patient groups will react differently to the treatments under study, as well as for investigating the consistency and sensitivity of treatment effectiveness. Other preanalysis approaches include matching, often in fixed ratios, and inverse probability weighting, which creates a 'standardized population' by weighting individuals by the inverse of their probability of being included. Alternatively, parametric approaches such as generalized linear models allow the incorporation of covariates without categorization, as well as of specific temporal or nonlinear effects. Lastly, it has become possible to estimate the probability of receiving treatment conditional on measured covariates (known as a propensity score). By stratifying, matching, weighting or adjusting using covariates or the propensity score, treatment bias related to measured confounders can be removed [36]. Propensity scores can also guide exclusion criteria by removing low-score patients [37]. Analyses on each cohort can be performed by adjusting, via propensity scores, for the potential confounding introduced by the unmatched attributes. These propensity scores can be estimated using automated methods such as the high-dimensional propensity score algorithm [38]. Instrumental variables have been proposed as a means to emulate random treatment allocation and so provide unbiased treatment effects, even in the presence of unmeasured confounders [39]. An instrumental variable is a factor that has a causal effect on treatment selection but that, conditional on treatment, neither has an effect on outcome nor shares any causes with it. Practical problems with this approach include the difficulty of finding suitable instruments (particularly in an automated fashion) and the inability to empirically verify whether such variables are unrelated to outcome other than through treatment.
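As a concrete, purely illustrative example of one of the propensity score strategies mentioned above, the sketch below estimates propensity scores with logistic regression and uses inverse probability weighting to compare mean outcomes; the column names, the clipping of extreme scores and the choice of estimator are assumptions, not a prescribed implementation.

```python
# Illustrative sketch: propensity scores via logistic regression, followed by
# inverse probability weighting to compare mean outcomes between treatments.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_effect(df: pd.DataFrame, covariates: list,
               treatment_col: str = "treated", outcome_col: str = "outcome") -> float:
    """Weighted difference in mean outcome (treated minus untreated)."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])
    ps = model.predict_proba(df[covariates])[:, 1]   # P(treatment | covariates)
    ps = np.clip(ps, 0.01, 0.99)                     # guard against extreme weights
    weights = np.where(df[treatment_col] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    treated = (df[treatment_col] == 1).to_numpy()
    y = df[outcome_col].to_numpy()
    return (np.average(y[treated], weights=weights[treated])
            - np.average(y[~treated], weights=weights[~treated]))
```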


Internal validation can be assessed (in real time) by measuring the robustness of the findings to changes in adjustment methodology, or by looking at variations in outcomes that are known to be unrelated to treatment.

Integrating results with existing evidence
It is important that results from point-of-care EHR-based cohort studies are not looked at in isolation, but rather are presented to the clinician in the context of existing clinical guidelines and, if available, evidence from clinical trials. This integration of information serves two purposes: to validate the observational results obtained from clinical practice data, and to identify and fill the inferential gaps that would otherwise have to be bridged unaided by data [40]. Advances in software technology and natural language processing are increasingly allowing automated querying and meaningful visualization of evidence from published clinical trials [41]. Such a querying capability could be added to EHR systems to allow direct comparison of practice-based evidence and evidence-based practice by extracting results obtained under similar inclusion criteria. Whenever an index patient does not fit the inclusion criteria of existing evidence, the gap can be filled using information from the EHR.

Discussion, conclusion & recommendations
It is important for the development of a learning health system to evolve new methods that support real-time decision-making at the point-of-care, to understand when these methods can provide valid results, and to validate and integrate their findings with those from randomized controlled trials (RCTs). Recently, EHR systems have been used to support RCTs by facilitating recruitment and follow-up [42]. If successful, evidence from these new types of RCTs could be integrated directly into EHR systems. Although they represent a great improvement over the current way clinical trials are performed, these point-of-care trials will still be insufficient to address the evidence gaps that arise when randomization is not possible for ethical or other reasons [43]. Understanding the statistical and clinical significance of findings from studies using EHRs is a central problem. Significance levels may be adjusted depending on the treatments under consideration. For example, a more stringent significance level may be chosen before suggesting an invasive treatment with known risks; doing so in turn requires a larger sample size to guarantee acceptable power.
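To illustrate the interplay between the chosen significance level and the required cohort size, the sketch below uses the standard two-proportion power calculation from statsmodels; the assumed event rates (30% vs 20%) and target power are hypothetical.

```python
# Illustrative sketch: more stringent significance levels demand larger virtual
# cohorts to keep 80% power when comparing two outcome proportions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.30, 0.20)   # assumed event rates for two treatments
analysis = NormalIndPower()
for alpha in (0.05, 0.01, 0.001):
    n = analysis.solve_power(effect_size=effect, alpha=alpha,
                             power=0.80, alternative="two-sided")
    print(f"alpha={alpha}: about {n:.0f} similar patients per treatment group")
```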


In some situations it may be necessary to filter the search by date (e.g., to recent interventions) or by location or clinical setting (e.g., rural hospitals or specific institutions). On the other hand, combining data from various clinical settings increases the generalizability of findings and provides the larger datasets needed to look at rare conditions or events [44]. Distributed data networks (see the list at [45]), with appropriate privacy algorithms, standards for defining data structures and software tools that can read across formats, are a step in this direction [46,47]. Clustering by provider or clinical setting can be incorporated into the adjustment methodology. An appropriate clinical decision would be supported by a statistically significant positive result that makes clinical sense, does not conflict with clinical guidelines and agrees with the patient's values and expectations. Statistically significant results against an intervention, or null results, can help counter the tendency to act simply to avoid the regret of omission rather than commission. Results that are inconclusive, whether due to large variance or weak effect size, are still valuable as additional evidence. Real-time interrogation of EHRs via virtual cohorts will offer more personalized care by complementing existing evidence, clinical guidelines and clinician experience.

Future perspective
Today, it is already possible to automate observational studies using information contained in the EHR. As the accuracy, consistency and completeness of EHRs improve, so will the validity of these analyses. Advances in text processing will eventually allow automated querying and meaningful visualization of related evidence from published clinical trials. If these capabilities are combined, both clinicians and patients will be able to visualize the effects of selected treatments on similar patients at the press of a button.

Acknowledgements
The authors would like to thank YS Low from the Stanford Center for Biomedical Informatics Research for expert feedback.

Financial & competing interests disclosure
This work was funded by the National Health and Medical Research Council (NHMRC) Program Grant 568612 and Project Grants 1045548 and 1045065. Its contents are the responsibility of the authors and their institutions and do not reflect the views of the NHMRC. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.

Open access
This work is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/


Executive summary
• Electronic health records (EHRs) from past patients are a source of information that reflects patients' treatment choices and their effects as they occur in actual clinical practice.
• This source of information is readily available and can be queried at the point-of-care to aid decision-making for individual patients.
• Real-time querying tailored to individual patients requires: EHR-based phenotyping; quantifying inter-patient similarity; optimal cohort selection; cohort visualization; automated confounder control; and integration of results with clinical guidelines and existing evidence from clinical trials.
• Real-time querying also requires real-time validation, an important open area of research. While bias from measured confounders can be minimized using automated propensity score techniques, bias from unknown or unmeasured confounders can still threaten the validity of results. Evaluating results from cohorts with known estimates can increase confidence in these methods.
• Results from point-of-care EHR-based cohort studies should not be looked at in isolation but should be presented in the context of existing clinical guidelines and any available evidence from clinical trials.

References
1. Van Spall H, Toren A, Kiss A, Fowler RA. Eligibility criteria of randomized controlled trials published in high-impact general medical journals: a systematic sampling review. JAMA 297(11), 1233–1240 (2007).
2. Mangin D, Heath I, Jamoulle M. Beyond diagnosis: rising to the multimorbidity challenge. BMJ 345(7865), 11 (2012).
3. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 312(7040), 1215 (1996).
4. Ioannidis JP, Lau J. Completeness of safety reporting in randomized trials. JAMA 285(4), 437–443 (2001).
5. Feinstein AR, Rubinstein JF, Ramshaw WA. Estimating prognosis with the aid of a conversational-mode computer program. Ann. Intern. Med. 76(6), 911–921 (1972).
6. Rosati RA, McNeer JF, Starmer CF, Mittler BS, Morris JJ, Wallace AG. A new information system for medical practice. Arch. Intern. Med. 135(8), 1017–1024 (1975).
7. Frankovich J, Longhurst CA, Sutherland SM. Evidence-based medicine in the EMR era. N. Engl. J. Med. 365(19), 1758–1759 (2011).
8. Longhurst CA, Harrington RA, Shah NH. A 'green button' for using aggregate patient data at the point of care. Health Affairs 33(7), 1229–1235 (2014).
9. Celi LA, Zimolzak AJ, Stone DJ. Dynamic clinical data mining: search engine-based decision support. JMIR Med. Inform. 2(1), e13 (2014).
10. Schneeweiss S. Learning from big health care data. N. Engl. J. Med. 370(23), 2161–2163 (2014).
11. Hersh WR, Weiner MG, Embi PJ et al. Caveats for the use of operational electronic health record data in comparative effectiveness research. Med. Care 51, S30–S37 (2013).
12. Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. J. Am. Med. Inform. Assoc. 18(5), 544–551 (2011).
13. Little RJA, Rubin DB. A taxonomy of missing-data methods (Chapter 1.4). In: Statistical Analysis with Missing Data. Wiley, New York, NY, USA, 19–23 (2002).
14. Shivade C, Raghavan P, Fosler-Lussier E et al. A review of approaches to identifying patient phenotype cohorts using electronic health records. J. Am. Med. Inform. Assoc. 21(2), 221–230 (2013).
15. Pathak J, Kho AN, Denny JC. Electronic health records-driven phenotyping: challenges, recent advances, and perspectives. J. Am. Med. Inform. Assoc. 20(e2), e206–e211 (2013).
16. Cao H, Melton GB, Markatou M, Hripcsak G. Use abstracted patient-specific features to assist an information-theoretic measurement to assess similarity between medical cases. J. Biomed. Inform. 41(6), 882–888 (2008).
17. Ding H, Trajcevski G, Scheuermann P, Wang X, Keogh E. Querying and mining of time series data: experimental comparison of representations and distance measures. Proc. VLDB Endow. 1(2), 1542–1552 (2008).
18. Bass JIF, Diallo A, Nelson J, Soto JM, Myers CL, Walhout AJ. Using networks to measure similarity between genes: association index selection. Nat. Methods 10(12), 1169–1176 (2013).
19. Bauer-Mehren A, LePendu P, Iyer SV, Harpaz R, Leeper NJ, Shah NH. Network analysis of unstructured EHR data for clinical research. AMIA Jt. Summits Transl. Sci. Proc. 2013, 14–18 (2013).
20. Wang F, Hu J, Sun J. Medical prognosis based on patient similarity and expert feedback. Presented at: 21st International Conference on Pattern Recognition (ICPR). Tsukuba, Japan, 11–15 November 2012.
21. McLachlan G. Cluster analysis and related techniques in medical research. Stat. Methods Med. Res. 1(1), 27–48 (1992).
22. Rind A, Wang T, Aigner W et al. Interactive information visualization to explore and query electronic health records. Found. Trends Hum. Comput. Interact. 5(3), 207–298 (2013).
23. Wang TD, Wongsuphasawat K, Plaisant C, Shneiderman B. Extracting insights from electronic health records: case studies, a visual analytics process model, and design recommendations. J. Med. Syst. 35(5), 1135–1152 (2011).
24. Wang TD, Plaisant C, Shneiderman B et al. Temporal summaries: supporting temporal categorical searching, aggregation and comparison. IEEE Trans. Vis. Comput. Graph. 15(6), 1049–1056 (2009).
25. Plaisant C, Lam S, Shneiderman B et al. Searching electronic health records for temporal patterns in patient histories: a case study with Microsoft Amalga. AMIA Annu. Symp. Proc. 2008, 601–605 (2008).
26. Klimov D, Shahar Y, Taieb-Maimon M. Intelligent visualization and exploration of time-oriented data of multiple patients. Artif. Intell. Med. 49(1), 11–31 (2010).
27. Wongsuphasawat K, Gotz D. Outflow: visualizing patient flow by symptoms and outcome. In: IEEE VisWeek Workshop on Visual Analytics in Healthcare. Providence, RI, USA (2011).
28. Brodbeck D, Gasser R, Degen M, Reichlin S, Luthiger J. Enabling large-scale telemedical disease management through interactive visualization. In: European Notes in Medical Informatics, Proceedings of MIE 2005. Geneva, Switzerland, 1(1), 1172–1177 (2005).
29. Gschwandtner T, Aigner W, Kaiser K, Miksch S, Seyfang A. CareCruiser: exploring and visualizing plans, events, and effects interactively. In: IEEE Pacific Visualization Symposium (PacificVis 2011), 43–50 (2011).
30. Wongsuphasawat K, Shneiderman B. Finding comparable temporal categorical records: a similarity measure with an interactive visualization. In: IEEE Symposium on Visual Analytics Science and Technology (VAST 2009), 27–34 (2009).
31. Ebadollahi S, Sun J, Gotz D, Hu J, Sow D, Neti C. Predicting patient's trajectory of physiological data using temporal trends in similar patients: a system for near-term prognostics. AMIA Annu. Symp. Proc. 2010, 192–196 (2010).
32. Gotz D, Sun J, Cao N, Ebadollahi S. Visual cluster analysis in support of clinical decision intelligence. AMIA Annu. Symp. Proc. 2011, 481–490 (2011).
33. Hinum K, Miksch S, Aigner W et al. Gravi++: interactive information visualization to explore highly structured temporal data. J. Univers. Comput. Sci. 11(11), 1792–1805 (2005).
34. Zhang Z, Gotz D, Perer A. Interactive visual patient cohort analysis. In: Proceedings of the IEEE VisWeek Workshop on Visual Analytics in Healthcare. Seattle, WA, USA (2012).
35. Schneeweiss S. Developments in post-marketing comparative effectiveness research. Clin. Pharmacol. Ther. 82(2), 143–156 (2007).
36. Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behav. Res. 46(3), 399–424 (2011).
37. Kurth T, Walker AM, Glynn RJ et al. Results of multivariable logistic regression, propensity matching, propensity adjustment, and propensity-based weighting under conditions of nonuniform effect. Am. J. Epidemiol. 163(3), 262–270 (2006).
38. Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology 20(4), 512–522 (2009).
39. Martens EP, Pestman WR, De Boer A, Belitser SV, Klungel OH. Instrumental variables: application and limitations. Epidemiology 17(3), 260–267 (2006).
40. Stewart WF, Shah NR, Selna MJ, Paulus RA, Walker JM. Bridging the inferential gap: the electronic health record and clinical evidence. Health Affairs 26(2), w181–w191 (2007).
41. Tsafnat G, Dunn A, Glasziou P, Coiera E. The automation of systematic reviews. BMJ 346, f139 (2013).
42. Lauer MS, D'Agostino RB Sr. The randomized registry trial – the next disruptive technology in clinical research? N. Engl. J. Med. 369(17), 1579–1581 (2013).
43. Faden RR, Beauchamp TL, Kass NE. Informed consent, comparative effectiveness, and learning health care. N. Engl. J. Med. 370(8), 766–768 (2014).
44. Observational Health Data Sciences and Informatics (2014). www.ohdsi.org
45. PCORnet: The National Patient-Centered Clinical Research Network. Clinical Data Research Networks (25 March 2014). www.pcornet.org
46. Brown JS, Holmes JH, Shah K, Hall K, Lazarus R, Platt R. Distributed health data networks: a practical and preferred approach to multi-institutional evaluations of comparative effectiveness, safety, and quality of care. Med. Care 48(6), S45–S51 (2010).
47. Ohno-Machado L, Bafna V, Boxwala AA et al. iDASH: integrating data for analysis, anonymization, and sharing. J. Am. Med. Inform. Assoc. 19(2), 196–201 (2012).
