
© 2014 Nature America, Inc. All rights reserved.

a cell’s developmental potential. Scientists routinely rely on staining with the TRA-1-60 antibody as a proxy measure of the pluripotent status of embryonic stem cells or induced pluripotent stem cells. Other markers can reveal states of cell differentiation. But as Anne Plant, chief of the Biosystems and Biomaterials Division at NIST, points out, there’s a large degree of experimental variability in these kinds of assays. “When you start thinking about these things in a tangible way, you come up with a whole bunch of uncertainties,” Plant says, “and it’s part of the reason there’s so much confusion in the literature.”

For pluripotent stem cells, NIST is developing a reference test in which the binding intensity of the antibody of interest is measured relative to the total amount of protein in the cell. Different scientists can then compare ratios for more consistent evaluations across laboratories. “Researchers need to have metrics that work across labs and across countries,” says Frances Ligler, a biosensor and microfluidics expert at the Joint Department of Biomedical Engineering at North Carolina State University and the University of North Carolina at Chapel Hill who heads the National Research Council’s committee.

The hope, according to Plant, is that these kinds of metrics will ultimately enable more robust in vitro predictions of how stem cell–based therapies will behave in the human body. “We think the measurement components are key to successful products.” And the biopharmaceutical industry seems to agree: a survey of companies released earlier this year by the Alliance for Regenerative Medicine, a Washington, DC–based advocacy organization, found that lack of standards was widely perceived as one of the greatest challenges facing the field.

Measuring up
Standard setting is on the radar of scientists working with adult stem cells, too. In the June issue of Stem Cells and Development (23, 1157–1167, 2014), a team led by Sowmya Viswanathan, associate director of the Cell Therapy Program at the University Health Network in Toronto, and Mahendra Rao, former director of the US National Institutes of Health (NIH) Center for Regenerative Medicine and now vice president for regenerative medicine at the New York Stem Cell Foundation Research Institute, proposed establishing a reference cell bank for evaluating mesenchymal stem cells (MSCs), a type of multipotent stromal cell found in bone marrow, fat tissue, umbilical cords, placentas and dental pulp.

Such a resource would provide a ‘ruler’, the authors propose, against which all MSC preparations could be compared for quality assurance. “We’re not looking for an exact representative gold standard,” Viswanathan says, “but rather for a reference material to calibrate whatever it is we want to measure.”

Some scientists worry, however, that such a reference cell bank could pigeonhole MSC research. “The field would like to believe that there’s a common mesenchymal stem cell found in many different tissues throughout the body, but by rigorous analyses we know that these cells are not the same,” says Pamela Robey, acting director of the NIH Stem Cell Unit in Bethesda, Maryland. “So, trying to make a ruler to compare all of these different tissue sources of mesenchymal stem cells, I think, is going to be futile.”

But according to Robey, who studies bone regeneration within the NIH’s National Institute of Dental and Craniofacial Research, this problem is limited to MSCs. Pluripotent stem cells, by comparison, have long-established reference lines. And hematopoietic stem cells express a suite of well-defined surface markers. “Those cells are all apples,” Robey says. “MSCs are apples, oranges, kiwis, grapefruits. They’re all fibroblastic cells, but they’re not the same.”

Elie Dolgin

Group seeks standardization for what clinical trials must measure

In 2006, clinical trial developer Paula Williamson was trying to set up a standard study comparing two asthma drugs. When she met with the chief investigators, however, she realized that they were unsure exactly what their trial should measure. She turned to previous studies and reviews to get some clarity but found that few assessed the same outcomes, and most included completely unique ones.

Williamson, a statistician at the University of Liverpool, UK, was shocked. “It was clear that there wasn’t a core outcome set,” she says, referring to a list of the minimum measurements that all clinical studies in a given field should record. It became increasingly clear to her that in many areas of medicine, studies select what they measure in isolation, making it impossible to directly compare their results and leading to redundancies in research. “The status quo is wasting huge amounts of research money,” she says.

In 2010, Williamson co-founded the Core Outcome Measures in Effectiveness Trials (COMET) Initiative to address the lack of uniformity in clinical trial outcomes, also known as endpoints. And in a systematic review article published in June, she and her colleagues describe the state of core outcome set development across various clinical fields, from cancer to neonatal health (PLoS One 9, e99111, 2014). Her group identified 198 studies detailing available core outcome sets in a total of 25 disease areas. Rheumatoid arthritis, for example, included a whopping 25 studies, whereas the report found just four for eczema and two for malaria. For other areas, such as tuberculosis, there was no record in the literature of any core outcome research.

Without data to support one outcome set over another, many researchers come up with their own, leading to confusion among those trying to use clinical trial results. “As a clinician with an interest in childhood eczema, it is almost impossible for me to compare studies and decide whether a new treatment is better than existing ones, as

over 20 named scales are in use,” says Hywel Williams, a dermatologist at the University of Nottingham, UK, who helped found the smaller-scale Harmonising Outcome Measures for Eczema (HOME) Initiative.

Doug Altman, a statistician at the University of Oxford, UK, and a member of COMET’s management group, says COMET took inspiration from another such initiative called Outcome Measures in Rheumatology (OMERACT), which has been surveying doctors and patients to develop and rank validated outcome measures for rheumatic diseases for the last two decades. COMET maintains an online database (http://www.comet-initiative.org/) that indexes reports from OMERACT, HOME and other similar projects. In June, COMET announced the Core Outcomes in Women’s Health Initiative, a planned effort to gather information on what primary and secondary endpoints trials involving women should include (BJOG doi:10.1111/1471-0528.12929, 2014).

VOLUME 20 | NUMBER 8 | AUGUST 2014 NATURE MEDICINE




Just use it
According to Williamson, the COMET database is the only searchable resource of completed and ongoing studies into core outcome sets. Now that it is up and running, the initiative’s next objective is to make sure that researchers use it. COMET has had success in getting the international group behind the SPIRIT (short for Standard Protocol Items: Recommendations for Interventional Trials) guidelines and the UK’s National Institute for Health Research to recommend that trial designers take into account any currently published core outcome sets and either use one of them or explain why they don’t. Williamson says that representatives from the US National Institutes of Health (NIH) and the US Food and Drug Administration attended a COMET workshop in April in Baltimore, Maryland, and she is hopeful they, too, will consider adopting similar recommendations.

Irmgard Eichler, the scientific administrator in pediatric medicines at the European Medicines Agency (EMA) in London and a member of COMET’s International Advisory Group, says that several EMA guidelines already include core outcome sets that are in COMET’s database. “The ultimate goal for all of us is to make trial results comparable,” she says.

Jerry Sheehan, assistant director for policy development at the NIH’s National Library of Medicine (NLM) in Bethesda, Maryland, agrees that consistency in outcome sets would make comparative meta-analyses easier. Last year, the NLM launched its own database of ‘core data elements’ that various NIH institutes recommend or require the trials they fund to follow. Sheehan says there may be opportunities for collaboration with COMET, but he does not know what such a collaboration would look like just yet.

Endpoint enthusiast: COMET's Paula Williamson.

Stellar outcomes? The COMET Initiative website hosts a database of research into outcome sets.

Patient input
A major finding of Williamson’s June report in PLoS One was that only 15% of the studies included patients or caretakers in core outcome set design. “Previous work has been quite poor about involving patients,” Williamson says. In the field of rheumatoid arthritis, for example, patients rank fatigue as the most important outcome to measure in a clinical study, but fatigue is not even listed on the official core outcome set for the International League of Associations for Rheumatology (Arthritis Care Res. 62, 647–656, 2010).

“The results you focus on in research make all the difference to patients,” says David Flum, a surgical epidemiologist at the University of Washington School of Medicine in Seattle and a methodology committee member at the Patient-Centered Outcomes Research Institute, a Congress-backed nonprofit that funds research aimed at making clinical trial results more useful to patients. For example, he says that many cancer drug trials measure vomiting as a side effect, “but if you ask patients what they care about, it’s not how often they vomit, it’s how often they are nauseous, and those things are not always related.” He thinks that COMET is a worthwhile initiative because


it promotes patient involvement in core outcome set design.

Although the goals of the COMET Initiative will probably help trial developers and patients, some are not convinced it will help researchers. “COMET is clearly noble, but in practice it’s also rather flawed in being overly prescriptive,” says Brian Lipworth, an allergy and pulmonology specialist at the Scottish Centre for Respiratory Research at the University of Dundee, UK. For example, the number of asthma attacks in a given time is a common measurement among several asthma core outcome sets indexed by COMET, but Lipworth says that a drug meant to immediately open up the airways should not have to be tested for its ability to prevent such attacks.

Aziz Sheikh, an asthma researcher at the University of Edinburgh’s Centre for Population Health Sciences, says he is interested in using COMET as a resource for his study designs but finds the database difficult to navigate. COMET lists 14 core outcome sets for asthma, and Sheikh says that even with the database it is hard to discern which is best for a particular study.

Williamson understands that some currently published core outcome sets on the database may be outdated or may not be representative of a field’s current consensus, which is why COMET is involved in developing new ones and designing methods to test their quality. She says that one of the goals of COMET is to give researchers an opportunity to get involved. If they see gaps in core outcomes in their fields, she says, “I hope that they’d join in to improve the situation.”

Amanda B Keener
