Interview

Clinical trial design: increasing efficiency in evaluating new healthcare interventions

Professor Shaun Treweek speaks to Adam Born, Assistant Commissioning Editor: Professor Shaun Treweek is Chair in Health Services Research at the University of Aberdeen (UK) and has over 18 years' experience as a health services researcher specializing in trial methodology. He is active in the fields of pragmatic trial design, the design and pretrial testing of complex interventions, interventions to improve recruitment to trials, and theory-based methods to assess the implementation potential of interventions. Professor Treweek previously helped create the Tayside Clinical Trials Unit at the University of Dundee (UK) and served as its Assistant Director from 2010 to 2013. His interest in trial methodology began before his time in Dundee, when he spent 6 years in Oslo (Norway) working at the Norwegian Knowledge Centre for the Health Services.

Keywords: efficiency • pragmatic • trials

QQ Could you tell our readers a little about your career to date & how you came to your current role?

My original background was in physics, and through that I got into medical physics, which was the area of my PhD. What really interested me was using an engineering and science background within healthcare, and (by a series of accidents, if truth be known) I ended up working in Oslo (Norway) for Andy Oxman. Through Andy I became really interested in health services methodology: how to improve the way we do evaluations of new innovations in healthcare. And through that I ended up in my current role in health services research. I am particularly interested in improving treatments and healthcare, obviously, but what gets me really excited are the methods we use to evaluate new healthcare innovations.

QQ What would you say has been your greatest professional or academic achievement to date?

10.2217/CER.14.13 © 2014 Future Medicine Ltd

I had to have a bit of a think about this one. If I was forced to find something: a few years ago I was involved with an extension of CONSORT for pragmatic trials. Pragmatic trials are extremely relevant in comparative effectiveness research (CER) – designs that are actually more relevant to users, or at least that’s the intention. What I would consider one of my greatest achievements was to be part of that initiative to develop this idea that if you want to design a trial that is likely to be useful to the people you want it to be useful to, often clinicians, patients and policymakers, then the design decisions you make directly influence how useful that trial will be. And you can make really good or really bad design decisions that will render your trial results really useful or completely useless.

Shaun Treweek, Health Services Research Unit, University of Aberdeen, Health Sciences Building, Foresterhill, Aberdeen, AB25 2ZD, UK; streweek@mac.com

QQ What is your research or what are your efforts focusing on at present?

Right now, they’re focused on increasing efficiency in clinical trials. There is, I believe, a lot of waste in how we do clinical trials and there’s not a great deal of evidence on how

J. Compar. Effect. Res. (2014) 3(3), 233–236

ISSN 2042-6305

to do trials in an efficient way. Right now we're doing a big push here at the University of Aberdeen (UK) to set up programs of work looking at improving the methods we use to do clinical trials and particularly to provide a much greater evidence base for that.

QQ Why is trial design so important, & why is the field only relatively recently gaining momentum?

I think it is important for the reasons I mentioned a moment ago. If you design your trial in a thoughtless way and you don't give sufficient consideration to the implications of each of your design decisions, then you could spend 5 years doing a beautifully conducted, irrelevant trial. Because you've taken a decision that means, for example, that none of the people who are in your trial are actually like the people in the community with the condition who are possible candidates for this particular innovation. So, the people providing that treatment think "Well, these are nothing like my patients, I can't use this." So, that one decision, made years ago, renders all of that expense, work, energy and goodwill wasted. Therefore, design is absolutely crucial; it's a make-or-break part of a trial – you can ruin a trial easily by making poor decisions.






I think it's only recently gaining momentum because people are only now really understanding the consequences of those decisions. People are, perhaps because of the current economic situation, realizing that we cannot afford to spend so much money on trials that do not have direct relevance to clinicians, patients, the public or policymakers. This piece of work has to inform someone's decision; it has to be useful. That really points the spotlight on the design decisions we make: we can't afford to waste money on trials whose design decisions render the results irrelevant.

QQ What do you feel are the most common problems associated with trial methodology currently?

The design decisions that I've already talked about are problematic, so people may make some design decisions simply because that's how they have always been made. For example, we might think we need to control for absolutely everything. So, we're only interested in participants with a particular condition, let's say diabetes, and we're not interested in anyone with anything else – we filter out all those patients that don't have



only the index condition, diabetes in this case. I think that's almost instinctive, but that thought, that decision, now leads to a highly selective group of participants, and it's highly unlikely that they represent the people being seen by the doctors who are making the decisions regarding diabetes care, thus killing the utility of the trial. So, I think perhaps the most important problem in trial design is people not thinking really carefully about the consequence of each of their design decisions. Who goes into the trial? Who is delivering the intervention? Can this intervention actually be delivered in routine care now, or are we throwing an army of resources at it in the trial, which means we could never put it into routine care without that army? Are we trying to collect far more data than we need to answer our research question, or more than we can actually do something with because we don't have the analytic resources within our funding envelope? And all that extra data collection: somebody has to collect it, somebody has to provide it, and perhaps that might affect recruitment or retention – that decision to put an extra outcome measure in has consequences. Not thinking about those consequences carefully at the design stage can kill trials.

QQ Would you say it is almost counterintuitive?

Sure, obviously you would want people with diabetes if you were doing a trial of a new diabetes drug. The question would be: what about people who are obese and have CVD as well? Or do you just want people who have diabetes? And what about people who are 83? What about pregnant women? Clearly you want people with the condition of interest, that's a given and everyone would recognize that, but what then often happens is that people say "We only want them. We're not interested in anybody who has comorbidities." What that often leads to is trials with a group of participants who are a highly selected subset of the actual population that caregivers are dealing with. So, they look at the trial results that you have beautifully collected over the past few years, and they'll see that no one in this trial was over 65 with heart disease; they look at their patient population and they see that almost everyone has heart disease and is elderly, and so they think the trial's a waste of time – it doesn't look like their patients and they'll ignore it. That trial could have been brilliantly conducted with 100% data collection and excellent retention, but it's irrelevant. The people in the trial are not like the people who doctors see. Now, if the trial organizers had thought through their design decisions, they may have said that there's no reason to believe that people who have these comorbidities are different from the people we've put into the trial, and then they could have had a discussion about it and made an informed decision one way or the other. My point here is that people do it thoughtlessly, they just don't think. So, they make the decisions, but haven't considered the consequences – does this decision make it easier or more difficult for the person I want to use these results to actually use these results?

QQ How can these methodological issues be resolved? What are the hurdles that need to be overcome to achieve this?

I think raising awareness is key. We need to encourage any opportunity for trialists, methodologists and clinicians, particularly people who are at the coalface, to actually raise these issues and say: this research that you guys are doing – it's not relevant, and these are the reasons why. I think listening more to the voices of clinicians, the public and policymakers and answering their questions is crucial for researchers. Of course, thinking more about who you put into the trial, what the outcomes are and how they should be measured is important, but just listening more to the people who are actually out there dealing with patients on a day-to-day basis, and listening to what patients are asking for, is paramount. I think that as researchers we can provide tools to people who are designing trials. There are some tools out there that should help people think through the consequences of a design decision, and those tools are not really intended to push people one way or the other; they're mainly intended to get people to think. What are the consequences of this decision? Could we implement this intervention if it was found to be effective? Who are we putting into this trial (both clinicians to deliver it and patients to receive it)? Do we monitor adherence a lot? Compliance has to be a factor in that decision as well – are we doing things in the trial that mean people are more likely to stay on the treatment than we might reasonably expect in routine care?




So I think there are tools that can help. Journals, as well, can enforce standards of reporting, and these feed backwards: if you expect something to be reported, someone has to make sure the trial is up to standard at the design stage, so reporting standards are also a way of changing how trials are designed. And funders, of course, can push agendas on improving trial design. Here in the UK, the big funders are very strong on patient-relevant outcomes. Funders can really drive quality in a good direction.



QQ How successfully do you think CER is currently being translated into policy & practice worldwide? How do you think this can be improved?

I think it's slow but getting better. I was just talking to some colleagues about exactly this issue. I think it is problematic, but as researchers listen more to how people are using our results, our research becomes more relevant – and I think that's what CER is about – and that translation process will become easier. As we listen more, and as we involve people like policymakers and patients in our research designs and the research process, that research will be more relevant, because it involves people who really know what's going on at the coalface. So, the translation process becomes less of a translation process, because its implementation potential was built in at the design phase. If something turns out to be effective, then you've already thought about how to implement it and you've talked with the people who know how to do that right at the outset of the trial or, even better, they're working with you continuously. I think all of these things will help the implementation of CER in the future. It will take time, but I think there's an awareness now. The fact that CER is gaining momentum is, I think, a good sign that people are recognizing that we have to think more carefully about the relevance of the research we plan to do.

QQ What are you excited about working on over the next year?

Two things. One is something called Trial Forge, which is an initiative to try and push forward methodology research within trials. The basic idea is to identify gaps in our knowledge about how to do trials and when we do identify a gap, seek to identify groups of individuals who are interested in filling those gaps. It’s all about collaboration and identifying what we know and what we don’t know. The other new thing that I’m interested in is looking much more to business and organizational change expertise, to see if we can learn something from them about how they do things in industry. And I don’t mean the pharmaceutical industry in particular, just industry in general. What can we learn from business that can be applied to healthcare and our trials? If you think of a healthcare innovation as a product then we’re trying to market an idea for a product to funders, clinicians and patients, and we’re trying to build loyalty among clinicians and patients so that they stick with it; I’m sure that there’s a lot we can learn from business and I’m kicking that off sometime this year.



QQ If you had infinite resources, what research would you instigate first & why?

The answer is linked to both of the things I just mentioned, but it’s all about improving the efficiency of the process. If I had unlimited resources, I wouldn’t be running lots and lots of trials of new innovations; I would be focusing on how to improve the way we evaluate. So, identifying gaps in our knowledge, looking for what we know, spreading what we know (i.e., implementing what we know) and building collaborations so that we don’t have separate patches of activity but are working towards shared goals. And also ways of making it easier for people at the coalface to inform trial design so that we’re asking the right questions and we’re designing trials to answer those questions, so that the results are going to be useful.

QQ Finally, what do you think will be the hot topics in CER & trial design over the next few years?

Well, I'm going to come back to efficiency again. I really do think there is a growing recognition that we are throwing money down the drain and that we have been doing it for years. Perhaps tough times can do good, because you can't afford to keep doing that while there's so little money floating about. For me, the big topic is trial efficiency and relevance.

Disclaimer
The opinions expressed in this interview are those of the interviewee and do not necessarily reflect the views of Future Medicine Ltd.

Financial & competing interests disclosure
S Treweek has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript.

