Journal of Psychiatric and Mental Health Nursing, 2015, 22, 155–156

Editorial: Has the Research Excellence Framework killed creativity?

Lines have closed, the votes have been counted and verified, and I can now reveal that the winner of the Research Excellence Framework (REF) 2014 is, well, everyone it would seem. REF has been a six-year process to review the quality and (new for 2014) impact of research produced by UK universities. It was a major endeavour to assess the research outputs of a nation, and it is a model of performance-based research assessment that other countries – particularly developing countries with emerging academic systems – will inevitably look to. It has been a complex and arduous process for those involved, and one not without its critics. Certainly, it has been expensive, costing the UK taxpayer at least £60m (universities spent an estimated £47m on preparation, with a further £12m in administrative costs), with some more recent estimates putting the figure at nearer £1bn (http://www.timeshighereducation.co.uk/news/academic-estimates-real-cost-of-ref-exceeds-1bn/2018493.article). In preparing for the REF, senior academic colleagues have spent – literally – years locked in small, dingy offices poring over piles of papers, agonizing over who would and – significantly for their careers – would not have their research submitted. Many of my own peers feel, with some justification, that they have been unfairly excluded from REF. The process has – it might be argued – created a negative culture of fierce inter-collegial competitiveness: researchers pitted against researchers, like kids in a playground, fighting over which papers are rightfully theirs. Academics who eventually received the ‘REF returned’ stamp of approval each had up to four outputs (mostly papers) submitted. The REF panel reviewed these, giving each a star rating ranging from unclassified (below the standard of nationally recognized work) to 4* (world leading in terms of originality, significance and rigour). Outputs were the main (65%) element of the overall research quality profile.
The environment (vitality and sustainability) and impact (reach and significance, demonstrated through case studies) made up the remaining 15% and 20%, respectively. The more 3* or 4* research in a university's quality profile, the greater its share of central government research funding (the pot currently stands at around £1.6bn).

Headlines on REF day – 18 December 2014 – were that 76% of research from UK universities was judged to be internationally excellent or world leading (www.ref.ac.uk). So fierce was the competition that an impressive 14 universities claimed to be in the research top 10. How did nursing fare? And, of more interest to readers of this journal, how did mental health nursing do? Back in 2001 – when REF was called the Research Assessment Exercise (RAE) – it is probably fair to say that nursing did rather badly. The profession responded positively to this challenge, and by the time of the 2008 RAE it was seemingly in much better research shape; there was evidence of outputs of at least an internationally excellent standard (3* or 4*) from a number of universities. The panel observed that nurses were more actively participating in – and in some cases leading – multidisciplinary research groups working on externally funded projects. In the 2014 REF, nursing ceased to be a unit of assessment in its own right. A reconstituted panel (UoA 3) now included Allied Health Professions, Dentistry and Pharmacy sitting comfortably (or perhaps not) alongside Nursing. The process by which panels were constituted was – to say the least – opaque. I have asked people in the know how panel members were selected and received very mumbled responses. In my overly simple brain, I would have thought that members would have been the top researchers – perhaps defined by h-index – in our field (nursing). This was not the case back in the 2008 RAE (an argument that has been well made by Thompson & Watson 2010), and I suspect it was not the case this time either.

How were papers judged?

As part of preparing for the REF, my university – like most institutions – undertook a mock REF to get a sense of how papers might be rated by the (actual) panel. Mock research assessments were not new for this assessment exercise, but there were added financial implications of getting it wrong this time around. This was because the funding allocation model was changed in 2010: outputs rated 2* or below no longer generated any income for universities. Consequently, universities were under pressure to ensure that as many 3* (and above) outputs as possible were returned. Vast effort (time and money) was devoted to trying to second-guess the decisions of the panel members. To my mind, there are serious questions to ask about the validity of the process that panels followed – questions that make me wonder whether it was worth all the effort after all. Panels rated papers based on their own judgement; journal impact factors and citation data were not used to inform ratings. One colleague told me that they had no more than 15 min to read and rate each paper. Nursing is a broad methodological church that spans clinical trials and epidemiology through to social policy research. Can we really have faith that our work was judged fairly by our peers? Does it matter when the result was so good? The reconstitution of panels makes a direct comparison between 2008 and 2014 impossible, but some noteworthy trends can still be extracted. Compared with 2008, fewer staff (down 8%) submitted fewer papers (down 17%), yet overall research quality improved: 77% of outputs were classified as internationally excellent or world leading. It was pleasing to see that mental health research, particularly in the area of suicide and self-harm, was commended as a particular strength in the REF sub-panel report. The (reassuring) story that nursing can tell itself from the last three UK research reviews is of a profession that has pulled itself up by its stockings and is now producing world-leading research that is at least on a par with that of colleagues from clinical medicine, social policy and health services research. The system has worked?

Research with impact

Impact has perhaps been the big ‘story’ to emerge from the REF process. Proving that research has had an impact is tough. An exemplar case study from nursing is perhaps CBT (cognitive behavioural therapy) for depression. Researchers were able to demonstrate that CBT delivered in innovative ways (guided self-help, by telephone) was as effective as face-to-face treatment at supporting recovery from depression (http://www.mhs.manchester.ac.uk/research/impact/psychologicalintervention). When writing impact case studies, researchers (myself included) have tried to argue the impact of their work simply by highlighting that it is cited in NICE (the National Institute for Health and Care Excellence) or other national/international guidelines. What the CBT (and other) case studies show is that this is not evidence of impact; rather, it is evidence of a path to impact. The CBT case study has impact because large numbers of patients have benefitted (significance and reach).

What does REF 2014 mean for mental health nursing research?

Mental health nurses in REF 2014 have produced some world-class research that has had demonstrable positive impacts on the mental health of the nation, and this is to be celebrated. But it has come at a hefty price, both financially and emotionally. The REF process has been expensive and divisive; it has created competition rather than fostered collaboration. One of the most worrying criticisms of REF, and one that I have not yet touched on, is that the process has crushed innovation. By chance, driving home the other evening I was listening to BBC Radio 4's ‘The Infinite Monkey Cage’ (a comedy/serious science programme presented by Professor Brian Cox from the University of Manchester). This particular episode was about serendipity in research – about how great ideas occur when scientists have space to think. The discovery of antipsychotic drugs is perhaps a classic example of a chance finding in psychiatric research. REF has been described as a monster; if it is, it is one that likes to suck the life out of creativity. I wonder where the idea of delivering CBT over the phone came from. There are some big questions in mental health care and treatment: How do we improve the physical health of people with schizophrenia? What can be done to get people with mental health problems back to work? How can we better treat the negative symptoms of schizophrenia? We (society) need mental health nurses – yes, to get more engaged in doing world-leading research – but more than anything, we need them to be imaginative. Judging the quality of research is not like judging a talent competition; we cannot afford, particularly in nursing, to discard talent. Researchers need to be cherished, nurtured and supported. This is how we will improve mental health.

R. GRAY PhD RN
Professor and Assistant Executive Director of Research, Hamad Medical Corporation, Doha, Qatar and The University of South Australia, Adelaide

Reference

Thompson D.R. & Watson R. (2010) H-indices and the performance of professors of nursing in the UK. Journal of Clinical Nursing 19, 2957–2958.


© 2015 John Wiley & Sons Ltd

