Westminster Higher Education Forum Keynote Seminar: Protecting research integrity: reproducibility, the impact of the REF and improving governance
Posted by s.aragon
on 21 November 2018 - 9:00am
By Martin Donnelly, Research Data Support Manager at University of Edinburgh, and Software Sustainability Institute Fellow.
Reproducibility and integrity rank highly among the justifications for the ever-increasing attention to the mindful management and preservation of research data and software that we have seen in the last decade. These issues are often at the front of my mind in my day job managing my institution’s Research Data Support function, so I was naturally very happy to get the opportunity to travel to London in October to attend the most recent Westminster Higher Education Forum Keynote Seminar on the topic of “Protecting research integrity: reproducibility, the impact of the Research Excellence Framework (REF) and improving governance”. I have attended (and indeed spoken at) these events in the past, and always find them good value, squeezing as they do a great deal of food for thought into a compact agenda.
For those of us earning a crust in the academic echo chamber, they can help bring us out of our comfort zones and help situate what we do within the bigger picture of national and international research and innovation. This morning’s session was no exception, and the presentations and discussions, while concise, were often very wide-ranging, so for the sake of brevity in this blog post I have extracted and summarised only what I felt to be the most relevant bits from my lengthy notes.
This was an event of two halves, the first of which was chaired most impressively by Baroness Young of Old Scone, in her capacity as Member of the House of Lords Science and Technology Select Committee (although she may be better known to some in her capacity as Chancellor of Cranfield University). As chair, she emphasised the need to engage with research stakeholders “all along the supply chain” to ensure that research was seen to be of high quality, honest, rigorous, transparent, open, and respectful. Ethics has two assurance aspects: one protective, ensuring the safety of research subjects, and the other assuring the quality of the conclusions. The roles of data and software in research generally belong to the latter sort, but we must ensure that the choices we make are informed and deliberate.
The first presentation of the day came from James Parry, Chief Executive of the UK Research Integrity Office, who addressed “Key issues and best practice within research.” James’s emphasis was on culture change, as opposed to a compliance regime around research integrity. Interestingly he took a somewhat different line on reward and recognition than I’m used to hearing, namely that we shouldn’t feel a huge need to reward researchers for doing what should be generally considered as The Right Thing To Do, in terms of sharing data and following good research practices, but that we seek to foster a (peer-driven) culture wherein this is normalised. James also addressed systemic pressures within the professional research environment, as described in the Nuffield report on Culture of Scientific Research in the UK (2014), and noted the need to be sympathetic to disciplinary norms and differences, as well as emphasising the directly-realised benefits of greater transparency.
James was followed by Stephen Metcalfe MP, in his capacity as a Member of the House of Commons Science and Technology Select Committee, who noted that integrity, ethics and proper research conduct have been constant themes for the Committee for much of the past decade. The problem is far from being solved, but the general view is that establishing a statutory regulator is not the preferred option, for more or less the same reasons as James gave earlier: the will to do transparent, ethical and high quality research ought to emerge organically from the peer community rather than being imposed from “on high”. A question from the floor raised the spectre of Big Data, specifically the difficulties inherent in tracking provenance and demonstrating integrity in massive, diverse datasets which may originate in many smaller ones. The view seemed to be that trust was important, in the absence of bulletproof provenance-tracking mechanisms. How this might work in practice was hard to quantify, though.
Next up was Samuel Roseveare, Policy Manager at Universities UK, who spoke about the 2012 Concordat to Support Research Integrity, which addresses data but not software. They are aware that it has been only weakly taken up by the sector, and would like to move towards a situation where universities are complying with the Concordat commitments and self-reporting on how they meet them. The Concordat is due for renewal soon, and hopefully we can take that opportunity to get other outputs (like software) included too.
Providing a publisher’s perspective, Iain Hrynaszkiewicz, Head of Data Publishing at Springer Nature, emphasised the benefits of demonstrable integrity to the product they sell, i.e. scholarly research. As an influential stakeholder (researchers usually want and need to get published) Iain’s view is that publishers should be increasingly demanding transparency from researchers, effectively obliging them to adopt good, robust practices. Iain also noted that we can all do our bit to encourage different kinds of publications, such as data articles, software articles and methods articles, and also that we measure citations to supporting evidence such as datasets and software, etc. An encouraging view.
The next couple of presentations were somewhat polemical, and all the more entertaining for that! Dr Stephanie Mathisen, Policy Manager at Sense about Science, spoke about clinical trial transparency and data sharing. This is a legal requirement now (by law all clinical trials on the EU register must report within one year of completion), but it is not universally followed, and only around half of the data evidencing clinical trials is currently available: 51.4% to be exact. In Stephanie’s view, this betrays the trust of patients who have provided access to their data, and indeed to their bodies. There is a frustration that this has been talked about for decades, but real change happens very slowly. Sense about Science use unorthodox tactics to try to improve the state of affairs, naming and shaming unreported trials via an “unreported trial of the week/month” in the British Medical Journal.
Stephanie was followed by Dr Stephen Eglen, Reader in Computational Neuroscience at University of Cambridge, a Software Sustainability Institute Fellow, and a member of the Bullied Into Bad Science campaign. Stephen noted the need to change the culture whereby the perceived quality of the journal in which you publish is given more weight than the research conclusions themselves; again, this is an issue for policymakers (about whom Stephen had some fairly strong words!), and the pressure to publish first means that short cuts are sometimes taken. Stephen ended by encouraging us to read and sign DORA (the Declaration on Research Assessment), and to stop sending the wrong signals by celebrating and congratulating high-profile publications more than others, and by “dressing up” impact case studies to seek to “game” the REF.
After coffee, the chairing of the second part of the event passed to Dr Frances Rawle, Director of Policy, Ethics and Governance at the Medical Research Council (MRC). Wouter Haak, Vice President for Research Data Management Solutions at Elsevier, spoke first, addressing “The potential of open data in supporting research integrity” and citing some high-profile instances of scientific misconduct and irreproducible research. Frances gave the next talk herself, noting that funders also have a strong influence on researcher practices, and outlining the revisions recently made to MRC application paperwork to address integrity and reproducibility issues more explicitly.
Professor Nick Plant, Dean for Research Quality and Impact at the University of Leeds, spoke about his weariness at hearing variations on “I’m an academic, trust me”, which is at odds with the sceptical tradition in science. Dr Daniel Hook, CEO of Digital Science, took a fairly gloomy view of the existing reward and recognition structures in academia, noting that human nature dictates that as soon as you put an evaluation system in place, people will seek to optimise their own outcomes. Daniel also noted the responsibility of funders (and other stakeholders) to follow up on their policies, and to impose sanctions when researchers fail to do what is expected and required of them.
The final presentation of the day came from Dr Elaine Morley, Senior Policy Advisor at UK Research and Innovation (UKRI), who noted an ambition to improve statistical and data management competencies among researchers, in conjunction with partners such as the Royal Statistical Society. Universities like Edinburgh are already on board with this, but it is difficult to reach all of the necessary people at the right time, i.e. when they are in the mood to learn new behaviours: it’s a bit of a moving target, and hard to hit.
After the event wrapped up I had just enough time to take the tube down to the House of Commons for lunch and a quick drink with a friend of mine who also happens to be an MP (and whose constituency covers part of my University’s estate). I did manage to steer the conversation somewhat towards the role that data and code sharing have to play in ensuring demonstrable research integrity, but I daresay a more formal and intensive lobbying approach might bear even greater fruit! Still, every little helps.