Critique software, but understand the constraints it’s written under

Author(s)
Neil Chue Hong, Director
Simon Hettrick, Director of Strategy

Posted on 1 June 2020

Estimated read time: 5 min

Photo by Markus Spiske on Unsplash

Demonising researchers who publish their code discourages openness, say Neil Chue Hong and Simon Hettrick.

This article originally appeared in Research Fortnight, and is reproduced here with permission.

Three months ago, few would have believed that the trustworthiness of a piece of academic software could become the focus of intense international scrutiny. But that was before the epidemiologist Neil Ferguson and his team at Imperial College London released their 16 March preprint modelling the toll of Covid-19 in the UK and the effect of measures to control it.

The grim numbers in this work seemingly contributed to the government’s decision to impose a lockdown, with its associated and devastating effect on the UK’s economy and our way of life. Since then, Ferguson’s work and personal life have been put through the wringer.

Much of the attention—and more than a little opprobrium—has focused on the computer code underlying the model. Initially, this centred on the fact that the group had not released its software.

When Ferguson tweeted on 22 March that he “wrote the code (thousands of lines of undocumented C) 13+ years ago to model flu pandemics”, the debate expanded to include the work’s age, robustness and applicability to coronavirus.

After the software was released in late April, the issue even reached parliament. On 11 May, the Conservative MP Desmond Swayne questioned the role of Ferguson’s work in the lockdown decision, telling the House that the code had received “very significant peer criticism”.

Sharing is rare

The affair has seen legitimate scientific concerns and debate mixed up with efforts to undermine the lockdown and deflect responsibility for policy decisions. But most of those criticising Ferguson for sharing his code too late probably don’t realise that sharing software at any time is far from the norm in academia.

Fundamentally, this is because most researchers don’t have the necessary skills, and those who do lack any incentive to invest the time.

Even those who publish their software have little reason to clean up and document their code for release, or to support it afterwards. Researchers are judged on their publications, not the quality of their code. With no incentives, and amid an already busy schedule of research, teaching and administration, time is too precious to expend on software.

Industry is often held up as an example of how to do things right. Some of Ferguson’s critics even suggested that academic software development should be outsourced to industry.

Compared with academia, industry has better resources, better training and far stronger incentives for building good software. It has practices to ensure that more eyeballs see the code, and that reviews happen earlier in the development process. Bugs still find their way into the software, but the loss of revenue or users is a powerful incentive to fix them.

Researchers can certainly learn from industry—although we shouldn’t forget there are also many examples of gold-standard software development in research. But we cannot expect to emulate the commercial world without its resources and incentives.

Building trust

None of this is to deny that openness is vital. It’s something that the Software Sustainability Institute has advocated and supported for 10 years. We work for a culture change to improve the recognition, reproducibility and reusability of research software.

If researchers are willing to publish their results, they should be willing to publish their software. If we are to trust those results, research—including the software that underpins it—must be transparent.

But we will not incentivise openness through toxic, ‘reviewer 2’-type behaviour. Criticise software by all means, but bear in mind that its author is likely to be under-resourced and their work with software under-appreciated.

Even well-meaning researchers and software engineers can do more harm than good if they forget they are fortunate to have expertise and knowledge that is not universal across academia. If we attack researchers who take the plunge and make the effort to release their code, we will only discourage others from publishing theirs.

With software permeating every aspect of research, it was only a matter of time until a catastrophe forced research results and the software that generated them into the limelight. This will not be the last time. If something good is to come of this situation, we should use this interest in software to change research culture.

We must accept that trust in research is inextricably tied to trust in software, and use this to lead the research community to adopt better software engineering practices. A good first step would be to applaud researchers who are brave enough to publish their code.
