By Andrew Edmondson, Mike Zentner, and Cristian A. Marocico. We’re writing this blog from the perspective of people who are responsible for helping researchers in our institutions develop their own software for their own research purposes. We want to help our communities to make the right decisions about the sustainability of their software – and therefore about their time and money.
By Jeremy Cohen, Niels Drost, Vahid Garousi, Dafne van Kuppevelt, Reed Milewicz, Ben van Werkhoven, and Lasse Wollatz. Software plays an increasingly important role in all aspects of the modern scientific enterprise. The practice of developing scientific software, however, is still young and relatively immature compared to more traditional methods and instruments.
By Ben Companjen, Nicky Nicolson, Marcin Wolski, Graeme Andrew Stewart, and Anastasis Georgoulas. As more research fields develop computational aspects, teaching good software practices and development becomes essential across the scientific spectrum. With the exception of a few disciplines with a strong computing tradition, students and staff in most areas have to rely on generic materials, which are often limited in scope or make unrealistic assumptions about the background of their audience.
By Daina Bouquin, Christopher Ball, Anna-Lena Lamprecht, Catherine Jones, Tyler J. Skluzacek. Containers, virtual machines, Jupyter notebooks, web applications, and data visualisations that run in a browser are all examples of complex digital objects made up of multiple components. Each of those components may have unique dependencies (hardware, software, external datasets, etc.) and different “authors”. Each component will also have different expected functionalities and may even have different licenses.
By Mateusz Kuzak, Maria Cruz, Carsten Thiel, Shoaib Sufi, and Nasir Eisty. We argue that research software should be treated as a first-class research output, on an equal footing with research data. Research software and research data are both fundamental to contemporary research. However, recognition of the importance of research software as a valuable research output in its own right is lagging behind that of research data.
By Alexander Struck, Chris Richardson, Matthias Katerbow, Rafael Jimenez, Carlos Martinez-Ortiz, and Jared O'Neal. Software written to answer research questions is gaining recognition as a research result in its own right, and requires evaluation in terms of its usability as well as its potential to facilitate high-impact and reproducible research. Indeed, in recent times there has been increased emphasis on the importance of reproducibility in science – particularly of results where software is involved. To tackle this problem, there have been efforts to evaluate the reproducibility of research…
By Stephan Druskat, Daniel S. Katz, David Klein, Mark Santcroos, Tobias Schlauch, Liz Sexton-Kennedy, and Anthony Truskinger.
Like the behemoth cruise ship leaving the harbour of Amsterdam that overshadowed our discussion table at WSSSPE 6.1, credit for software is a slowly moving target, and ensuring that the right people get due credit is a non-trivial task. In this blog post, we review the current state of practice in credit for research software. We also summarise recent developments and outline a more ideal state of affairs.
By James Grant, University of Bath, Andrew Washbrook, University of Edinburgh, Louise Brown, University of Nottingham, Niels Drost, Netherlands eScience Center, and Andrew Bennett, European Centre for Medium-range Weather Forecasts
By Toby Hodges, EMBL, Roman Klapaukh, UCL, David McKain, University of Edinburgh, and
The citation of research software serves a number of purposes, most importantly attribution and credit, but also the provision of impact metrics for funding proposals, job interviews, etc.