Software and research: the Institute's Blog

Making Software a First-Class Citizen in Research

Latest version published on 28 November, 2018.

By Mateusz Kuzak, Maria Cruz, Carsten Thiel, Shoaib Sufi, and Nasir Eisty. We argue that research software should be treated as a first-class research output, on an equal footing with research data. Research software and research data are both fundamental to contemporary research. However, recognition of the importance of research software as a valuable research output in its own right is lagging behind that of research data.

How do we evaluate research software to meet different requirements?

Latest version published on 27 November, 2018.

By Alexander Struck, Chris Richardson, Matthias Katerbow, Rafael Jimenez, Carlos Martinez-Ortiz, and Jared O'Neal. Software written to answer research questions is gaining recognition as a research result in its own right, and it requires evaluation in terms of usability as well as its potential to facilitate high-impact and reproducible research. Indeed, in recent times there has been increased emphasis on the importance of reproducibility in science – particularly for results where software is involved. To tackle this problem, there have been efforts to evaluate the reproducibility of research…

Credit and recognition for research software: Current state of practice and outlook

Latest version published on 3 December, 2018.

By Stephan Druskat, Daniel S. Katz, David Klein, Mark Santcroos, Tobias Schlauch, Liz Sexton-Kennedy, and Anthony Truskinger. Like the behemoth cruise ship leaving the harbour of Amsterdam that overshadowed our discussion table at WSSSPE 6.1, credit for software is a slowly moving target, and it’s a non-trivial task to ensure that the right people get due credit. In this blog post, we aim to review the current state of practice in terms of credit for research software. We also attempt to summarise recent developments and outline a more ideal state of affairs.

Dabbling In Deep Learning

Latest version published on 27 November, 2018.

By Adam Tomkins, Software Sustainability Institute Fellow. Like most programmers, I’ve heard all about the latest advances in Artificial Intelligence and Deep Learning, but, being stuck deep in a software project of my own, deep learning has remained thoroughly on the outskirts of my professional life. However, I’ve been ruminating on some problems that could serve as a nice on-ramp into the deep learning development community, and I’ve given it a fair few shots over the years. Unlike any other technology I’ve dabbled in, there always seems to be something in the way of what should be a…

Westminster Higher Education Forum Keynote Seminar: Protecting research integrity: reproducibility, the impact of the REF and improving governance

Latest version published on 27 November, 2018.

By Martin Donnelly, Research Data Support Manager at the University of Edinburgh, and Software Sustainability Institute Fellow. Reproducibility and integrity rank highly among the justifications for the ever-increasing attention to the mindful management and preservation of research data and software that we have seen over the last decade. These issues are often at the front of my mind in my day job managing my institution’s Research Data Support function, so I was naturally very happy to get the opportunity to travel to London in October to attend the most recent Westminster Higher Education Forum…

The main obstacles to better research data management and sharing are cultural. But change is in our hands.

Latest version published on 27 November, 2018.

By Marta Teperek and Alastair Dunning, TU Delft. Recommendations on how to better support researchers in good data management and sharing practices are typically focused on developing new tools or improving infrastructure. Yet research shows the most common obstacles are actually cultural, not technological. Marta Teperek and Alastair Dunning outline how appointing data stewards and data champions can be key to improving research data management through positive cultural change.

Bash Scripting Workshop

Latest version published on 19 November, 2018.

By Becky Arnold, University of Sheffield. On the 7th of November, Raniere Silva of the Software Sustainability Institute gave a one-day workshop on bash scripting at the University of Sheffield. The Unix shell has tremendous power. This workshop was geared towards researchers who had some experience of working on Unix-like systems but wanted to build on that to better exploit the shell's full potential. Will Furnass of the University of Sheffield Research Software Engineering group also helped out at this event.

Science, Awards and Reproducibility

Latest version published on 19 November, 2018.

By Raniere Silva, Community Officer, Software Sustainability Institute. As researchers, we each have our own motivation to spend years digging into something until we, hopefully, find something new. Whatever our motivation, it is always nice to receive recognition for our work, especially in the form of a famous award like the Nobel Prize. During the announcement of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2018, one of the awardees, Paul M. Romer, highlighted the need to communicate science clearly – not only using simple words and language if you’re…

9th ConfOA: the Brazil-Portugal Conference on Open Access

Latest version published on 19 November, 2018.

By Raniere Silva, Software Sustainability Institute, and Stephan Druskat, Humboldt-Universität zu Berlin. ConfOA is the Brazil-Portugal Conference on Open Access, and its 9th edition was hosted in Lisbon, Portugal, from 2nd to 4th October 2018. Although only open access appears in its name, the conference is a place to discuss the broader concept of open science with many stakeholders.

Open Community Metrics and Privacy: MozFest18 Recap

Latest version published on 15 November, 2018.

By Raniere Silva, Software Sustainability Institute, and Georg Link, University of Nebraska at Omaha. Open communities lack a shared language for talking about metrics and sharing best practices. Metrics are aggregate information that summarises raw data into a single number, stripping away the data's context. Pedagogical metric displays are one idea for metrics that include an explanation and educate the user on how to interpret the metric. Metrics are inherently biased and can lead to discrimination. Many problems raised during the MozFest session are being worked on in the CHAOSS project.