
Reprohacking at CarpentryConnect Manchester 2019


Author(s)
Anna Krystalli, SSI Fellow

Posted on 7 August 2019



Photo by Markus Spiske.

By Anna Krystalli, Software Sustainability Institute Fellow

It was great to be at CarpentryConnect Manchester 2019 a few weeks ago. Not only did I get to spend time with a fantastic group of people, extremely knowledgeable and generous with that knowledge, but the event's focus on active learning also made it the perfect venue for running the first ReproHack in the series and gathering feedback to guide future development.

What's a ReproHack?

ReproHacks are one-day reproducibility hackathons where participants reproduce papers from their published code and data. I've helped run two of these since 2016 as OpenCon satellite events, and returning to ReproHacking and taking it to the next level is the focus of my 2019 Software Sustainability Institute Fellowship.

The run-up

In their current format, ReproHacks begin with a call for authors to propose their papers for reproduction. With a little help from friends and supporters of the initiative, we managed to get 18 papers proposed, which is good going! One observation I'd make is that, although the papers came from across domains, the majority of them were coded in R. I'm not sure what to make of this and don't want to read too much into it; it is no secret that I live in a giant #rstats bubble, and although I tried to engage other research communities as much as I could, I may simply not have been successful. I do wonder, however, whether work across the R research community to formalise how code, data and papers are brought together in a research compendium, and to build tools that support this, is paying off.
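For the unfamiliar: a research compendium bundles the code, data and manuscript for a paper into one conventionally organised unit. An R-flavoured layout, in the spirit of tools like rrtools, might look something like this (all names illustrative):

```
mypaper/
├── DESCRIPTION          # declares the R packages the analysis needs
├── README.md            # how to reproduce everything
├── R/                   # reusable functions called by the paper
└── analysis/
    ├── paper/paper.Rmd  # the manuscript as a literate document
    ├── data/            # read-only input data
    └── figures/         # generated outputs
```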

I was also really excited about interest in the ReproHack in general. In particular, I was contacted by Nicolas Rougier of ReScience C, an online journal publishing open source replications of already-published research. While replications are a step above reproductions, requiring variation in technical details (for example, using different software, or running a simulation from different initial conditions), they are nonetheless most useful when reproducibility has already been verified. With that in mind, a proposal worth exploring was made to create a new “reproducibility report” category.

I also heard from Cassio Amorim, the creator of scigen.report, a web portal through which reproducers can log the status of their efforts against a paper's DOI! Both initiatives provide really exciting avenues for producing useful outputs during ReproHacks, creating incentives for participants and a way for findings to be logged for the benefit of the research community. We are working on integrating them into the ReproHack workflow and welcome feedback and ideas!

The most exciting development for me, however, was that Florencia D'Andrea, a postdoc at the National Agricultural Technology Institute of Argentina, joined the team! She has already helped loads by reviewing and preparing materials, contributing ideas and, most importantly, just being there to bounce ideas off and to plan future directions.

On the day

First and foremost, we had a brilliant time on the day! We got a total of 11 participants passing through, which I was really happy with given the diversity and quality of the multiple sessions running in parallel.

Hacking the ReproHack

Part of this first session was to crowdsource ideas and decide on a strategy and next-generation infrastructure for making ReproHacks themselves reproducible. I was really pleased to have Radovan Bast of the University of Tromsø / CodeRefinery helping with this. We've been sketching out plans and hope to co-locate for a sprint on materials in the autumn.

Reprohacking

Finally, the main activity of the day was of course reproducing papers! I'm pleased to report that the majority of papers attempted were reproduced, and the stumbling blocks we did come across were instructive in their own way.

Automation is great, but storytelling is even better!

Huge thanks to @EnviroKaty for submitting a fab paper to the #CCMcr19 #ReproHack! I had loads of fun reproducing the analysis for this really cool paper https://t.co/v1ww2D5xhg pic.twitter.com/r8rYMAMvPm

— Jessica Ward (@JKRWard) June 27, 2019

One general observation is that participants most enjoyed working with papers that brought them closer to the analysis, most notably through the use of literate programming. The more they felt walked through the materials, the more they felt they understood the analysis and what the code was doing, and the more ideas they got about how to reuse it (a minimal sketch of such a document follows the quotes below).

Awesome work, I had a lot of fun reproducing the analysis and investigating the paper. I'm now looking forward to playing around with the code for wrangling inaturalist data!!
--- Jessica Ward on "Comparisons of Citizen Science Data-Gathering Approaches to Evaluate Urban Butterfly Diversity" submitted by Kathleen Prudic

It was great working with someone else's markdown script - the .Rmd file itself was written in a really clear and transparent manner with lots of helpful comments and signposting as to what each chunk of analysis did.
--- Andrew Stewart on "Comparing theory-driven and data-driven attractiveness models using images of real women’s faces" submitted by Ben Jones
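For anyone who hasn't met literate programming, the sketch below gives a flavour of the kind of R Markdown document participants responded to. Everything in it is hypothetical: the filename, columns and analysis are mine, not from either paper.

````markdown
---
title: "An illustrative literate analysis"
output: html_document
---

First we load the raw observations. Narrative text like this is what
walks a reproducer through the analysis.

```{r load-data}
library(readr)
obs <- read_csv("data/observations.csv")  # hypothetical file
```

Species richness per site is the number of distinct species recorded
there; computing it right after explaining it keeps code and reasoning
together.

```{r richness}
library(dplyr)
obs |>
  group_by(site) |>
  summarise(n_species = n_distinct(species))
```
````

Interleaving prose and chunks like this is what gave reproducers the guided-tour feeling both quotes describe.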

In this respect, another paper was attempted whose fully automated workflow went from a single command to a fresh copy of the PDF of the paper. While easily the most robust setup in terms of reproducibility, it provided little opportunity for participants to be introduced to the code and analysis. So it's important to consider not only whether something is reproducible, but also what the aim of the reproducibility is. Validation that the code produces the same results? An opportunity to examine and understand the code itself? The ability to reuse it?
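To be clear, that robustness is genuinely valuable; a fully automated entry point can be as simple as a single call (the path here is a placeholder, not the actual submission):

```r
# Rebuild the manuscript PDF from code and data in one command.
# "analysis/paper/paper.Rmd" is a hypothetical path.
rmarkdown::render("analysis/paper/paper.Rmd",
                  output_format = "pdf_document")
```

A single call like this is ideal for checking that the same results come out, but it tells the reader nothing about what happens along the way.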

A need for appropriate infrastructure

The day also highlighted the need for appropriate archiving of materials. Attempts to access the code and data published alongside the Nature Communications article "Sea level regulated tetrapod diversity dynamics through the Jurassic/Cretaceous interval" on the journal's platform led initially to a 404 error.

In the intro talk to the #ReproHack we stress that reproducibility is hard.

We weren't quite expecting this! https://t.co/48CtrYP45g

— annakrystalli (@annakrystalli) June 27, 2019

Following that, the error changed to "Service Unavailable" and after a couple of hours or so it was fixed, but without a single word of engagement from the publishers themselves about it.

While I don't want to make too big a deal of this (mistakes do happen), it does make me wonder whether traditional publishers are actually taking this seriously, especially given the 6,000 EUR price tag attached to publication of this particular paper. Reproducibility IS hard, and it bothers me to think that, while authors go through the painstaking effort of making their work reproducible, publishers aren't also upping their game. Ultimately, it shows that engaging with the materials is necessary if reproducibility is not to become just a box-ticking exercise that nobody checks.

It also feeds into an important point, raised by a participant in the event's collaborative notepad, regarding standardisation of access to such materials, which I share here as an open-ended question:

I am wondering if there is any automated way to pull the data and code from supplementary or additional files. Should the scientific community start to recommend some regulations and formats to smooth the way of how/where these kinds of data are made available and beneficial?
--- Manal Albahlal, University of Manchester
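For context, retrieving such materials today is usually a manual, per-paper affair. In R it often boils down to something like the sketch below, with a URL copied out of the supplementary-information page by hand (the URL and filename here are entirely made up):

```r
# Ad hoc retrieval of supplementary data: nothing about this URL is
# standardised or machine-discoverable; it was found by hand.
url <- "https://example.com/article/12345/supplementary/data_S1.csv"
download.file(url, destfile = "data_S1.csv", mode = "wb")
supp <- read.csv("data_S1.csv")
str(supp)
```

A community convention on where and in what format such files live would turn this step from archaeology into something scriptable.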

tl;dr

Overall, I feel that efforts like the ReproHack, which aim to engage people with the published raw materials behind the science, are an important vehicle for demonstrating the value of the practice, evaluating the current state of approaches, and using that to inform best practice going forward: not only for reproducibility, but also for reuse, transparency and, ultimately, for spreading knowledge, understanding and capacity more broadly.

P.S. If you are interested in running a ReproHack or helping out, please check out our reprohack-hq repository or say hello in our Gitter channel.
