CW21 - Mini-workshops and demo sessions


Mini-workshops and demo sessions will give an in-depth look at a particular tool or approach, and a chance to query developers and experts about how it might apply to participants’ areas of work.

Here is the list of mini-workshops and demo sessions that will take place at CW21.

 



Day 1: Tuesday, 30 March 2021 from 15:50 - 16:20 BST (14:50 - 15:20 UTC)

1.1 Interactive Introduction to the FAIR Research Software discussion

Facilitator(s): Anna-Lena Lamprecht (Utrecht University) and Michelle Barker (Research Software Alliance)

Abstract: This session offers a compact, interactive introduction to the discussion around applying the FAIR principles to research software (aka FAIR4RS). It is meant as a quick primer for people who have not yet been involved in the FAIR4RS discussion, but would like to join later CW21 workshops or other initiatives on this topic. Basic familiarity with the FAIR Guiding Principles for scientific data management and stewardship (https://www.go-fair.org/fair-principles/) is assumed. We will discuss the similarities and differences between data and software, and what these mean for the application of the FAIR principles. We will take a look at software aspects requiring specific attention, and share recommendations for FAIR4RS in practice.

 

1.2 The RSE landscape: Central, service, embedded, academic - what is your RSE type, and how do you want to develop it?

Facilitator(s): Teri Forey, James Graham, Marion Weinzierl, Paul Richmond and Anna Brown (Society of Research Software Engineering)

Abstract: Software is integral to research, yet those whose job it is to create and maintain that software can come from a variety of places and can follow very different career paths. As the Society of Research Software Engineering, our aim is to form an inclusive and diverse community which represents and supports the full range of the RSE landscape. As part of that, we want to discover what RSE types there are, how they should be defined and how we can help to support those different career paths.

In this mini-workshop, we’ll run a series of activities aimed at finding out what you think the RSE types are and how they differ. We’ll also ask what your RSE type is, and how this relates to your job title, main areas of focus and career goals. Towards the end of the workshop we will focus on what you might want from the Society to better support you and your RSE type.

This session will be a combination of live surveying and information gathering, as well as an opportunity for you to give us some feedback on the Society. We welcome all participants who are involved in writing, maintaining or delivering research software, no matter their level of experience or job title.

 

1.3 README tips to make your project more approachable

Facilitator(s): Hao Ye (University of Florida)

Abstract: READMEs are one of the primary ways that new users first encounter your project, and hopefully not the last! This mini-workshop will review practices for creating READMEs that best welcome people to your project, communicate the vision of your work and its unique value, and demonstrate how to get started using or contributing to your project. Participants will engage in short group discussions about READMEs and user personas, and then take part in scaffolded practice crafting READMEs for their own personal projects or provided examples.

 

1.4 How FAIR is your research software?

Facilitator(s): Carlos Martinez-Ortiz and Faruk Diblen (Netherlands eScience Center)

Abstract: FAIR software is a topic of growing importance in the research software landscape. Even though the definition of the FAIR software principles is still in flux, recommendations are available to improve software in accordance with the spirit of the FAIR principles (https://fair-software.eu/).

However, there is a gap between the principles as defined and their application in practice. If compliance with these principles needs to be verified for every new release of the software, it is desirable to have tools which can perform this task automatically and ease the work of the RSEs developing the software.

In this session we would like to introduce howfairis, a Python package that analyses a software repository’s compliance with the FAIR software recommendations. We will describe how the howfairis GitHub Action can automatically check your software and measure its FAIRness.

In this code-along workshop we will show you how to add a fair-software badge to your GitHub repository showing to the world how FAIR your software is!
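
As a rough flavour of the kind of check involved, the sketch below runs howfairis from Python. It is a minimal illustration rather than the workshop material: the names Repo, Checker and check_five_recommendations follow the package’s documented usage at the time of writing and may differ between versions, and the repository URL is a placeholder.

```python
# Minimal sketch: checking a repository against the five fair-software.eu
# recommendations with howfairis. Names follow the package's documented usage
# and may differ between versions; the URL below is a placeholder.
from howfairis import Repo, Checker

repo = Repo("https://github.com/your-org/your-software")  # hypothetical repository
checker = Checker(repo)

# Run the checks for the five recommendations (repository, license, registry,
# citation, checklist) and inspect the result.
compliance = checker.check_five_recommendations()
print(compliance)
```

The GitHub Action mentioned in the abstract wires this same check into continuous integration, so the badge reflects the current state of the repository on every push.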

 

1.5 (Do not) make it new: On Reusing Research Software and Tools in Digital Humanities Scholarship

Facilitator(s): Emily Bell (University of Leeds) and Anna-Maria Sichani (University of Sussex)

Abstract: There are many significant obstacles to reusing research software in digital humanities (DH) scholarship. Firstly, there is the issue of awareness, given that DH encompasses many fields of study, time periods and methodologies that do not usually publish research in the same venues, or necessarily collaborate on research aims and objectives. Where a specific tool might have applications beyond its field or domain, this becomes an even bigger problem, on top of the already substantial challenges of limited communication and visibility of the tool or software beyond its original community of practice, or even beyond the national and geographical borders of its initial development and use. Secondly, much funding in the humanities focuses on novelty; existing projects find it difficult to locate funding that allows further development, let alone maintenance, of existing tools, and new projects must promise brand new software in order to appeal to funders, posing further challenges for their sustainability and long-term maintenance.

This roundtable with DH researchers and stakeholders will consider how to encourage creative software reuse in DH, with time for participants to discuss possible funding sources and methods of sharing.

Day 2: Wednesday, 31 March 2021 from 14:15 - 14:45 BST (13:15 - 13:45 UTC)

 

2.1 Tips and traps on the road to FAIR software principles

Facilitator(s): Patricia Herterich (DCC, University of Edinburgh) and Morane Gruenpeter (Software Heritage)

Abstract: Software has an important place in academia, and as such it also has an important place in the FAIR ecosystem. This mini-workshop is sponsored by FAIRsFAIR (Fostering FAIR Data Practices in Europe), a European Commission funded project which aims to supply practical solutions for applying the FAIR data principles. It will introduce the project’s assessment report on the 'FAIRness of software'.

The session will consist of a presentation by the facilitators introducing the report and its main findings. We will present an analysis of nine resources that call for the recognition of software in academia and that present guidelines or recommendations to improve its status, either by becoming more FAIR or by improving the curation of software in general. This analysis demonstrates to what extent each of the FAIR principles is seen as relevant, achievable and measurable. Finally, we will cover 10 high-level recommendations for organisations that seek to define FAIR principles or other requirements for research software in the scholarly domain.

The presentation will be followed by an interactive feedback session (silent concurrent writing and some vocal share-outs) where participants are asked to comment on the recommendations, indicating if they agree with them or would rephrase aspects to make them more actionable.

 

2.2 Good Practices for Designing Software Development Projects (The Turing Way)

Facilitator(s): Malvika Sharan (The Alan Turing Institute)

Abstract: Effective methods for project design are crucial to the success of a research project. Particularly in software development, design principles can lead to better code, maintainability and extendability. Project design encompasses a variety of aspects, starting from defining the purpose, the main research questions, the expected users and target audience, and the resources and skills required in the project. It is also important to explore possible outcomes, plan how to address expected challenges, ensure diversity of stakeholders and reduce possible barriers to participation.

The Turing Way is an open-source, community-led book project that aims to bring together diverse contributors and collaborators to share resources and practices that make data science reproducible, ethical and inclusive. The project is developed and maintained on an online project repository (https://github.com/alan-turing-institute/the-turing-way) and invites contributions to its 5 guides, including the guide for project design.

In this session, we will introduce the guide for ‘project design’ to discuss good practices for designing software-related research projects, including how to use open-source tools and resources to meet project goals and how to apply agile methods for project management. We will demonstrate The Turing Way project design to prompt discussions on developing inclusive engagement pathways and setting up software projects that are open for contributions from people with diverse skills. Through this discussion, we will highlight the importance of designing projects for inclusion and selecting effective processes for project onboarding, code review, acknowledgement, communication and project sustainability.

 

2.3 Promoting code review for research software: feedback from the Oxford Code Review Network

Facilitator(s): Thibault Lestang (University of Oxford)

Abstract: The Oxford Code Review Network enables researchers to practise code review with colleagues across the University of Oxford, mutually benefiting both software authors and reviewers. Code review is a standard practice in the software industry as well as in open source software communities. Providing developers with human feedback has been demonstrated to be an efficient way to improve code quality and to transfer knowledge among developers. In contrast, researchers very rarely engage in code reviews, despite many individuals seeking external feedback on their programming and software engineering practices. The Oxford Code Review Network was launched in July 2020 as a platform facilitating contact between researchers interested in participating in code reviews. This initiative aims at democratising code review in academia as a way to efficiently transfer software engineering knowledge and raise software quality standards.

This session will consist of a short description of the Oxford Code Review Network, leading into an open discussion aiming to (i) gather feedback on the initiative; (ii) explore potential applications at other institutions; and (iii) discuss the challenges of establishing code review as a standard practice in research software development.

 

2.4 PresQT – Services to Improve Re-use and FAIRness of Research Data and Software

Facilitator(s): Sandra Gesing, Natalie Meyers, Rick Johnson and John Wang (University of Notre Dame)

Abstract: Sharing, preservation and FAIRness of data and software are crucial topics for many academic projects and for open science. Researchers face the challenge of choosing from a diverse set of repositories to publish, preserve and share their data, while performing their research in their own computational environment. Ideally, they have features available to seamlessly integrate the preservation and sharing step into their daily research routine. The project PresQT (Preservation Quality Tool) eases the use of repositories and serves as a bridge between existing solutions and science gateways, while adding beneficial metadata and FAIR tests (Findability, Accessibility, Interoperability, and Reuse). PresQT’s standards-based design with RESTful web services has been informed by user-centered design, and the implementation is a collaborative open-source effort. The PresQT services extend the preservation tool landscape in a way that lets stakeholders keep working in their chosen computational environment and receive additional features instead of having to switch to different software. PresQT services connect tools, workflows and databases to existing repositories. Current partners or implementations for open APIs include OSF, CurateND, EaaSI, GitHub, GitLab, Zenodo, FigShare, WholeTale, Jupyter and HUBzero. The diversity of partners contributes to understanding the needs of the stakeholders of PresQT services.

PresQT services are easy to integrate, and target systems can be added by extending JSON files and Python functions. Data is packaged as BagIt bags for uploads, downloads and transfers. The current services include transfers with fixity checks supporting diverse hash algorithms, keyword enhancement via SciGraph, upload, download and a connection to EaaSI services. FAIR tests are available via the services provided by FAIRsharing and FAIRshake. PresQT provides indicators of how FAIR the data in the target repository is, with additional hints for improvement. To present the capabilities to interested developers of computational solutions, to users of PresQT services and to funding bodies, we have developed a demo user interface that allows for demoing and testing the different features of PresQT services.

We will demo the services of PresQT in the user interface and present the API (https://presqt-prod.crc.nd.edu/ui/ and https://presqt.readthedocs.io/en/latest/).
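
As a concrete illustration of the packaging format mentioned above, the sketch below uses the bagit-python library to create and verify a bag with SHA-256 fixity information. This is not PresQT’s own code, and PresQT’s packaging may differ in detail; the directory name and metadata field are placeholders.

```python
# Illustrative sketch of BagIt packaging with fixity checks, using the
# bagit-python library. Not PresQT code; directory and metadata values
# below are placeholders.
import bagit

# Convert an existing data directory into a bag, recording SHA-256 checksums
# for every payload file (the fixity information used to verify a transfer).
bag = bagit.make_bag(
    "my_dataset",                            # hypothetical data directory
    {"Contact-Name": "Example Researcher"},  # illustrative bag-info metadata
    checksums=["sha256"],
)

# After an upload, download or transfer, the receiving side can re-verify
# the checksums to confirm nothing was corrupted in transit.
print("Bag is valid:", bag.is_valid())
```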

 

2.5 Opening Closed Data - Exploring new models for stewardship of sensitive data

Facilitator(s): Gary Leeming (University of Liverpool)

Abstract: Access to sensitive data, such as health records, can be difficult. Meanwhile, much of the data about how we live our lives is being captured in corporate silos with little accountability. Models such as data co-operatives and data commons are being discussed and trialled[1] as alternatives to big tech monopolies. In research, the FAIR principles also promote greater transparency in sharing research data, but they do not necessarily consider the role of the data subjects or of those legally responsible for the data.

In this session we will explore different models for governance of sensitive data, how to communicate how data are used, and what types of collaborative models could work for enabling more open research on data. We will also investigate the challenges and constraints that can prevent sharing and access, and what practical solutions are needed to enable these different types of stewardship for the benefit of all.

[1] Mozilla Foundation - Data Futures

 

Back to the CW21 agenda