Software and research: the Institute's Blog

Latest version published on 13 September, 2017.

Human-like computing
By Caroline Jay, University of Manchester.

Software engineering is difficult. This is particularly true in a research environment, where code is often intended to be a precise representation of a scientific entity, process or system. Developers must grapple not only with the challenges that affect every software development project, but also with the fact that the formal representations used by machine computation are frequently at odds with the heuristics used by the human brain (an issue discussed in a recent Institute blog post on code/theory translation).

Over the past two years, a new research domain has started to emerge that may ultimately offer a solution to this problem. “Human-Like Computing” is the shared endeavour of researchers from psychology and computer science, with a common desire to improve the interface between technology and people. At first glance, the aim of this domain might appear familiar: research areas such as robotics and natural language processing have been working towards naturalistic communication with people for a long time. The difference with Human-Like Computing is how this aim is achieved: the focus is on understanding human cognition, and using this to produce a step-…

Continue Reading

Latest version published on 12 September, 2017.

EuroPython 2017
By Alice Harpole, University of Southampton

Coding is often seen as a tool for doing science, rather than an intrinsic part of the scientific process. This often results in scientific code written in a rather unscientific way. In my experience as a PhD student, I've regularly read papers describing exciting new codes, only to find a number of issues preventing me from looking at or using the code itself. The code is often not open source, which means I can't download or use it. It commonly has next to no documentation, so even if I can download it, it's very difficult to work out how it runs. And approaches to testing can be questionable, with an overreliance on replicating "standard" results and no unit tests to demonstrate that the individual parts of the code work as they should. This is not good science, and it goes against many of the principles of the scientific method followed in experimental science.
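To make the testing point concrete, here is a minimal, hypothetical unit-test sketch (the decay function and the values are invented for illustration, not taken from any particular code). Each test checks one small piece of behaviour in isolation, rather than only comparing a full run against a published "standard" result:

# test_decay.py -- hypothetical example; run with `pytest`
import pytest


def decay(n0, half_life, t):
    """Amount remaining after time t (same units as half_life)."""
    return n0 * 0.5 ** (t / half_life)


def test_no_time_elapsed_returns_initial_amount():
    assert decay(1000.0, half_life=10.0, t=0.0) == 1000.0


def test_one_half_life_halves_the_amount():
    assert decay(1000.0, half_life=10.0, t=10.0) == pytest.approx(500.0)


def test_amount_decreases_monotonically():
    values = [decay(1000.0, half_life=10.0, t=t) for t in range(0, 50, 5)]
    assert all(a > b for a, b in zip(values, values[1:]))

Small tests like these document what each part is supposed to do and fail loudly when a change breaks it, independently of whether the end-to-end output still happens to match a reference result.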

In the following sections, I shall look at how we might go about writing code in a more scientific way. This material is based on the talk I recently gave at EuroPython 2017 on Sustainable Scientific Software Development.

Scientific code

Continue Reading

Latest version published on 8 September, 2017.

Fellowship Programme 2018
By Raniere Silva, Community Officer, Software Sustainability Institute.

Register for the Fellowship Programme Launch Webinar on Friday 15 September from 2.00pm to 3.30pm BST. Raniere Silva, our Community Officer, will talk about the programme, the application process, and some of our Fellows will share their experiences. Whether you’re planning on submitting an application or you want further information about the programme, our webinar is the perfect platform to learn more and ask questions about the Fellowship.

This year, Nikoleta Glynatsi, Gary Leeming, David Perez-Suarez, Iza A. Romanowska and Melody Sandells (listed by last name) will join us on the webinar to talk about their experiences as Institute Fellows. Their backgrounds are in operational research, health informatics, astronomy, archaeological sciences and Earth system sciences, giving attendees a broad, though far from exhaustive, view of the research areas that can benefit from better software or coding practices. Applications from all research areas…

Continue Reading

Latest version published on 7 September, 2017.

Research Software Engineering Cloud Computing Awards
By Kenji Takeda, Microsoft Research.

It is a privilege to announce the Research Software Engineering Cloud Computing Awards at the RSE 2017 conference! It is clear that cloud computing is helping researchers worldwide, across all disciplines, and it is a key enabler for AI and machine learning at scale. With these awards, Microsoft wants to empower RSEs to explore, educate and extend cloud computing for researchers. The goal is to create a community bridging researchers, university stakeholders, regional teams, and national services, to better understand how Microsoft Azure can enable better, faster and more reproducible research in everyday use.

We are looking for people who are passionate about exploring how cloud computing can be used in research, sharing their experiences with cloud computing, and advocating best practice in their research domain, institution, and/or community. The awards are flexible and will support training, workshops, cloud computing prototype designs and research solutions, and publication of open-source code and frameworks for Microsoft Azure. We are particularly interested in RSEs using AI, machine learning, and data science in their projects.

Each award provides £2,000 for education, outreach, and implementation of research solutions using the Microsoft Cloud. This is complemented by 12 months of Microsoft Azure credits at $250 per month. Awardees will be able to use the title RSE Cloud Computing Fellow.

Apply…

Continue Reading

Latest version published on 6 September, 2017.

By Simon Hettrick, Deputy Director.

This is a story about reproducibility. It's about the first study I conducted at the Institute and the difficulties I've faced in reproducing analysis that was originally conducted in Excel, and it's a testament to the power of a tweet that has haunted me for three years.

The good news is that the results from my original analysis still stand, although I found a mistake in a subdivision of a question when conducting the new analysis. This miscalculation was caused by a missing “a” in a sheet containing 3763 cells. This is as good a reason as any for moving to a more reproducible platform.
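As a purely illustrative sketch (the file and column names below are invented; this is not the Institute's actual analysis), moving the tally from a spreadsheet into a short script makes this kind of typo visible: counting categories in code means a mistyped label shows up as its own row in the output, rather than silently falling out of a hand-built Excel count.

# survey_counts.py -- hypothetical sketch, not the original analysis
import pandas as pd

# Invented file and column names, for illustration only.
responses = pd.read_csv("survey_responses.csv")

# A response with a missing letter in its label appears here as a separate
# category, instead of quietly being excluded from a manual tally.
counts = responses["uses_research_software"].value_counts(dropna=False)
print(counts)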

2014: a survey odyssey

Back in 2014, I surveyed a group of UK universities to see how their researchers used software. We believed that an inextricable link existed between software and research, but we had yet to prove it. I designed the study, but I never intended to perform the analysis. This was a job better suited to someone who could write code, and I could not. Unfortunately, things didn't go to plan, and I found myself in the disquieting situation of having an imminent deadline and no one available to do the coding. Under these circumstances, few people have the fortitude to take some time out to learn how to…

Continue Reading