Human-like computing: the future of research software engineering?

By Caroline Jay, University of Manchester.

Software engineering is difficult. This is particularly true in a research environment, where code is often intended to be a precise representation of a scientific entity, process or system. Developers must grapple with the difficult issues that affect every software development project, but also deal with the fact that the formal representations used by machine computation are frequently at odds with the heuristics used by the human brain (an issue discussed in a recent Institute blogpost on code/theory translation).

Over the past two years, a new research domain has started to emerge that may ultimately offer a solution to this problem. “Human-Like Computing” is the shared endeavour of researchers from psychology and computer science, united by a common desire to improve the interface between technology and people. At first glance, the aim of this domain might appear familiar: research areas such as robotics and natural language processing have been working towards naturalistic communication with people for a long time. The difference with human-like computing is how this aim is achieved: the focus is on understanding human cognition, and using that understanding to produce a step-change in the way our technology works. The goal is a system that can explain its reasoning in a way that makes sense to a person, and can resolve issues through communication. The implications go far beyond the user interface: if we can get machines to share the representations that we use, then the frequent disparities between a concept that exists in someone’s mind, and its translation into software, could ultimately disappear.

The human-like computing movement is in its nascent stages. In the UK there has been an EPSRC workshop exploring its position in the UK research funding landscape, and a Machine Intelligence workshop where researchers from cognitive and computer science presented position papers on how we might achieve it, and where it could potentially lead. Whilst the UK focus is currently on cognitive science, initiatives combining neuroscience and computation, such as the Human Brain Project, are also drawing on knowledge of human information processing capabilities to inform machine design.

What might human-like computing mean for research software engineering? At the recent Dagstuhl Seminar on Human-Like Neural-Symbolic Computing, we discussed how the approach could be used to address the fact that complex software, though built by humans, remains difficult for humans to understand. Understanding the inner workings of software is not important in many situations (indeed, user interface design paradigms emphasise the need to hide this complexity), but research software is an exception to this rule. Understanding the precise workings of an algorithm may be of critical importance when it comes to reviewing software intended to produce knowledge that we can rely on as a robust source of evidence. How we resolve the issue of software peer review remains an open question, but it will be interesting to see what this new field can offer. Human-like computing is just starting to find its feet, but its champions believe it has the potential to transform the relationship between humans and machines, and, possibly, the way we write and understand our research software.


Thanks to the Software Sustainability Institute for funding Caroline Jay's attendance at the Dagstuhl Seminar on Human-Like Neural-Symbolic Computing via her 2016 Fellowship.

Posted by s.aragon on 13 September 2017 - 10:24am
