
Metrics: they’re dull, flawed and vital


Author: Simon Hettrick, Deputy Director

Posted on 6 September 2019



Measuring tools. Photo courtesy of Fleur Treurniet.

By Simon Hettrick, Deputy Director of the Software Sustainability Institute

In the weeks running up to the RSE Conference, some colleagues and I will be providing our thoughts on the questions people have submitted for our panel discussion with senior university management about how RSEs are being supported within academia. (You can submit more questions and vote on the current questions on Sli.do.)

Today’s question is currently the highest rated on Sli.do: “What are universities doing to change the metrics by which they measure academic career progression?”

This is a subject close to my heart. I was advised to apply for a promotion last year in recognition of my work which, in summary, involves campaigning for success in research to be measured based on the work people actually do. Ironically, to achieve the promotion I had to provide an argument based on all the metrics I was fighting against: the papers I don’t write, the teaching I don’t conduct, and the PhD students I don’t supervise. This Catch-22 is a good example of academia’s relationship with metrics.

This is not to say that universities and research organisations are blind to any success that can’t be measured in papers or funding. It’s just that it is many times easier to register a success if it can be supported with conventional metrics. Thankfully, things are changing. We are seeing increasing evidence that universities want to update their policies in line with modern research practices. In addition to the growing acceptance of RSEs, efforts like the Royal Society’s Changing Expectations programme and the Technician Commitment are testament to a desire for change. And I must admit that I have been surprised by the agility that some universities have displayed in quickly supporting the RSE movement. They may be in the minority, but they show what is possible.

There are some significant barriers. Money and papers are favoured metrics because they are wonderfully atomic and hence countable. Who brings in the most money? Who writes the most papers (well... the most top-rated papers)? These are questions that can be answered and produce conclusions that can be evidenced. Is it fair, or indeed accurate, to judge the success of research roles on this basis? Well… probably not. I understand that we have to sacrifice some accuracy to produce a workable system, but even for traditional researchers such a focus on limited metrics risks overlooking exceptional staff. Ultimately, with thousands of staff to compare, many universities come to treat what is easy to measure as the most important part of the measurement process.

If not funding and papers, then what? I’m frequently asked this question when I present a talk on RSEs. People want me to say something easy to conceptualise and measure, like lines of code written, downloads or something equally terrible, but I refuse to do so. Do we really need new atomic metrics for RSEs? Industry has managed to judge the success of its developers for decades without limiting itself to two metrics. Can’t academia do the same? And why do all universities have to agree on the same metrics for RSEs? Wouldn’t it be better to allow different universities to develop their own systems, then let the best (or bests) of breed win out?

The barriers to new metrics seemed insuperable when we first developed the concept of an RSE seven years ago. During those early days, it seemed silly to worry about how to measure the performance of a concept that might not exist in a few minutes’ time, so we postponed dealing with the problem. Happily, we’re living through something of a revolution in the way that academia treats its RSEs, so it is now time to start thinking about metrics. We should not expect quick wins. For one thing, many universities have the ability to make an oil tanker appear positively mercurial, so adoption across academia might be slow. Despite this, I am optimistic. Any community that’s coming under pressure to develop something as boring as metrics to measure its members’ success is no longer a community that needs to worry about its continued existence.

 
