By Simon Hettrick, Deputy Director.
How do you choose which categories are needed to represent all possible research outputs? That's the problem we're facing at the Hidden REF. Rather than solve it ourselves, we've handed the problem to the research community who have much more knowledge about the range of new categories we're going to need. In this post, I’m going to take a look at the suggestions people have made over the last couple of months. What do you think about these new categories?
Before we get to the new categories, I want to raise a point about plans for the Hidden REF in the current era of lockdown and Coronavirus. This is obviously a difficult time for people to get their day-to-day work done, never mind taking on extra things like the Hidden REF. We’re glad to see that the REF has pushed back deadlines to allow people more time to get their submissions together, and we’re currently reviewing whether we should do likewise. Keep an eye on the Hidden REF website for more news.
Training

It’s going to be difficult to be objective with this one, because I’ve had first-hand experience of the impact gained from training up a new community. Our work with “the Carpentries” has helped train over 8,000 UK researchers in basic software engineering over the last 12 years. This important work is not easily REFable.
If we want people to share their skills and invest time in preparing good training materials, we need to reward them. The case for a new category for training seems indisputable.
Technicians

With the Technician Commitment and the National Technician Development Centre as Supporters, it’s unsurprising that we received suggestions to support technicians. I’ve worked with technicians a lot (I was an experimental physicist some time ago) and, without them, there wouldn’t have been any research in my lab.
This seems like another excellent suggestion for a category, but I can see that the problem is going to be working out how to report on it. Technicians conduct a huge variety of tasks across radically different domains. Rather than base the category on a specific type of output, it might be a better idea to have technicians as one of a set of “people categories”. In other words, you nominate the person rather than their outputs. Is this a sensible approach to these categories? Let us know what you think.
Grimpact

A new word for my vocabulary - and an instant favourite - is “Grimpact”. The REF Impact agenda is biased towards benefits and successes, but the wider impact of a discovery can be far more complex and unpredictable - and not always in line with the recommendations made by researchers in underpinning research articles. Look at the initial promise, and the unexpected consequences, of chlorofluorocarbons or leaded petrol (both invented by Thomas Midgley, who has some answering to do) as good examples. This award sheds light on research that had an equal and opposite effect (in terms of significance and reach) on society.
At first I struggled to see how Grimpact fits into the Hidden REF, which is, after all, about recognising hidden roles in research. However, it’s well known that researchers are incentivised to publish only their “good” research and not the mistakes, failures and unintended consequences of their work. The same occurs for Impact, with HEIs choosing to put forward success stories rather than cases where things went inadvertently wrong. The latter arguably demonstrate the complexity of Impact/Grimpact better, so omitting them effectively hides Impact’s true nature. Anyone who publishes their mistakes benefits the research community and, completely wrongly, probably tarnishes their own reputation. I think that’s a good reason to celebrate Grimpact in the Hidden REF.
Professional services

The research community relies on research services, finance, HR and many other professional services. Without them, we could not prepare bids or run a research group or project. Despite this, I have never seen anyone from Professional Services acknowledged in research literature, websites, impact case studies, etc.
This brings us back to the idea of having “people categories”. We’ll need sub-categories: it’s not going to be straightforward to compare the impact of a technician with that of a person working in HR. But how do we judge people within these sub-categories? Maybe the submission should take the form of a testimonial from a person (or people) they have helped?
Data and metadata standards
Standards are vital if communities are to share work and collaborate: without them, there can be no collaborative data sharing. Rather than adding data standards as their own category, it seems sensible to include them in the existing “Research datasets and databases” REF category. There were only 68 submissions in this category in the last REF (out of 191,000 submissions in total); hopefully, broadening the way we think about data will help the category attract a few more.
Citizen Science

Citizen Science harnesses the power of crowds to conduct research, which produces valuable results and massively increases public participation in research. What’s not to like? Take Galaxy Zoo: it’s enticed almost 40,000 people to categorise over a million images of galaxies. Citizen Science should definitely be a category.
In my experience, the university community appears to think quite highly of Citizen Science projects - and anything else that involves the public in our academic world. I’ve certainly seen REF case studies based on Citizen Science. I don’t think this means that we can’t also reward Citizen Science in the Hidden REF, but we might need some help identifying how best to recognise the people who make Citizen Science possible, but aren’t rewarded through traditional mechanisms.
Mentoring

Being a good researcher isn’t just about writing the best papers or bringing in the most funding; it’s also about improving the people in the research community. Who wouldn’t want to see people rewarded for selflessly investing their time in helping others and improving other people’s research?
This looks like another people category that could be recognised through nomination by someone other than the nominee. Where are the boundaries on this category? My instant reaction is that anyone in academia, regardless of their role or position, should be open for nomination. At the same time, we might not want to overlap with the awards for other people categories. What do you think?
Facility time allocation
Applications for facility time can be as time-consuming and competitive as grant applications, but they are rarely recognised as such. As with any scientific endeavour, success is not guaranteed, but if an experiment at a synchrotron fails, the researcher receives no credit. We could judge this category by the number of successful applications (or hours of awarded time), the size of the team brought to the facility, or the number of students trained to use the facility.
This seems like a fairly straightforward category that's missing from traditional success metrics, and one that might require a huge investment of time within some fields.
The categories for the Hidden REF are open for suggestions and debate until 29 June. Take a look at the new categories and let us know if we're missing any important research outputs that are not currently recognised by traditional mechanisms.