Ian Gent

By Vince Knight, Cardiff University, Olivia Wilson, University of Southampton, Shoaib Sufi, Software Sustainability Institute, Steve Crouch, Software Sustainability Institute, and Ian Gent, University of St Andrews.

A speed blog from the Collaborations Workshop 2016 (CW16).

"Congratulations Dr Smith"

The words every PhD student dreams of hearing, at least if your name is Smith. First from your examiners, and then soon afterwards from your supervisor. And then those words every PhD student dreads hearing from your supervisor…

"Just before you go to your super-rich quant futures job on Wall Street, could you just …

… hand over your code to my new PhD student please?"

You remember the same conversation three years ago, when you saw your predecessor Dr Jones stammer and make excuses, and promise to send their code to you "in a few weeks after I’ve tidied it up." You suppose technically that 156 weeks might be described as "a few weeks" but certainly you’ve never seen that software. Software that was good enough to get a PhD for Dr Jones, but not good enough to pass on to the next student. All you can say is

"No, I’m sorry Prof Patel, I can’t hand it on,... ".…


[Image: tweet by Ian Holmes]

By Ian Gent, Professor of Computer Science, University of St Andrews.

At the start of this year there was a wonderful stream of tweets with the hashtag #overlyhonestmethods. Many scientists posted the kind of methods descriptions which are true, but would never appear in a paper. My favourite is this one from Ian Holmes.

Although every scientific primer says that replication of experiments is key, in computational science you will, to quote this tweet, need luck if you wish to replicate them. There has been significant pressure for scientists to make their code open, but this is not enough. Even if I hired the only postdoc who can get the code to work, she might have forgotten the exact details of how an experiment was run. Or she might not know about a critical dependency on an obsolete version of a library.

The current state of experimental reproducibility in computer science is lamentable. The result is inevitable: experimental results enter the literature which are just wrong. I don’t mean that the results don’t generalise. I mean that an algorithm which was claimed to do something just does not do that thing: for example, if the…
