Myths about computing and coding are everywhere. You only have to look at movies for many examples – films like Hackers (1995), The Net (1995) and Swordfish (2001) immediately spring to mind – with their often genius-level protagonists able to weave techno-spells with their fingers, accomplishing amazing feats of coding and/or hacking prowess in a bewildering, complex dance of graphical beauty. Fortunately, although somewhat less impressively, the truth of coding is far more straightforward than that, and this guide is aimed at researchers who haven't yet started to code. Here we call out some of the myths surrounding learning to code as a skill you can apply in your research, and anywhere else really.
1. Learning to code is too hard
Have you ever used some kind of software – whether it's a spreadsheet application, a software tool, a mobile app, a game, or anything else really – and thought: "That must’ve taken a lot of skill to code!" Well, quite possibly, it did. But artists don't start out painting something like the Sistine Chapel: they learn skills over time that are applied to other smaller, successful works first, and the same is true for learning to code. Here's the thing: you don't need years and years of practice or loads of experience to write some code that's genuinely useful to you.
Like any skill, coding is going to take effort to learn. But, as with learning a second language, you don't need to be a completely fluent speaker to communicate successfully in that language. You can start with a small program that does a single task and build on that. Computers work best on problems that involve the same set of simple steps repeated over and over – perhaps something that's currently done manually. Do you have any tasks in your research that fit this pattern?
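As a purely hypothetical illustration of that kind of task (the labels and variable names here are invented), a few lines of Python can tidy up a list of hand-typed sample labels by repeating the same two simple steps for each one:

```python
# Hypothetical example: cleaning up inconsistently typed sample labels.
labels = ["  sample-01", "Sample-02  ", "SAMPLE-03"]

cleaned = []
for label in labels:
    # The same two simple steps, repeated for every label:
    # strip surrounding spaces, then make the case consistent.
    cleaned.append(label.strip().lower())

print(cleaned)  # ['sample-01', 'sample-02', 'sample-03']
```

Ten labels or ten thousand, the program is the same size – that's the kind of repetitive work where even a small first program pays off.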
2. Training isn't that useful
Have you encountered a training course for learning a skill that was perfectly useless? If so, this certainly isn't a unique experience, particularly with traditional programming courses in academia aimed at those not studying a computing-related degree. Such courses often assumed too much prior knowledge of their learners, and early negative experiences with them often dissuaded people from pursuing coding further.
Ways of teaching coding and other 'computational' skills have since vastly improved, with courses aimed specifically at novices. And unlike when learning the programming languages of yesteryear, there are now a lot of resources to help you start learning and to develop your skills further. Websites such as Codecademy and Khan Academy provide well-put-together online courses that introduce coding in a number of languages. There are also two-day training courses for novices on data and computational skills for research, such as those provided by The Carpentries, which are incredibly popular in academia.
3. You need to be good at maths
Whilst the fundamental operations of computers are certainly rooted in mathematics, writing code by and large doesn't involve a lot of maths at all. Of course, there are some research domains that make heavy use of mathematics, and this is reflected in the software developed in those fields, but coding is much more about problem solving: decomposing a problem into a sequence of steps that a computer can do, and writing the code for each of those steps.
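As a sketch of what that decomposition looks like (the data and function names here are invented for illustration), the problem "average the valid readings" might break down into three small steps, each a few lines of code and none needing maths beyond a division:

```python
# Invented example: decomposing "average the valid readings" into steps.

def parse_readings(lines):
    """Step 1: turn raw text lines into numbers, skipping blank lines."""
    return [float(line) for line in lines if line.strip()]

def drop_invalid(values):
    """Step 2: discard negative sentinel values marking failed readings."""
    return [v for v in values if v >= 0]

def mean(values):
    """Step 3: compute the average of what's left."""
    return sum(values) / len(values)

raw = ["12.5", "", "-1", "13.1", "12.9"]
readings = drop_invalid(parse_readings(raw))
print(mean(readings))  # the average of the three valid readings
```

The thinking work is in naming the steps; the code for each step is then short and simple.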
4. You have to write a lot of code to do anything useful
We've talked about coding as defining a sequence of steps. What's really helpful is that in many cases a lot of those steps are already coded for us! With older programming languages like C and C++, you often did have to write quite a bit of code to get anything done. However, many modern languages, such as Python, have taken on board lessons learnt from these earlier languages and are specifically designed to be easier to pick up for those just starting out, whilst giving you the legroom to do more advanced stuff too.
Also, by using sets of code written by other people - commonly referred to as libraries - we can use their functionality within our own code, which means even code that does some amazing things doesn't necessarily need to be an opus. Many languages, Python among them, have libraries covering a wide range of general-purpose and scientific applications: if you can think of a thing you need to do, the chances are there's a library (or several) that may help.
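For instance, Python's built-in csv and statistics libraries already know how to parse a data file and compute summary statistics, so code that uses them stays short (the data below is made up for illustration):

```python
# Invented example: libraries doing the heavy lifting.
# csv handles the file parsing, statistics handles the maths.
import csv
import io
import statistics

# io.StringIO stands in for a real data file here.
data = io.StringIO("sample,mass\nA,1.2\nB,1.4\nC,1.3\n")
masses = [float(row["mass"]) for row in csv.DictReader(data)]
print(statistics.mean(masses))  # the mean mass across all samples
```

Three lines of our own logic, with the fiddly parts delegated to libraries that have been tested by thousands of other users.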
5. Code isn't that valuable
It's often believed in academia that the value in writing code is simply to illustrate a research idea or method, and that's it - after the project involving the code is over, it's forgotten about. But what can happen - and quite often does, in one form or another - is that the code gets reused.
It could be that over time, what started out as a proof of concept, or as code to generate results for a single publication, turns out to be valuable for another publication or project. Or maybe just part of it is reused - a particular set of functions, say. Even if the code itself isn't reused, the lessons learnt when writing it may be - perhaps it solved a problem that's since arisen in another project. Code can also be valuable as an exemplar of how technologies are used, such as how you made use of particular domain-specific code libraries. So code itself has intrinsic value.
As with any skill, start small, grow your coding ambition over time, and look for resources such as guides and training that can help with the aspects you need to improve.
Lastly, are there any colleagues you know who code, particularly in your field? If they're familiar with your research area, they may already know what's most useful to learn first and be able to give you some pointers.