Last month Mike Jackson and I were tutors at a Software Carpentry event at Newcastle University. If you haven’t heard of the Software Carpentry project, it’s a great idea: it teaches scientific researchers a core set of useful software-development skills, using short, intensive workshops followed by self-paced online tutorials. Anyway, during one of the practical sessions, the issue of when to optimise code came up. In other words, at what point should you optimise your code to make it run faster?
At a previous Software Carpentry course, the project’s lead Greg Wilson introduced a great page that presents three rules of optimisation (I’ve added some comments):
- Don’t. Focus on getting the right results from your code, and keeping it readable for others as well as yourself.
- Don’t… yet. Since optimisations often reduce code readability, you will find that optimising your code can negatively affect its maintainability.
- Profile your code before optimising. Even experienced programmers can be very poor at predicting where a computation will get bogged down. If you really need to speed up the execution time, analyse your code to identify where time is being wasted, and use that to guide where to optimise.
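To make the third rule concrete, here’s a minimal sketch of profiling in Python using the standard library’s `cProfile` and `pstats` modules (the `slow_sum` function is just an illustrative stand-in for whatever your real computation is):

```python
import cProfile
import io
import pstats


def slow_sum(n):
    """A deliberately naive loop, standing in for a real computation."""
    total = 0
    for i in range(n):
        total += i * i
    return total


def main():
    return slow_sum(100_000)


# Profile the call and capture the statistics.
profiler = cProfile.Profile()
profiler.enable()
result = main()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The report tells you which functions the time is actually spent in, so you can direct any optimisation effort there rather than guessing.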
I’d add that the process of optimising code also runs the risk of introducing new bugs. Essentially, if performance really isn’t an issue, don’t optimise. You’ll risk sacrificing maintainability and accuracy – as well as your time – for negligible improvements in speed. And if the newly optimised part of the algorithm needs to change later, you’re only making it harder on yourself (or someone else)!