Got this question the other day from Andrew Milstead, also from the University of Southampton…
Test-driven development is a way of working – a discipline – when developing software. Essentially, you develop the unit tests for code before you write the code itself.
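As a minimal sketch of that discipline, here is what the test-first step might look like in Python using the standard `unittest` module. The function `word_count` is a made-up example chosen purely for illustration: the tests are written first, specifying the behaviour we want, and only then is just enough code written to make them pass.

```python
import unittest

# Step 1: write the tests first, before any implementation exists.
# At this point running them fails, which is expected - the failing
# tests define the behaviour the code must satisfy.
class TestWordCount(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_simple_sentence(self):
        self.assertEqual(word_count("test driven development"), 3)

    def test_extra_whitespace(self):
        # Leading, trailing and repeated spaces should not create words.
        self.assertEqual(word_count("  hello   world  "), 2)

# Step 2: write just enough code to make the tests pass.
def word_count(text):
    """Count the whitespace-separated words in a string."""
    return len(text.split())
```

Saved as, say, `test_word_count.py`, the tests can be run with `python -m unittest test_word_count.py`. The cycle then repeats: add a failing test for the next piece of behaviour, make it pass, and tidy up the code while the tests keep you honest.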
In general, testing is often given short shrift in academic projects, because academia generally lacks the resources to place a proper emphasis on testing – functionality is often prioritised over quality. This is completely understandable when you are developing proof-of-concept software. However, when the quality of the results from the software becomes important – when your software matures and is used by others – testing becomes fundamental to showing that those results are accurate; all the more important in academia today.
Test-driven development forces you to think about how to validate the code that you write, and there are many advantages. You can readily use the results of tests as evidence of the software working correctly, for a start, and the software, and its tests, are always closer to a releasable state. Also, since testing becomes part of development rather than being something that’s bolted on afterwards, it can help pick up on problems earlier, especially if it’s integrated into an automated build and test system. As a discipline, it also helps you to write code that is easily testable, more modular and simpler in design.
Personally, I think test-driven development is great, but you need to decide if it’s right for your software. And, dusting off a cliché, as with all things in computing: garbage in, garbage out. So decide on an appropriate test strategy for the software you are developing, and write good tests!