
Why Do Engineers Write Bad Software?


Posted by s.aragon on 25 May 2017 - 10:48am

By Edward Smith, Institute’s fellow, Imperial College London.

To an engineer, software design concepts are not only familiar, they are central to the education we are forced to endure: standardisation, quality testing and the importance of outlining a clear specification. However, when it comes to software development, engineering academics seem to forget these principles; principles that shaped the industrial revolution and allowed us to engineer the modern world. In this short blog, I want to explore why academic engineers don't apply these best-practice concepts to software.

It is clear we are in the middle of a revolution, one which will arguably change the world more rapidly than the industrial revolution did over a hundred years ago. Aside from the scientific developments of that era, among them steam power, electricity and the mastery of materials such as iron and steel, it was the methodologies forged during this period that were pivotal. The key concepts for mass production were the division of labour and automation, allowing far greater production by fewer people. In addition, the standardisation of parts allowed each person to specialise in and optimise a given part, with the parts fitted together by agreeing on the required interfaces between them.

Consider a car. Before the industrial revolution, a master craftsman might build an entire carriage, and any repairs would be the responsibility of that craftsman. Then came the production line: each component was the responsibility of a single person, resulting in vast improvements in quality and speed. Having passed the early days of rickety steam boilers and hand-made nails, cars today are simply assembled from a range of specialist suppliers, often with only key parts made in house. Each component is so specialised that it warrants a dedicated company in its own right, from tyre manufacturers optimising chemical compounds to engine specialists tuning swirl and tumble in the cylinder. The interfaces between these parts are ensured by standards which guarantee each will be within an agreed tolerance, so they can be slotted together in the final car like Lego, and quality control verifies those tolerances through extensive testing. A testament to the success of this approach is the seven-year warranty.

We are at the production-line stage with software, having passed the early days of vacuum tubes and punch cards. In industrial software companies, large teams of programmers divide tasks and work on each part. There are clear interfaces between components of the software, and whole teams of quality testers. Complex functionality is brought in through interfaces to libraries from dedicated companies. This approach has resulted in software that has scaled in complexity and reliability far beyond the early days of programming. Software has not reached the levels of specialisation seen in engineering, but there is no doubt this is the way things will go in industry. However, this approach is not widely followed in academia, even in engineering departments where standardisation, quality and design are an essential part of the undergraduate education. Why?

Consider the car analogy again. The conservatism required to maintain quality means car designs vary only slightly from generation to generation; the big companies are naturally slow to change and to address new developments. By contrast, an academic project might see a researcher and a small group aim to build an entire car. The end result is less polished, with far more bespoke parts and novel designs, because they are not held up by adherence to a load of standards. Despite some use of off-the-shelf parts, many components will be custom. The car will naturally be unreliable, and it would be lucky to make a lap of the track. The internal workings are understood only by the designers and, if they leave, there will be a significant problem with maintenance. However, an entire car can be built by a small group with minimal resources. They will understand the entire process and be able to see ways to improve the design, and their novel improvements may become a feature of the next generation of cars from all the major manufacturers. Would these novel developments still have been possible if the academics had adhered to the design process of a large company? Maybe not. But then, are we sure the novel designs will work beyond the prototype, especially over a seven-year warranty period?

It's the same with software. Despite using standard languages and compilers (four wheels), linking a few standard software libraries (off-the-shelf constant-velocity joints) and wrapping it together with bash and make (the M8 screw), the majority of academic software is written in-house. The code may be unreliable, and you'd be lucky to get correct results beyond the cases you test. The internal workings are only understood by the designers and, if they leave, there will be a significant problem with maintenance. However, the result is a rapid development of novel software by a small team. This team understands the entire process and, more importantly, the science. This is demonstrated to reviewers and the community by their ability to code it themselves.

Having spent a week reading about extreme programming, dependency injection, mock objects, code coverage and an extensive debate on the testing of private functions, I have written about 100 lines of actual algorithm. I'm pretty sure these lines are correct, I have the tests to prove it, and Travis CI will email me if and when this changes. During my PhD, I wrote hundreds of thousands of lines of algorithm; say what you want about Fortran, it is very quick for developing scientific code. I obviously tested the code before publishing scientific output (energy conservation, analytical solutions, experiments), but it isn't unit-tested (or even modular). Would I be willing to offer a seven-year warranty on this software? No. But I do wonder whether I would have written the same volume of code using test-driven development and endlessly refactoring until my code smells okay. In the same way that the car companies move slowly, the burden of quality is heavy without an agile team of software developers. I do, however, wish I had known about continuous integration and made an effort to automate the extensive tests I wrote back then. Maintenance on this "legacy" code is no small part of my time now.
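To make that concrete, here is a minimal sketch of how a physical sanity check like energy conservation can become an automated unit test; the integrator, names and tolerance below are illustrative examples, not code from my PhD:

```python
# Illustrative only: a velocity-Verlet step for a unit-mass particle,
# plus a unit test asserting energy conservation for a harmonic oscillator.

def velocity_verlet_step(x, v, force, dt):
    """Advance position x and velocity v by one time step dt."""
    f = force(x)
    v_half = v + 0.5 * dt * f
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * force(x_new)
    return x_new, v_new

def test_harmonic_oscillator_conserves_energy():
    """For the potential U = x**2 / 2, total energy should barely drift."""
    force = lambda x: -x
    x, v, dt = 1.0, 0.0, 0.01
    energy_start = 0.5 * v ** 2 + 0.5 * x ** 2
    for _ in range(10_000):
        x, v = velocity_verlet_step(x, v, force, dt)
    energy_end = 0.5 * v ** 2 + 0.5 * x ** 2
    assert abs(energy_end - energy_start) < 1e-4
```

Run under a test runner such as pytest, a check like this costs seconds; wire it into a continuous integration service and, as above, you get an email if and when it breaks.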

This analogy between cars and software is just that, an analogy, and its limitations are important. Software is not a consumable: if we write a good piece of software, there is no limit to its reuse, far beyond seven years. But I would argue there is a place for rapid, novel prototyping, as evidenced by the massive uptake of Python, R and Ruby in academia. In an ideal world, the best bits of such a prototype would be taken by a team of research software engineers and built into a standardised framework with testing and a clear interface. In practice, I think we should all aim to get a little better at packaging the best bits of our research with a clear interface demonstrated by tests and documentation.
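On a small scale, a clear interface demonstrated by tests and documentation can be as simple as the sketch below (the function here is just an illustrative example): a documented signature whose docstring states the formula it implements and doubles as a runnable test via Python's doctest module.

```python
def diffusion_coefficient(msd, time, dims=3):
    """Estimate a diffusion coefficient from mean-squared displacement.

    Applies the Einstein relation D = MSD / (2 * dims * time).

    >>> diffusion_coefficient(msd=6.0, time=1.0, dims=3)
    1.0
    """
    if time <= 0:
        raise ValueError("time must be positive")
    return msd / (2 * dims * time)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # the example in the docstring is itself a test
```

The documentation, the interface and a test all live in one place, which is about the lowest-overhead version of the packaging argued for above.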

Would you be willing to offer a seven-year warranty on your software? If not, it may be worth going back to the notes from your engineering degree on standardisation, quality testing and the design process. Then set up a repository, create a standard interface to your software and write a few tests. After all, you may only need a basic set of tests to get that ISO 9000 accreditation.
