Sometimes you need a ‘fresh eye’ on things. That’s what the team at Imperial College’s Musculoskeletal Mechanics Group realised as they prepared their internally developed software for a public release.
The Imperial College Lower Limb Model (ICLLM) is a C++ and MATLAB-based musculoskeletal modelling software package that is used as part of the Medical Engineering Solutions in Osteoarthritis Centre of Excellence. It allows the prediction of subject-specific muscle and joint forces at the hip, knee and ankle by using motion data gathered from gait analysis.
Developed over years by a mix of students and researchers, the software was tricky to work on, and had little documentation to help new users. Research Fellow Angela Kedgley was aware of, and concerned about, these limitations in a product that was aiming for public release. Having heard of the work the Software Sustainability Institute was doing for others, she decided to apply through the Open Call for Projects – and gained access to expertise that has helped transform the software.
The Institute's consultants were able to bring a broader picture of what makes good code, she says – and some worthwhile advice on how to make a better product. “We went to the Institute without any particular issues – just a sense that we wanted someone external to look at the project and help us make it more accessible,” says Kedgley. It was a worthwhile move, and “they found quite a few things we could improve on.”
After evaluating both the software itself and the development processes in place, the Institute recommended a new workflow that included adhering to coding standards, an online Subversion (SVN) repository for revision control, and a code testing framework. The coding standard was based on a combination of the Google C++ Style Guide and the main developer’s own coding style. That has enforced discipline in writing code, and so brought about more consistency. “Some of the standards improve readability, while others make it more robust,” Kedgley says.
The revision control system and repository provide a storage-efficient archive and allow the team to ‘roll back’ to earlier versions of the model when necessary, while the testing framework involves splitting the code into discrete components – or units – and testing each one thoroughly. “We now have a strategy in place for verifying that the model has been implemented correctly,” Kedgley explains. “Unit testing is also helpful for ‘regression testing’, where a new version of the code is subjected to the same tests as the original, and any resulting bugs are easily traced back to the units that have failed the test.”
One of the strongest benefits of working with the Institute was that the team did the work themselves, and so understand exactly what has been done, as Kedgley points out. “The Institute's consultants didn’t do the coding work – they gave us advice and helped us to do the work ourselves, which was exactly what we needed.”
What the Institute did do, however, was create a user manual for revision control and work with the Imperial team to refine it. “We can use that manual for other clients now – we worked with them on it, picking up any issues and areas that weren’t clear. So that was a two-way benefit, helping us and helping the Institute.”
The team now hopes to be ready for public release within a year, Kedgley concludes. “We need to do more validation, and finish the user manuals, all based on the Institute's advice. They have been great – they've given us sound advice and a clear overview of where we are, and how to improve our usability and sustainability.”