Postdoctoral researcher, Department of Biochemistry, University of Oxford
I use computer simulation to understand how cells work at the molecular level.
All our cells have many different proteins embedded in their membranes that control the entry and exit of different chemicals and ions. I simulate on computers how these proteins move, and hence how they do their job, and how strongly different molecules bind to them. For example, I am collaborating with experimentalists to understand how hPepT1 (a protein found in your small intestine) also transports many naturally derived drugs, such as amoxicillin, from your gut into your bloodstream. Without it you couldn't take antibiotics orally.
Despite the continued increase in the speed of computers, these simulations still require large amounts of computational resources. My field of computational biophysics is quite mature: the simulation codes are large, well maintained and actively developed, and have been optimised and parallelised so that a single simulation can run on hundreds, if not thousands, of CPUs. As simulations become more complex (and more realistic), there is a shift towards researchers writing their own bespoke software to analyse sets of simulations, so I propose to focus on analysis software. My first goal is to improve the use of software engineering practices and techniques in my field, primarily by running Software Carpentry workshops.
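To give a flavour of what such bespoke analysis software looks like, here is a minimal, purely illustrative sketch: it computes the root-mean-square deviation (RMSD) of each trajectory frame from a reference structure, a very common observable in molecular simulation analysis. The synthetic coordinates stand in for a real trajectory file; in practice one would load frames from a simulation output.

```python
import numpy as np

def rmsd(frame, reference):
    """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays."""
    diff = frame - reference
    return np.sqrt((diff ** 2).sum() / len(frame))

# Synthetic "trajectory": 5 frames of 10 atoms drifting away from a reference.
rng = np.random.default_rng(42)
reference = rng.random((10, 3))
trajectory = [reference + 0.1 * i * rng.random((10, 3)) for i in range(5)]

# One RMSD value per frame; the first frame is the reference itself.
rmsd_per_frame = [rmsd(frame, reference) for frame in trajectory]
```

Real analysis scripts differ mainly in scale (loading many large trajectories) rather than in kind, which is why even simple software engineering practices such as version control and unit testing pay off quickly.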
Another trend is that we are typically running more simulations than we were ten years ago, mainly because the field is shifting from qualitative observations to quantitative predictions. My second goal, therefore, is to develop a crowd-sourced computational grid that can both supply a large amount of computing power and engage the public.
To find out more, follow my blog at philipwfowler.wordpress.com or find me on Twitter: @philipwfowler.