
The 25th IEEE Computer Based Medical Systems (CBMS 2012)

Author(s)
Simon Hettrick

Deputy Director

Posted on 14 August 2012


The 25th IEEE Computer Based Medical Systems (CBMS 2012)

Rome, Italy, 20-22 June 2012.

Event website.

Report by Laura Moss, Agent and Clinical Physicist, NHS Greater Glasgow & Clyde and the University of Aberdeen.

Highlights

CBMS 2012 featured a wide variety of technology used by clinicians and in medical research.

In the future, the community will need software, technology and people able to manage and analyse the vast amounts of physiological data generated by patient monitoring equipment; meeting this need will be a significant challenge.

Conference report

An extremely warm Rome was the setting for the recently held 25th IEEE Computer Based Medical Systems (CBMS) conference. Technology is increasingly being used in medicine to enable and advance medical knowledge and understanding, and this was reflected in this popular annual meeting, which brought together over 100 clinicians, technologists and computer scientists working in medical informatics to discuss the latest technological advances in the field. Research topics at the conference included the development of novel software and medical systems, the intelligent analysis of physiological signals and images, machine learning, and technologies that allow for personalised healthcare. The conference featured three distinguished keynote speakers and was split into 13 streams in which both long and short papers were presented, covering topics ranging from image processing for ophthalmology to the use of grid and cloud computing in biomedicine and the life sciences.

The keynote speech on the first day was given by Prof. Nada Lavrač and served as a good primer for one of the key topics covered at the conference: the mining of medical data. She gave a brief history of the development of algorithms that automatically mine data to generate predictive or descriptive models, paying particular attention to relational data mining and inductive logic programming for subgroup analysis, the focus of her research group at the Jožef Stefan Institute in Ljubljana, Slovenia. The mining of data has huge potential for medical research, some of which is already being realised. She gave several examples of successful work she has been involved in, including the discovery of rules that help doctors identify individuals who might benefit from coronary heart disease screening, and the extraction of rules to identify genes characteristic of leukaemia, distinguishing it from 13 other cancer types. She also described a number of tools to help researchers analyse data, including the Orange data mining toolkit (http://orange.biolab.si/), WEKA (http://www.cs.waikato.ac.nz/ml/weka/), KNIME (http://www.knime.org/), and RapidMiner (http://rapid-i.com/content/view/181/190/). Orange has also recently been extended to include support for web services (http://orange4ws.ijs.si/).
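To make the rule-mining idea concrete, here is a minimal sketch of extracting human-readable screening-style rules from tabular data. It is not Prof. Lavrač's method and does not use the toolkits named above: scikit-learn stands in for them, and its bundled breast-cancer dataset is only a placeholder for a real patient cohort.

```python
# Hedged illustration: a shallow decision tree as a simple predictive model
# whose root-to-leaf paths read as "if ... then ..." rules. scikit-learn and
# its bundled dataset stand in for the data-mining toolkits and clinical
# cohorts discussed in the keynote.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Keep the tree shallow so the extracted rules stay short and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the tree as nested if/else rules over the named features.
print(export_text(tree, feature_names=list(data.feature_names)))
```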

After the first keynote, I attended sessions on signal and image analysis and on the use of ontologies (formal specifications of a domain), terminologies and language systems. During the second session I presented a paper describing the use of linked data to determine the quality of medical datasets. At the end of this talk I publicised the work of the Software Sustainability Institute in my final slide; I also placed leaflets on the conference welcome desk, which remained available for the duration of the conference. Other interesting work presented in these sessions included the reduction of high-dimensional time-series data using an ensemble approach to generate the segments used in piecewise aggregate approximation, by Hyokyeong Lee and Rahul Singh from San Francisco State University, and the alignment of clinical pathways (or guidelines) to standardise care across multiple institutions. This latter approach, by Abidi and Abidi from Dalhousie University, Canada, was interesting because ontologies were used to represent the individual pathways, and semantic similarities were then identified between the different pathways to align their common components.
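For readers unfamiliar with piecewise aggregate approximation (PAA), the sketch below shows its textbook form: the series is cut into near-equal segments and each segment is replaced by its mean. This is only the basic approximation, not the ensemble segmentation approach presented by Lee and Singh, and the example signal is synthetic.

```python
# A minimal sketch of piecewise aggregate approximation (PAA) for reducing
# the dimensionality of a time series.
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Reduce a 1-D series to n_segments values, one mean per segment."""
    splits = np.array_split(series, n_segments)   # near-equal-length pieces
    return np.array([segment.mean() for segment in splits])

# Example: a 1000-point noisy sine wave compressed to 20 segment means.
t = np.linspace(0, 4 * np.pi, 1000)
signal = np.sin(t) + 0.1 * np.random.randn(1000)
print(paa(signal, 20))
```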

On the second day, a keynote talk was given by Prof. Sergio Cerutti from the Department of Biomedical Engineering at Politecnico di Milano, Italy. His talk focused on the increasing availability of biomedical signals from complex physiological systems and the challenges this poses for analysts. He proposed the MMMM (multivariate, multiorgan, multimodal and multiscale) paradigm as a way to describe this complex physiological data, and presented several examples capturing its components; the multivariate aspect, for instance, is seen in the need for 12 leads to accurately record an ECG. The main message I took from the talk was that this rich data could potentially help unravel many medical mysteries; the challenge, however, is how to analyse the volume of complex data generated by such sensors. Prof. Cerutti suggested that in the future researchers will need both medical and computing expertise, which could require a change in the way we train them. For example, on many biomedical postgraduate courses students are taught both computing skills and basic medical knowledge, yet medical students are very rarely taught any technical skills.

Over the rest of the second and third days I attended a number of different sessions. The meeting was international; however, research from quite a number of British institutions was featured. An interesting talk was given by Fiona Collard from the Intelligent Modelling and Analysis Research Group at the University of Nottingham. This interdisciplinary group of computer scientists and biomedical specialists focuses on developing models and techniques for data analysis. In her talk, Fiona examined an important component that is often overlooked when developing decision support systems: the evaluation of the system. This is often tricky to perform; for example, how do you know whether the system made the right decision? Phillip Worrell and Thierry Chaussalet from the School of Electronics & Computer Science, University of Westminster, presented work on the application of grey modelling to predict demand and costs for long-term medical care. Adele Marshall and colleagues from the Centre for Statistical Science and Operational Research at Queen's University Belfast presented a discrete conditional survival model with two components: the first uses a classification algorithm to predict patient outcome, and the second models the survival distribution to predict patient length of stay in the neonatal ICU. Penelope Hill from Newcastle University presented an analysis of how technology is currently used in social care assessments in England.
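As a rough illustration of the grey modelling mentioned in the Westminster work, the sketch below implements a standard GM(1,1) prediction model. The choice of GM(1,1) is an assumption (the presentation's actual formulation may differ), and the demand figures are invented purely for illustration.

```python
# Hedged sketch of a GM(1,1) grey prediction model, a common choice for
# forecasting from short series. Not the authors' model; illustrative only.
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Fit GM(1,1) to the series x0 and forecast `steps` further values."""
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing/grey coefficients

    def x1_hat(k):                                    # accumulated forecast at index k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    ks = np.arange(n, n + steps)
    return np.array([x1_hat(k) - x1_hat(k - 1) for k in ks])

# Invented annual demand figures, forecast three years ahead.
demand = np.array([820.0, 860.0, 910.0, 975.0, 1040.0])
print(gm11_forecast(demand, 3))
```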

One particular piece of research interested me, and I spoke to the authors, Petros Papapanagiotou and Jacques Fleuriot from the School of Informatics, University of Edinburgh, after their presentation. They described a possible approach to formalising, using logic-based patterns, the collaboration between clinicians when one medical team hands over to another. This is particularly important because potentially harmful errors can be introduced at these handover points. They plan to apply a series of safety principles associated with these patterns, relating to accountability, responsibility and assurance of competence, to ensure safer workflows for collaborative work.

Towards the end of the final day, the third keynote talk was given by Dr. Julia Schnabel, who described the application of image analysis in cancer imaging. As more and more sophisticated imaging technology becomes available, it generates vast amounts of spatiotemporal data that are challenging for humans to analyse; for example, when a patient takes a breath whilst being scanned, it can distort the image. To counteract some of these problems, she outlined work done over the past five years at the Biomedical Image Analysis lab at the University of Oxford on multi-modal and dynamic image motion correction, in particular for lung and colorectal cancer.

 

 
