Research Fellow, University of Southampton
Semantic Web, agents, artificial intelligence.
Large information sources, such as ontologies, can be invaluable in making decisions. However, using large ontologies can be computationally expensive. This is particularly true under constraints such as limited mobile-device resources, restricted memory, and low or unreliable bandwidth, coupled with the high costs of hosting, managing and using the data.
I am researching how to build a task-focused ontology automatically, using an online ontology evolution algorithm, so that information from large ontologies can be used regardless of these constraints. For my PhD, I developed two learning algorithms and one forgetting algorithm for evolving ontologies. This work received a Best Paper award and a Best Student Paper award at two top-tier conferences on agents (IAT 2010) and the Semantic Web (ISWC 2010), respectively. I was also awarded the Doctoral Prize for my PhD, which allows me to extend my research.
I am extending my forgetting algorithm with a predictive model that estimates which concepts will be the least useful, allowing the algorithm to remove concepts earlier in a scenario than it could without prediction. This keeps the ontology smaller than other state-of-the-art approaches can, so it outperforms them through the reduced costs of using the ontology. I used prediction to improve my first learning algorithm, which yielded an improvement of approximately 40%, and I hypothesise that using prediction with forgetting will yield an improvement of similar magnitude. I am also working on making my RoboCup OWLRescue software open source, because there is currently no platform for testing the performance of evolving ontologies.
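To illustrate the general idea of prediction-based forgetting, here is a minimal sketch (not the actual algorithm from my research): each concept keeps a short history of recent usage, a simple recency-weighted score predicts its future utility, and the lowest-scoring concepts are forgotten whenever the ontology exceeds a size budget. The class and method names, the window size, and the scoring rule are all illustrative assumptions.

```python
from collections import deque

class ForgettingOntology:
    """Illustrative sketch: forget the concepts predicted to be least useful.

    Each concept maps to a bounded history of per-step usage flags; a
    recency-weighted sum of that history serves as a crude utility predictor.
    """

    def __init__(self, max_size, window=5):
        self.max_size = max_size          # size budget for the ontology
        self.window = window              # how many time steps of history to keep
        self.usage = {}                   # concept -> deque of 0/1 usage flags

    def observe(self, concepts_used):
        """Record one time step of concept usage, then forget if over budget."""
        for c in self.usage:
            self.usage[c].append(1 if c in concepts_used else 0)
        for c in concepts_used:
            if c not in self.usage:
                self.usage[c] = deque([1], maxlen=self.window)
        self._forget()

    def _predict_utility(self, c):
        """Recency-weighted score: recent use counts more than older use."""
        return sum(w * u for w, u in enumerate(self.usage[c], start=1))

    def _forget(self):
        """Remove the lowest-scoring concepts until within the size budget."""
        while len(self.usage) > self.max_size:
            worst = min(self.usage, key=self._predict_utility)
            del self.usage[worst]

    def concepts(self):
        return set(self.usage)
```

For example, with a budget of two concepts, a concept that stops being used ("Fire" below) is forgotten in favour of ones that keep appearing:

```python
onto = ForgettingOntology(max_size=2)
onto.observe({"Robot", "Fire"})
onto.observe({"Robot", "Victim"})
onto.observe({"Robot", "Victim"})
# "Fire" has been forgotten; "Robot" and "Victim" remain
```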
Check out contributions by and mentions of Heather Packer on www.software.ac.uk