Interoperability test harness for Provenance Tool Suite
Warning: please be aware that Travis CI, which is mentioned in this article, has a security issue with its Free Tier service. By design, “secret” data such as access credentials are exposed within historical clear-text logs which are accessible by anyone via the Travis CI API. Please see this article for more information.
By Mike Jackson, Software Architect.
In May I started a consultancy project with Trung Dong Huynh, Luc Moreau and Danius Michaelides of Electronics and Computer Science at the University of Southampton. As part of their research into provenance, they have developed the Southampton Provenance Tool Suite, a suite of software, libraries and services to capture, store and visualise provenance compliant with the World Wide Web Consortium (W3C) PROV standards. The goal of the consultancy was to develop an infrastructure that systematically checks convertibility and round-trip conversions across combinations of Provenance Tool Suite packages and services operating collectively. Last week I completed development of an interoperability test harness, which is now under review by Dong, Luc and Danius.
Working from ideas contributed by Dong, Luc and Danius, I drafted an initial design, which Dong reviewed and which I then updated before implementation started (see the design prior to implementation). More detailed aspects of the design changed during implementation (see the design as implemented).
The test harness uses a repository of test cases. A single test case consists of a set of documents in each of the five PROV representations: PROV-N, PROV-O (Turtle and TriG), PROV-XML and PROV-JSON. The documents within a test case are all semantically equivalent to one another. These test cases are curated manually and are published on GitHub as a community resource, which will evolve over time.
The test harness can test the PROV document conversions done by:
- prov-convert, a Python script, part of the ProvPy Python library.
- provconvert, a Java executable, part of the ProvToolbox Java library.
- ProvStore, a free repository for PROV documents, which also supports their conversion, and which can be used via a browser or a REST API.
- ProvTranslator, a service to translate PROV documents, which can be used via a browser or a REST API.
The test harness is extensible: support for testing other components, whether invoked via the command line, via REST or, come to that, via direct Python-to-Python or Python-to-Java calls, can be added in future.
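As a sketch of what such extensibility might look like, each converter could implement a common interface, with one subclass per invocation style. The class and method names below are hypothetical illustrations, not the harness's actual API:

```python
from abc import ABC, abstractmethod

class Converter(ABC):
    """Common interface for pluggable converters (hypothetical design
    sketch; the harness's actual class hierarchy may differ)."""

    @abstractmethod
    def convert(self, in_file, out_file):
        """Convert in_file to out_file, inferring formats from extensions."""

class CommandLineConverter(Converter):
    """Wraps a command-line tool such as prov-convert or provconvert."""

    def __init__(self, executable):
        self.executable = executable

    def convert(self, in_file, out_file):
        # A real implementation would shell out via subprocess here;
        # this sketch just returns the command it would run.
        return "{0} {1} {2}".format(self.executable, in_file, out_file)

class RestConverter(Converter):
    """Wraps a REST service such as ProvTranslator."""

    def __init__(self, url):
        self.url = url

    def convert(self, in_file, out_file):
        # A real implementation would POST the document via requests here.
        return "POST {0}".format(self.url)
```

New component types would then slot in by adding further `Converter` subclasses, without changes to the test-running code.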
The test harness source code and documentation is hosted on GitHub.
Python's useful packages
The test harness is implemented in Python, a language with which both the team and I are comfortable.
prov-convert and provconvert are invoked using the subprocess module, which allows command-line tools to be invoked from within Python and captures return codes, output and error streams.
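The pattern is roughly as follows. This is a minimal sketch: to keep it runnable without ProvPy or ProvToolbox installed, a `python -c` command stands in for a real `prov-convert` or `provconvert` invocation, whose actual flags are not shown:

```python
import subprocess
import sys

def run_converter(args):
    """Run a command-line converter, capturing its return code,
    standard output and standard error as strings."""
    result = subprocess.run(args, capture_output=True, text=True)
    return result.returncode, result.stdout, result.stderr

# Stand-in command so the sketch runs without any converter installed;
# a real call would pass the converter executable and its arguments.
code, out, err = run_converter([sys.executable, "-c", "print('converted')"])
```

The harness can then treat a non-zero return code, or unexpected output, as a test failure.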
ProvStore and ProvTranslator are invoked using requests, a library which provides a very useful, high-level wrapper for HTTP calls, including REST API invocations. Using this library was preferable to using the far lower-level urllib2 module.
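To illustrate the shape of such a call without contacting a live service, here is a sketch that builds (but does not send) a conversion request. The URL and headers are hypothetical, not the real ProvTranslator API:

```python
import requests

# Hypothetical endpoint and document; the real ProvTranslator REST API
# details are not reproduced here.
url = "https://example.org/provtranslator/convert"
doc = '{"prefix": {}}'  # a minimal PROV-JSON document body

req = requests.Request(
    "POST",
    url,
    headers={"Content-type": "application/json",
             "Accept": "text/provenance-notation"},
    data=doc,
)
prepared = req.prepare()
# prepared.method, prepared.url and prepared.body are now ready to send
# with requests.Session().send(prepared); the response's status code and
# text would then be checked by the harness.
```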
The test harness runs conversions across every combination of every possible pair of documents within every test case. As there are 5 PROV representations, this means 120 conversions per test case. Implementing these as separate test functions would be both unscalable and very monotonous! Using a single test method that iterates across all the test cases would be easy to do, but a failure of any conversion would prevent subsequent conversions and test cases from running. The nose_parameterized library provided a solution. Iterating over the test cases, and the combinations of documents within them, it can auto-generate a test function, based on a template, for each conversion at runtime. If one conversion, and hence one test function, fails, the others still run.
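The enumeration of conversions can be sketched as follows. This is a simplified illustration assuming one document per representation (the actual per-test-case count depends on how many documents each test case holds); the commented-out decorator shows the nose_parameterized usage pattern:

```python
import itertools

# File extensions for the five PROV representations.
FORMATS = ["provn", "ttl", "trig", "xml", "json"]

# Each ordered pair of distinct formats is one conversion to test.
cases = [(src, dst) for src, dst in itertools.permutations(FORMATS, 2)]

# With nose_parameterized, each entry in `cases` becomes its own
# auto-generated test function at runtime, so one failure does not
# stop the rest (sketch of assumed usage):
#
#   from nose_parameterized import parameterized
#
#   class ConversionTests(unittest.TestCase):
#       @parameterized.expand(cases)
#       def test_conversion(self, src, dst):
#           self.assertTrue(convert(src, dst))
```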
The interoperability tests can be run via the nose test runner. nose provides built-in test logging and report generation, which saved having to implement these explicitly.
Who tests the tester?
Unit tests were implemented for the test harness itself, to ensure it would perform as expected. I found the requests-mock library invaluable here as it can mock (mimic) REST services for testing code that uses the requests library. This allowed ProvStore and ProvTranslator-related code to be tested without needing access to the live services.
Travis CI is a hosted continuous integration service which can be used to test software hosted on GitHub (see, for example, ProvPy, ProvToolbox, and the interoperability test harness unit tests). Travis CI runs tests whenever changes are pushed to the associated GitHub repositories.
It was a requirement that the interoperability test harness can be run under Travis CI, so I created a GitHub repository with a Travis CI job. This job tells Travis CI to clone the repositories holding the test harness and test cases, get ProvPy and ProvToolbox likewise, install them and their dependencies, and run the interoperability tests for ProvPy, ProvToolbox, ProvStore and ProvTranslator (see the interoperability test results).
I was impressed by Travis CI, both by how easy it was to get started and by how much it allows to be done within the scope of a test job (e.g. installation of Linux packages, cloning of Git repositories), all for free! However, a downside of Travis CI is that jobs cannot be scheduled to run regularly. They can, though, be rerun either by clicking a button within Travis CI's web pages or by using the Ruby Travis CI client, which, in turn, can be invoked regularly (e.g. via a Unix cron job).
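Such a scheduled rerun might look like the following hypothetical crontab entry, assuming the Travis CI client's `travis` command is installed and authenticated; the exact flags should be checked against the client's documentation:

```shell
# Hypothetical crontab line: restart the latest Travis CI build for the
# test harness repository at 02:00 every night (repository slug assumed).
0 2 * * * travis restart -r prov-suite/prov-interop --no-interactive
```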
Testing of closed-source packages on a private test infrastructure (e.g. hosted on a local machine, rather than Travis CI) was also a requirement. So, while the test harness can be run from the command line, it can also be run from within the popular open source continuous integration server Jenkins. The test harness repository documents both standalone and Jenkins usage.
Dong, Luc and Danius are now looking at testing the latest versions of their software with the test harness, as well as reviewing the test harness and its supporting documentation - a reversal from the outset of the consultancy where I reviewed their software! As a developer, I look forward to hearing about their experiences and, based on their feedback, I'll make updates later in the year.
This consultancy arose from Dong's application to the previous round of our open call for projects. Our next round closes on 30th September. For more information, and an application form, please see our Open Call for Projects.