By Chris Cannam, SoundSoftware.ac.uk.
These are interesting times in consumer computing. We’re in the middle of what seems likely to be a generational shift from the traditional desktop and laptop PC to what have been called post-PC devices.
The term post-PC typically refers to modern smartphones running iOS or Android, and also to devices such as touch tablets that are closer in size and utility to a conventional PC. From a user perspective the most distinctive thing about these devices is the touch interface. But just as significant, from the point of view of the researcher-developer, is the way software development and distribution are handled.
The app store model
The thing that every post-PC device has in common, no matter what the device or who makes it, is the app store model for software distribution. This simplifies the discovery and installation of software for the end user, by funnelling all distribution from developer to user through a single managed channel.
For most users this is an improvement over earlier, more ad hoc distribution models, but it has implications for users who also develop software, as researchers typically do. There are two main limitations of the app store model for researcher-developers.
You can’t necessarily install software you and your colleagues have written. You can only install software on your own device through the manufacturer’s app store, or using a specially configured connection with your own development PC. If you want to put your own software in the app store so that anyone else can get it, you need to pay an ongoing annual fee and pass the manufacturer’s review process.
You can’t just develop on the device. Using the device itself to develop applications that behave like proper, first-class native apps is usually either impractical or forbidden. The range of development environment options is very limited, and many of the facilities relied on for developer best practice are absent (such as a user-accessible filesystem on which to run version control).
These limitations vary in extent—Android for example is less restrictive than iOS—but they exist to some degree on all post-PC platforms.
Limitations like these could cause serious problems for open and sustainable scientific software development. Or the changed landscape could present an opportunity to start working at a higher level and in a more collaborative way than ever before. Either way, it’s vital to be aware of what is happening and what it means to us.
Why does it matter? We still have PCs…
It matters partly because these new devices offer exciting new capabilities that we should be taking advantage of. But it also matters because we—the existing body of research software developers—don’t get to control what devices our current and future colleagues will want to use. If a generational shift is happening, it happens whether we want it to or not.
Let’s look at the figures. During 2011, around 350 million traditional PCs were sold worldwide. That was the first full year in which the iPad was available, and in that year 67 million touch tablets running iOS or Android were sold. (Sources: Canalys, Strategy Analytics.) The iPad alone took about 10% of the combined PC market in that first year.
In 2012 the market for touch tablets grew by about half. Apple sold nearly 60 million iPads by the end of October and numerous new competitors appeared including Google's Nexus 7, Samsung's Galaxy Note series and the Microsoft Surface. Meanwhile sales of traditional PCs were flat at best, giving touch tablets perhaps 25% of the combined PC market in 2012. Apple are now selling more iPads than any individual PC manufacturer is selling PCs. (Sources: IHS iSuppli, Apple.)
What’s more, each of the tablets is broadly compatible with its equivalent smartphone, and there are 750 million of those out there. That means a very high level of user familiarity with the new devices’ interaction models and software distribution mechanisms. Android is now level with Windows 7 as the world’s most widely-used operating system. (Sources: Apple, Google, Microsoft.)
Equally revealing is the response of the major incumbent in the PC market: Microsoft. With the Windows 8 family they now have a broadly unified operating system across smartphones, tablets, and hybrid touch PCs in which the app store is the primary distribution model. Applications distributed outside the app store—a category that includes everything written for Windows 7 and earlier—are presented as legacy software at best, and on some devices cannot be installed at all.
There’s a typical cycle when a new class of computing device arrives. It starts out smaller and less powerful than the previous generation, but cheaper and more widely available. Users quickly become familiar with it and start to rely on its new capabilities—greater portability or a more powerful interaction model—making it their primary device for personal use. They then want to do more and more of their everyday work on it; initially this means accepting a compromise on power, but the new class quickly catches up with the previous generation and there is then no reason to go back. We’ve seen this with the arrival of workstations and laptops, and although it can’t be expected to happen every time a new device is invented, it seems plausible that it may happen again here. It might soon seem ridiculous to have to depend on a traditional PC for your lab work when you’re already carrying a touch tablet of almost the same processing power around with you. Research students will be using these devices most of the time, and they will want to develop with them too.
How does one develop software for an app store device?
Using a separate development PC. The standard app developer model for these devices consists of using a separate PC to write software, which is then pushed to the device, where it runs.
Two of the three current mainstream post-PC platforms—Android and Windows RT—allow you to write and deploy software to your own device for testing without paying a fee. But all three platforms require an annual fee, a contractual agreement, and manufacturer approval if you want to distribute software to the public at large.
Technically, this approach is complicated by the fact that the three mainstream platforms have mutually incompatible development environments and frameworks. Writing apps that work across outwardly identical devices that happen to run different operating systems is presently a difficult task even for a professional software house. It’s also not always ideal to have to build a self-contained app for every research idea.
Besides cost and complexity, there is a tension between the app developer model and open publication and open access. Partly this is a problem of principle: what does it mean to publish your software and methods openly, if you can then only distribute it through a locked-down delivery channel? But there are practical implications as well:
- Is it realistic to consider your work to be reproducible, if it would be necessary to build an app on a separate device and perhaps even get it accepted in an app store in order to reproduce it?
- How realistic is it to reuse code that can’t be compiled on the only platforms it runs on?
- How sustainable is a piece of software if it can only exist for as long as its developer keeps paying the app store deployment fees?
- There are common incompatibilities between app store terms and the use of popular open-source software licences such as the GPL.
In the end though, the fundamental problem with this model is that it can’t satisfy the desire to use one’s everyday device as a development tool. The eternal compulsion to hack suggests that ways must be found to do research development without demanding a separate PC to do it on.
Developing on the device itself
If the app developer model isn’t very practical for sustainable research work, is there an alternative? Can we write and run software on the device itself? Of course these devices are not yet suitable for large-scale data processing, but many day-to-day research development tasks are smaller exploratory pieces of work that should run fine on an iPad or other tablet.
Only Android offers the option of developing first-class native apps on the device itself using the AIDE environment. Although in principle this removes a major obstacle to development, the issues of complexity, portability, and redistribution remain.
For higher-level or dedicated scientific development the options appear limited. Some development environments using high-level portable languages such as Python are available (for example, Pythonista on the iPad), as well as environments such as Codea originally designed for game development. But the iPad lacks both a traditional user-visible filesystem and support for dynamic loading of code modules, so facilities often relied on for good management of code, such as version control or modular plugin systems, are inefficient or unavailable.
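To give a sense of scale: the kind of task that runs perfectly well in an on-device Python environment is a small, self-contained exploratory calculation using only the standard library. A minimal sketch (the signal and parameters are invented for illustration):

```python
import math

# Generate one second of a 440 Hz sine tone at an 8 kHz sample
# rate, then measure its RMS level -- a typical small exploratory
# audio calculation, needing no filesystem and no extra modules.
rate = 8000
freq = 440.0
samples = [math.sin(2 * math.pi * freq * k / rate) for k in range(rate)]

rms = math.sqrt(sum(s * s for s in samples) / len(samples))
print(round(rms, 3))  # prints 0.707, i.e. close to 1/sqrt(2)
```

Anything of this shape is comfortable on a tablet; the difficulties described above only bite once you want to version, modularise, or redistribute the result.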
Coding in the cloud
A suitably configured environment hosted at an institution or on a commercial cloud service can support exploratory code development and remote execution through a browser interface, using a language like Python (with the IPython Notebook) or Julia. A comparable option is available for MATLAB in the shape of MATLAB Mobile from MathWorks.
If the facilities are available, and the requirement to be always online is not an issue, a setup like this has many advantages. It provides persistence across multiple client devices and eliminates the immediate portability problem of getting software running locally. The persistent nature of these systems should also make it possible to retain a readable record of code experiments, facilitating academic publication and reuse. And it can potentially give access to much greater computing power than is available locally.
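The record-keeping advantage is easy to picture: a notebook cell keeps the data, the code, and the result together in one readable unit. A sketch of such a cell in plain Python (the measurements and the fitting code are invented for illustration):

```python
# An exploratory "cell": fit a least-squares line to a handful of
# measurements. Kept in a notebook, the data, the method, and the
# printed result survive together as a reusable record.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # prints: 1.99 1.04
```

Rerunning the cell reproduces the result exactly, which is precisely the property that makes the notebook format attractive for academic publication and reuse.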
On the other hand, setting up such an environment can be challenging. While a lab notebook format should be suitable for academic dissemination, it might not be so appropriate for software to be distributed to users outside the immediate academic environment. Although in-browser cloud environments have more opportunity to provide interactive or graphical facilities than a terminal emulator, they can’t work with all of the local hardware, particularly for features like sensors or audio.
Even so, it’s clear that cloud computing has a lot of potential for active exploratory research programming, not only in its more traditional role coordinating offsite processing tasks.
A hybrid approach?
If the most fundamental problem with developing on the device is a lack of support for code management, persistence, provenance and sharing, and the problem with developing in the cloud is a lack of access to native facilities, can a hybrid alternative work? A local high-level language environment, backed by versioning and project support in the cloud, could be a powerful approach. Services such as GitHub already provide many of the necessary online facilities.
One thing is apparent: there should be fewer technical and organisational obstacles to development on Android than on iOS or Windows RT. However, in line with the view that users will want to develop for the device they already have, an ideal environment would be usable regardless of which platform a device happens to run. How close we can get to that ideal remains to be seen: manufacturers would probably reject any solution that let researchers develop sophisticated software and distribute it to non-researchers on multiple platforms without going through the app store.
There are plenty of possibilities to be investigated, and we’re going to be putting some time into looking more closely at them during the coming months. If you have any further thoughts, or experience to share, we’d very much like to hear from you!