
The politics of instruments



Author(s)
Heather Ford, SSI Fellow

Posted on 7 June 2016

Estimated read time: 5 min


Streams of Consciousness: Data, Cognition and Intelligent Devices Conference 2016

By Heather Ford, University Academic Fellow, University of Leeds School of Media and Communication and Software Sustainability Institute Fellow.

What are the politics of instruments? Researchers are using new tools to harness collective intelligence in the form of vast quantities of digital data that we parse for patterns using algorithms. We use these new data sources and tools to discover security threats, to understand epidemics, to predict and to control. Are we using these new tools to help us think through important questions about the world, or are the tools using us? This was one of the key questions posed at the Streams of Consciousness: Data, Cognition and Intelligent Devices Conference at the University of Warwick, which I attended last month (21st and 22nd April 2016).

I’m a Software Sustainability Institute Fellow, and my mission for the fellowship is to learn everything there is to know about how researchers are thinking about the ethics of what has been called a revolution in digital data. The Warwick conference was held at the end of a fascinating ESRC project led by Nathaniel Tkacz called ‘Reading dashboards’ that looked at the role of the dashboard in everyday life. Philosophers, social scientists, anthropologists and computer scientists got together for two days to talk about the ways that thinking is changing as the instruments we use to think with change. As Caroline Bassett said at the conference, there is a need to talk about the politics of the instruments. “Computational tools help us make decisions but have the decisions partly been made for us already?”

The key theme of many of the talks in Warwick was the issue of delegation. Scholars spoke about the implications of what they see as society delegating responsibility for thinking to new calculative instruments. Delegation, it seems, is a response to our perceived failure to make fair, rational judgements about the world through observation and analysis. No matter how hard we humans try, we’ve found that bias and subjectivity plague us. Even science has become political, as scientists support politicians’ statements about the world from opposite standpoints. This is particularly pertinent at a time when researchers have come out with contradictory statements about the impact of Britain leaving the EU in the run-up to the upcoming referendum.

Society’s response to our perceived failures in judgement has not been to embrace subjectivity in a transparent manner, as someone like Donna Haraway argued for in the 1980s. Instead, we’ve delegated responsibility for calculation to machines. As the geographer Louise Amoore noted at the conference, the benefit of using the cloud to make decisions about security threats, e.g. from terrorism, is that “no human eyes see it therefore no errors of judgement can be made.” We let the algorithms find the patterns; we let them do the thinking for us, but in doing so, we abdicate responsibility for the outcomes of such decisions. Amoore knows about the perils and consequences of such decision-making, as she studies the impact of cloud computing technologies in border control and the “war on terror”. The problem, according to Amoore, is that in the process of delegating responsibility for decision-making to the cloud, we’re letting the apparatus decide what matters.

One of the usual responses to critics of new technology is that they’re merely Luddites: people who are afraid of new technology because they believe it threatens old ways of doing things. At least two keynote speakers (David Berry and Caroline Bassett) talked about the American computer scientist Joseph Weizenbaum, who became critical of artificial intelligence after he developed ELIZA, one of the most famous AI experiments at MIT in the 1960s. ELIZA was a simple natural language processing program driven by a script called DOCTOR, which engaged humans in conversation inspired by the practices of empathetic psychotherapists.
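The basic trick behind a DOCTOR-style script is easy to sketch: match the user’s statement against a rule, reflect the pronouns, and echo a fragment back as a question. The following minimal Python sketch illustrates that idea; the rules, reflections, and wording here are invented for illustration and are not Weizenbaum’s actual script.

```python
import random
import re

# ELIZA-style scripts reflect first-person phrases back at the speaker
# ("I am sad" -> "you are sad") before slotting them into a template.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

# Illustrative rules only: each pairs a pattern with response templates
# that reuse the captured fragment. Not from the original DOCTOR script.
RULES = [
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all keeps the conversation going
     ["Please go on.", "What does that suggest to you?"]),
]

def reflect(fragment):
    """Swap pronouns so the fragment reads from the listener's side."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement):
    """Answer with the first matching rule's template, filled in."""
    for pattern, templates in RULES:
        match = pattern.match(statement.strip())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))

print(respond("I am unhappy about my work"))
# -> e.g. "Why do you say you are unhappy about your work?"
```

Even a toy like this hints at why users took the program seriously: the reflected fragments make it feel as though the machine has understood, when it has merely pattern-matched.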

Weizenbaum was shocked by how seriously users engaged with the program. He became one of the leading critics of artificial intelligence in what Bassett described as “a rejection of computing power from the temple” (i.e. MIT). Bassett said that Weizenbaum’s resurgence is interesting because it’s a hostile response to technology from the past, resurfacing now as we ask whether human thought can be reduced to computational logic and reason. Speaking at the end of the conference, David Berry noted what Weizenbaum realised: what machines cannot count is imagination. Significant problems arise when we abstract and simplify the world in the belief that every aspect of human thought is computable.

All in all, the conference reinforced what we software geeks need to come to terms with: we need to decide what is important rather than letting our instruments decide it for us. Most importantly, we need to recognise the social consequences of software-driven research and accept that, like Weizenbaum, we may be shocked at how differently the world understands that knowledge and the role of computing within it.

Image courtesy of Luke Robert Mason, CC BY 4.0
