SUBPRIME LANGUAGE: The Precarious Value of Words in an Age of Linguistic Capitalism, Digital Advertising and #FakeNews

As the value of words shifts from conveyor of meaning to conveyor of capital, has Google become an all-powerful usurer of language, and if so, how long before the linguistic bubble bursts?

I’m giving a talk at Trinity College Dublin next week as part of the CONNECT centre and Engineering Fictions. I’ll be using a lot of the material from the talk I gave at NUIG a couple of weeks ago, but I also want to try out some of the new ideas I’ve been developing around the idea of subprime language and linguistic liquidity. Below is an extended abstract/intro for the new stuff. It is a work in progress – any thoughts are welcome. I hope also to develop these ideas at the AAG in Boston and at the RGS-IBG in London later this year.

As tech companies such as Google increasingly mediate and monetise the informational landscape through search and advertising platforms such as AdWords and AdSense, the ongoing effects on and of the language they auction, sell and exploit are becoming more and more palpable. In the viral spreading of fake news and political click-bait, and in the daily battles for exposure, it seems that words are being lent against a narrative so tenuous as to make their linguistic function negligible. Infused with a neoliberal logic which favours advertising dollars over truth, and with the systemic bias of algorithmic processing, the discursive side-effects of this semantic shift reveal a deep-rooted weakness in the linguistic marketplace, one which reaches far beyond the linguistic sphere and into the political, with powerful and potentially devastating consequences. Were it not for an overriding metanarrative of neoliberal logic, this evolution in the ontology of digital language might seem like an obvious manifestation of the postmodern condition. But as the value of words shifts from conveyor of meaning to conveyor of capital, should we be thinking of Google as the all-powerful usurer of language, and if so, how long before the linguistic bubble bursts?

In this paper I set out some recent thoughts about the idea of subprime language – asking questions such as how much and how often language can be bought, sold or ‘borrowed’ before it becomes exhausted of meaning and restrictive of expression and understanding. How resilient is language to a quasi-capitalist operating system, and what happens if/when linguistic capitalism crashes? And finally, knowing the historical and cultural power that a control of language can have, the fragility and unpredictability of the economic system which now seems to underpin it, and with a growing awareness of the power wielded by technology companies such as Google, should we not be more aware of the potential dangers in these techno-linguistic shifts?

In recent weeks the fake news debate has been evoking numerous references to Newspeak, the language of thought control and state propaganda employed to further the ideology and control of English Socialism (INGSOC) in George Orwell’s 1984. It is an interesting analogy, but I think rather than a straightforward comparison to the misinformation and alternative facts seemingly employed during the Trump campaign, there are deeper problems within today’s informational infrastructure that a more thorough reading of Orwell’s text draws out. Firstly, there is the assumption in Newspeak that “thought is dependent on words”, a somewhat problematic yet entirely relevant causal linkage when it comes to debates about search results, auto-predictions, filter bubbles and algorithmically generated social media newsfeeds, which can be instrumental in the cultivation of extreme views and hate crime.

The second issue concerns the limitations and restrictions of language that are so important to the idea of Newspeak, a language which “differed from most other languages in that its vocabulary grew smaller instead of larger every year”. We can see echoes of this in the shrinking of the creative vocabulary of digital language in favour of words which might be cheaper, easier to find, or more alluring either to algorithms or to human readers.

The third point I want to explore takes the culmination of the first two – i.e. that words have a real effect on how we think, yet the way information flows through digital spaces encourages the shrinking of our online vocabulary and discourages non-normative language – and complicates this already worrying formula with an overriding motive not of state political control (as in Orwell’s dystopia), but of private capital gain (as in advertisers and tech/media companies). In the digital networks of information and communication we have created, the potential for political control often comes as a side effect of the economic incentive, or as a manipulation of a system which allows language, and therefore thought, to be so dependent on and subject to a neoliberal logic which is itself so precariously mediated by algorithmic systems and networks.




PODCAST: Pip Thornton – Critiquing linguistic capitalism, Google’s ad empire, fake news and poetry

Algocracy and the Transhumanist Project


My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end, and I will shortly be returning to Royal Holloway to finish writing up my PhD. I have really enjoyed the five months I have spent here in Galway – I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university.

I have also had the opportunity to present my own research to a wide audience and most recently gave a talk on behalf of the Technology and Governance research cluster entitled A Critique of Linguistic Capitalism (and an artistic intervention) as part of a seminar series organised by the Whitaker Institute’s Ideas Forum, which…


Talk at NUIG 25th Jan – Linguistic Capitalism – technology & governance research cluster


I’m giving a talk at NUI Galway on Wednesday 25th January as part of the Whitaker Institute Ideas Forum seminar series.

It will be an explanation and exploration of all things Linguistic Capitalism, with a demonstration of my {poem}.py  project, as well as some new ideas about the role of Google advertising in the fake news debate.

Most exciting of all is a guest appearance from Galway poet Rita Ann Higgins who will be reading some of her poem Killer City, to help illustrate the talk.

Being Human | Human Being: a panel discussion of Ex Machina

Ex Machina panel: if Ava was trained on search data, how come she doesn’t try to sell Nathan the pair of trainers he googled months ago? And other insights….

Algocracy and the Transhumanist Project


Back in March I co-curated a Passengerfilms event in London which used Alex Garland’s 2015 film Ex Machina to provoke a panel discussion about what it means to ‘be human’ in a world in which the digitally -or algorithmically – processed ‘virtual’ is increasingly experienced in the actualities of everyday life. I wrote a post on my own blog about the event at the time, but have now had the chance to edit the audio recording of the panel discussion, which features thoughts on the film and on the wider discourse from John Danaher (NUI Galway) and myself, as well as Lee Mackinnon (Arts University, Bournemouth), Oli Mould (Royal Holloway) and Mike Duggan (Royal Holloway).

We held the event in the downstairs area of The Book Club, an East London club venue, so some of the audio is accompanied by a booming bassline from the upstairs bar. I have tried…


A Critique of Linguistic Capitalism: a short podcast from Pip Thornton

Algocracy and the Transhumanist Project

I started work as the research assistant on the Algocracy and Transhumanism project in September, and John has invited me to record a short podcast about some of my own PhD research on Language in the Age of Algorithmic Reproduction. You can download the podcast here or listen below.

The podcast relates to a project called {poem}.py, which is explained in greater detail here on my blog. The project involves making visible the workings of linguistic capitalism by printing out receipts for poetry which has been passed through Google’s advertising platform AdWords.


I have presented the project twice now – each time asking fellow presenters for their favourite poem or lyric which I can then process through the Keyword planner and print out on a receipt printer for them to take home. I often get asked what is the most expensive poem, and of course it depends on…


NEWS | Curating (in)security at AAG 2017

Great write-up from Nick Robinson in anticipation of our AAG 2017 sessions.

Skyline of downtown Boston from the pier.

Every year, nearly 10,000 academics converge on one particular U.S. city in the name of all things geography – Boston, Massachusetts being the location of choice for the annual AAG (American Association of Geographers) conference in April 2017.

With a vast array of potential sessions, panels and presentations – the AAG has something for everyone: from Geographies of Bread and Water in the 21st Century  to subjects pertaining to aspects of Physical Geography, Geopolitics, and even Cyber Infrastructure!

Visiting the AAG has long been a personal ambition of mine since beginning my own undergraduate degree, and this year finally presents an opportunity after my paper (and preliminary thesis title) – “How to Backup your Files Nation-State in a Digital Era: The Estonian Data Embassy” – was accepted onto a fantastic looking double-session titled: Curating (in)security: Unsettling Geographies of Cyberspace. (see…


Curating (in)security: Unsettling Geographies of Cyberspace CfP AAG 2017

Curating (in)security: Unsettling Geographies of Cyberspace
Call for Papers
AAG 2017 Boston (April 5-9, 2017)

In calling for the unsettling of current theorisation and practice, this session intends to initiate an exploration of the contributions geography can bring to cybersecurity and space. This is an attempt to move away from the dominant discourses around conflict and state prevalent in international relations, politics, computer science and security/war studies. As a collective, we believe geography can embrace alternative perspectives on cyber (in)securities that challenge the often masculinist and populist narratives of our daily lives. Thus far, there has been limited direct engagement with cybersecurity within geographical debates beyond ‘cyberwar’ (Kaiser, 2015; Warf, 2015), privacy (Amoore, 2014), or the algorithmic and code perspective (Kitchin & Dodge, 2011; Crampton, 2015).

As geographers, we are ideally placed to question the discourses that drive the spatio-temporal challenges made manifest through cyber (in)securities in the early 21st century. This session attempts to provoke alternative ways we can engage with and resist the mediation of our collective technological encounters, exploring what a research agenda for geography in this field might look like, asking why we should get involved, and pushing questions in potentially unsettling directions. The session therefore seeks to explore the curatorial restrictions and potentials that stem from political engagement, commercial/economic interests, neoliberal control and statist interventions. The intention is not to reproduce existing modes of discourse, but to stimulate creative and radical enquiry, reclaiming curation from those in positions of power not only in terms of control, but by means of restorative invention.

We intend to have an interactive and lively discussion that we hope will be productive for a growing field of inquiry between disciplines. In light of this, potential contributions could combine or exceed those outlined below:

·         Algorithms and algorithmic governance
·         Alternative theories of space / cyberspace / cybersecurity
·         Artistic interventions / performances
·         Big data
·         Cyber / digital finance
·         Disciplinarity and knowledge production
·         Hackers and activism
·         Human-Computer Interaction (HCI)
·         Materiality and virtuality
·         More-than-human agency
·         Networks
·         Power and resistance
·         Precarity, affect and vulnerability
·         Privacy and surveillance
·         Surveillance and encryption

Session Guide

To submit a contribution, please contact one of the panel organisers. Abstracts should be no longer than 200 words and should be submitted by October 7th 2016.

Panel Organisers
Andrew Dwyer (University of Oxford, UK)

Pip Thornton (Royal Holloway, University of London, UK)

In addition, if you wish to offer contributions that are not in a conventional lecture mode, please provide a brief description of what your output intends to be in addition to the 200 word abstract.

{poem}.py : a critique of linguistic capitalism

How much does poetry cost? What is the worth of language in a digital age? Is quality measured on literary value or exchange value, the beauty of hand-crafted, hard-wrung words, or how many click-throughs those (key)words can attract and how much money they earn the company who sells them? But haven’t words always been sold? As soon as they were written down, words became moveable and transferable; they entered the marketplace, and then necessarily the political sphere. But these words gained an exchange value as integral parts of a text – a story, a poem, a book, for example. Removing or reordering these individual words – or ranking them based on external influences – would change the meaning and devalue the text as a whole, in both a literary and a monetary sense. Can language retain its integrity once it becomes part of the digital economy? Is there even such a thing as the ‘integrity of language’? Certainly the words Google auctions off have referential values unanchored to narrative context, and it is this new context and the politics surrounding it that I am attempting to examine and expose in my new project, which I have called {poem}.py.


The project started out when I was required to provide a poster for the Information Security Group (ISG) Open Day at Royal Holloway later this month, which always makes me nervous as – unlike most of my PhD contemporaries in the Cyber Security CDT – I don’t have a load of mathematical formulas, graphs and data to fill out the required poster template. So as I was thinking about how to represent and explain my PhD topic to an audience of cryptographers, mathematicians and computer scientists, I decided to see how much my favourite poem ‘cost’ if I put all the words through the Google AdWords Keyword Planner and output the results on a mock-up of a receipt – which I thought might look nice on a poster. In this way I discovered that, at 4:39 PM on 7th May 2016, my favourite poem, At the Bomb Testing Site by William Stafford, cost the princely sum of £45.88 (before tax).

To explain the logic behind this: the Keyword Planner is the free tool Google AdWords provides advertisers so they can plan their budgets and decide how much to bid for a particular keyword or key phrase to use in their advert. Google gives a ‘suggested bid’ price for each word, giving an advertiser some idea of how much they will have to spend to win the mini auction which is triggered each time someone searches for that keyword. When an advertiser wins the auction, their advert appears as a ‘paid’ (as opposed to organic) search result right at the top (and now at the bottom too) of the rankings, with a small yellow ‘Ad’ box next to it. The advertiser then pays the winning bid (which, as on eBay, will be one penny/cent above the second-highest bid) each time someone clicks on their advert. Phrases such as ‘cheap laptop’ or ‘car insurance’ can cost as much as £50 per click. This is the basis of how Google makes its money, a form of ‘linguistic capitalism’ (Kaplan, 2014) or ‘semantic capitalism’ (Feuz, Fuller & Stalder, 2011) in which the contextual or linguistic value of language is negated in favour of its exchange value.
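The second-price mechanic described above can be sketched in a few lines of Python. This is a simplified illustration only: in reality Google also weights bids by ad quality, so the highest monetary bid does not always win, and the amounts below are invented.

```python
def winning_price(bids, increment=0.01):
    """Simplified second-price (Vickrey-style) auction:
    the highest bidder wins, but pays just above the
    second-highest bid rather than their own bid."""
    if len(bids) < 2:
        raise ValueError("an auction needs at least two bids")
    ordered = sorted(bids, reverse=True)
    # winner pays one increment above the runner-up's bid
    return round(ordered[1] + increment, 2)

# e.g. three advertisers bidding on 'car insurance'
print(winning_price([50.00, 32.50, 1.20]))  # 32.51
```

So an advertiser who bids £50 against a runner-up bidding £32.50 pays £32.51 per click, not £50 – which is why suggested bids are only a guide to actual spend.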

One of the first problems I encountered with this method was that once I had fed the words of a poem through the keyword planner I then had to put them back into their narrative order to make the receipt ‘readable’ as a downward list, as Google churns the words back out according to their frequency of search rather than in their original order. With my test poem, I had to order the words back into the shape of the poem manually, which was time-consuming and fiddly. I have since been working with CDT colleagues Ben Curtis and Giovanni Cherubin using Python code to automate this process. This union of poetry and code is where the project title {poem}.py comes from – .py being the file extension for Python.
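A minimal sketch of that reordering step might look like the following. The word-to-price dictionary format here is my own hypothetical stand-in (the real Keyword Planner export needs parsing first, and punctuation handling is messier in practice):

```python
def reprice_poem(poem_text, prices):
    """Walk the poem in its original narrative order and
    re-attach each word's 'suggested bid' price. Words the
    planner returned no price for default to 0.0."""
    priced_lines = []
    for line in poem_text.splitlines():
        priced_lines.append([
            (word, prices.get(word.strip(".,;:!?'\"").lower(), 0.0))
            for word in line.split()
        ])
    return priced_lines

# hypothetical planner output for a line of Wordsworth
prices = {"cloud": 4.73, "lonely": 1.10}
print(reprice_poem("I wandered lonely as a cloud", prices))
```

The point of keeping the loop in poem order, rather than the planner's frequency order, is exactly the point of the receipt: the words come back out in the shape the poet put them in.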

Once I had a spreadsheet with the poem back in narrative list order, and with the corresponding ‘price’ of each word – including duplicates – I added up the total cost of the poem and then created a template which mirrored a paper receipt.

This first attempt revealed several really interesting points which not only illustrate what I am trying to examine and expose in my thesis, but also gave me ideas about how I could use the project as a quantitative method of gathering data and as a creative practice and artistic intervention. A section of my thesis examines how the decontextualisation of words in the searchable database leads to anomalies in search results and autopredictions which not only reflect, but therefore also perpetuate, stereotypical, sexualised, racist or sexist results. The words of the poem on the receipt have likewise been taken out of context, and are instead valued by how well they would do in the context of an advert. Their repeated use by advertisers and confirmatory clicks by users will also presumably increase their frequency within the wider database.

“the cost of a word to Google relates to the size and wealth of the industry it plays a part in advertising”

Once I had run a few more poems through this process I started to realise that words relating to health, technology, litigation and finance were particularly expensive. In At the Bomb Testing Site, I was initially puzzled as to why the word ‘it’ costs £1.96, which seemed disproportionate compared to other words. I then realised that, to Google, the word is ‘IT’ (as in information technology) – hence its price.

In Wordsworth’s Daffodils, the words ‘cloud’, ‘crowd’ and ‘host’ are expensive not because of their poetic merit or aesthetic imaginings, but because of ‘cloud computing’, ‘crowd-sourcing/funding’ and ‘website hosting’. Wilfred Owen’s Dulce et Decorum Est revealed that medical words such as ‘cancer’, ‘fatigue’ and ‘deaf’ had relatively high suggested bid prices, while ‘economical’, ‘accident’ and ‘broken’ in Anne Carson’s Essay on What I Think About Most are all over £5.00 per click, and the suggested bid for the word ‘claim’ is £18.10. Perhaps unsurprisingly, it seems the cost of a word to Google relates to the size and wealth of the industry it plays a part in advertising.

But as well as pricing individual words and phrases, Google’s Keyword Planner also tries to second-guess what you are trying to advertise by the words you enter. In the case of At the Bomb Testing Site, the Keyword Planner thought I was either trying to advertise road biking (presumably the words curve, road, panting, tense, elbows, hands and desert suggested this), or some kind of life coaching or career management service, which was prompted by the phrase in the poem ‘ready for change’. Put a question mark on the end of that phrase and it becomes a highly profitable key-phrase in an advert. Similarly, the high price of the word ‘o’er’ in Daffodils is explained in the context of OER (Open Educational Resources). The AdWords planner also suggested I might be trying to market a product relating to Game of Thrones due to the Rains of Castamere song in which ‘the rains weep o’er his hall’.

As I played around with the receipt template, I added a CRC32 checksum hash value to the receipt as an ‘authorisation code’. A checksum is a mathematical blueprint of a piece of text which is generated to ensure that a transmitted text is not altered. The sender sends the checksum with the text and the recipient generates the same checksum to make sure it has stayed the same in transit. Using this as an authorisation code on the poem receipt therefore indicates that when protected by code or encrypted, the poem retains its integrity, but when it is decoded, it becomes subject to the laws of the market – as shown on the receipt itself. I also added N/A to the tax field as a little dig at Google’s tax situation in the UK.
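The checksum step itself is a one-liner in Python’s standard library. Roughly, the ‘authorisation code’ can be generated like this (a sketch: the exact text and formatting on my receipts differ):

```python
import zlib

poem = "At the Bomb Testing Site\nWilliam Stafford"

# CRC32 'blueprint' of the text, printed as the authorisation code
auth_code = zlib.crc32(poem.encode("utf-8"))
print(f"{auth_code:08X}")

# any alteration to the text produces a different checksum,
# so a recipient regenerating it can detect the change in transit
tampered = zlib.crc32(poem.replace("Bomb", "Ad").encode("utf-8"))
print(auth_code == tampered)  # False
```

Note that CRC32 only detects accidental alteration; it is not a cryptographic guarantee of integrity, which suits its symbolic role on the receipt well enough.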

But the more poems and texts I analysed in this way, the more I began to suspect that there is something interesting to be learnt from understanding the geographical, political and cultural logics which might dictate the economic forces which apparently mediate and control this linguistic marketplace. I ran words such as ‘trump’, ‘war’ and ‘blair’ through the keyword planner over a period of two weeks and noticed how the suggested bid prices fluctuated, despite them not being what you might assume to be very ‘marketable’ words. The keyword planner also allows the user to target their campaign by location, so I could then measure the ‘value’ of war, for example, in the US and in the UK, and even down to tiny areas such as Egham, and I could record these values over a period of time to see how key national and international events might influence word prices.
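The recording side of this is straightforward. A sketch of the longitudinal log might look like the following, where the `record_bid` helper and the CSV layout are my own hypothetical format, with suggested bids read off the Keyword Planner by hand (the bid values shown are invented):

```python
import csv
from datetime import date

def record_bid(word, location, suggested_bid, path="bids.csv"):
    """Append one dated observation, so the fluctuating
    'value' of a word can be tracked over time and place."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), word, location, suggested_bid])

# one day's observations for the same word in two locations
record_bid("war", "UK", 1.12)
record_bid("war", "Egham", 0.48)
```

Run daily, a log like this is enough to plot how a word’s price moves around events such as an election or the release of a report.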


As well as recording the fluctuations of specific words and names, I am keen to capture the changing uses and values of groups of words based loosely around a theme, and have decided that continuing to use poetry is the best (and most apt) way to do this. So I have selected a series of poems which are somewhat tangentially linked to events which are happening in the UK and world over the next few months such as the Olympics, the EU Referendum, the release of the Chilcot report and the US Presidential election. Gathering this data over the next few months will enable me to conduct a quantitative longitudinal study into the geopolitical and cultural influences which shape linguistic capitalism, and therefore potentially also the composition and weighting of the wider linguistic discourse.

But apart from the quantitative side to this project (which can be harvested in data spreadsheet form), I want to use the output of the receipt as an artistic intervention or critique to make the issues and politics around linguistic capitalism and the way Google treats language more visible and accessible. If there is a politics lurking within the algorithmic hierarchies and logic of the search engine industry (which I believe there is), then it is a politics hidden by the sheer ubiquity and in some way the aesthetics of the Google empire. My thesis is based loosely around Walter Benjamin’s Work of Art in the Age of Mechanical Reproduction essay, and as such, views the various ways in which Google controls, manipulates and exploits data (linguistic and otherwise) under the guise of ‘free’ tools and accessories as a kind of aestheticisation of politics. Following Benjamin, therefore, the final chapter of my thesis will examine ways of turning this power back around, and ‘making art political’, or more specifically to this project, reclaiming language as art.

I hope to be able to speak to and engage with various academics and artists who have attempted ‘Google Poetics’, AdWords ‘happenings’ (Christophe Bruno: 2002) or creative resistance (Mahnke & Uprichard: 2014), (Cabell & Huff: 2010), (Feuz, Fuller & Stalder: 2011) to explore the difficulties and successes of working within or outside the Google framework to produce interventions. It is in this chapter that I also want to use {poem}.py as my own artistic intervention and act of political art. I am aware that I am in effect mixing quantitative data gathering with qualitative methods and creative practice here, which is something I need to think through.


As I mentioned in my previous post, last week I co-organised a workshop on Living with Algorithms which aimed to let participants be creative and provocative in thinking about everyday life and algorithms. For my own contribution to the workshop I asked participants to send me their favourite poems in advance. I then bought a second-hand receipt printer and set about monetising their poems so I was able to print them off for them ‘live’ during my presentation at the workshop. At some stage I would like to use this group of poems to form the basis of an actual art exhibition, but this method has also proved really helpful in terms of beginning to answer some of the questions I asked at the beginning of this post. Because I didn’t tell the participants why I wanted them to send me a poem, some of them were only available in formats which unintentionally resisted the process of commodification, so I had no option but to print out VOID receipts for two of them. The first was an amazing spoken word poem called Bog Eye Man, by Jemima Foxtrot, which is only accessible on YouTube or Vimeo. As the actual text of the poem does not appear on the web, I was unable to ‘scrape’ it. The other poem was contained within a JPEG file from which I could not copy and paste. These two examples show how we might begin to envisage a way to maintain the integrity of poetry in a digital age dominated by linguistic or semantic capitalism; the example of the spoken word poem in particular harks back to Benjamin’s description of the loss of aura when a work of art becomes ‘reproducible’. For the time being, Bog Eye Man remains resolutely unmonetised (at least until spoken data starts being algorithmically scraped, anyway…) and retains, as Benjamin wrote, ‘its presence in time and space’.

But back to the poster, where all this started. This isn’t the one I’ll be presenting at the ISG open day – it doesn’t conform to the strict template and colour scheme – but is one I made for the Humanities and Arts Research Centre poster competition, which is a bit more aesthetically pleasing…


Living with Algorithms workshop

Mike Duggan and I have convened a workshop which will take place in London tomorrow (9th June 2016) on the subject of Living with Algorithms. A couple of people have been unable to make it at short notice, which is a huge shame, but it now gives me the opportunity to present and get feedback on a new project I’ve been working on called {poem}.py.

I will blog about it more after the workshop… I don’t want to spoil the surprise! Outline and final program is as follows – it looks like it will be a really good day…

The Living with Algorithms workshop is sponsored by the RHUL Centre for Doctoral Training in Cyber Security (CDT) and the Humanities and Arts Research Centre (HARC)

Workshop outline
It is clear that the spatial practices and experiences of the everyday are increasingly produced as configurations in which algorithms play a major part (Pasquale, 2015). Algorithms now permeate our daily lives in a huge variety of ways; from how we move, socialise, exchange money and goods, to how we engage politically, and even how we experience the world from the position of our embodied and corporeal selves. Amongst scholars from across the disciplines, Geographers have paid particular attention to the co-constitution of digital technologies and spatial practice (see Kitchin & Dodge, 2011; Leszczynski, 2015; Thrift & French, 2002) through detailing how algorithms, code and software increasingly come together to produce the spaces of everyday life. Yet there has been a lack of empirical attention to how this nexus is lived or experienced from the perspective of those living with it. As this field continues to develop we suggest that much more needs to be done here.
This workshop aims to bring together a series of short, provocative and critical papers of 5-10 minutes, which explore how everyday life, and the experiences of it, have been affected by the algorithms which increasingly come to produce them. In essence, we wish to use empirical examples to question:
–       What does it mean to live with algorithms in the context of everyday life?
–       In which ways do algorithms produce our daily practices?
–       What are the pressing concerns of algorithmic living, and why are they important now?
–       What is it specifically about algorithms that does work in the world, and how does this differ from the work of code, software and data?
In bringing together experts and doctoral students in this field for a round table discussion we seek to develop the notion that culture and technology are co-constituted in everyday practice by focusing specifically on the roles that algorithms play in everyday cultural practices. In the format of a day-long roundtable workshop based around a series of themes, we hope to begin to answer some of these questions.

Session One
Introduction from Mike Duggan
Pip Thornton, {poem}.py : A critique of linguistic capitalism
Sam Kinsley, An algorithmic imaginary: anticipation and stupidity
Philip Garnett, Vectorising the human
Andreas Haggman, In defence of imperfection

Session Two
Kui Kihoro Mackay, Black Twitter and Becky with the good
Olga Goriunova, The algorithmic production of the visual common
Carl Anthony Bonner-Thompson, No camp, no fem: masculinities,
sexualities and embodiment across Grindr
Lee Mackinnon, Emotion, emoticon, calculability
Andrew Dwyer, The kiss of death: an algorithmic curse

Session Three
James Ash, Digital interfaces and debt: algorithms and mobile
Clancy Wilmott, From coordinates to code: algorithms in everyday
mobile mapping practices
John Morris, Are my savings safer under the mattress? what do
algorithms tell me about the health of my bank?
Nat O’Grady, Technologising techniques of emergency governance
Sam Hind, Crypto-cartography and the pragmatics of forgetting

Feminist perspectives on global politics, in poems

I’ve been experimenting with a new method for my research which involves poetry. This is excellent – we need more poetry in politics!

feminist academic collective

Tiina Vaittinen & Saara Särmä

We have just finished teaching a course on feminist perspectives on global politics at the University of Tampere, with an international group of students with different disciplinary backgrounds. During the course, we introduced the students to a wide range of readings on feminist IR, and towards the end of the course Saara gave them a creative assignment, originally picked up from Elina Penttinen’s pedagogical tools. The results, based on the students’ readings of some of the contributors and/or readers of this blog, were so amazing that we want to share the work with you.

Here is the assignment that was given to the class:

1. Choose any text from the course moodle
2. Read it carefully
3. Construct a poem using only words in the text

The poem can be any length, but should capture the essence of the original text (the main argument etc.), write by hand or…
