reclaiming poetry from the algorithmic marketplace

I’ve been playing around with some ideas for my upcoming presentation at the Research in Real-Time – Practice in Progress conference at NUI Galway next month. I’ll be using some material from the Engineering Fictions workshop I did with CONNECT writer in residence Jessica Foley in Dublin in February, where we experimented with writing the most expensive and cheapest love poems using Google AdWords suggested bid prices. We then sent the poems to each other via Gmail in order to ‘expose’ the words to the algorithms, before processing them through the AdWords keyword planner and the {poem}.py code, and printing them out on a receipt.

Ode-Love: Winner of the cheapest love poem prize (photo: Jessica Foley)

The workshop was great fun and also extremely interesting methodologically. By creating poetry manually, and in the firm knowledge that the words which pass through Google’s advertising and search platforms are always already infused with algorithmically and economically mediated ‘values’, it felt like we were able to reclaim some of the artistic agency from the algorithms that increasingly second-guess our linguistic intentions. We were able to second-guess the second-guessers, and it felt really liberating! Jessica and I plan to write the experience up in full very soon.

In the meantime, I have returned to the poem that started this whole project off – my favourite poem – William Stafford’s At the Bomb Testing Site (1960), which was the first poem I ‘valued’ as part of my {poem}.py project. When I first ran the poem through AdWords last year, I used the Ad Groups function of the keyword planner to try to reverse engineer the logic of the bid prices Google suggests for each word, and for the poem as a whole. This feature suggests other keywords, phrases and topics which might help enhance your advertising campaign, but it also provides a glimpse into what the algorithms ‘think’ you are trying to advertise, and as such is a fascinating insight into what words ‘mean’ to Google. Last year, references in Stafford’s poem to a curved desert road, hands gripping and tense elbows generated suggestions that I was trying to advertise road biking. The phrase ‘ready for change’ had the algorithms thinking I was planning a well-being or recruitment campaign.


This year, the road biking suggestions have gone, but other even more fascinating semantic assumptions have appeared. To Google, at this moment in time, At the Bomb Testing Site conjures up Carl Jung, gastric bands and Californian Republicans. So in the spirit of reclaiming the poetry from the algorithmic marketplace, I decided to rewrite Stafford’s poem using only the suggested advertising categories and potential related search queries offered to me by Google when I put the poem through the keyword planner. With apologies to the estate of William Stafford, here is the result:

At the Bomb Testing Site (2017)

By Google AdWords (after William Stafford)

I’m feeling stuck.

Atomic trinity:

anger, depression, ego

and archetype elbow pain after fall.

California republican

delegates latest nuclear test.

Popeye syndrome.

Who invented the hydrogen bomb?

Carl Jung’s shadow?

I don’t like myself.

Business goals,

data entry jobs,

weight loss surgery in mexico.

I am ready to change my life

Self referral mental health

define psyche.

Inner self crossword clue.

Feel joy! Wellbeing,

core beliefs,

gastric bypass,

bikini island.

Ready steady:

be yourself.



Politics, Poetry and Google: The Value of Words in an Age of Linguistic Capitalism

An alternative angle to the fake news and Google Ads debate. It’s more poetic, but by no means less political.

“Control of language equals control of thought, and it is private capital and tech companies who are at those controls.”



I recently published a short piece on Medium about the similarities between language in the age of Google and Orwell’s Newspeak in 1984. It uses Orwell’s text to critique linguistic capitalism and the political power of language, and imagines the rise of Google as a neoliberal thought police.

The full article is here:  Politics, Poetry and Google



SUBPRIME LANGUAGE: The Precarious Value of Words in an Age of Linguistic Capitalism, Digital Advertising and #FakeNews

As the value of words shifts from conveyor of meaning to conveyor of capital, has Google become an all powerful usurer of language, and if so, how long before the linguistic bubble bursts?

I’m giving a talk at Trinity College Dublin next week as part of the CONNECT centre and Engineering Fictions. I’ll be using a lot of the material from the talk I gave at NUIG a couple of weeks ago, but I also want to try out some of the new ideas I’ve been developing around the idea of subprime language and linguistic liquidity. Below is an extended abstract/intro for the new stuff. It is work in progress – any thoughts are welcome…. I hope also to develop these ideas at the AAG in Boston and at the RGS-IBG in London later this year. 

As tech companies such as Google increasingly mediate and monetise the informational landscape through search and advertising platforms such as AdWords and AdSense, the ongoing effects on and of the language they auction, sell and exploit are becoming more and more palpable. In the viral spreading of fake news and political click-bait, and in the daily battles for exposure, it seems that words are being lent against a narrative so tenuous as to make their linguistic function negligible. Infused with a neoliberal logic which favours advertising dollars over truth and the systemic bias of algorithmic processing, the discursive side-effects of this semantic shift reveal a deep-rooted weakness in the linguistic marketplace which reaches far beyond the linguistic sphere and into the political, with powerful and potentially devastating consequences. Were it not for an overriding metanarrative of neoliberal logic, this evolution in the ontology of digital language may seem like an obvious manifestation of the postmodern condition. But as the value of words shifts from conveyor of meaning to conveyor of capital, should we be thinking of Google as the all powerful usurer of language, and if so, how long before the linguistic bubble bursts?

In this paper I set out some recent thoughts about the idea of subprime language – asking questions such as how much and how often language can be bought, sold or ‘borrowed’ before it becomes exhausted of meaning and restrictive of expression and understanding. How resilient is language to a quasi-capitalist operating system, and what happens if/when linguistic capitalism crashes? And finally, knowing the historical and cultural power that control of language can have, the fragility and unpredictability of the economic system which now seems to underpin it, and with a growing awareness of the power wielded by technology companies such as Google, should we not be more aware of the potential dangers in these techno-linguistic shifts?

In recent weeks the fake news debate has been evoking numerous references to Newspeak, the language of thought control and state propaganda employed to further the ideology and control of English Socialism (INGSOC) in George Orwell’s 1984. It is an interesting analogy, but I think rather than a straightforward comparison to the misinformation and alternative facts seemingly employed during the Trump campaign, there are deeper problems within today’s informational infrastructure that a more thorough reading of Orwell’s text draws out. Firstly, there is the assumption in Newspeak that “thought is dependent on words”, a somewhat problematic yet entirely relevant causal linkage when it comes to debates about search results, auto-predictions, filter bubbles and algorithmically generated social media newsfeeds, which can be instrumental in the cultivation of extreme views and hate crime.

The second issue concerns the limitations and restrictions of language that are so important to the idea of Newspeak, a language which “differed from most other languages in that its vocabulary grew smaller instead of larger every year”. We can see echoes of this in the shrinking of the creative vocabulary of digital language in favour of words which might be cheaper, easier to find, or more alluring either to algorithms or to human readers.

The third point I want to explore combines the first two – i.e. that words have a real effect on how we think, yet the way information flows through digital spaces encourages the shrinking of our online vocabulary and discourages non-normative language – and complicates this already worrying formula with an overriding motive not of state political control (as in Orwell’s dystopia), but of private capital gain (as in advertisers and tech/media companies). In the digital networks of information and communication we have created, the potential for political control often comes as a side effect of the economic incentive, or as a manipulation of a system which allows language, and therefore thought, to be so dependent on and subject to a neoliberal logic which is itself so precariously mediated by algorithmic systems and networks.



PODCAST: Pip Thornton – Critiquing linguistic capitalism, Google’s ad empire, fake news and poetry

Algocracy and the Transhumanist Project


My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end, and I will shortly be returning to Royal Holloway to finish writing up my PhD. I have really enjoyed the five months I have spent here in Galway – I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university.

I have also had the opportunity to present my own research to a wide audience and most recently gave a talk on behalf of the Technology and Governance research cluster entitled A Critique of Linguistic Capitalism (and an artistic intervention) as part of a seminar series organised by the Whitaker Institute’s Ideas Forum, which…

View original post 513 more words

Talk at NUIG 25th Jan – Linguistic Capitalism – technology & governance research cluster


I’m giving a talk at NUI Galway on Wednesday 25th January as part of the Whitaker Institute Ideas Forum seminar series.

It will be an explanation and exploration of all things Linguistic Capitalism, with a demonstration of my {poem}.py  project, as well as some new ideas about the role of Google advertising in the fake news debate.

Most exciting of all is a guest appearance from Galway poet Rita Ann Higgins who will be reading some of her poem Killer City, to help illustrate the talk.

Being Human | Human Being: a panel discussion of Ex Machina

Ex Machina panel: if Ava was trained on search data, how come she doesn’t try to sell Nathan the pair of trainers he googled months ago? And other insights….

Algocracy and the Transhumanist Project


Back in March I co-curated a Passengerfilms event in London which used Alex Garland’s 2015 film Ex Machina to provoke a panel discussion about what it means to ‘be human’ in a world in which the digitally -or algorithmically – processed ‘virtual’ is increasingly experienced in the actualities of everyday life. I wrote a post on my own blog about the event at the time, but have now had the chance to edit the audio recording of the panel discussion, which features thoughts on the film and on the wider discourse from John Danaher (NUI Galway) and myself, as well as Lee Mackinnon (Arts University, Bournemouth), Oli Mould (Royal Holloway) and Mike Duggan (Royal Holloway).

We held the event in the downstairs area of The Book Club, an East London club venue, so some of the audio is accompanied by a booming bassline from the upstairs bar. I have tried…

View original post 314 more words

A Critique of Linguistic Capitalism: a short podcast from Pip Thornton

Algocracy and the Transhumanist Project

I started work as the research assistant on the Algocracy and Transhumanism project in September, and John has invited me to record a short podcast about some of my own PhD research on Language in the Age of Algorithmic Reproduction. You can download the podcast here or listen below.

The podcast relates to a project called {poem}.py, which is explained in greater detail here on my blog. The project involves making visible the workings of linguistic capitalism by printing out receipts for poetry which has been passed through Google’s advertising platform AdWords.


I have presented the project twice now – each time asking fellow presenters for their favourite poem or lyric which I can then process through the Keyword planner and print out on a receipt printer for them to take home. I often get asked what is the most expensive poem, and of course it depends on…

View original post 130 more words

NEWS | Curating (in)security at AAG 2017

Great write-up from Nick Robinson in anticipation of our AAG2017 sessions

Skyline of downtown Boston from the pier

Every year, nearly 10,000 academics converge on one particular U.S. city in the name of all things geography – Boston, Massachusetts being the location of choice for the annual AAG (American Association of Geographers) conference in April 2017.

With a vast array of potential sessions, panels and presentations – the AAG has something for everyone: from Geographies of Bread and Water in the 21st Century to subjects pertaining to aspects of Physical Geography, Geopolitics, and even Cyber Infrastructure!

Visiting the AAG has long been a personal ambition of mine since beginning my own undergraduate degree, and this year finally presents an opportunity after my paper (and preliminary thesis title) – “How to Backup your Files Nation-State in a Digital Era: The Estonian Data Embassy” – was accepted onto a fantastic looking double-session titled: Curating (in)security: Unsettling Geographies of Cyberspace. (see…

View original post 368 more words

Curating (in)security: Unsettling Geographies of Cyberspace CfP AAG 2017

Curating (in)security: Unsettling Geographies of Cyberspace
Call for Papers
AAG 2017 Boston (April 5-9, 2017)

In calling for the unsettling of current theorisation and practice, this session intends to initiate an exploration of the contributions geography can bring to cybersecurity and space. This is an attempt to move away from the dominant discourses around conflict and state prevalent in international relations, politics, computer science and security/war studies. As a collective, we believe geography can embrace alternative perspectives on cyber (in)securities that challenge the often masculinist and populist narratives of our daily lives. Thus far, there has been limited direct engagement with cybersecurity within geographical debates, apart from ‘cyberwar’ (Kaiser, 2015; Warf 2015), privacy (Amoore, 2014), or without recourse to examining this from the algorithmic or code perspective (Kitchin & Dodge, 2011; Crampton, 2015).

As geographers, we are ideally placed to question the discourses that drive the spatio-temporal challenges made manifest through cyber (in)securities in the early 21st century. This session attempts to provoke alternative ways we can engage with and resist the mediation of our collective technological encounters, exploring what a research agenda for geography in this field might look like, why we should get involved, and pushing questions in potentially unsettling directions. This session therefore seeks to explore the curative restrictions and potentials that exude from political engagement, commercial/economic interests, neoliberal control and statist interventions. The intention is not to reproduce existing modes of discourse, but to stimulate creative and radical enquiry, reclaiming curation from those in positions of power not only in terms of control, but by means of restorative invention.

We intend to have an interactive and lively discussion that we hope will be productive for a growing field of inquiry between disciplines. In light of this, potential contributions could combine or exceed those outlined below:

·         Algorithms and algorithmic governance
·         Alternative theories of space / cyberspace / cybersecurity
·         Artistic interventions / performances
·         Big data
·         Cyber / digital finance
·         Disciplinarity and knowledge production
·         Hackers and activism
·         Human-Computer Interaction (HCI)
·         Materiality and virtuality
·         More-than-human agency
·         Networks
·         Power and resistance
·         Precarity, affect and vulnerability
·         Privacy and surveillance
·         Surveillance and encryption

Session Guide

To submit a contribution, please contact one of the panel organisers. Abstracts should be no longer than 200 words and should be submitted by October 7th 2016.

Panel Organisers
Andrew Dwyer (University of Oxford, UK)

Pip Thornton (Royal Holloway, University of London, UK)

In addition, if you wish to offer contributions that are not in a conventional lecture mode, please provide a brief description of what your output intends to be in addition to the 200 word abstract.

{poem}.py : a critique of linguistic capitalism

How much does poetry cost? What is the worth of language in a digital age? Is quality measured on literary value or exchange value, the beauty of hand-crafted, hard-wrung words, or how many click-throughs those (key)words can attract and how much money they earn the company who sells them? But haven’t words always been sold? As soon as they were written down, words became moveable and transferable, entering the marketplace and then, necessarily, the political sphere. But these words gained an exchange value as integral parts of a text – a story, a poem, a book, for example. Removing or reordering these individual words – or ranking them based on external influences – would change the meaning and devalue the text as a whole, in both a literary and a monetary way. Can language retain its integrity once it becomes part of the digital economy? Is there even such a thing as the ‘integrity of language’? Certainly the words Google auctions off have referential values unanchored to narrative context, and it is this new context and the politics surrounding it that I am attempting to examine and expose in my new project, which I have called {poem}.py.


The project started out when I was required to provide a poster for the Information Security Group (ISG) Open Day at Royal Holloway later this month, which always makes me nervous as – unlike most of my PhD contemporaries in the Cyber Security CDT – I don’t have a load of mathematical formulas, graphs and data to fill out the required poster template. So as I was thinking about how to represent and explain my PhD topic to an audience of cryptographers, mathematicians and computer scientists, I decided to see how much my favourite poem ‘cost’ if I put all the words through the Google AdWords Keyword Planner and outputted the results on a mock-up of a receipt – which I thought might look nice on a poster. In this way I discovered that, at 4:39 PM on 7th May 2016, my favourite poem At the Bomb Testing Site by William Stafford cost the princely sum of £45.88 (before tax).

To explain the logic behind this – the keyword planner is the free tool Google AdWords provides advertisers so they can plan their budgets and decide how much to bid for a particular keyword or key phrase to use in their advert. Google gives a ‘suggested bid’ price for each word, giving an advertiser some idea how much they will have to spend to win the mini auction which is triggered each time someone searches for that keyword. When an advertiser wins the auction, their advert will appear as a ‘paid’ (as opposed to organic) search result right at the top (and now at the bottom too) of the rankings, with the small yellow ‘Ad’ box next to it. The advertiser then pays the winning bid (which, like eBay, will be one penny/cent above the second highest bid) each time someone clicks on their advert. Phrases such as ‘cheap laptop’ or ‘car insurance’ can cost as much as £50 per click. This is the basis of how Google makes its money, a form of ‘linguistic capitalism’ (Kaplan: 2014) or ‘semantic capitalism’ (Feuz, Fuller & Stalder: 2011) in which the contextual or linguistic value of language is negated in favour of its exchange value.
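The second-price mechanism described above can be sketched in a few lines of Python. This is a deliberate simplification for illustration only – real AdWords auctions also weight bids by ad quality scores, which I am ignoring here:

```python
def second_price_auction(bids):
    """Given {advertiser: bid}, return the winner and the price they pay:
    one penny above the second-highest bid (a simplified AdWords model)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    runner_up_bid = ranked[1][1] if len(ranked) > 1 else 0.0
    # the winner never pays their own bid, only just enough to beat the runner-up
    price = min(top_bid, runner_up_bid + 0.01)
    return winner, round(price, 2)

winner, price = second_price_auction({"A": 1.50, "B": 0.90, "C": 0.40})
# "A" wins but pays 0.91 -- just above B's bid, not their own 1.50
```

The design point this captures is why suggested bids hover near what the market will bear: bidding your true valuation is safe, because you only ever pay a penny more than the next-highest bidder.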

One of the first problems I encountered with this method was that once I had fed the words of a poem through the keyword planner I then had to put them back into their narrative order to make the receipt ‘readable’ as a downward list, as Google churns the words back out according to their frequency of search rather than in their original order. With my test poem, I had to order the words back into the shape of the poem manually, which was time-consuming and fiddly. I have since been working with CDT colleagues Ben Curtis and Giovanni Cherubin using Python code to automate this process. This union of poetry and code is where the project title {poem}.py comes from – .py being the file extension for Python.
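A minimal sketch of that reordering step might look something like this – illustrative only, not the actual {poem}.py code, and using made-up prices for a made-up fragment of text:

```python
def restore_order(poem_words, planner_prices):
    """Re-attach the planner's (unordered) suggested bids to each word
    in its original narrative position. Duplicate words each carry the
    same price, so every occurrence appears on the receipt."""
    return [(w, planner_prices.get(w.lower(), 0.0)) for w in poem_words]

# hypothetical suggested bids, as the planner might return them (unordered)
prices = {"the": 0.01, "road": 0.42, "curved": 0.15, "waited": 0.08}
receipt = restore_order("The road curved and the road waited".split(), prices)
total = sum(price for _, price in receipt)  # duplicates counted, as on a till receipt
```

Walking the poem in its own order, rather than iterating over the planner’s output, is what keeps the receipt readable as a downward list.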

Once I had a spreadsheet with the poem back in narrative list order, and with the corresponding ‘price’ of each word – including duplicates – I added up the total cost of the poem and then created a template which mirrored a paper receipt.

This first attempt revealed several really interesting points which not only illustrate what I am trying to examine and expose in my thesis, but also gave me ideas about how I could use the project as a quantitative method of gathering data and as a creative practice and artistic intervention. A section of my thesis examines how the decontextualisation of words in the searchable database leads to anomalies in search results and auto-predictions which not only reflect, but therefore also perpetuate, stereotypical, sexualised, racist or sexist results. The words of the poem on the receipt have likewise been taken out of context, and are instead imagined in terms of how well they will do in the context of an advert. Their repeated use by advertisers and confirmatory clicks by users will also presumably increase their frequency within the wider database.

“the cost of a word to Google relates to the size and wealth of the industry it plays a part in advertising”

Once I had run a few more poems through this process I started to realise that words relating to health, technology, litigation and finance were particularly expensive. In At the Bomb Testing Site, I was initially puzzled as to why the word ‘it’ costs £1.96, which seemed disproportionate compared to other words. I then realised that, to Google, the word is ‘IT’ (as in information technology) – hence its price.

In Wordsworth’s Daffodils, the words ‘cloud’, ‘crowd’ and ‘host’ are expensive not because of their poetic merit or aesthetic imaginings, but because of ‘cloud computing’, ‘crowd-sourcing/funding’ and ‘website hosting’. Wilfred Owen’s Dulce et Decorum Est revealed that medical words such as ‘cancer’, ‘fatigue’ and ‘deaf’ had relatively high suggested bid prices, while ‘economical’, ‘accident’ and ‘broken’, in Anne Carson’s Essay on What I Think About Most, are all over £5.00 per click, and the suggested bid for the word ‘claim’ is £18.10. Perhaps unsurprisingly, it seems the cost of a word to Google relates to the size and wealth of the industry it plays a part in advertising.

But as well as pricing individual words and phrases, Google’s Keyword Planner also tries to second-guess what you are trying to advertise from the words you enter. In the case of At the Bomb Testing Site, the Keyword Planner thought I was either trying to advertise road biking (presumably the words curve, road, panting, tense, elbows, hands and desert suggested this), or some kind of life coaching or career management service, prompted by the phrase in the poem ‘ready for change’. Put a question mark on the end of that phrase and it becomes a highly profitable key-phrase in an advert. Similarly, the high price of the word ‘o’er’ in Daffodils is explained in the context of OER (Open Educational Resources). The AdWords planner also suggested I might be trying to market a product relating to Game of Thrones, due to the Rains of Castamere song in which ‘the rains weep o’er his hall’.

As I played around with the receipt template, I added a CRC32 checksum hash value to the receipt as an ‘authorisation code’. A checksum is a mathematical fingerprint of a piece of text, generated to ensure that a transmitted text has not been altered. The sender sends the checksum with the text and the recipient generates the same checksum to make sure it has stayed the same in transit. Using this as an authorisation code on the poem receipt therefore indicates that when protected by code or encrypted, the poem retains its integrity, but once decoded it is subject to the laws of the market – as shown on the receipt itself. I also added N/A to the tax field as a little dig at Google’s tax situation in the UK.
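For reference, generating such an authorisation code takes only a couple of lines of Python’s standard library. This is a sketch of the idea, assuming the checksum is taken over the full text of the poem:

```python
import zlib

poem_text = "A sample poem would go here"  # stand-in for the full poem text

# CRC32 of the UTF-8 bytes, masked to an unsigned value and rendered
# as an 8-hex-digit 'authorisation code' for the receipt
auth_code = format(zlib.crc32(poem_text.encode("utf-8")) & 0xFFFFFFFF, "08X")

# any change to the text changes the code, so alteration is detectable --
# though unlike a cryptographic hash, CRC32 offers no protection against
# deliberate tampering, only accidental corruption
```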

But the more poems and texts I analysed in this way, the more I began to suspect that there is something interesting to be learnt from understanding the geographical, political and cultural logics which might dictate the economic forces that apparently mediate and control this linguistic marketplace. I ran words such as ‘trump’, ‘war’ and ‘blair’ through the keyword planner over a period of two weeks and noticed how the suggested bid prices fluctuated, despite them not being what you might assume to be very ‘marketable’ words. The keyword planner also allows the user to target their campaign by location, so I could then measure the ‘value’ of war, for example, in the US and in the UK, and even down to tiny areas such as Egham, and I could record these values over a period of time to see how key national and international events might influence word prices.
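Recording those fluctuations is straightforward to automate. A minimal sketch, assuming a get_suggested_bid(word) helper (hypothetical here) that wraps whatever returns the planner’s price:

```python
import csv
from datetime import datetime, timezone

def log_bids(words, get_suggested_bid, path="bids.csv"):
    """Append one timestamped row per word, building up a time series
    of suggested bid prices that can be analysed later."""
    timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for word in words:
            writer.writerow([timestamp, word, get_suggested_bid(word)])

# stand-in for a real planner lookup, with invented prices
fake_planner = {"war": 2.10, "trump": 0.35, "blair": 0.12}.get
log_bids(["war", "trump", "blair"], fake_planner)
```

Run on a schedule (a daily cron job, say), and with location targeting folded into the lookup, this would yield exactly the kind of longitudinal spreadsheet described above.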


As well as recording the fluctuations of specific words and names, I am keen to capture the changing uses and values of groups of words based loosely around a theme, and have decided that continuing to use poetry is the best (and most apt) way to do this. So I have selected a series of poems which are somewhat tangentially linked to events which are happening in the UK and world over the next few months such as the Olympics, the EU Referendum, the release of the Chilcot report and the US Presidential election. Gathering this data over the next few months will enable me to conduct a quantitative longitudinal study into the geopolitical and cultural influences which shape linguistic capitalism, and therefore potentially also the composition and weighting of the wider linguistic discourse.

But apart from the quantitative side to this project (which can be harvested in data spreadsheet form), I want to use the output of the receipt as an artistic intervention or critique to make the issues and politics around linguistic capitalism and the way Google treats language more visible and accessible. If there is a politics lurking within the algorithmic hierarchies and logic of the search engine industry (which I believe there is), then it is a politics hidden by the sheer ubiquity and in some way the aesthetics of the Google empire. My thesis is based loosely around Walter Benjamin’s Work of Art in the Age of Mechanical Reproduction essay, and as such, views the various ways in which Google controls, manipulates and exploits data (linguistic and otherwise) under the guise of ‘free’ tools and accessories as a kind of aestheticisation of politics. Following Benjamin, therefore, the final chapter of my thesis will examine ways of turning this power back around, and ‘making art political’, or more specifically to this project, reclaiming language as art.

I hope to be able to speak to and engage with various academics and artists who have attempted ‘Google Poetics’, AdWords ‘happenings’ (Christophe Bruno: 2002) or creative resistance (Mahnke & Uprichard: 2014; Cabell & Huff: 2010; Feuz, Fuller & Stalder: 2011) to explore the difficulties and successes of working within or outside the Google framework to produce interventions. It is in this chapter that I also want to use {poem}.py as my own artistic intervention and act of political art. I am aware that I am in effect mixing quantitative data gathering with qualitative methods and creative practice here, which is something I need to think through.


As I mentioned in my previous post, last week I co-organised a workshop on Living with Algorithms which aimed to let participants be creative and provocative in thinking about everyday life and algorithms. For my own contribution to the workshop I asked participants to send me their favourite poems in advance. I then bought a second-hand receipt printer and set about monetising their poems so I was able to print them off for them ‘live’ during my presentation at the workshop. At some stage I would like to use this group of poems to form the basis of an actual art exhibition, but this method has also proved really helpful in terms of beginning to answer some of the questions I asked at the beginning of this post.

Because I didn’t tell the participants why I wanted them to send me a poem, some of the poems were only available in formats which unintentionally resisted the process of commodification, so I had no option but to print out VOID receipts for two of them. The first was an amazing spoken word poem called Bog Eye Man, by Jemima Foxtrot, which is only accessible on YouTube or Vimeo. As the actual text of the poem does not appear on the web, I was unable to ‘scrape’ it. The other poem was contained within a JPEG file from which I could not copy and paste. These two examples show how we might begin to envisage a way to maintain the integrity of poetry in a digital age dominated by linguistic or semantic capitalism; the example of the spoken word poem in particular harks back to Benjamin’s description of the loss of aura when a work of art becomes ‘reproducible’. For the time being, Bog Eye Man remains resolutely unmonetised (at least until spoken data starts being algorithmically scraped anyway…), and retains – as Benjamin wrote – ‘its presence in time and space’.

But back to the poster, where all this started. This isn’t the one I’ll be presenting at the ISG open day – it doesn’t conform to the strict template and colour scheme – but is one I made for the Humanities and Arts Research Centre poster competition, which is a bit more aesthetically pleasing…