Remixing Code, Resisting Control

For the past several weeks, I’ve been thinking about computer code and language as a means of control. For Alexander Galloway, code is a type of protocol, or a way of imposing control in communication. In Protocol, Galloway describes code as a language (yet to be officially recognized as such) that requires adherence to its standards in order to work. For Kenneth Goldsmith, code offers an opportunity to be creative. In Uncreative Writing, Goldsmith talks about remixing different kinds of “code,” such as the code from an image file with lines of poetry, to create a new image. I’m wondering how we can bring what Galloway says about control and restraint in code into conversation with Goldsmith’s presentation of creative uses of code.

Galloway shows how power structures, such as DNS and ISP, are instituted through code, which must conform to a certain standard in order to successfully communicate. For Galloway, resistance to this kind of control consists of finding loopholes or “exploits” in systems (what hackers do). But Goldsmith shows how resistance to standards can take a different route, how it can actually defy the requirements of protocol, by splicing the standard code with other kinds of code, or languages. This remixing is creative because it combines two different codes (such as poetry and computer code) to create something new.

On page 24 of Uncreative Writing, Goldsmith performs an experiment with an image of William Shakespeare. His experiment takes the textual code from the image and splices it with the text of a Shakespearean sonnet. The resulting .jpg file renders a jumbled image. I performed the same experiment with a picture of my family's Thanksgiving table. Here, I took the code from a .jpg file and spliced it with text (in this case, with an argument that my family had at the table when the picture was taken). The result looks like this:

[Image: the glitched Thanksgiving .jpg, as rendered by the computer]

The image shows two things: first, how the code doesn’t work, and second, how this failure nonetheless results in an image that is read and rendered by the computer. In this case, the remixing of code is both corruptive and creative. It shows that mixing different kinds of languages, such as English and computer code, successfully resists standards.
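For anyone who wants to reproduce the splice programmatically rather than by pasting text into a text editor, here is a minimal sketch in Python; the filenames, the byte offset, and the placeholder sentence are my own assumptions, not the exact procedure Goldsmith describes.

```python
# A minimal sketch of the glitch experiment: splice raw text into a JPEG's bytes.
# Filenames, the offset, and the injected text are placeholders.
def glitch_jpeg(src_path, dst_path, text, offset=2000):
    """Insert text into a JPEG partway through so it still opens, but glitched."""
    with open(src_path, "rb") as f:
        data = f.read()
    # Keep the first `offset` bytes (the header plus some image data) intact,
    # inject the text, then append the rest of the original bytes.
    glitched = data[:offset] + text.encode("utf-8") + data[offset:]
    with open(dst_path, "wb") as f:
        f.write(glitched)

glitch_jpeg("thanksgiving.jpg", "thanksgiving_glitched.jpg",
            "TEXT OF THE ARGUMENT GOES HERE")
```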

I realize that Galloway’s project is ultimately about communication, while Goldsmith’s is about creating something new from old materials. But it seems to me that we can see the two in the same light, as a resistance to the control of language through experimentation. Do you think this experiment changes how we view protocol? Is mixing different kinds of code analogous to Galloway’s “exploit”, or is it something else?

A Dearth of Dissidents

My bot, "Rhyme and Punishment" (@Crimeaandpun, political pun intended), posts verse from dissident poets every 15 minutes under the hashtag #Dissident. What is currently operating is a very imperfect and incomplete version of what I'd like to do.* My hope was to create a bot that, in addition to tweeting poetry, would retweet and respond to other people's tweets that included trigger words and/or the same hashtag. But I had so many technical difficulties getting this project up that I scaled back that idea and cheated by signing my account up for a free service that retweets posts sharing a common hashtag (#Dissident). I picked this hashtag because I assumed it would be fairly common these days. I chose poetry in translation by Marina Tsvetaeva, an anti-Bolshevik Russian writer from the early twentieth century; two contemporary Syrian poets, Amira Abul Husn and Housam Al-Mosilli; and a Chinese poet, Liu Xia, who is married to the dissident Liu Xiaobo. The face of Rhyme and Punishment is Vladimir Putin. It all seems a bit pretentious, I know, but I wanted to see what it would look like if I threw together writers who are or were persecuted by their respective governments for their politics with a contemporary political figure who has a reputation for doing just that, and who is widely condemned by most advocates of democracy (with the exception of our president-elect).
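For the curious, the fuller version I had in mind would look roughly like the sketch below. It is only a minimal sketch, assuming the tweepy library (version 3.x) and a plain-text file of poem lines; the credentials and filename are placeholders, and this is not the script actually running behind the account.

```python
# A minimal sketch of the fuller bot: tweet poetry every 15 minutes and
# retweet recent #Dissident posts. Credentials and filename are placeholders.
import random
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

with open("dissident_poems.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

while True:
    # Tweet a random line of dissident poetry with the hashtag.
    api.update_status(random.choice(lines)[:120] + " #Dissident")

    # Retweet a few recent posts that share the hashtag.
    for tweet in api.search(q="#Dissident", count=3):
        try:
            api.retweet(tweet.id)
        except tweepy.TweepError:
            pass  # skip tweets already retweeted or otherwise unavailable

    time.sleep(15 * 60)  # wait 15 minutes before the next round
```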

The other thing I did was have my bot follow a few organizations and media outlets that are popular with some of the more Right-leaning and/or Trumpian contingent on Twitter: the CIA, the FBI, Homeland Security, Ann Coulter, several Tea Party groups, the Daily Caller, Breitbart News, and even the Grand Old Party, which seems relatively benign among these others. Most of these entities are what I would personally consider on the "despotic" end of the ideological spectrum, and I wanted to see how many followers Rhyme and Punishment could attract from this pool of Twitterers. So far, four – all of which I am sure are bot-driven themselves, in order to attract more followers. (If they were individuals they would probably notice that R&P is not their political cup of tea.) So you could say that R&P is baiting a certain Twitter demographic as an adjunct to this experiment – what affinities trigger which followers?

Keeping a search open for all tweets with the hashtag #Dissident, as well as account names that include the word Dissident, yields surprising results. In the past three days, my bot has tweeted more of its own material with the hashtag than any other account on Twitter. I did not expect that. The poetry includes the hashtag at the end of every sentence, not every line, in order to create some discontinuity in the results of such a search. I set my tweets 15 minutes apart so that other #Dissident tweets would be more frequently interspersed, but even when they are spaced 30 or 45+ minutes apart, there has been no significant increase. Why is this such an unpopular hashtag at a time like this? It's interesting, nevertheless, to see what else is tweeted, and from which countries. Some of the tweets are ads. Most of the others are not from the U.S. I wonder, of course, if anyone has done a similar search and discovered Vladimir's tweets, but so far no one has responded. I would say that this experiment has yielded very mixed results. Then again, I didn't approach it with any assumptions about what would happen.

* I struggled for a very long time on my own (too long) to get a Twitterbot script to work before I was able to sit down with a Digital Fellow (fortunately, bot expert Patrick Smyth) and get some help. (Greg had also offered some suggestions via email, but I didn't have any luck.) Even with Patrick's help, I spent hours afterward working on it. I won't lie – the whole process has been extremely frustrating, not to mention distressingly time-consuming given everything else I have going on in my life. However, creating Twitterbots with more sophisticated functions is something that I'd like to continue doing from time to time, once I can master the coding.

A Sample-est Critique of My Own Twitterbot

In my earlier post, I began to reflect on the ethical, political, and theoretical limitations of the cloud text project I designed. By way of summary, I offer the following: my twitterbot retweets slightly altered tweets from the #NotNormal stream, so as to amplify a broad range of political messages associated with Anti-Trump sentiment and resistance. In my initial post, I expressed concerns about the ways in which my bot promulgated and perpetuated unvetted news links, thereby contributing to a larger problematic grounded in uncritical reading and reflection.

Sample's criteria for bots of conviction provide an additional framework for critique; specifically, he offers the following qualities by which bots can function politically and effectively:

  • Topical. According to this criterion, bots should not be about lost love or existential anguish; they should focus on the morning news, and on the daily horrors that fail to make it into the news. My bot was initially topical (I constructed its database from tweets collected over the course of two days), but since then the news cycle has moved on. Ideally, I would have a mechanism for continually scraping the Twitter feed to update my supporting database (see the sketch after this list).
  • Data-based. Here Sample articulates the importance of actual data and research. Mine transmits memes and other forms of predigested research, but does not reach back to the supporting data in responsible ways.
  • Cumulative. In Sample’s words, “it is the nature of bots to do the same thing over and over again, with only slight variation. Repetition with a difference.” The aggregation of these repetitions conveys rhetorical and political weight. In this sense, my twitterbot functions well – it highlights the repetitive nature of reductive political sentiment; it assaults one with the only slightly iterative nature of revision in twitter-based discourse, etc. Though I intended it to function in service of progressive politics, what manifests is an implicit critique of political discourse.
  • Oppositional. My bot takes a stand, which Sample argues is an important element of automatized protest, but that stand is a catch-all aggregate of sometimes only ancillary stands – the #notnormal hashtag can be instrumentalized in a wide variety of political projects, and for that reason my bot's stand can be at times incoherent. (I even had to delete two references to blonde tips that somehow appeared.)
  • Uncanny. If, for Sample, bots should help reveal things that were previously hidden, then my bot fails entirely to satisfy this criterion; the tweets it produces have already been tweeted (and, in many cases, already retweeted). Instead of revealing the hidden, my bot exaggerates existing visibility.
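As promised under "Topical," here is roughly what that continual-scraping mechanism might look like. It is only a minimal sketch, assuming tweepy 3.x; the credentials and the filename are placeholders and are not part of the bot as it currently exists.

```python
# A minimal sketch of a continual-scraping loop that keeps the #NotNormal
# corpus up to date. Credentials and the filename are placeholders.
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

seen = set()
while True:
    # Pull a fresh batch of #NotNormal tweets and append any new ones to the corpus.
    for tweet in tweepy.Cursor(api.search, q="#NotNormal", lang="en").items(100):
        if tweet.id in seen or tweet.text.startswith("RT "):
            continue  # skip duplicates and obvious retweets
        seen.add(tweet.id)
        with open("notnormal_corpus.txt", "a") as f:
            f.write(tweet.text.replace("\n", " ") + "\n")
    time.sleep(30 * 60)  # refresh every half hour so the corpus stays topical
```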

I present this exercise, not as a means of self-castigation, but as a way of more rigorously reflecting on what seemed to be a clever project. And this has me rethinking my bot’s potential for revision and ongoing deployment.

Cloud Text Experiment: @MakamisaBot

Twitterbot: @MakamisaBot

I decided to try my hand at building and experimenting with a Twitterbot on a dedicated Twitter handle since I had completed Patrick Smyth’s API workshop once before.

I had intended to use @MakamisaBot as a tool for broadcasting news or reactions from Filipinos and Filipino Americans during the Duterte presidency, a timely experiment considering what happened last week (http://www.rappler.com/newsbreak/iq/151667-timeline-ferdinand-marcos-burial-controversy). I had done a preliminary scrape on Twitter using Patrick’s script for #MarcosBurial, but didn’t try to retweet anything:

[Screenshot: results of the preliminary #MarcosBurial scrape]

I worried about participating in a discussion where a language barrier could pose a problem, since tweets would be a mix of English and Tagalog. I will probably continue to tinker with this idea to see if it could be of some use, creative or otherwise. In the meantime, I've started using the tutorial below on making a Twitterbot with Python to play around with @MakamisaBot and text files from Project Gutenberg.

https://jitp.commons.gc.cuny.edu/make-a-twitter-bot-in-python-iterative-code-examples/

Makamisa (meaning "after mass") is believed to be Jose Rizal's third, unfinished novel. Jose Rizal is considered a national hero in the Philippines – his literary work at the end of the Spanish occupation in the late 1800s influenced Filipino resistance, and he was ultimately executed by firing squad for his writings on December 30, 1896. Rizal's novels were originally written in Spanish, then translated into Tagalog and English. They are in the public domain, and I chopped them up to use for my project. Makamisa's manuscript ends abruptly with: "Although it was rumored that aunt Anday received slaps on her face, they still do not [have]" (translated into English). If I could get the script to work, I'd be interested in producing similarly unfinished sentences using available text from his preceding novels, Noli Me Tángere and El Filibusterismo.
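If I do get the script to work, the core of the idea might look something like this minimal sketch, which assumes a plain-text copy of one of the novels downloaded from Project Gutenberg; the filename is a placeholder, and this is not a finished bot.

```python
# A minimal sketch: produce "unfinished" sentences from one of Rizal's novels.
# The filename is a placeholder for a Project Gutenberg plain-text download.
import random
import re

with open("noli_me_tangere.txt", encoding="utf-8") as f:
    text = f.read()

# Split the novel into sentences and keep only the reasonably long ones.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
             if len(s.split()) > 8]

def unfinished(sentence):
    """Cut a sentence off partway through, the way Makamisa's manuscript ends."""
    words = sentence.split()
    cut = random.randint(5, len(words) - 2)
    return " ".join(words[:cut])

print(unfinished(random.choice(sentences)))
```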

Krisetya and Starosielski: Undersea Cables as Narrative Landscape

Last week’s trip to DataGryd with Andrew Blum reaffirmed for me how very small we are in the networks we’ve built to connect each other and control our environments.  I was interested in learning more about Markus Krisetya and his work after reading about his cartographic designs in the chapters we read from Blum’s Tubes.  Krisetya has a Tumblr page with some of his work, including a Submarine Cable Map from 2014:

http://mkrisetya.tumblr.com/post/84823497084/submarine-cable-map-protectors-of-the-internet

An updated one from 2016 from TeleGeography can be found here:

http://submarine-cable-map-2016.telegeography.com/

The updated map is a little more interactive – you can scroll left and right, and zoom in and out to read short reports on changes to the cable network since the map's previous publication. I enjoyed Blum's description of Krisetya's cartographic work as a form of storytelling ("I loved drawing stories on paper, and referencing distance in that strange manner," [Krisetya] told me) (Blum 15).

I focus on TeleGeography's maps because they provide us with physical representations of our media infrastructures, something I've been searching for as I collect and study materials for my final paper on colonial infrastructures in the Philippines. In Nicole Starosielski's The Undersea Network, I noticed that the Philippines features only as a sort of stepping stone in the undersea cable networks, despite the existence of miles of telegraph networks connecting the islands to each other and across the Pacific in the early 1900s. There has to be a story behind this as well. I recently found a map from 1902, provided online by Brown University Library's Digital Production Services and originally published in National Geographic Magazine, that illustrates these telegraph cables and could be useful:

http://library.brown.edu/dps/curio/the-world-wide-telegraph/

Twitter Bots: Useful, Dangerous, or Both

The world of tomorrow is a terrifying place – looks like we have some tidying-up to do.

Recently we were assigned two articles related to Twitter bots. Rob Dubbin's The Rise of Twitter Bots was a more relaxed take on the subject: Twitter bots span a gamut running from time-wasting amusements to reminders of surveillance. The other, Mark Sample's piece on protest bots, took a deeper look at how to create effective protest on the internet by arming bots with more than simple repetition: intricate creations of an uncanny environment that captures attention. Twitter has graciously opened its API for development, which made all of these creative and powerful ventures possible, but we should also look toward the future of Twitter bots, specifically how artificial intelligence will affect a platform like this.

AIStat

The biggest buzz phrase of late has to be "deep learning systems." We've officially moved beyond the alpha stages of artificial intelligence and have now developed systems that are finally… well… intelligent. This also means that everyone, including your grandmother and the kitchen sink, is jumping on board to develop deep learning systems, and companies are definitely building out infrastructure to do so. Deep learning and A.I. can scale anywhere from YouTube's recent YouTube-8M dataset to something more relevant here: a Twitter bot with a built-in A.I.

TayTweets

TayTweets, or @TayandYou, was supposed to be an innocent experiment: Microsoft wished to better understand conversation by having a bot learn from discussion with fellow Twitter users. The name "Tay" was based on "thinking about you," and the bot was initially released in March of this year. Of course, the mission progressed from a center of learning to an all-out nightmare quite quickly. The bot started picking up on slang, began learning racial slurs, and at one point even sided with Hitler. Tay was brought back a second time after Microsoft tried cleaning it up, but that was a catastrophic failure as well. And this was just regular hatefulness – the rabbit hole goes deeper than that.

Spearphishing, a recent phenomenon, is a more advanced form of phishing that is tailored to a specific individual. Recently, ZeroFOX, a cyber-security firm that specializes in social media, developed a project for educational purposes that uses machine learning to target users. The intelligence develops a profile of each target based on what they've tweeted, finds the best time to send a tweet, and sends it to the unsuspecting user. A writer from The Atlantic recently took a look at this whole scenario, and the tweets that were sent were incredibly realistic. However, in their test, they made the phishing link redirect to The Atlantic rather than a malicious site.

SNAP_R

The scary thing to think about is where A.I. and machine learning techniques collide with Twitter bots. Apparently, the success rate of this project (dubbed SNAP_R) averaged around 30% in one test, which is quite astounding. The bot was also programmed to take trending topics and unsuspecting users, mesh them together, and generate this dangerous content. With the rate at which A.I. is being developed, this is just scratching the surface of what's to come. Of course we can talk about the positives that Twitter's open API has afforded us, but the elephant in the room must be addressed at the same time, on a deeper level (no pun intended).

Even Twitter has been investing in artificial intelligence. The acquisition of Magic Pony back in June marks its third straight year of acquiring such firms. A.I. has a darker side as well: Elon Musk recently predicted that A.I. will be the future of cyber attacks. Of course these systems can operate at the low level of phishing schemes and hateful statements, but as the technology advances, so does the scope of its usage. The future of Twitter bots and learning systems is a lot darker and more powerful than protest or humor: they beget new systems of influence and control, and slowly lead us to question what is real.

For some more resources on A.I. and Deep Learning, check out:

  • Artificial Intelligence: The Future Is Now
  • DeepMind
  • DeepMind's Lip Reading Abilities
  • What's the Difference Between A.I., ML and DL?
  • Deep Learning: The Past, Present and Future of A.I.

Cooling Systems at Data Centers

One of the things that I was really looking forward to seeing before our site visit last Tuesday was the cooling systems. When describing the center of Milwaukee's Internet in Tubes, Andrew Blum illustrates how the "double-hung windows were thrown wide open to the winter, the cheapest way to keep the machines cool" (Blum 2012: 23-24). Even though I was not expecting that to be the case at DataGryd, Blum's description was, at the very least, quite surprising to me. As we were told by Peter Feldman (CEO) during the visit—and as they state on their website—at 60 Hudson Street waste heat from the gas turbines is captured by absorption chillers, and cooling for the datacenter floors comes not only from condensed water from the cooling towers but also from the chilled water created from the gas turbine waste heat.

In order to make the cooling infrastructure work, though, new generators had to be installed in the building a few years ago. As Rich Miller explains in this article from Data Center Knowledge, Superstorm Sandy back in 2012 made everyone aware of the problems that come with placing power generators in the basements of buildings. Even though DataGryd did not experience any power shortage or flooding during the storm, the generators were moved to the 24th floor of 60 Hudson Street. Miller goes on to talk about both the cooling system and the cooling towers, which were added to the roof and each support 8,500 tons of condensed water. As it turns out, the purpose of all this is to provide medium-voltage electricity, which involves less cabling and less power loss during distribution.

After the site visit I wanted to learn more about cooling efficiency when it comes to data centers, and I came across this article from ComputerWeekly.com, which compares two different techniques. On the one hand, air cooling has been the most common method since digital technologies improved and computers kept getting smaller and smaller. On the other hand, water cooling, which used to be the default means of keeping a computer cool, has in the last few years drawn attention to the problems of air cooling: lack of space, poor conductivity, and the cost of maintenance.

I definitely recommend these articles to everyone interested in cooling mechanisms and data centers.

Sources Cited

Blum, Andrew. "The Map." Tubes: A Journey to the Center of the Internet. New York: HarperCollins, 2012. Web.

The #NotNormal Twitter Bot

For my cloud text experiment, I decided to create a Twitterbot using the class tutorial. Though I had a small issue in the beginning, I found it relatively easy to pull together using the template scripts, and you can see it running on my Twitter account at https://twitter.com/AndrewDunnLIS (this is an old account I created back during library school).

For this experiment, I decided to focus on the #NotNormal hashtag, used on Twitter and Facebook in connection with racist incidents and other examples of extreme behavior and opinions by Trump and his associates. The point of the hashtag is to prevent him, his policies, and his administration from becoming normalized by the media. I got the idea from a strategy guide published by Congressman Jerry Nadler (http://www.jerrynadler.com/news-clips/how-we-resist-trump-and-his-extreme-agenda). The guide is fascinating and very helpful, so I encourage everyone to check it out.

To construct the Twitterbot, I spent two days "listening" to and scraping tweets from the #NotNormal feed. In the end, I had about 260 non-duplicate tweets, which I collected in a text file that serves as the primary directory. Most of the tweets have multiple hashtags, and most link to news articles or infographics that explain what the authors find so outrageous, and not normal. In building this library, I decided to delete the addresses of specific people (@_____): I'd rather not get embroiled in specific feuds, if that's possible; I'd rather just amplify political statements. In essence, then, the bot is retweeting things from the last 48 hours.

In completing this project, by the way, I was struck by how frequently people just retweeted things in their feed without taking what I think is the necessary time to reformulate thoughts in their own words and to really process/digest/reflect critically on the arguments they're promoting. To a large degree, these are structural limitations built into the interface, most importantly the character limit, which people can only bypass by including links. Still, now that I'm doing the same thing using automation, it strikes me as strange that so much of our current model for public discourse is automatic, and literally equivalent to something a program could do. (On top of that, I have some reservations about a project in which I'm just retweeting links to news sources and stories that I have not vetted, so I doubt I'll leave this running for long. Also, because the links concern breaking news, they'll fall out of relevance shortly.)

I'm starting to think that it would be really interesting to map how a single tweet spreads through the twittersphere. I know there are programs that do that, and if anyone has recommendations I would love to see them, especially now that I have an API access key.
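For anyone curious about the mechanics, the scrub-and-post step looks roughly like the sketch below. It is only a minimal sketch, assuming tweepy 3.x and the text file of collected tweets; the credentials and filename are placeholders rather than my actual script.

```python
# A minimal sketch: strip @-mentions from the collected #NotNormal tweets
# and post them on a schedule. Credentials and filename are placeholders.
import random
import re
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

with open("notnormal_tweets.txt") as f:
    tweets = [line.strip() for line in f if line.strip()]

def strip_mentions(text):
    """Remove @-handles so the bot amplifies statements without tagging anyone."""
    return re.sub(r"@\w+", "", text).strip()

while True:
    api.update_status(strip_mentions(random.choice(tweets))[:140])
    time.sleep(60 * 60)  # post once an hour
```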


Stories and Maps

On Tuesday, after our visit to the data center, Andrew Blum made a comment about stories that made me think back to our discussions of "mapping" networks. I can't remember his exact words, but Blum was talking about the way that our guides presented the data center to us, saying that people who are immersed in that kind of work often don't tell the story that we want to hear. His comment led me back to a passage from the introduction to his book, Tubes, where he discusses the "folk cartography" of Kevin Kelly. Blum explains how Kelly solicited the public to submit hand-drawn maps of their conceptions of the internet (the "Internet Mapping Project"). A researcher from the University of Buenos Aires, Mara Vanina Oses, analyzed these maps by their topologies (here's her presentation). Blum notes that while these maps are perceptive, displaying our awareness of our experience in the network, none of them actually reference the physical machines and tubes that make up the network. And he rightly points out that ignorance about how these infrastructures actually work is dangerous: "the great global scourges of modern life are always made worse by not knowing. Yet we treat the Internet as if it were a fantasy" (7).

I see a similar formulation from Deleuze and Guattari, who make a distinction between mapping and tracing. According to these theorists, tracing only reinforces what we already know, what is in the unconscious, while a map attempts to engage with the real:

"The tracing has already translated the map into an image; it has already transformed the rhizome into roots and radicles. It has organized, stabilized, neutralized the multiplicities according to the axes of significance and subjectification belonging to it. It has generated, structuralized the rhizome, and when it thinks it is reproducing something else it is in fact only reproducing itself. That is why the tracing is so dangerous. It injects redundancies and propagates them" (A Thousand Plateaus 13).

There's an analogy here: we can associate Kelly's "folk cartography" with D&G's tracing, while the investigative work that Blum undertakes is like D&G's mapping. All three theorists think that this tracing activity is potentially dangerous, because it neglects the workings of the actual infrastructure that exists outside the mind. But I think there's something to be learned by looking at these citizen "tracings" of the network. While Blum rightly points out that these tracings are fantasies, they reveal the aesthetic experience of being on the network, and how certain users tend to sense, at least on an abstract level, the scope of the system.

Although the maps from the "Internet Mapping Project" don't explicitly indicate the cables and servers that make up the internet, they do reveal a sense of the network's obscurity and incomprehensibility. In their furious, numerous lines, or simple, abstract shapes, the users struggle to represent the connectivity of information. While people generally put themselves at the center of their own networks, meaning that they see the network as organized around their needs, their maps still suggest that the system is complicated, overwhelming, and in many ways beyond them.

[Three hand-drawn maps from the Internet Mapping Project]

We can use these maps to reframe other topics we've discussed throughout the semester. What do they have to say about labor, conceptions of labor, or media and agency? I do think these maps deserve a second look because they tell stories about how people experience the internet. It may not be the full story, but it's worth listening to.