Author Archives: Andrew Dunn

A Sample-est Critique of My Own Twitterbot

In my earlier post, I began to reflect on the ethical, political, and theoretical limitations of the cloud text project I designed. By way of summary: my twitterbot retweets slightly altered tweets from the #NotNormal stream, so as to amplify a broad range of political messages associated with anti-Trump sentiment and resistance. In that post, I expressed concerns about the ways in which my bot promulgated and perpetuated unvetted news links, thereby contributing to a larger problematic grounded in uncritical reading and reflection.

Sample’s criteria for bots of conviction provide an additional framework for critique; specifically, he identifies the following qualities that allow bots to function politically and effectively:

  • Topical. According to this criterion, bots should not be about lost love or existential anguish; they should focus on the morning news and the daily horrors that fail to make it into the news. My bot was initially topical (I constructed its database from tweets collected over the course of two days), but the news cycle has since moved on. Ideally, I would have a mechanism for continually scraping the Twitter feed to update my supporting database; a rough sketch of what that might look like follows this list.
  • Data-based. Here Sample articulates the importance of actual data and research. Mine transmits memes and other forms of predigested research, but does not reach back to the supporting data in responsible ways.
  • Cumulative. In Sample’s words, “it is the nature of bots to do the same thing over and over again, with only slight variation. Repetition with a difference.” The aggregation of these repetitions conveys rhetorical and political weight. In this sense, my twitterbot functions well – it highlights the repetitive nature of reductive political sentiment and confronts the reader with how slight the variations in Twitter-based discourse really are. Though I intended it to function in service of progressive politics, what manifests is an implicit critique of political discourse.
  • Oppositional. My bot takes a stand, which Sample argues is an important element of automated protest, but that stand is a catch-all aggregate of sometimes only ancillary stands – the #NotNormal hashtag can be instrumentalized in a wide variety of political projects, and for that reason my bot’s stand can at times be incoherent. (I even had to delete two references to blonde tips that somehow appeared.)
  • Uncanny. If, for Sample, bots should help reveal things that were previously hidden, then my bot fails entirely to satisfy this criterion; the tweets it produces have already been tweeted (and, in many cases, already retweeted). Instead of revealing the hidden, my bot exaggerates existing visibility.
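A side note on the first criterion above: the bot I actually deployed was built from the class template scripts, so the following is only a minimal sketch of what a recurring corpus refresh might look like, written in Python and assuming the tweepy library. The credentials, the corpus file name, and the hourly interval are placeholders rather than my real setup.

import re
import time

import tweepy

# Placeholder credentials (supplied by Twitter when you register an app).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

CORPUS_FILE = "notnormal_corpus.txt"  # hypothetical name for the supporting text file

def refresh_corpus():
    """Pull recent #NotNormal tweets, strip @-addresses, and append any new lines."""
    try:
        with open(CORPUS_FILE) as f:
            seen = set(line.strip() for line in f if line.strip())
    except FileNotFoundError:
        seen = set()

    new_lines = []
    for status in api.search(q="#NotNormal", count=100):
        text = re.sub(r"@\w+", "", status.text).strip()  # drop addresses of specific people
        if text and text not in seen:
            seen.add(text)
            new_lines.append(text)

    with open(CORPUS_FILE, "a") as f:
        for line in new_lines:
            f.write(line + "\n")

if __name__ == "__main__":
    while True:
        refresh_corpus()
        time.sleep(60 * 60)  # refresh hourly so the corpus tracks the news cycle

Run on a schedule, something along these lines would keep the supporting database in step with the news cycle, which is what Sample’s topicality criterion seems to demand.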

I present this exercise not as a means of self-castigation, but as a way of more rigorously reflecting on what seemed to be a clever project. And it has me rethinking my bot’s potential for revision and ongoing deployment.

The #NotNormal Twitter Bot

For my cloud text experiment, I decided to create a twitterbot using the class tutorial. Though I had a small issue at the beginning, I found it relatively easy to pull the bot together using the template scripts, and you can see it running on my Twitter account at https://twitter.com/AndrewDunnLIS (an old account I created back during library school).

For this experiment, I decided to focus on the #NotNormal hashtag, used on Twitter and Facebook in connection with racist incidents and other examples of extreme behavior and opinions by Trump and his associates. The point of the hashtag is to prevent him, his policies, and his administration from becoming normalized by the media. I got the idea from a strategy guide published by Congressman Jerry Nadler (http://www.jerrynadler.com/news-clips/how-we-resist-trump-and-his-extreme-agenda). The guide is fascinating and very helpful, and I encourage everyone to check it out.

To construct the twitterbot, I spent two days “listening” to and scraping tweets from the #NotNormal feed. In the end, I had about 260 non-duplicate tweets, which I collected in a text file that serves as the bot’s primary source. Most of the tweets have multiple hashtags, and most link to news articles or infographics that explain what the authors find so outrageous, and not normal. In building this library, I decided to delete the addresses of specific people (@_____) – I’d rather not get embroiled in specific feuds, if that’s possible; I’d rather just amplify political statements. In essence, then, the bot is retweeting things from the last 48 hours.

In completing this project, by the way, I was struck by how frequently people just retweet things in their feed without taking what I think is the necessary time to reformulate thoughts in their own words, and to really process, digest, and reflect critically on the arguments they’re promoting. To a large degree, these are structural limitations built into the interface – most importantly the character limit, which people can only bypass by including links. Still, now that I’m doing the same thing using automation, it strikes me as strange that so much of our current model for public discourse is automatic, and literally equivalent to something a program could do.

(On top of that, I have some reservations about a project in which I’m just retweeting links to news sources and stories that I have not vetted, so I doubt I’ll leave this running for long. Also, because the links concern breaking news, they’ll fall out of relevance shortly.) I’m starting to think that it would be really interesting to map how a single tweet spreads through the twittersphere – I know there are programs that do that, and if anyone has recommendations I would love to see them, especially now that I have an API access key.
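For anyone curious what the bot’s core behavior boils down to, here is a minimal sketch of the posting step, again written in Python and assuming the tweepy library rather than the class template scripts I actually used; the credentials and the file name are placeholders. The logic is just what I describe above: clean a line from the collected corpus and post it.

import random
import re

import tweepy

# Placeholder credentials (supplied by Twitter when you register an app).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def clean(text):
    """Strip @-addresses so the bot amplifies statements rather than feuds."""
    return re.sub(r"@\w+", "", text).strip()

def post_from_corpus(corpus_file="notnormal_corpus.txt"):
    """Post one randomly chosen line from the collected #NotNormal corpus."""
    with open(corpus_file) as f:
        lines = [clean(line) for line in f if line.strip()]
    api.update_status(random.choice(lines)[:140])  # respect the 140-character limit

if __name__ == "__main__":
    post_from_corpus()

Scheduled with something like cron, a script along these lines would keep posting from the 48-hour corpus until the file is refreshed or the account is switched off.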

Internet Freedom & Network Vulnerability

NPR published an article today about the Internet’s vulnerability to government crackdowns. The discussion focuses on other countries, particularly those marked by political dissent and upheaval, and by totalitarian regimes. Of particular interest are the apps and websites (Facebook, Twitter, WhatsApp, YouTube, Skype, etc.) that governments have either restricted or pressured into exposing and turning in activists. The overall conclusion is a tendency toward less free access to the Internet – “The report’s scope covers the experiences of some 88 percent of the world’s Internet users. And of all 65 countries reviewed, Internet freedom in 34 — more than half — has been on a decline over the past year.” The authors cite increased surveillance as the common first step, and the whole article has ominous overtones in light of the Trump ascendancy.

http://www.npr.org/sections/alltechconsidered/2016/11/14/500214959/internet-freedom-wanes-as-governments-target-messaging-social-apps

I know this topic aligns more closely with last week’s discussion, but I’ve been hearing helicopters outside for the last week and it’s starting to feel ominous. When my Internet connection started flickering, and then when it went out for a few minutes, I thought, “Oh, they’ve pulled the switch.” Knowing what the cables in my backyard look like (someone once cut them on purpose as some kind of retribution against my neighbor, and the whole building was cut off for days), it’s much more likely that, à la Blum, a squirrel was chewing a wire. But how hard would it be, if things get really bad, for them to shut down every IP address in Brooklyn and cripple resistance?

Interestingly, I’ve seen more guides to user encryption circulating on Facebook, like this one published on Medium (https://medium.com/@kappklot/things-to-know-about-web-security-before-trumps-inauguration-a-harm-reductionist-guide-c365a5ddbcb8#.xwfu8n794). This stuff seems important, but in the face of a real crackdown, as we’ve seen in our readings, a government could simply disable the Internet in problem areas.

I don’t really know where I’m going with all of this – I’m just trying to think through the last week’s gloom and reflect on what’s possibly coming.

Facebook’s Political Networks

I came across this article in the Times Magazine that really seemed to resonate with this week’s readings, in particular Galloway and Thacker’s work on a theory of networks. It’s a long article, but it explores in detail the role that Facebook (more specifically, the new types of political posts crafted specifically for Facebook’s newsfeed) has played in this particularly vicious and vitriolic election cycle. Here is the article: http://mobile.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html?_r=0&referer=http%3A%2F%2Fwww.slate.com%2Fblogs%2Ffuture_tense%2F2016%2F11%2F04%2Ffacebook_is_fueling_an_international_boom_in_pro_trump_propaganda.html

Here are some important excerpts:

“Individually, these pages [such as OccupyDemocrats, or the Other 98%] have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created. Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse.”

Another:

“This year, political content has become more popular all across the platform: on homegrown Facebook pages, through media companies with a growing Facebook presence and through the sharing habits of users in general. But truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture. This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary.”

I won’t quote the entire article, because I think these excerpts give you a good sense of where it’s going, and of the ways in which it parallels Galloway and Thacker’s argument – in essence, Facebook is a network governed by protocols; these protocols define a technology that regulates the flow of information and connects life forms. Facebook, furthermore, is not a single network but a network of networks, wherein each individual network sees different things and comes to radically different, often irreconcilable conclusions about the same events. No one theoretically controls these networks, but the networks are controlled regardless, so that, in the words of Galloway, “we are witnessing a sovereignty that is…based not on exceptional events but on exceptional topologies.” Within this “twofold dynamic of network control,” subjects act within distributed networks to materialize and create protocols through their exercise of local agency.

These native posts/pages are designed to function within Facebook’s rhetorical context, but they gain potency through the complex relationships between autonomous, interconnected agents. This, as Galloway explains, is the basis for protocol, and so it seems to me that Facebook, just by setting the initial parameters (posts, newsfeeds, the ability to like and share things), exercises control over political discourse. We see this control emerge, as the Times article suggests, in a very specific style of political engagement grounded in what Galloway would call distinct levels of network individuation: that of the user nodes, who share posts and political memes to perform their politics, and that of the networks through which these posts and memes can spread, which define the larger political movements. The end result, however, is less a kind of public space, or town square, or commons, than a series of differently structured networks with their own unique and competing swarm doctrines. Control is a kind of coordination that emerges in response to Facebook’s protocol, to its user interface and network affordances, and it has real consequences for the kinds of conversations that can happen.

Versions of Cause and Effect in Technology and Society

As I was reading this week’s texts on labor in the cloud, I was struck, in particular, by the emphasis on the political motivations behind these tools (I’m using “politics” here in a very specific sense, in connection with Marxian discourses on political economy, etc.). I think this stood out to me in part because so many of the readings we’ve done for this class have been kind of post-Marxist – it’s not that infrastructure analysis and A Thousand Plateaus are anti-political, merely that politics and political economy become decentered in their critiques. This political emphasis brought to mind a text that I had sort of forgotten about, and which speaks to the themes of this course – “The Technology and the Society” by Raymond Williams, a Marxist critic, written in 1972.

One of the things that Williams analyzes is the notion of cause and effect in discussions of technology, specifically the effects of new technologies on social structures and society. He identifies nine ways of discussing this relationship, using television as an example:

(i) Television was invented as a result of scientific and technical research. Its power as a medium of news and entertainment was then so great that it altered all preceding media of news and entertainment.

(ii) Television was invented as a result of scientific and technical research. Its power as a medium of social communication was then so great that it altered many of our institutions and forms of social relationships.

(iii) Television was invented as a result of scientific and technical research. Its inherent properties as an electronic medium altered our basic perceptions of reality, and thence our relations with each other and with the world.

(iv) Television was invented as a result of scientific and technical research. As a powerful medium of communication and entertainment it took its place with other factors – such as greatly increased physical mobility, itself the result of other newly invented technologies – in altering the scale and form of our societies.

(v) Television was invented as a result of scientific and technical research, and developed as a medium of entertainment and news. It then had unforeseen consequences, not only on other entertainment and news media, which it reduced in viability and importance, but on some of the central processes of family, cultural and social life.

(vi) Television, discovered as a possibility by scientific and technical research, was selected for investment and development to meet the needs of a new kind of society, especially in the provision of centralised entertainment and in the centralised formation of opinions and styles of behaviour.

(vii) Television, discovered as a possibility by scientific and technical research, was selected for investment and promotion as a new and profitable phase of a domestic consumer economy; it is then one of the characteristic “machines for the home.”

(viii) Television became available as a result of scientific and technical research, and in its character and uses exploited and emphasised elements of a passivity, a cultural and psychological inadequacy, which had always been latent in people, but which television now organised and came to represent.

(ix) Television became available as a result of scientific and technical research, and in its character and uses both served and exploited the needs of a new kind of large-scale and complex but atomised society.

As you can see, the emphasis progresses from one in which technology is produced through ideologically neutral mechanisms (research) and results in largely unintended consequences, to one in which technological creation is itself an ideologically motivated process, aimed at reinforcing latent structures in the capitalist mode of production for the benefit of those in power. I think it’s interesting to consider how our class readings might be mapped onto this continuum, and to what extent something like ANT presents technological production as a quasi-accidental, haphazard, emergent process, as opposed to a politically motivated process that is deeply embedded within a certain ideological context. (I don’t think it actually does this in practice; I merely present this as a kind of provocation.) Doing so raises important questions about the benefits and pitfalls of de-emphasizing politics – which strategy, for instance, has the most philosophical and theoretical utility? Which strategy is best suited to creating change? In reading some of the chapters from the Labor book, for instance, I couldn’t help thinking that the authors favored simplistic political reductions over real engagement with the complexity of these systems, and whether or not that reaction is accurate, I felt in some ways like it was conditioned by many of our earlier readings.

For instance, there is an artist, Aaron Koblin, who uses Mechanical Turk to create artworks, and who uses that process to foreground complex issues surrounding distributed labor and value creation. In a way, his work points to many of the questions raised by our readings, but without landing on a conclusive political judgment. I don’t know if that’s a strength of the work or a weakness, but here are links to what might be some relevant projects:

Ten Thousand Cents

Bicycle Built for Two Thousand