Category Archives: Uncategorized

Twitter Bots: Useful, Dangerous, or Both?

The world of tomorrow is a terrifying place – looks like we have some tidying-up to do.

Recently we were assigned two articles about Twitter bots. Rob Dubbin’s “The Rise of Twitter Bots” was a more relaxed take on the subject: Twitter bots span a gamut that runs from time-wasters to reminders of surveillance. The other, Mark Sample’s piece on protest bots, took a deeper look at how to protest effectively on the internet by arming bots with more than simple repetition: intricate creations of an uncanny environment that capture attention. Twitter has graciously opened its API for development, which made all of these creative and powerful ventures possible, but we should also look toward the future of Twitter bots, specifically how artificial intelligence will affect a platform like this.

AIStat

The biggest buzz phrase of late has to be “deep learning systems.” We’ve officially moved beyond the alpha stages of artificial intelligence and have now developed systems that are finally… well… intelligent. This also means that everyone, including your grandmother and the kitchen sink, is jumping on board to develop deep learning systems, and companies are building out infrastructure to do so. Deep learning and A.I. can scale anywhere from YouTube’s recent YouTube-8M dataset project to something more relevant here: a Twitter bot with a built-in A.I.

TayTweets

TayTweets, or @TayandYou, was supposed to be an innocent experiment: Microsoft wished to better understand conversation by having a bot learn from discussion with fellow Twitter users. The name “Tay” was based on “thinking about you,” and the bot was initially released in March of this year. Of course, the mission progressed from a center of learning to an all-out nightmare quite quickly. The bot started picking up on slang, learned racial slurs, and at one point even sided with Hitler. Tay was brought back a second time after Microsoft tried cleaning it up, but that was a catastrophic failure as well. And this was just regular hatefulness: the rabbit hole goes deeper than that.

Spearphishing, a recent phenomenon, is a more advanced form of phishing that is tailored to a specific individual. Recently, ZeroFOX, a cyber-security firm that specializes in social media, developed a project for educational purposes that uses machine learning to target users. The system builds a profile on each target based on what they’ve tweeted, finds the best time to send a tweet, and sends it to the unsuspecting user. A writer from The Atlantic recently took a look at this whole scenario, and the tweets that were sent were incredibly realistic. In their test, however, the phishing link redirected to The Atlantic rather than to a malicious site.

SNAP_R

The scary thing to think about is where A.I. and machine learning techniques collide with Twitter bots. The success rate of this project (dubbed SNAP_R) averaged around 30 percent in one test, which is quite astounding. The bot was programmed to take trending topics and unsuspecting users, mesh them together, and generate this dangerous content. Given the rate at which A.I. is being developed, this is just scratching the surface of what’s to come. Of course we can talk about the positives that Twitter’s open API has afforded us, but the elephant in the room must be addressed at the same time on a deeper level (no pun intended).
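We don’t know SNAP_R’s internals, but the “meshing” idea can be illustrated with a toy sketch. Everything below is hypothetical (made-up function names, templates, and data, with no real phishing involved): pick the trending topic that best overlaps with a target’s own vocabulary, then drop it into a tweet template.

```python
import random

# Toy illustration of "meshing" a trending topic with a target's own
# vocabulary to produce a tweet that feels personally relevant.
def build_lure(trending_topics, target_tweets, link="http://example.com"):
    # Words the target actually uses (very short words filtered out)
    target_words = {w.lower() for t in target_tweets for w in t.split() if len(w) > 3}

    # Score each trending topic by how many of those words it contains
    def overlap(topic):
        return sum(1 for w in target_words if w in topic.lower())

    topic = max(trending_topics, key=overlap)
    template = random.choice([
        "Saw this and thought of you: {link} {topic}",
        "Can't believe this story about {topic} {link}",
    ])
    return template.format(topic=topic, link=link)

trends = ["#WorldSeries", "#MachineLearning"]
history = ["training a new machine learning model today", "learning rates are hard"]
print(build_lure(trends, history))
```

Even this crude overlap scoring is enough to make the lure feel on-topic for the target, which is presumably why the real system’s click-through rate was so high.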

Even Twitter has been investing in artificial intelligence. The acquisition of Magic Pony back in June marks its third straight year of acquiring machine learning firms. And A.I. has a darker side as well: Elon Musk recently predicted that A.I. will be the future of cyber attacks. Such systems can function at the low level of phishing schemes and hateful statements, but as the technology advances, so does the scope of its usage. The future of Twitter bots and learning systems is a lot darker and more powerful than protest or humor: they beget new systems of influence and control, and slowly lead us to question what is real.

For some more resources on A.I. and deep learning, check out:

Artificial Intelligence: The Future Is Now

DeepMind

DeepMind’s Lip Reading Abilities

What’s the Difference Between A.I., ML and DL?

Deep Learning: The Past, Present and Future of A.I.

Cooling Systems at Data Centers

One of the things that I was really looking forward to seeing before our site visit last Tuesday was the cooling systems. When describing the center of Milwaukee’s Internet in Tubes, Andrew Blum illustrates how the “double-hung windows were thrown wide open to the winter, the cheapest way to keep the machines cool” (Blum 2012: 23-24). Even though I was not expecting that to be the case at DataGryd, Blum’s description was quite surprising to me, to say the least. As we were told by Peter Feldman (CEO) during the visit, and as they state on their website, at 60 Hudson Street waste heat from the gas turbines is captured by absorption chillers, and cooling for the data center floors is provided not only by condenser water from the cooling towers but also by the chilled water created from the gas turbine waste heat.

In order to make the cooling infrastructure work, though, new generators had to be installed in the building a few years ago. As Rich Miller explains in this article from Data Center Knowledge, Superstorm Sandy back in 2012 made everyone aware of the problems that come with placing power generators in the basements of buildings. Even though DataGryd did not experience any power shortage or flooding during the superstorm, the generators were moved to the 24th floor of 60 Hudson Street. Miller goes on to talk about both the cooling system and the cooling towers, which were added to the roof and each support 8,500 tons of condenser water. As it turns out, the new generators also provide medium-voltage electricity, which requires less cabling and loses less power during distribution.
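For a rough sense of scale, and assuming the 8,500-ton figure refers to tons of refrigeration (the standard unit of cooling capacity, roughly 3.517 kW of heat rejection each), a quick conversion:

```python
# Back-of-the-envelope conversion, assuming "tons" means refrigeration tons.
# 1 ton of refrigeration ≈ 3.517 kW of heat rejection.
TON_TO_KW = 3.517

def cooling_capacity_mw(tons):
    """Convert refrigeration tons to megawatts of heat rejection."""
    return tons * TON_TO_KW / 1000

per_tower = cooling_capacity_mw(8500)
print(f"One 8,500-ton cooling tower ≈ {per_tower:.1f} MW of heat rejection")
```

Under that assumption, each tower handles on the order of 30 MW of heat, which gives some idea of why the generator and cabling arrangements matter so much.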

After the site visit I wanted to learn more about cooling efficiency in data centers, and I came across this article from ComputerWeekly.com, which compares two different techniques. On the one hand, air cooling has been the most common method since digital technologies improved and computers kept getting smaller. On the other hand, water cooling, which used to be the default means of keeping a computer cool, has made a comeback in the last few years as a response to the shortcomings of air cooling: lack of space, poor thermal conductivity, and the cost of maintenance, for instance.

I definitely recommend these articles to everyone interested in cooling mechanisms and data centers.

Sources Cited

Blum, Andrew. “The Map.” Tubes: A Journey to the Center of the Internet. New York: HarperCollins, 2012. Web.

The #NotNormal Twitter Bot

For my cloud text experiment, I decided to create a Twitter bot using the class tutorial. Though I had a small issue in the beginning, I found it relatively easy to pull together using the template scripts, and you can see it running on my Twitter account at https://twitter.com/AndrewDunnLIS (this is an old account I created back during library school).

For this experiment, I decided to focus on the #NotNormal hashtag, used on Twitter and Facebook in connection with racist incidents and other examples of extreme behavior and opinions by Trump and his associates. The point of the hashtag is to prevent him, his policies, and his administration from being normalized by the media. I got the idea from a strategy guide published by Congressman Jerry Nadler (http://www.jerrynadler.com/news-clips/how-we-resist-trump-and-his-extreme-agenda). The guide is fascinating and very helpful, and I encourage everyone to check it out.

To construct the Twitter bot, I spent two days “listening” to and scraping tweets from the #NotNormal feed. In the end, I had about 260 non-duplicate tweets, which I collected in a text file that serves as the bot’s primary corpus. Most of the tweets have multiple hashtags, and most link to news articles or infographics that explain what the authors find so outrageous, and not normal. In building this library, I decided to delete the addresses of specific people (@_____): I’d rather not get embroiled in specific feuds, if that’s possible; I’d rather just amplify political statements. In essence, then, the bot is retweeting things from the last 48 hours.

In completing this project, by the way, I was struck by how frequently people just retweeted things in their feed without taking what I think is the necessary time to reformulate thoughts in their own words, and to really process, digest, and reflect critically on the arguments they’re promoting. To a large degree, these are structural limitations built into the interface, most importantly the character limit, which people can only bypass by including links. Still, now that I’m doing the same thing using automation, it strikes me as strange that so much of our current model for public discourse is automatic, and literally equivalent to something a program could do. (On top of that, I have some reservations about a project in which I’m just retweeting links to news sources and stories that I have not vetted, so I doubt I’ll leave this running for long. Also, because the links concern breaking news, they’ll fall out of relevance shortly.)

I’m starting to think that it would be really interesting to map how a single tweet spreads through the twittersphere. I know there are programs that do that, and if anyone has recommendations I would love to see them, especially now that I have an API access key.
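A minimal sketch of that workflow, with a made-up three-tweet list standing in for the real 260-tweet file (the function names are mine, not the class template’s): de-duplicate the scraped tweets, strip @-mentions so the bot amplifies statements rather than feuds, and pick one to post.

```python
import random
import re

# Strip @-mentions so the bot amplifies statements rather than feuds
MENTION = re.compile(r"@\w+")

def load_corpus(lines):
    """De-duplicate scraped tweets and remove @-mentions."""
    cleaned = {MENTION.sub("", line).strip() for line in lines}
    return sorted(t for t in cleaned if t)  # drop empties, keep stable order

def pick_tweet(corpus):
    """Choose one tweet at random to post."""
    return random.choice(corpus)

scraped = [
    "@someuser This policy is #NotNormal",
    "This policy is #NotNormal",  # becomes a duplicate once the mention is gone
    "Normalizing this is dangerous #NotNormal",
]
corpus = load_corpus(scraped)
print(pick_tweet(corpus))
```

Actually posting the chosen tweet is then a single API call through a library like tweepy, which is presumably what the class template scripts handle.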


Stories and Maps

On Tuesday, after our visit to the data center, Andrew Blum made a comment about stories that made me think back to our discussions of “mapping” networks. I can’t remember his exact words, but Blum was talking about the way that our guides presented the data center to us, saying that people who are immersed in that kind of work often don’t tell the story that we want to hear. His comment led me to another comment from the introduction to his book, Tubes, in the section where he discusses the “folk cartography” of Kevin Kelly. Blum explains how Kelly solicited the public to submit hand-drawn maps of their conceptions of the internet (the “Internet Mapping Project”). Mara Vanina Oses, a researcher from the University of Buenos Aires, analyzed these maps by their topologies (here’s her presentation). Blum notes that while these maps are perceptive, displaying our awareness of our experience in the network, none of them actually reference the physical machines and tubes that make up the network. And he rightly points out that ignorance about how these infrastructures actually work is dangerous: “the great global scourges of modern life are always made worse by not knowing. Yet we treat the Internet as if it were a fantasy” (7).

I see a similar formulation from Deleuze and Guattari, who make a distinction between mapping and tracing. According to these theorists, tracing only reinforces what we already know, what is in the unconscious, while a map attempts to engage with the real:

“The tracing has already translated the map into an image; it has already transformed the rhizome into roots and radicles. It has organized, stabilized, neutralized the multiplicities according to the axes of significance and subjectification belonging to it. It has generated, structuralized the rhizome, and when it thinks it is reproducing something else it is in fact only reproducing itself. That is why the tracing is so dangerous. It injects redundancies and propagates them” (A Thousand Plateaus 13).

There’s an analogy here: we can associate Kelly’s “folk cartography” with D&G’s tracing, while the investigative work that Blum undertakes is like D&G’s mapping. All three theorists think that this tracing activity is potentially dangerous, because it neglects the workings of the actual infrastructure that exists outside the mind. But I think there’s something to be learned by looking at these citizen “tracings” of the network. While Blum rightly points out that these tracings are fantasies, they reveal the aesthetic experience of being on the network, and how certain users tend to sense, at least on an abstract level, the scope of the system.

Although the maps from the “Internet Mapping Project” don’t explicitly indicate the cables and servers that make up the internet, they do reveal the sense of obscurity and incomprehensibility about the network. In their furious, numerous lines, or simple, abstract shapes, the users struggle to represent the connectivity of information. While people generally put themselves in the center of their own networks, meaning that they see the network as organized around their needs, they still suggest that the system is complicated, overwhelming, and in many ways beyond them.

[Three hand-drawn maps submitted to the “Internet Mapping Project”]

We can use these maps to reframe other topics we’ve discussed throughout the semester. What do they have to say about labor, conceptions of labor, or media and agency? I do think these maps deserve a second look because they tell stories about how people experience the internet. It may not be the full story, but it’s worth listening.

Internet Freedom & Network Vulnerability

NPR published an article today about the Internet’s vulnerability to government crackdowns. The discussion focuses on other countries, particularly those marked by political dissent and upheaval, and by totalitarian regimes. Of particular interest were the apps and websites (Facebook, Twitter, WhatsApp, YouTube, Skype, etc.) that governments either restricted or pressured into exposing and turning in activists. The overall conclusion was a tendency toward less free access to the Internet: “The report’s scope covers the experiences of some 88 percent of the world’s Internet users. And of all 65 countries reviewed, Internet freedom in 34 — more than half — has been on a decline over the past year.” The authors cite increased surveillance as the common first step, and the whole article has ominous overtones in light of the Trump ascendancy.

http://www.npr.org/sections/alltechconsidered/2016/11/14/500214959/internet-freedom-wanes-as-governments-target-messaging-social-apps

I know this topic aligns more closely with last week’s discussion, but I’ve been hearing helicopters outside for the last week and it’s starting to feel ominous. When my Internet connection started flickering, and then when it went out for a few minutes, I thought, “Oh, they’ve pulled the switch.” Knowing what the cables in my backyard look like (someone once cut them on purpose as some kind of retribution against my neighbor, and the whole building was cut off for days), it’s much more likely that (à la Blum) a squirrel was chewing a wire. But how hard would it be, if things get really bad, for them to shut down every IP address in Brooklyn and cripple resistance?

Interestingly, I’ve seen more guides for user encryption circulating Facebook, like this one published to Medium (https://medium.com/@kappklot/things-to-know-about-web-security-before-trumps-inauguration-a-harm-reductionist-guide-c365a5ddbcb8#.xwfu8n794). This stuff seems important, but in the face of a real crackdown, as we’ve seen in our readings, they could simply disable the Internet in problem areas.

I don’t really know where I’m going with all of this; I’m just trying to think through the last week’s gloom and reflect on what’s possibly coming.

Wifi-enabled chip allows paralyzed rhesus monkey to walk again

The rear left leg is normal; the rear right leg is being moved through wifi-enabled electrodes. Source: http://spectrum.ieee.org

Researchers have successfully used wifi-enabled electrodes to allow a monkey with nerve damage to walk again. From an article in the BBC:

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.”

The technology connects the portion of the brain in control of motor function to the extremities of a rhesus monkey with spinal cord damage. Signals that would normally travel along the spine instead travel over a wireless link to electrodes that then stimulate the nerves.

Implantable technology is nothing new, but this development is monumental because the device is replacing a motor function that happens almost instantaneously. And, thanks to technology being developed to bridge low-power Bluetooth signals to higher-power wifi signals, implants in the brain and other parts of the body are becoming capable of long and dependable operation.

I’m fascinated by this dislocation of electro-chemical signaling to electro-numerical signaling. Until now, chips implanted into the brain of a subject only collected data or controlled simple robotic limbs. This device re-enables the use of the subject’s own body. Of course, the brain chip that allows a monkey to move its legs is not interpreting brain functions; it is only bypassing the damaged spinal cord. The cliché question to ask is whether someone can hack the wifi-enabled transmitters.

Facebook’s Political Networks

I came across this article in the Times Magazine that really seemed to resonate with this week’s readings, in particular Galloway and Thacker’s work in The Exploit: A Theory of Networks. It’s a long article, but it explores in detail the role that Facebook (more specifically, the new types of political posts crafted for Facebook’s news feed) has played in this particularly vicious and vitriolic election cycle. Here is the article: http://mobile.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html?_r=0&referer=http%3A%2F%2Fwww.slate.com%2Fblogs%2Ffuture_tense%2F2016%2F11%2F04%2Ffacebook_is_fueling_an_international_boom_in_pro_trump_propaganda.html

Here are some important excerpts:

“Individually, these pages [such as OccupyDemocrats, or the Other 98%] have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created. Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse.”

Another:

“This year, political content has become more popular all across the platform: on homegrown Facebook pages, through media companies with a growing Facebook presence and through the sharing habits of users in general. But truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture. This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary.”

I won’t quote the entire article, because I think these excerpts give you a good sense of where it’s going, and of the ways in which it parallels Galloway and Thacker’s argument. In essence, Facebook is a network governed by protocols; these protocols define a technology that regulates the flow of information and connects life forms. Facebook, furthermore, is not a single network but a network of networks, wherein each individual network sees different things and comes to radically different conclusions about the same events, conclusions that often can’t be reconciled. No one theoretically controls these networks, but the networks are controlled regardless, so that, in the words of Galloway, “we are witnessing a sovereignty that is…based not on exceptional events but on exceptional topologies.” Within this “twofold dynamic of network control,” subjects act within distributed networks to materialize and create protocols through their exercise of local agency.

These native posts and pages are designed to function within Facebook’s rhetorical context, but they gain potency through the complex relationships between autonomous, interconnected agents. This, as Galloway explains, is the basis for protocol, and so it seems to me that Facebook, just by setting the initial parameters (posts, news feeds, the ability to like and share things), exercises control over political discourse. We see this control emerge, as the Times article suggests, in a very specific style of political engagement that is grounded in what Galloway would call distinct levels of network individuation: that of the user nodes, who share posts and political memes to perform their politics, and that of the networks through which these posts and memes can spread, which define the larger political movements. The end result, however, is less a kind of public space, or town square, or commons, than a series of differently structured networks with their own unique and competing swarm doctrines. Control is a kind of coordination that emerges in response to Facebook’s protocol, to its user interface and network affordances, and it has real consequences for the kinds of conversations that can happen.

lingua universal


[Image: Google’s Noto font]

Google is explicit: “Make all the world’s information universally accessible and useful.”

If your device’s font does not include a character, it renders a little box in the character’s space, unofficially called “tofu.” Google and the design company Monotype have been working for over five years toward creating a linguistically universal font, one that will render any language on any device: it’s called Noto (no tofu, GET IT!?).
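The scale of the coverage problem is easy to glimpse from Unicode itself: a “universal” font has to supply glyphs for characters spread across well over a hundred scripts, each of which a device could otherwise render as tofu. A small illustration (the sample string here is my own, chosen to span four scripts):

```python
import unicodedata

# Each character a font fails to cover renders as a "tofu" box.
# Unicode assigns every character a code point and a name, which shows
# how far a truly universal font like Noto has to reach.
sample = "Aα河ༀ"  # Latin, Greek, CJK, and Tibetan characters

for ch in sample:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Even this four-character string requires glyph coverage from four unrelated writing systems, which is why Noto is really a coordinated family of fonts rather than a single file.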

Is there in “the digital” the potential for a universal language? What does it mean when “universalism” is designed and asserted by commercial actors? Is the effort necessarily more about power, more a move like “commercial imperialism”? Monotype, for its part, tried to be sensitive to this:

Describing the company’s approach to Tibetan, for example, Monotype did “deep research into a vast library of writings and source material, and then enlisted the help of a Buddhist monastery to critique the font and make adjustments. The monks’ constant study of Tibetan manuscripts made them the ideal experts to evaluate Noto Tibetan, and were instrumental in the final design of the font.”

Can a non-imperial universalism be mediated by a multilingual digital platform? This week’s readings suggest directing this question to the inbuilt protocological forms brokering digital exchange, where “control” describes our user interface. Perhaps the question is rather: can a non-imperial universalism be operationalized via any structured interface?