Monthly Archives: November 2016


Twitter Bots: Useful, Dangerous, or Both?

The world of tomorrow is a terrifying place – looks like we have some tidying-up to do.

Recently we were assigned two articles on Twitter bots. Rob Dubbin’s The Rise of Twitter Bots was the more relaxed take on the subject: Twitter bots span a wide gamut, from playful time-wasters to reminders of surveillance. The other, Mark Sample’s piece on protest bots, took a deeper look at how to create effective protest on the internet by arming bots not with simple repetition but with intricately constructed, uncanny environments that capture attention. Twitter has graciously opened its API for development, which made all of these creative and powerful ventures possible, but we should also look toward the future of Twitter bots, specifically how artificial intelligence will affect a platform like this.

AIStat

The biggest buzz phrase of late has to be “deep learning systems.” We’ve officially moved beyond the alpha stages of artificial intelligence and have now developed systems that are finally… well… intelligent. This also means that everyone, including your grandmother and the kitchen sink, is jumping on board to develop deep learning systems, and they are definitely building out infrastructure to do so. Deep learning and A.I. can scale anywhere from YouTube’s recent YouTube-8M data project to something more relevant here: a Twitter bot with a built-in A.I.

TayTweets

TayTweets, or @TayandYou, was supposed to be an innocent experiment: Microsoft wished to better understand conversation by having a bot learn from discussions with fellow Twitter users. The name “Tay” was based on “thinking about you,” and the bot was initially released in March of this year. Of course, the mission progressed from a center of learning to an all-out nightmare quite quickly. The bot started picking up slang, then racial slurs, and at one point even sided with Hitler. Tay was brought back a second time after Microsoft tried to clean it up, but that was a catastrophic failure as well. And this was just regular hatefulness; the rabbit hole goes deeper than that.

Spearphishing, a recent phenomenon, is a more advanced form of phishing that is tailored to a specific individual. Recently, ZeroFOX, a cyber-security firm that specializes in social media, developed a project for educational purposes that uses machine learning to target users. The system builds a profile of each target based on what they’ve tweeted, finds the best time to send a tweet, and sends it to the unsuspecting user. A writer from The Atlantic recently looked at the whole scenario, and the tweets that were sent were incredibly realistic. In their test, however, the researchers made the phishing link redirect to The Atlantic rather than a malicious site.
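ZeroFOX hasn’t published the internals of its system in detail here, so the following is only a minimal sketch of the general idea described above: model a target’s word habits (here with a crude bigram Markov chain over their tweets) and pick a posting hour from their activity history. The function names and heuristics are my own assumptions, not the actual method.

```python
import random
from collections import Counter, defaultdict

def best_posting_hour(timestamps):
    """Hypothetical heuristic: post at the hour the target tweets most often."""
    return Counter(t.hour for t in timestamps).most_common(1)[0][0]

def build_markov(tweets):
    """Word-level bigram model over a target's tweet history."""
    model = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate_lure(model, seed, max_words=12, rng=random):
    """Walk the bigram chain from a seed word to produce tweet-like text."""
    words = [seed]
    while len(words) < max_words and words[-1] in model:
        words.append(rng.choice(model[words[-1]]))
    return " ".join(words)
```

A real system would presumably use far richer models than bigrams, but even this toy version shows why machine-generated lures can sound like the target’s own timeline.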

SNAP_R

The scary thing to think about is where A.I. and machine-learning techniques collide with Twitter bots. Apparently, the success rate of this project (dubbed SNAP_R) averaged around 30% in testing, which is quite astounding. The bot was also programmed to take trending topics and unsuspecting users, mesh them, and generate this dangerous content. Given the rate at which A.I. is being developed, this is just scratching the surface of what’s to come. Of course we can talk about the positives that Twitter’s open API has afforded us, but the elephant in the room must be addressed at the same time, on a deeper level (no pun intended).

Even Twitter has been investing in artificial intelligence. The acquisition of Magic Pony back in June marks its third straight year of acquiring such firms. A.I. has a darker side as well: Elon Musk recently predicted that A.I. will be the future of cyber attacks. Today these systems operate at the low level of phishing schemes and hateful statements, but as the technology advances, so does the scope of its use. The future of Twitter bots and learning systems is a lot darker and more powerful than protest or humor: they beget new systems of influence and control, and slowly lead us to question what is real.

For some more resources on A.I. and Deep Learning, check out

Artificial Intelligence: The Future Is Now

DeepMind

DeepMind’s Lip Reading Abilities

What’s the Difference Between A.I., ML and DL?

Deep Learning: The Past, Present and Future of A.I.

Cooling Systems at Data Centers

One of the things that I was really looking forward to seeing before our site visit last Tuesday was the cooling systems. Describing the center of Milwaukee’s Internet in Tubes, Andrew Blum illustrates how the “double-hung windows were thrown wide open to the winter, the cheapest way to keep the machines cool” (Blum 2012: 23-24). Even though I was not expecting that to be the case at DataGryd, Blum’s description was, at least, quite surprising to me. As we were told by Peter Feldman (CEO) during the visit, and as they state on their website, at 60 Hudson Street waste heat from the gas turbines is captured by absorption chillers, and cooling for the datacenter floors is provided not only by condensed water from the cooling towers but also by the chilled water created from the gas turbine waste heat.

To make the cooling infrastructure work, though, new generators had to be installed in the building a few years ago. As Rich Miller explains in this article from Data Center Knowledge, Superstorm Sandy back in 2012 made everyone aware of the problems that come with placing power generators in the basements of buildings. Even though DataGryd did not experience any power shortage or flooding during the storm, the generators were moved to the 24th floor of 60 Hudson Street. Miller goes on to discuss both the cooling system and the cooling towers, which were added to the roof and each support 8,500 tons of condensed water. As it turns out, the point of all this is to deliver medium-voltage electricity, which requires less cabling and loses less power during distribution.

After the site visit I wanted to learn more about cooling efficiency in data centers, and I came across this article from ComputerWeekly.com, which compares two different techniques. On the one hand, air cooling has been the most common method since digital technologies improved and computers kept getting smaller and smaller. On the other hand, water cooling, which used to be the default means of keeping a computer cool, has in the last few years exposed the shortcomings of air cooling: lack of space, poor conductivity, and cost of maintenance, for instance.
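To get a feel for why water is so much more space-efficient as a coolant than air, here is a rough back-of-envelope comparison (the figures are standard textbook values near room temperature, not from the article):

```python
# Heat carried per unit volume per kelvin of coolant temperature rise.
AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K), specific heat of air
WATER_DENSITY = 1000.0   # kg/m^3
WATER_CP = 4186.0        # J/(kg*K), specific heat of water

air_capacity = AIR_DENSITY * AIR_CP        # J/(m^3*K)
water_capacity = WATER_DENSITY * WATER_CP  # J/(m^3*K)

ratio = water_capacity / air_capacity
print(f"Water carries roughly {ratio:.0f}x more heat per unit volume than air")
```

Roughly a factor of a few thousand, which is why a thin chilled-water loop can do the work of an enormous volume of moving air.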

I definitely recommend these articles to everyone interested in cooling mechanisms and data centers.

Sources Cited

Blum, Andrew. “The Map.” Tubes. New York City: HarperCollins, 2012. Web.

The #NotNormal Twitter Bot

For my cloud text experiment, I decided to create a Twitter bot using the class tutorial. Though I had a small issue in the beginning, I found it relatively easy to pull together using the template scripts, and you can see it running on my Twitter account at https://twitter.com/AndrewDunnLIS (an old account I created back during library school).

For this experiment, I decided to focus on the #NotNormal hashtag, used on Twitter and Facebook in connection with racist incidents and other examples of extreme behavior and opinions by Trump and his associates. The point of the hashtag is to prevent him, his policies, and his administration from being normalized by the media. I got the idea from a strategy guide published by Congressman Jerry Nadler (http://www.jerrynadler.com/news-clips/how-we-resist-trump-and-his-extreme-agenda). The guide is fascinating and very helpful, and I encourage everyone to check it out.

To construct the Twitter bot, I spent two days “listening” to and scraping tweets from the #NotNormal feed. In the end, I had about 260 non-duplicate tweets, which I collected in a text file that serves as the primary directory. Most of the tweets have multiple hashtags, and most link to news articles or infographics that explain what the authors find so outrageous, and not normal. In building this library, I decided to delete the addresses of specific people (@_____): I’d rather not get embroiled in specific feuds, if that’s possible; I’d rather just amplify political statements. In essence, then, the bot is retweeting things from the last 48 hours.

In completing this project, by the way, I was struck by how frequently people simply retweet things in their feed, without taking what I think is the necessary time to reformulate thoughts in their own words and to really process, digest, and reflect critically on the arguments they’re promoting. To a large degree, these are structural limitations built into the interface, most importantly the character limit, which people can only bypass by including links. Still, now that I’m doing the same thing using automation, it strikes me as strange that so much of our current model for public discourse is automatic, and literally equivalent to something a program could do. (On top of that, I have some reservations about a project in which I’m just retweeting links to news sources and stories that I have not vetted, so I doubt I’ll leave this running for long. Also, because the links concern breaking news, they’ll fall out of relevance shortly.)

I’m starting to think that it would be really interesting to map how a single tweet spreads through the twittersphere. I know there are programs that do that, and if anyone has recommendations I would love to see them, especially now that I have an API access key.
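The tutorial scripts themselves aren’t reproduced here, so the following is just a minimal sketch of the corpus-cleaning step described above: stripping @handles from scraped tweets, de-duplicating them, and picking one at random to post. The one-tweet-per-line file format and the function names are my own assumptions, not the tutorial’s.

```python
import random
import re

def scrub_mentions(text):
    """Remove @handles so the bot amplifies statements without
    dragging specific users into feuds (as described in the post)."""
    return re.sub(r"@\w+\s*", "", text).strip()

def load_corpus(path):
    """Read scraped tweets (assumed one per line) and de-duplicate them."""
    seen = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            cleaned = scrub_mentions(line.strip())
            if cleaned and cleaned not in seen:
                seen.append(cleaned)
    return seen

def pick_tweet(corpus, rng=random):
    """Select one cleaned tweet at random for the bot to post."""
    return rng.choice(corpus)
```

The actual posting step would hand `pick_tweet(...)` to the template script’s Twitter API call; everything above runs offline against the text file.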


Stories and Maps

On Tuesday, after our visit to the datacenter, Andrew Blum made a comment about stories that made me think back to our discussions of “mapping” networks. I can’t remember his exact words, but Blum was talking about the way that our guides presented the data center to us, saying that people who are immersed in that kind of work often don’t tell the story that we want to hear. His comment led me to another comment from the introduction to his book, Tubes, in the section where he discusses the “folk cartography” of Kevin Kelly. Blum explains how Kelly solicited the public to submit hand-drawn maps of their conceptions of the internet (“Internet Mapping Project”). A researcher, Mara Vanina Oses, from the University of Buenos Aires, analyzed these maps by their topologies (here’s her presentation). Blum notes that while these maps are perceptive, displaying our awareness of our experience in the network, none of them actually reference the physical machines and tubes that make up the network. And he rightly points out that ignorance about how these infrastructures actually work is dangerous: “the great global scourges of modern life are always made worse by not knowing. Yet we treat the Internet as if it were a fantasy” (7).

I see a similar formulation from Deleuze and Guattari, who make a distinction between mapping and tracing. According to these theorists, tracing only reinforces what we already know, what is in the unconscious, while a map attempts to engage with the real:

“The tracing has already translated the map into an image; it has already transformed the rhizome into roots and radicles. It has organized, stabilized, neutralized the multiplicities according to the axes of significance and subjectification belonging to it. It has generated, structuralized the rhizome, and when it thinks it is reproducing something else it is in fact only reproducing itself. That is why the tracing is so dangerous. It injects redundancies and propagates them” (A Thousand Plateaus 13).

We can draw an analogy here: Kelly’s “folk cartography” corresponds to D&G’s tracing, while the investigative work that Blum undertakes is like D&G’s mapping. All three theorists think that this tracing activity is potentially dangerous, because it neglects the workings of the actual infrastructure that exists outside the mind. But I think there’s something to be learned by looking at these citizen “tracings” of the network. While Blum rightly points out that these tracings are fantasies, they reveal the aesthetic experience of being on the network, and how certain users sense, at least on an abstract level, the scope of the system.

Although the maps from the “Internet Mapping Project” don’t explicitly indicate the cables and servers that make up the internet, they do reveal the sense of obscurity and incomprehensibility about the network. In their furious, numerous lines, or simple, abstract shapes, the users struggle to represent the connectivity of information. While people generally put themselves in the center of their own networks, meaning that they see the network as organized around their needs, they still suggest that the system is complicated, overwhelming, and in many ways beyond them.

[Screenshots: three hand-drawn maps from the Internet Mapping Project]

We can use these maps to reframe other topics we’ve discussed throughout the semester. What do they have to say about labor, conceptions of labor, or media and agency? I do think these maps deserve a second look because they tell stories about how people experience the internet. It may not be the full story, but it’s worth listening.

Internet Freedom & Network Vulnerability

NPR published an article today about the Internet’s vulnerability to government crackdowns. The discussion focuses on other countries, particularly those marked by political dissent and upheaval, and by totalitarian regimes. Of particular interest were those apps/websites (Facebook, Twitter, WhatsApp, YouTube, Skype, etc.) that governments either restricted or pressured into exposing/turning in activists. The overall conclusion was a tendency toward less free access to the Internet – “The report’s scope covers the experiences of some 88 percent of the world’s Internet users. And of all 65 countries reviewed, Internet freedom in 34 — more than half — has been on a decline over the past year.” The authors cite increased surveillance as the common first step, and the whole article has ominous overtones in light of the Trump ascendancy.

http://www.npr.org/sections/alltechconsidered/2016/11/14/500214959/internet-freedom-wanes-as-governments-target-messaging-social-apps

I know this topic aligns more closely with last week’s discussion, but I’ve been hearing helicopters outside for the last week and it’s starting to feel ominous. When my Internet connection started flickering, and then went out for a few minutes, I thought, “Oh, they’ve pulled the switch.” Knowing what the cables in my backyard look like (someone once cut them on purpose as some kind of retribution against my neighbor, and the whole building was cut off for days), it’s much more likely that (a la Blum) a squirrel was chewing a wire. But how hard would it be, if things get really bad, for them to shut down every IP address in Brooklyn and cripple resistance?

Interestingly, I’ve seen more guides for user encryption circulating on Facebook, like this one published to Medium (https://medium.com/@kappklot/things-to-know-about-web-security-before-trumps-inauguration-a-harm-reductionist-guide-c365a5ddbcb8#.xwfu8n794). This stuff seems important, but in the face of a real crackdown, as we’ve seen in our readings, they could simply disable the Internet in problem areas.

I don’t really know where I’m going with all of this; I’m just trying to think through the last week’s gloom and reflect on what’s possibly coming.

“Eye in the sky” (and on the streets)

I had already been making efforts to look for surveillance cameras and speed cameras at intersections and in the subway, cell towers, dishes, analog TV antennae, and various mystery boxes on telephone poles. Until Verizon convinced me to switch from copper to FIOS a couple of years ago (very grudgingly, and only after resisting for years and then deploying the “executive carpet bomb” complaint technique on Verizon officials – it works, btw) it was in my personal interest to be informed about POTS service in the city, and in particular the contents of two green-gray junction boxes on my street. After the switch, I began to look more carefully at where FIOS cables in the area were coming and going.

But Networks of New York—and this course in general—has made me much more observant of the wireless and wired infrastructural environment, both in the city and in the countryside, where cables and lines are easier to notice (I also keep an eye out for lightning rods on old barns). This morning on the bus I began to pick out a number of wireless surveillance devices that Burrington includes in her field guide: the flat square antennae that enable street and traffic surveillance cameras to talk to each other; microwave antennae; and some unremarkable-looking antennae bolted to streetlamps that didn’t seem to be connected to any other device. So apparently Myrtle Avenue is abuzz with data exchange at all times. But it’s a conversation that none of us can eavesdrop on—at least, not without the right tools.

What seemed before to be a basic, rather linear setup of cameras and transmitters at intersections now feels like a cloud of surveillance that I move through, at all times, except maybe (I hope) when I am inside my apartment sleeping. Still, it’s easy to tune out because it’s not very visible and doesn’t interfere with my daily activities. A quarter mile away, however, that’s not the case. For the past couple of years, portable towers with floodlights and cameras set up on the grounds of a large NYCHA complex by the NYPD make their presence very well-known at night. Generators emit a continuous rumble and the light undoubtedly intrudes into the rooms of surrounding apartments. The towers were put there presumably because the area had experienced some persistent low-level crime—and occasionally, shootings. But they seem to have substituted the limited one-way communication capacity of machines for any human presence, whether of police or security guards, or anyone who could connect with the community, and facilitate interpersonal networks.

Now, it’s reasonable to assume that many residents would gladly trade the intrusion of the light towers for a safer environment. But sometimes when I pass these towers I try to imagine what it’s like to live every day under the eye of normalized police surveillance: the visibility is surely as much of a deterrent as the lights and cameras themselves, or more. (And yet, by now these familiar structures might as well be permanent neighborhood fixtures.) I also try to imagine what it’s like to live in an environment where personal safety is often at risk; that’s also an intrusion. And I think about the difference between subjecting people to pervasive but low-visibility surveillance versus surveillance that is localized but meant to be noticed, and to encourage certain behaviors in response. Is one mode more insidious overall? Does one have more potential to erode public trust in legal and municipal institutions?

FFR

I was checking out manhole covers on my way to work and noticed one abbreviation (“DWS”) that I later searched for online. Here’s a handy Wikipedia page on manhole cover abbreviations in NYC, and some of them link to pages about the companies or public works divisions they refer to. But some covers only indicate which ironworks made them.

Two of them are rather intriguing:

  • BPB = Borough President Brooklyn
  • BPM = Borough President Manhattan

Do these lead to dedicated ducts for the respective borough presidents’ offices? And if so, are they access points for internet, cable, or something else? Vacuum tubes? Escape tunnels?

The Wiki page includes links to a couple of other useful sites – including Forgotten New York (an excellent source for all kinds of info on old NYC infrastructure, streets, buildings, and their vestigial traces.)

There’s also this website, with great photos.


Wifi-enabled chip allows paralyzed Rhesus monkey to walk again

The rear left leg is normal, the rear right leg is being moved through wifi enabled electrodes. Source: http://spectrum.ieee.org

Researchers have successfully used wifi-enabled electrodes to allow a monkey with nerve damage to walk again. From an article by the BBC:

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.”

The technology connects the portion of the brain in control of motor function to the extremities of a Rhesus monkey with spinal cord damage. Signals that would normally flow along the spine instead travel over a wireless link to electrodes, which then signal the nerves.

Implantable technology is nothing new, but this development is monumental because the device is replacing a motor function that happens almost instantaneously. And thanks to technology being created to transform a Bluetooth signal (low power demand) into a wifi signal (high power demand), implants in the brain and other parts of the body are capable of long and dependable operation.

I’m fascinated by this dislocation of electro-chemical signaling to electro-numerical signaling. Until now, chips implanted into the brain of a subject only collected data or controlled simple robotic limbs. This device re-enables the use of the subject’s own body. Of course, the brain chip that allows a monkey to move its legs is not interpreting brain functions; it is only bypassing the damaged spinal cord. The cliché question to ask is whether someone can hack the wifi-enabled transmitters.