Friday, October 18, 2013

Streaming and Presentism

While I was reading for the other paper I am doing this semester, Time and the Moving Image, I came across the historian François Hartog and the anthropologist Marc Augé, who both explore the concept of “presentism”. The article, “Temporalities of Video” by Christine Ross, relates this concept to issues of time and temporality in video art, but it immediately made me think of last week’s class about protocol and cloud computing. Ross outlines how Augé argues that architectural ruins allow you to experience a sense of the passage of time, and that our current society is unable to produce ruins: our buildings and structures are made for the present, to be replaced. Hartog, meanwhile, argues that “the prevailing regime of historicity characteristic of our times” (Ross, 85) is presentism, a turning of the present into the most important value at the expense of a connection with the past and future.


Ross herself mentions “information technologies” in passing, referring to their “logic of instantaneity and transparency” (85) as another way of blocking ruin-making. I feel this relates closely to cloud computing, where files, software, and even hardware are provided remotely, and only exactly when the user needs them. One of the cloud’s delivery methods, streaming, seems particularly redolent of “presentism”. You are not even accessing a whole file at a time, but receiving it byte by byte, only for it to disappear again. Absolutely no ruins are left in this case, not even a single file on some forgotten external hard drive. Since I have jumped ship on my previous essay topic, I’m hoping to explore this idea in my essay instead.
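
In the meantime, to make the “byte by byte” idea concrete for myself, here is a toy Python sketch, entirely my own illustration (and nothing like how real streaming protocols actually work), of the difference between downloading, which leaves a ruin, and streaming, which doesn’t:

    # A toy contrast between downloading, which leaves a "ruin" on disk,
    # and streaming, which keeps only the present moment in memory.
    # (fake_stream and the sample bytes are invented for illustration.)

    def fake_stream(data: bytes, chunk_size: int = 4):
        """Yield the media a few bytes at a time, like a streaming server."""
        for i in range(0, len(data), chunk_size):
            yield data[i:i + chunk_size]

    def download(data: bytes, path: str) -> None:
        # Downloading accumulates every chunk into a persistent file:
        # an artefact that can later be forgotten on some hard drive.
        with open(path, "wb") as f:
            for chunk in fake_stream(data):
                f.write(chunk)

    def stream(data: bytes) -> None:
        # Streaming holds only the current chunk; once "played" it is
        # discarded. Nothing persists afterwards, no ruin remains.
        for chunk in fake_stream(data):
            current_moment = chunk   # stand-in for decoding/playing
            del current_moment       # the present is immediately gone

    stream(b"a film delivered only ever in the present")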

Thursday, October 10, 2013

Forget the government, you should be watching out for me

Mark Andrejevic's reading was on an issue - "peer-to-peer surveillance" (212) - that I've admittedly only become really conscious of very recently. A couple of months ago the New Zealand Herald ran an article on a Brazilian "Boyfriend Tracker" app that caused a lot of controversy because it allows "NSA-level spying on suspected cheaters" (thanks, International Business Times). Of course everyone I know uses Facebook and Google to "spy on" others; for example, a friend (who will remain nameless) recently showed me a shirtless photo of a boy she had a crush on from class, but didn't personally know. However, it wasn't until every article on this app referenced the US's National Security Agency that I began thinking about this kind of surveillance alongside governmental and corporate surveillance.
 
I’ve always thought that the spy and the spied-upon were the important factors in deciding whether a kind of surveillance should be taken seriously (hence why I hadn’t considered googling a new friend a form of surveillance before), but the “boyfriend tracker” case reveals that the level of surveillance, what one has access to, is perhaps more important. Many of the features of this app don’t seem too troubling, because you can already gain this kind of information about your boyfriend or girlfriend very easily, like obtaining a call history and text messages (check their phone while they’re in the bathroom, duh). But others, like turning on their phone to listen in on their environment, are apparently a step too far. It is features like these that lead people to draw comparisons between individuals spying on people they know and the government spying on civilians.
 
I’m having a hard time deciding how sinister peer-to-peer surveillance is, especially when my friend’s trophy photo of her shirtless crush is set against the malevolent figure of governmental surveillance. But perhaps it needs to be considered more on a case-by-case basis; after all, not every private individual has the best of intentions.

Wednesday, October 2, 2013

Techno-Realist Science Fiction

I’ve been thinking generally about two of last week’s readings, Alexander Hall’s article on technological utopianism in culture and David Golumbia’s piece on cyberlibertarianism, and how they relate to representations of technology in contemporary science fiction film. In particular I’ve had this nagging thought that perhaps, rather than being easily categorised as utopian or dystopian, as technophobic or technophilic, contemporary science fiction films are more techno-realist (a term Luke introduced me to in last week’s class).

Traditionally, or at least since the nuclear attacks on Japan, as Hall pointed out, films about technology have been overwhelmingly pessimistic. Hall’s article then argues that the general cultural mood is becoming more optimistic about technology and its future. But I feel like the overall trend of late (i.e. the last five years) is to have science fiction films that are not overtly pessimistic about technology (as is the tradition), but nor are they technophilic, with a cyberlibertarian viewpoint. That is to say, they are complicated!


One film in particular that I’m considering writing about for my research assignment is Duncan Jones’ Moon (2009). Moon is very similar to Stanley Kubrick’s 2001: A Space Odyssey; it’s really an homage of sorts. But it is where Moon departs from the earlier film that reveals how general attitudes to the future of technology, and how humans factor in, are perhaps changing. Where in the original a human fights off a homicidal computer, in Moon *spoilers* the figure at the computer’s mercy is actually a clone. What Moon questions is not whether we, the humans, will be safe around future technology (one of the main issues that viewers took away from 2001: A Space Odyssey), but rather the ethics and morals surrounding posthumanism, from the clone’s point of view. Maybe Moon is part of a new breed of sci-fi films, and so reflective of a new cultural attitude, that don’t naively present a perfect future but don’t predict humanity’s doom either. It’s more techno-realist.

Wednesday, September 25, 2013

Robot/Alien Bigotry

During last week’s class on “saving the human” Neal discussed how Sherry Turkle‘s book Alone Together was somewhat disturbing, because in her argument for the uniqueness and superiority of humans, the word robot could too easily be replaced with the name of a minority. The language was similar to a racist’s explanation of the superiority of the white race.

This particular “save the human” approach, or effect, is echoed in many science fiction films, where robots or other “non-humans” are discriminated against - although in the case of film it is most definitely intentional. It has long been acknowledged that science fiction often acts as political allegory. Just one example is District 9, where the similarity between the treatment and slums of the refugee aliens (or “prawns”) and those of black South Africans is no accident; in fact it is one of the main points of the film. Of course aliens are not the same as robots, so District 9 may seem a somewhat irrelevant example, but their function in science fiction, namely as “non-humans”, aligns them. Much writing on the subject, such as De Witt Douglas Kilgore’s “Difference Engine: Aliens, Robots, and Other Racial Matters in the History of Science Fiction”, discusses the two together, sometimes even interchangeably, in their role as a metaphor for race. But robotics and technology turn up in District 9 too, with the aliens’ weapons, which are intimately connected to their DNA, acting as their own post-alienism! Here it isn’t ubiquitous media and advanced technology that poses the threat, but the humans themselves (an issue we’ve also briefly discussed in class).


But of course as the kinds of artificial intelligence (or alien life!) represented in these films become a reality, the discrimination will stop being a metaphor and become the thing itself. Hopefully, a few hundred years from now we won’t be robot bigots – but unfortunately our track record is not promising.

Thursday, September 19, 2013

Turing v Searle

Matthew Bernius briefly mentioned John Searle's famous "Chinese Room" thought experiment in his article "Manufacturing and Encountering “Human” in the Age of Digital Reproduction", which led to a discussion with Wanda and Adam in our last class on “weak” and “strong” artificial intelligence. When I took Compsci100, the most basic, introductory course that the Computer Science department offers, the artificial intelligence topic focused on investigating the two categories of AI: “weak” AI, aligned with Alan Turing and his Turing Test, and “strong” AI, aligned with John Searle and the Chinese Room. The Turing Test goes as follows: a human judge converses with two “others”, one a human, one a computer. If the judge cannot tell the difference between the human and the machine, then the machine is intelligent. In this model thought processes do not matter; what matters is that the computer acts as if it were thinking. Searle counters this with the argument that thought processes do matter, his Chinese Room showing that he could appear to know Chinese when really he is just following a set of rules for manipulating symbols – so not having true knowledge at all. Searle’s point is that AI would need to not just appear to think like humans, but actually think like humans.
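
To remind myself why the Chinese Room argument is so persuasive, here is a toy Python sketch, entirely my own illustration (the phrases in the “rule book” are my own invented examples): the room answers by lookup alone, so the replies can look fluent while nothing inside the room understands a word.

    # A toy Chinese Room: the "person" inside matches incoming symbols
    # against a rule book and copies out the prescribed reply. The answers
    # can look fluent even though nothing in the room understands Chinese.

    RULE_BOOK = {
        "你好": "你好！",            # "hello" -> "hello!"
        "你会说中文吗": "会一点。",  # "do you speak Chinese?" -> "a little."
    }

    def chinese_room(symbols: str) -> str:
        # Pure indexing: only the shapes of the symbols are consulted,
        # never their meaning.
        return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please repeat"

    # To an outside judge (Turing's position) the reply might pass as
    # understanding; inside (Searle's position) it is only table lookup.
    print(chinese_room("你好"))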


Obviously the language of the course immediately frames Team Searle as better; after all, “weak” < “strong”. And so I just assumed that any self-respecting roboticist, with any intent to further the field, follows Searle. However, in class I suddenly wondered if that’s perhaps an incorrect assumption. I feel like all the fears, all the dystopian scenarios of robot uprisings that humanity attaches to artificial intelligence, could only be possible with Searle’s “strong” AI. Perhaps Turing’s “weak”, imitation AI is a safer route to take. Not having fully autonomous robots doesn’t have to be a bad thing. Maybe here, “weak” > “strong”.

Thursday, August 29, 2013

Clearing up on Luhmann, Wiener and Posthumanism

Thinking about last week’s class and readings, I have to admit that I still haven’t gotten my head 100% around the more important ideas. On their own I understand (I think!) the concepts of Luhmann’s social systems theory, Wiener’s cybernetics and posthumanism in general. But how social systems theory connects to Wiener’s cybernetics, and Wiener’s cybernetics to posthumanism, I’m not exactly sure, or at least blurry on the details. I understand that both social systems theory and cybernetics explore systems. Luhmann discussed how systems are operationally closed, and I know that cybernetics involves systems with a closed signalling loop. The similarities are already very clear. But when Neal talked about the roots of the word cybernetics, with “cyber” coming from the Greek for a boat’s steersman, I began (no pun intended) to get lost. What particularly confused me was when Neal said that the most important element of the arrangement of steersman, boat, and water was the water. I definitely missed something.
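
Still, here is my attempt to make the loop concrete: a toy Python sketch (my own guess at the point, so take it with a grain of salt, and all the numbers are made up) in which the water constantly pushes the boat off course, and it is exactly this disturbance that gives the corrective feedback loop something to do. Which might be why the water matters most.

    import random

    # A toy cybernetic loop: the steersman senses how far the boat has
    # drifted off course and feeds a correction back into the rudder.
    # The water's current is the disturbance; without it, the closed
    # loop would have nothing to correct.

    target_heading = 0.0   # the desired course, in degrees
    heading = 5.0          # the boat starts slightly off course
    gain = 0.5             # how strongly the steersman corrects

    for step in range(10):
        water_push = random.uniform(-1.0, 1.0)  # the water perturbs the boat
        error = heading - target_heading        # sensed deviation (feedback)
        correction = -gain * error              # the steersman's response
        heading += correction + water_push      # the closed signalling loop
        print(f"step {step}: heading {heading:+.2f} degrees")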


We then didn’t get far enough, or have enough time, in class to discuss how cybernetics is related to posthumanism. I have made my cursory scroll through the cybernetics Wikipedia page to see if there was some broad, overly simplified explanation, but clearly I have some further enlightening to do. The self-organisation and closed signalling loops of cybernetics explain the systems of robots very well, but I think that’s only the surface of their connection. That, or I’m over-complicating things. But considering that I want to write about posthuman representations in science fiction films as part of my research assignment, I feel I need to get a much better grasp on the concepts of the technology, and exactly how it relates to its broader, connected theories.

Thursday, August 22, 2013

Science Fiction as a Humanist Technology Pessimist

Since I want to write about science fiction films for my research assignment, I tend to relate every class reading back to science fiction. And so when Hans-Georg Moeller discussed the two sides of the humanist approach to society and technology (particularly posthumanism) at the beginning of his explanation of social systems theory, I was struck by its parallels to certain discussions of science fiction. Moeller writes that there are those pessimistic “in the face of waning humaneness” and those optimists who embrace technology’s “human prospects” (4). This brought to mind Daniel Dinello’s book that I am (probably) using for my book review assignment, Technophobia! Science Fiction Visions of Posthuman Technology, where Dinello discusses how scientists working in the field tend to be blissful optimists, whereas science fiction is pessimistically attached to the techno-dystopia.

Other research I have come across also stresses the techno-pessimism surrounding science fiction. Noga Applebaum accuses science fiction of endorsing a “technophobic agenda” to young adults, of essentially creating future technophobes! And then of course there is Susan Sontag’s famous essay “The Imagination of Disaster”, which claimed that “science fiction films are not about science. They are about disaster”. This bold claim was very useful for studying science fiction in new ways, although I would argue that they are about both: the disaster of science. So then scientists are Moeller’s technology optimists, and science fiction the pessimists, who are strangely completely enamoured with the subject of their pessimism.

Interestingly, the most explored technological issue in science fiction, and the most common cause of technological dystopias, is posthumanism, which Neal says we will be arriving at in tomorrow’s class via social systems theory. So, as a final aside, if we are sticking to the idea that science fiction is a “humanist” pessimism, what does it say about, and how does it relate to, social systems theory?