Friday, October 18, 2013

Streaming and Presentism

While I was reading for the other paper I am doing this semester, Time and the Moving Image, I came across the historian François Hartog and the anthropologist Marc Augé, who both explore the concept of “presentism”. The article, “Temporalities of Video” by Christine Ross, related this concept to issues of time and temporality in video art, but it immediately made me think of last week’s class about protocol and cloud computing. Ross outlines how Augé argues that architectural ruins allow you to experience a sense of the passage of time, and that our current society is unable to produce ruins. Our buildings and structures are made for the present, to be replaced. And Hartog argues that “the prevailing regime of historicity characteristic of our times” (Ross, 85) is presentism: a turning of the present into the most important value, at the expense of any connection with the past and future.


Ross herself mentions “information technologies” in passing, referring to their “logic of instantaneity and transparency” (85) as another way of blocking ruin-making. I feel this relates closely to cloud computing, where files, software, and even hardware are provided remotely, and only exactly when the user needs them. One of the cloud’s methods of delivery, streaming, seems particularly redolent of “presentism”. You are not even accessing one file at a time, but receiving it byte by byte, only for it to disappear again. Absolutely no ruins are left in this case, not even a single file on some forgotten external hard-drive. Since I have jumped ship on my previous essay topic, I’m hoping to explore this idea in my essay instead.
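The no-ruins quality of streaming is easy to caricature in code. Here is a toy Python sketch (the function and variable names are my own invention, not any real streaming API): the stream is consumed chunk by chunk and nothing is ever written to disk, so once it has been played there is nothing left to go back to.

```python
import io

def stream_chunks(source, chunk_size=4):
    """Yield a stream's bytes piece by piece; nothing is kept once consumed."""
    while True:
        chunk = source.read(chunk_size)
        if not chunk:  # an empty read means the stream is exhausted
            break
        yield chunk

# A downloaded file persists as a whole object; a stream exists only in passing.
source = io.BytesIO(b"presentism")
played = [chunk for chunk in stream_chunks(source)]

# After consumption the source is spent - no "ruin" remains to reread.
leftover = source.read()
```

A downloaded file would be the opposite case: a durable copy sitting on a drive, a small digital ruin. The stream, by contrast, exists only in the moment of its delivery.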

Thursday, October 10, 2013

Forget the government, you should be watching out for me

Mark Andrejevic's reading was on an issue - "peer-to-peer surveillance" (212) - that I've admittedly only become really conscious of very recently. A couple of months ago the New Zealand Herald ran an article on a Brazilian "Boyfriend Tracker" app that caused a lot of controversy because it allowed "NSA-level spying on suspected cheaters" (thanks, International Business Times). Of course everyone I know uses Facebook and Google to "spy on" others; for example, a friend (who will remain nameless) recently showed me a shirtless photo of a boy she had a crush on from class, but didn't personally know. However, it wasn't until every article on this app referenced the US's National Security Agency that I began thinking about this kind of surveillance alongside governmental and corporate surveillance.
 
I’ve always thought that who the spy and the spied-upon were was the important factor in deciding whether a kind of surveillance should be taken seriously (hence why I hadn’t considered googling a new friend a form of surveillance before), but the “boyfriend tracker” case reveals that the level of surveillance - what one has access to - is perhaps more important. Many of the features of this app don’t seem too troubling, since you can already gain this kind of information about your boyfriend or girlfriend very easily, like obtaining a call history and text messages (check their phone while they’re in the bathroom, duh). But others, like turning on their phone to listen in on their environment, are apparently a step too far. It is factors like these that lead people to draw comparisons between individuals spying on people they know and the government spying on civilians.
 
I’m having a hard time deciding how sinister peer-to-peer surveillance is, especially when my friend’s trophy of a photo of her shirtless crush is paired against the malevolent figure of governmental surveillance. But perhaps it needs to be considered more on a case-by-case basis, and not every private individual has the best of intentions.

Wednesday, October 2, 2013

Techno-Realist Science Fiction

I’ve been thinking generally about two of last week’s readings, Alexander Hall’s article on technological utopianism in culture and David Golumbia’s piece on cyberlibertarianism, and how they relate to representations of technology in contemporary science fiction film. In particular I’ve had this nagging thought that perhaps, rather than being easily categorised as utopian or dystopian, as technophobic or technophilic, contemporary science fiction films are more techno-realist (a term Luke introduced me to in last week’s class).

Traditionally, or at least since the nuclear attacks on Japan, as Hall points out, films about technology have been overwhelmingly pessimistic. Hall’s article then argues that the general cultural mood is becoming more optimistic about technology and its future. But I feel like the overall trend of late (i.e. the last five years) is towards science fiction films that are neither overtly pessimistic about technology (as is the tradition) nor technophilic, with a cyberlibertarian viewpoint. That is to say, they are complicated!


One film in particular that I’m considering writing about for my research assignment is Duncan Jones’ Moon (2009). Moon is very similar to Stanley Kubrick’s 2001: A Space Odyssey; it’s really an homage of sorts. But it is where Moon departs from the earlier film that reveals how general attitudes to the future of technology, and how humans factor in, are perhaps changing. Where in Kubrick’s film a human fights off a homicidal computer, in Moon *spoilers* the target of the homicidal computer is actually a cyborg. What Moon questions is not whether we, the humans, will be safe around future technology (one of the main issues that viewers took away from 2001: A Space Odyssey), but rather the ethics and morals surrounding posthumanism, from a cyborg’s point of view. Maybe Moon is part of a new breed of sci-fi films, reflective of a new cultural attitude, that neither naively presents a perfect future nor predicts humanity’s doom. It’s more techno-realist.

Wednesday, September 25, 2013

Robot/Alien Bigotry

During last week’s class on “saving the human” Neal discussed how Sherry Turkle‘s book Alone Together was somewhat disturbing, because during her argument for the uniqueness and superiority of humans, the word robot could too easily be replaced with the name of a minority. The language was similar to a racist’s explanation of the superiority of the white race.

This particular “save the human” approach, or effect, is echoed in many science fiction films, where robots or other “non-humans” are discriminated against - although in the case of film it is most definitely intentional. It has long been acknowledged that science fiction often acts as political allegory. Just one example is District 9, where the similarity between the treatment and slums of the refugee aliens (or “prawns”) and those of black South Africans is no accident; in fact, it is one of the main points of the film. Of course aliens are not the same as robots, and so District 9 may seem a somewhat irrelevant example, but their function in science fiction, namely as “non-humans”, aligns them. Writing on the subject, such as De Witt Douglas Kilgore’s “Difference Engine: Aliens, Robots, and Other Racial Matters in the History of Science Fiction”, discusses the two together, sometimes even interchangeably, in their role as a metaphor for race. But robotics and technology do turn up in District 9 too, with the aliens’ weapons, which are intimately connected to their DNA, acting as their own post-alienism! Here it isn’t ubiquitous media and advanced technology that poses the threat, but the humans themselves (an issue we’ve also briefly discussed in class).


But of course as the kinds of artificial intelligence (or alien life!) that are represented in these films become a reality, the discrimination will stop being a metaphor and become the thing itself. Hopefully, a few hundred years from now we won’t be robot bigots – but unfortunately our track record is not promising.

Thursday, September 19, 2013

Turing v Searle

Matthew Bernius briefly mentioned John Searle's famous "Chinese Room" thought experiment in his article "Manufacturing and Encountering “Human” in the Age of Digital Reproduction", which led to a discussion with Wanda and Adam in our last class on “weak” and “strong” artificial intelligence. When I took Compsci100, the most basic, introductory course that the Computer Science department offers, the artificial intelligence topic focused on investigating the two categories of AI: “weak” AI, aligned with Alan Turing and his Turing Test; and “strong” AI, aligned with John Searle and the Chinese Room. The Turing test goes as follows: a human judge converses with two “others”. One is a human, one is a computer. If the judge cannot tell the difference between the human and the machine, then the machine is intelligent. In this model thought processes do not matter; what matters is that the computer acts as if it were thinking. Searle counters this with the argument that thought processes do matter, his Chinese Room showing that he could appear to know Chinese when really he is just performing a series of indexing - so not having true knowledge at all. Searle’s position is that AI needs to not just appear to think like humans, but to actually think like humans.
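Searle’s “series of indexing” can be caricatured in a few lines of Python (the rulebook and phrases here are entirely my own invention, not from any of the readings): the room simply pairs input symbols with output symbols, with no understanding anywhere in between.

```python
# A caricature of Searle's Chinese Room: the "rulebook" pairs input
# symbols with output symbols, and that is all the room ever does.
RULEBOOK = {
    "你好吗？": "我很好。",      # "How are you?" -> "I am well."
    "你会思考吗？": "当然会。",  # "Can you think?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Pure indexing: look the input up, return the paired output.
    # Nothing here "knows" Chinese, yet the replies look fluent.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again"

reply = chinese_room("你好吗？")
```

A Turing-test judge only ever sees the replies, so in principle a large enough rulebook could pass the test; Searle’s point is that passing would prove nothing about genuine understanding.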


Obviously the language of the course immediately signifies Team Searle as better; after all, “weak” < “strong”. And so I just assumed that any self-respecting roboticist, with any intent to further the field, follows Searle. However, in class I suddenly wondered if that’s perhaps an incorrect assumption. I feel like all the fears, all the dystopian scenarios of robot uprisings that humanity has surrounding artificial intelligence, could only be possible with Searle’s “strong” AI. Perhaps Turing’s “weak”, imitation AI is a safer route to take. Not having fully autonomous robots doesn’t have to be a bad thing. Maybe here, “weak” > “strong”.

Thursday, August 29, 2013

Clearing up on Luhmann, Wiener and Posthumanism

Thinking about last week’s class and readings, I have to admit that I still haven’t gotten my head 100% around the more important ideas. On their own I understand (I think!) the concepts of Luhmann’s social systems theory, Wiener’s cybernetics and posthumanism in general. But how social systems theory connects to Wiener’s cybernetics, and Wiener’s cybernetics to posthumanism, I’m not exactly sure, or at least blurry on the details. I understand that both social systems theory and cybernetics explore systems. Luhmann discussed how systems are operationally closed, and I know that cybernetics involves systems with a closed signalling loop. The similarities are already very clear. But when Neal talked about the roots of the word cybernetics, which comes from the Greek for a boat’s steersman, I began (no pun intended) to get lost. What particularly confused me was when Neal said that the most important element of the arrangement of steersman, boat, and water was the water. I definitely missed something.


We then didn’t get far enough, or have enough time, in class to discuss how cybernetics is related to posthumanism. I have made my cursory scroll through the cybernetics Wikipedia page to see if there was some broad, overly simplified explanation, but obviously I feel like I’ve got some further enlightening to do. The self-organisation and closed signalling loops of cybernetics explain the systems of robots very well, but I think that’s only the surface of their connection. That or I’m over-complicating things. But considering that I want to write about posthuman representations in science fiction films as part of my research assignment, I feel I need to get a much better grasp on the concepts of the technology, and exactly how it relates to its broader, connected theories.

Thursday, August 22, 2013

Science Fiction as a Humanist Technology Pessimist

Since I want to write about science fiction films for my research assignment, I tend to relate every class reading back to science fiction. And so when Hans-Georg Moeller discussed the two sides of the humanist approach to society and technology (particularly posthumanism) at the beginning of his explanation of social systems theory, I was struck by its parallels to certain discussions of science fiction. Moeller writes that there are those pessimistic “in the face of waning humaneness” and those optimists who embrace technology’s “human prospects” (4). This brought to mind Daniel Dinello’s book that I am (probably) using for my book review assignment, Technophobia! Science Fiction Visions of Posthuman Technology, where Dinello discusses how scientists working in the field tend to be blissful optimists, whereas science fiction is pessimistically attached to the techno-dystopia.

Other research I have come across also stresses the techno-pessimism surrounding science fiction. Noga Applebaum accuses science fiction of endorsing a “technophobic agenda” to young adults - of essentially creating future technophobes! And then of course there is Susan Sontag’s famous essay “The Imagination of Disaster”, which claimed that “science fiction films are not about science. They are about disaster”. This bold claim was very useful for studying science fiction in new ways, although I would argue that it’s about both: the disaster of science. So then scientists are Moeller’s technology optimists, and science fiction the pessimists, who are strangely completely enamoured with the subject of their pessimism.

Interestingly, the most explored technological issue in science fiction, and the most common cause of technological dystopias, is posthumanism, which Neal says we will be arriving at in tomorrow’s class via social systems theory. So as a final aside: if we are sticking to the idea that science fiction is a “humanist” pessimism, what does it say about social systems theory, and how would the two relate?

Thursday, August 15, 2013

Ubiquitous Media in Sci Fi

I've been thinking about how we all came to have an understanding of what ubiquitous media was before we started this course, and for those like me who haven't taken that many media-focused papers, it's usually through (among other things, of course) film and TV. The genre which involves representations of new technologies and media the most is science fiction, which led me to do some reading on the subject.

Often when academics write about the technologies, real or imagined, in science fiction films, they discuss them as metaphors for something else, usually something to do with collective social anxieties. In the rather excellent Liquid Metal: The Science Fiction Film Reader, editor Sean Redmond writes, “if you want to know what really aches a culture at any given time don’t go to its art cinema, or its gritty social realist texts, but go to its science fiction” (x). The rest of the text focuses on the broader metaphorical/social meanings behind science fiction representations of technology and media. This approach is incredibly important, but I noticed very little writing focused on the technology itself and its environment, and what it says about how we view and understand new media and technologies.

More than ever, as the technological fantasies of these films become a reality, science fiction reflects our views and anxieties surrounding literal technology and media, and even more importantly, it shapes our views too. Admittedly, whenever the subject of surveillance or dystopia is brought up in this course my mind flicks for a moment to 2001: A Space Odyssey’s HAL. To cheer myself up after this, I like to think about Iron Man's Tony Stark and his friendly anthropomorphic fire extinguisher. The ability of films to shape how individuals feel about new media and technology is powerful, and the hows and whys of it are something I would be interested in exploring for my research essay.

Thursday, August 8, 2013

The Humble Kindle

As I was reading Paul Dourish’s chapter ‘Getting in Touch’, my Kindle kept coming to mind, and by the time I had finished I was convinced it is a wonderful example of an intermediate tool that straddles the worlds of the traditional box computer and that of invisible computing - of technology that moves out into the world.

Dourish writes that “the move back and forth between electronic and paper forms is not only inconvenient but also impoverished” (34), which led Pierre Wellner to wonder “if there wasn’t a way to combine the two worlds more effectively by augmenting the physical world with computational properties” (34). In a reverse of this statement, I believe that the Kindle augments the computer with physical properties. It appears like a tablet, its computer-ness undeniable (and so not falling into the “invisible computing” category), yet in crucial ways it is very much like a paperback. It is the same width and height as the average paperback (its depth more similar to a novella than a novel) and uses “electronic ink” rather than the computer monitor’s backlit display. It has the easy portability of a book and is not as delicate as other electronic tools. Importantly, it also serves a single task, reading, and by being associated only with reading its physical book-like qualities are enhanced.

The seam between the physical and virtual is not as invisible here as it is in Durrell Bishop‘s marble answering machine, but unlike the answering machine it is a piece of new technology which is widely used and has actually proven its place and need. It is easy to appreciate because its functions (fitting many books in one book-shaped tablet) are obviously beneficial. It does not appear to be trying to sell us something for the sake of cool technology. Because of this it is rather humble and can be taken for granted - but it shouldn’t be, for it straddles the old and the new.


I was going to write about my early essay research for this post, but instead the Kindle’s gotten me distracted!

Thursday, August 1, 2013

Media Studies 2.0 and Video Game Studies

William Merrin's piece, ‘Media Studies 2.0: upgrading and open-sourcing the discipline’, examines how media studies academics have to reacquaint themselves with what they are studying and how they teach it. The discussion of this struggle reminded me of a more specific branch of media studies: video game studies. This discipline was remarkable in its early days for the general lack of video game skills among those writing about the medium. It was a serious problem, and one not overlooked by the academics themselves. As Merrin said, they knew something was going on but were not young enough to know exactly what it was. This is in large part remedied now, due to continuing academic evolution within the field, and perhaps because those who grew up with video gaming as a relatively more mainstream activity are now old enough to be academics themselves - and those early trailblazers have had enough free time in the last 20 years to expand their GTA skills.

But unlike video games, one problem for media studies is that one can’t be born late enough to be native to all changes in media, as they are occurring so rapidly. The leaps in gaming have been more straightforward, less fundamental: improvements in graphics don’t smash the fundamentals of the medium. But perhaps a more crucial problem, and what seems to be a major gripe of Merrin’s, is that media studies falls back on outdated concepts and categories, while continuing to ignore, to varying degrees, “the engineering or scientific principles of its media” and technology. Video game studies, on the other hand, does not have outdated concepts and categories to fall back on, because none exist for the discipline - it has no history. And its academics have never ignored the technology that underlies games. In this way media studies could maybe learn something from the less constrained, less conservative approach of video game studies.