Wednesday, September 25, 2013

Robot/Alien Bigotry

During last week’s class on “saving the human,” Neal discussed how Sherry Turkle’s book Alone Together was somewhat disturbing: in her argument for the uniqueness and superiority of humans, the word “robot” could too easily be replaced with the name of a minority. The language was uncomfortably similar to a racist’s explanation of the superiority of the white race.

This particular “save the human” approach, or effect, is echoed in many science fiction films, where robots or other “non-humans” are discriminated against - although in the case of film it is most definitely intentional. It has long been acknowledged that science fiction often acts as political allegory. Just one example is District 9, where the similarity between the treatment and slums of the refugee aliens (or “prawns”) and those of black South Africans under apartheid is no accident; in fact it is one of the main points of the film. Of course aliens are not the same as robots, so District 9 may seem a somewhat irrelevant example, but their function in science fiction, namely as “non-humans,” aligns them. Much writing on the subject, such as De Witt Douglas Kilgore’s essay “Difference Engine: Aliens, Robots, and Other Racial Matters in the History of Science Fiction,” discusses the two together, sometimes even interchangeably, in their role as a metaphor for race. But robotics and technology turn up in District 9 too, with the aliens’ weapons, which are intimately connected to their DNA, acting as their own post-alienism! Here it isn’t ubiquitous media and advanced technology that pose the threat, but the humans themselves (an issue we’ve also briefly discussed in class).


But of course, as the kinds of artificial intelligence (or alien life!) represented in these films become a reality, the discrimination will stop being a metaphor and become the thing itself. Hopefully, a few hundred years from now we won’t be robot bigots – but unfortunately our track record is not promising.

Thursday, September 19, 2013

Turing v Searle

Matthew Bernius briefly mentioned John Searle's famous "Chinese Room" thought experiment in his article "Manufacturing and Encountering “Human” in the Age of Digital Reproduction", which led to a discussion with Wanda and Adam in our last class on “weak” and “strong” artificial intelligence. When I took Compsci100, the most basic, introductory course that the Computer Science department offers, the artificial intelligence topic focused on the two categories of AI: “weak” AI, aligned with Alan Turing and his Turing Test, and “strong” AI, aligned with John Searle and the Chinese Room. The Turing test goes as follows: a human judge converses with two “others”. One is a human, one is a computer. If the judge cannot tell the difference between the human and the machine, then the machine is intelligent. In this model thought processes do not matter; what matters is that the computer acts as if it were thinking. Searle counters with the argument that thought processes do matter: his Chinese Room shows that he could appear to know Chinese when really he is just following a series of indexed rules, and so has no true knowledge at all. Searle’s criterion is that AI must not just appear to think like humans, but actually think like humans.
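
To make Searle’s point concrete: the room is essentially a lookup table. Here is a minimal Python sketch of that idea; the toy rule book and the chinese_room function are my own illustration, not anything from Searle’s paper or the course.

```python
# A sketch of Searle's Chinese Room as pure symbol lookup.
# The rule book is a hypothetical toy example; a real system would need
# vastly more rules, but the principle is the same.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
}

def chinese_room(symbols: str) -> str:
    """Respond by matching the incoming symbols against the rule book.

    No step here involves understanding Chinese: the function only maps
    one string of shapes to another, just as the person in Searle's room
    matches characters using an index of rules.
    """
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks fluent from outside the room
```

On these canned inputs the room looks conversational, which is all Turing’s test asks for; that nothing inside understands a word of Chinese is exactly Searle’s objection.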


Obviously the language of the course immediately signifies Team Searle as better; after all, “weak” < “strong”. And so I just assumed that any self-respecting roboticist, with any intent to further the field, follows Searle. However, in class I suddenly wondered if that’s an incorrect assumption. I feel like all the fears, all the dystopian scenarios of robot uprisings that humanity harbors about artificial intelligence, could only be possible with Searle’s “strong” AI. Perhaps Turing’s “weak,” imitation AI is a safer route to take. Not having fully autonomous robots doesn’t have to be a bad thing. Maybe here, “weak” > “strong”.