Matthew Bernius briefly mentioned John Searle's
famous "Chinese Room" thought experiment in his article
"Manufacturing and Encountering “Human” in the Age of Digital Reproduction",
which led to a discussion with Wanda and Adam in our last class about “weak” and “strong”
artificial intelligence. When I took Compsci100, the most basic introductory
course the Computer Science department offers, the unit on artificial
intelligence focused on these two categories: “weak” AI, aligned with Alan
Turing and his Turing Test, and “strong” AI, aligned with John Searle and the
Chinese Room. The Turing test goes as follows: a human judge converses with two
hidden “others”, one a human and one a computer. If the judge cannot tell which
is which, the machine counts as intelligent. In this model, internal thought
processes do not matter; what matters is that the computer behaves as if it
were thinking.
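Because Turing’s criterion is purely behavioral, it is easy to caricature in a
few lines of code. Here is a minimal sketch of the imitation game in Python;
the Human and Machine classes, the canned questions, and the coin-flip judge
are all invented stand-ins of mine, not anything from Turing:

    import random

    class Human:
        def reply(self, question):
            # Stand-in for a real person typing an answer.
            return "Honestly, I'd have to think about that one."

    class Machine:
        def reply(self, question):
            # A program trying to pass as that person.
            return "Honestly, I'd have to think about that one."

    def imitation_game(judge_guess, questions):
        """One round: a judge converses with two hidden parties,
        one human and one machine, then tries to name the machine."""
        pair = [Human(), Machine()]
        random.shuffle(pair)                      # hide which is which
        players = dict(zip("XY", pair))
        transcript = {
            label: [(q, player.reply(q)) for q in questions]
            for label, player in players.items()
        }
        # The judge sees only behavior: the transcript, nothing else.
        guess = judge_guess(transcript)
        machine = next(l for l, p in players.items() if isinstance(p, Machine))
        return guess == machine

    questions = ["Do you ever dream?", "What does coffee smell like?"]
    # A judge who can't tell the difference does no better than a coin flip.
    wins = sum(imitation_game(lambda t: random.choice(sorted(t)), questions)
               for _ in range(1000))
    print(f"machine identified in {wins}/1000 rounds")

The point of the caricature: the judge only ever sees the transcript, so if the
guesses hover around chance, behavior alone has settled the question.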
Searle counters that thought processes do matter. His Chinese Room thought
experiment shows that he could appear to know Chinese while really just
indexing into a book of rules for shuffling symbols he does not understand, so
he would have no true knowledge at all. On Searle’s view, AI needs to not just
appear to think like humans, but to actually think like humans.
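The mechanical character of the room is easy to make concrete. Here is a
minimal sketch in Python, where the tiny rule book is my own toy stand-in for
the enormous instruction set Searle imagines:

    # A toy "rule book": incoming symbol strings mapped to outgoing ones.
    # The person in the room only matches shapes; the English glosses in
    # the comments are invisible to them.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(message: str) -> str:
        """Return a fluent-looking reply by pure lookup.
        Nothing here understands Chinese: it is indexing, not knowledge."""
        return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # looks like understanding; is only lookup

From the outside, the room passes a Turing-style test for Chinese; from the
inside, not a single symbol means anything to the person doing the lookups,
which is exactly Searle’s point.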
Obviously the language of the course immediately frames Team Searle as the
better camp; after all, “weak” < “strong”. And so I just assumed that any
self-respecting roboticist, with any intent to further the field, follows
Searle. However, in class I suddenly wondered whether that assumption is
wrong. All the fears, all the dystopian scenarios of robot uprisings that
humanity attaches to artificial intelligence, seem possible only with Searle’s
“strong” AI. Perhaps Turing’s “weak”, imitation AI is the safer route to take.
Not having fully autonomous robots doesn’t have to be a bad thing. Maybe here,
“weak” > “strong”.