I have always found it odd that the proposition that a human should judge a remote teletext communicator on human terms could be considered anything other than obvious, a non-thought. For if it were so obvious, why bother giving it a name and attributing it to only one person?
Besides, the notion itself raises several problems:
- One of the most pressing problems is that the Turing test asks the tester, or subject, to lie to themselves. This is the case when a person knowingly engages a machine in order to evaluate whether it can pass the Turing test. Judging the remote communicator against different human appraisal criteria becomes much akin to a proof by contradiction and is thus nonconstructive at best. In other words, it does nothing to help define formal distinguishing patterns separating humans from machines.
- As noted earlier, the Turing test falls short of the idea of a test (a real one this time) in which lack of autonomy would be one of the first signs that the remote interlocutor is a machine and not a human.
- Another problem is the undue emphasis placed on this non-thought, in the form of things like the Loebner Prize, which raises the problems discussed earlier.
In the end, it is fortunate that with the kind of applications we are interested in, and which define AI for us, we need not care about the notion of the so-called "Turing test".