Been a long while since I've posted on here - 4 months according to the black box recorder. Regrettably (or unregrettably as tastes vary) this will be another short one.
In the brilliant film Ex Machina, Caleb and Nathan have this exchange about a nascent AI:
Caleb - "She made a joke".
Nathan - "Right. When she threw your line back at you. About being interested to see what she'd choose. Right, I noticed that, too."
Caleb - "Yeah, that got me thinking, you know. In a way, that's the best indication of AI that I've seen in her so far. It's discretely complicated. It's like... it's kind of non-autistic."
Within the confines of the Turing Test (on which the movie is based) this makes perfect sense. The Turing Test is all about appearing natural and eager to engage, about making an impression. A sullen or unresponsive AI would be far less likely to pass than one that appeared outgoing, humorous and friendly.
Yet surely, in the interests of a conclusive test, we are narrowing the wide range of human experience. Autistic people are people too, and some people are simply more 'natural' in conversation than others.
Furthermore, our perception of what counts as a normal response is hugely dependent on the culture and social setting in which the Test takes place. Asking a relative stranger whether they are in a relationship might seem perfectly natural in an American tech firm, but would be considered extremely forward or blunt in many historical and contemporary non-Western societies.
Not that this invalidates the Test, of course. It is simply a tool for evaluating AI and makes no claim to be the only method of divining consciousness. Still, consideration of the cultural baggage we may be bringing into the Test is generally ignored in favour of the technical and societal consequences of developing a functional AI.