An Imitation Game

The great mathematician and computer scientist Alan Turing proposed his now-famous test for artificial intelligence in 1950. The test was simple: in a text conversation (then via teletype; today we would say texting) with both a person and a machine, if a judge could not reliably tell which was which, then, as Turing put it (hedging even here), it would be unreasonable to say the machine was not intelligent.

The Turing Test bounds the domain of intelligence without defining what it is: we can only recognize intelligence by its results. However, in the more than fifty years since Turing's formulation, the term has been loosely applied. It is now often used to refer to software that does not, by anyone's definition, enable machines to "do what we (as thinking entities) can do," but merely emulates some perceived component of intelligence, such as inference, or some structure of the brain, such as a neural network. Recently the term "Artificial General Intelligence" (AGI) has come into use to refer precisely to the domain as Turing defined it.
There are many issues with such a test: the machine would have to be taught how to lie, or the judge would have to be tightly restricted in what could be discussed; the judgment could be shaded by the judge's expectations about the current state of the art in AI; and finally, do we really want to build artificial humans, or just create intelligent machines?

A Common Sense Turing Test

From one standpoint, however, the Turing Test, or the spirit of it, continues to engage. People, even experts, or rather especially the experts, disagree about how AI can work and even what it is. The issue is further muddled by the rather grandiose claims made by the developers of current applications. Apple says of Siri, "It understands what you say" and "It knows what you mean," but as anyone quickly realizes after a few interchanges, it clearly does not understand a word of what you are saying. Sometimes it answers a question correctly, but this is clearly the result of some opaque technology that is decidedly not comprehension as we humans recognize it.
This is the real power of Turing in the everyday context: we humans know when we are being understood, and we recognize nonsense. Let's call this the Common Sense Turing Test for AI. Clearly Siri fails it. So do Amazon's Alexa, Google Now, Microsoft's Cortana, and all the others.
We think this Common Sense Turing Test can help address a common and very reasonable concern people have when trying to assess New Sapience's Modelled Intelligence technology in comparison with what others are doing.
By this criterion, our sapiens clearly wins any comparison with current natural language applications.
Human: I put the key to my safe deposit box in the back of the top drawer of my dresser.
Siri: I can help you find a place if you turn on location services.
Did Siri respond as a human would? Clearly not; a complete failure of our Common Sense Turing Test. If we said the same thing to a sapiens and then, even a year later, asked:
"Where is the key to my safe deposit box?" it would say:
"You put it in the back of the top drawer of your dresser."
We’d call that a pass.
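The pass criterion in the exchange above amounts to remembering a stated fact and retrieving it when asked. As a purely illustrative sketch of that behavior, and emphatically not New Sapience's actual technology (the class and method names below are invented for illustration):

```python
class ToyMemory:
    """A toy fact store: remembers where things are and recalls them on demand."""

    def __init__(self):
        self.locations = {}

    def tell(self, thing, location):
        # Record where the speaker says the thing is.
        self.locations[thing] = location

    def ask(self, thing):
        # Answer from the stored fact, or admit ignorance.
        if thing in self.locations:
            return f"You put it in {self.locations[thing]}."
        return "I don't know."


memory = ToyMemory()
memory.tell("the key to my safe deposit box",
            "the back of the top drawer of your dresser")

# Even "a year later", the stored fact is still retrievable.
print(memory.ask("the key to my safe deposit box"))
# → You put it in the back of the top drawer of your dresser.
```

The point of the sketch is only to make the test concrete: comprehension here is judged by whether the response draws on what was actually said, not by how fluent the reply sounds.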