Representation of a neural network
When we explain our Compact Knowledge Model technology and describe its far-reaching implications for Artificial General Intelligence, a common reaction is "but surely Google and the other big tech companies are doing something similar."
As we know, Google and the other big tech companies have been making massive investments in the (we think misnamed) "cognitive computing" technology that common usage now treats as almost synonymous with AI. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for things like pattern recognition in photographs — though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize.
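The "training" idea can be made concrete with a deliberately tiny sketch: a single artificial neuron (a perceptron) whose weights are nudged over many iterations until its outputs match the example patterns. The logical-OR pattern below is our own toy example; real ANNs stack millions of such units, but the principle is the same — and, just as with the large networks, the neuron ends up reproducing a pattern with no knowledge of what its inputs represent.

```python
import random

def step(x):
    # threshold activation: fire (1) if the weighted sum is positive
    return 1 if x > 0 else 0

# training examples: (inputs, desired output) for the logical-OR pattern
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1  # learning rate: how far each error nudges the weights

for epoch in range(100):  # many passes over the data
    for (x1, x2), target in examples:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output  # 0 when the prediction was right
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# after training, the neuron reproduces the pattern it was shown
learned = {inputs: step(weights[0] * inputs[0] + weights[1] * inputs[1] + bias)
           for inputs, _ in examples}
print(learned)
```

The neuron learns nothing but a boundary between numbers; "OR" is a meaning only the human observer supplies.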
Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases — in this case, of documents, which from the standpoint of the program are just an array of numbers (exactly as a photograph is nothing but an array of numbers to such programs).
The applications process these text databases but they have no reading comprehension as humans recognize it — no notion whatsoever about the content or meaning of the text. Humans curate the databases to limit the search scope and design "training" algorithms to identify patterns in the numbers — numbers because text in computers is stored as numeric character codes (ASCII, or more generally Unicode). Once trained — and training may take days, weeks or longer on a supercomputer — these programs can find output patterns (answers) that, according to the training algorithm, match input patterns (questions).
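The point that text is "just an array of numbers" to the program can be seen in a few lines (the sample sentence is taken from the dialog quoted below):

```python
# From the program's standpoint, text is only an array of numbers:
# each character is stored as a numeric code (ASCII for basic Latin
# characters; Unicode more generally). Whatever meaning we see is
# not present in the numbers themselves.
text = "What is moral?"
codes = [ord(ch) for ch in text]
print(codes[:5])  # numeric codes for 'W', 'h', 'a', 't', ' '

# decoding reverses the mapping -- at no point was anything "understood"
decoded = "".join(chr(c) for c in codes)
print(decoded)
```

All the training algorithm ever sees is arrays like `codes`; everything else is pattern statistics over such arrays.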
While pattern recognition of imagery is paying off nicely in facial recognition and other areas, the results of statistical processing of natural language outside of very narrow limits (like Jeopardy) are at best – well, judge for yourself.
The following dialog demonstrates natural language processing at Google using neural networks to "predict the next sentence given the previous sentence or sentences in a conversation." The dialog was recorded a little over a year ago and we have been unable to find a later example. We do believe the project is ongoing.
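To make "predict the next sentence given the previous sentence" concrete, here is a deliberately crude sketch of statistical next-word prediction — a bigram counter, not Google's actual recurrent model, and the tiny corpus is invented for illustration. It emits whatever most often followed the current word in its training text, with no grasp of what any word means:

```python
from collections import defaultdict, Counter

# Count which word follows which in a tiny training corpus, then always
# emit the most frequent continuation. Pure pattern statistics: the
# program has no notion of what any of these words mean.
corpus = (
    "the purpose of life is to serve the greater good . "
    "the purpose of living is to live forever ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def continue_from(word, length=5):
    out = [word]
    for _ in range(length):
        out.append(predict(out[-1]))
    return " ".join(out)

print(continue_from("purpose"))
```

The real model is vastly larger and operates over learned vector representations rather than raw counts, but the objective is the same kind of "what usually comes next" pattern matching.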
Google’s Neural Conversational Model, June 23, 2015
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered Humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don’t believe in god, then you don’t know.
Human: Ok … so what is the definition of morality?
Machine: Well, the truth is, you’re not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
Here is what the people at Google say about this:
“Artificial intelligence that can emulate Human reasoning and even emotions is within Google’s reach, according to a leading AI scientist employed by the company.
Talking to the Guardian, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.”
Google is working to encode thoughts as vectors described by a sequence of numbers. These “thought vectors” could endow AI systems with a Human-like “common sense” within a decade, according to Hinton, who has been working at Google for two years.”
Do you feel Google’s effort passes the Common Sense Turing Test (in spite of the above claims)? Do you think it understood the questions? Do you think its answers were based on some comprehension or thought process about the meaning of those questions in the same sense a person would?
We didn’t. Honestly, we are baffled as to why Dr. Hinton feels this nonsensical interchange between man and machine is indicative of being on the brink of anything recognizable as comprehension. It seems very clear to us that the machine has no clue as to the meaning or significance of what it is saying.
What we do see is the unmistakable signature of statistical processing. Responding to a question with a question is something that people do, and the program mimics it on a statistical basis — not because it has any notion of rhetorical devices as humans do. Again, the non-sequitur answer to the question "What is immoral?" is most likely indicative that the database the system was trained on had many articles addressing the moral implications of having children in one context or another — so again there was a statistical connection, but not a reasoned one. Finally, notice that the human only asked questions and never stated any new information — that's because such systems have no capacity to learn from conversation — again, no capacity to contain knowledge.
What we are not baffled about is why, in spite of all the R&D dollars being poured into ANNs to process natural language, they continue to disappoint. The reason is that they attempt to extract information from natural language text that isn't there. Natural language does not in itself store information. It is an encoded communications protocol carrying information-processing instructions between two entities — in the case of written or stored language, the entity that wrote it and the entity that reads it.
When you read these sentences they only have meaning for you because you already know what the words mean — that is, the words are linked to concepts already in your mind. No information would be conveyed were this not the case. So language conveys new information only because it is an instruction from the sender to the receiver to rearrange existing knowledge in a new way.