AI at Google
[Image: Representation of a neural network]

Artificial Neural Networks & Natural Language

When we explain our Compact Knowledge Model technology and describe its far-reaching implications for Artificial General Intelligence, a common reaction is "but surely Google and the other big tech companies are doing something similar." As we know, Google and the other big tech companies have been making massive investments in the (we think misnamed) "cognitive computing" technology that is now, in common usage, almost synonymous with AI. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for tasks like pattern recognition in photographs, though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize.

Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases, in this case of documents, which, from the standpoint of the program, are just arrays of numbers (exactly as a photograph is nothing but an array of numbers to such programs). The applications process these text databases, but they have no reading comprehension as humans understand it: no notion whatsoever of the content or meaning of the text. Humans curate the databases to limit the [...]