In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of artificial neural network (ANN) AI algorithms as they try to solve human problems.[i] She points out that the best ANNs we have today are maybe on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms aren’t intelligent. This goes back to what may be called “Aspirational AI,” a phenomenon common in the AI community for at least 40 years. It happens like this: someone has a theory about what intelligence is and goes off and writes some code to implement that theory. Even if it does not work, or doesn’t do anything that looks like intelligence, it is still considered “AI” because that is what they were aspiring to create. Calling ANNs AI is like being invited into a hangar to look at a new aircraft design and finding nothing but landing gear. You ask: “I thought you said there was an airplane.” And are told: “Yes, there it is - it is just not a very good airplane yet.” We saw another example of this “Aspirational AI” in a recent article in Analytics India magazine[ii] that listed New Sapience among 10 companies in the Artificial General Intelligence space. They all say they are working on the AGI problem, but we are the only one with anything to show for our efforts: a working prototype that comprehends language in the same sense [...]
What We Have Been Waiting For

Altaira: "Oh Robbie, make me a new dress!"
Robbie: "Yes ma'am, with diamonds and rubies this time?"

For decades we have envisioned a day when we would have wonderful machines that understand what we ask of them and have the power to do it. We build machines to extend our own power. The first machines amplified the power of our muscles. Later, with devices like the telescope, we learned to amplify our senses. More recently, we have found ways to amplify some of our brain's basic cognitive functions with computers. Computers have always exceeded humans at arithmetic and formal logic, and more recently, by imitating some of the brain's cellular architecture with artificial neural networks, substantial progress has been made in areas like auditory discrimination, image recognition, and extracting useful information from large databases. Programs based on these techniques have beaten the best human players at certain games, such as Go and even Jeopardy. But these achievements leave us wanting more. They are narrow, not general, in their capabilities. A program "trained" to play one game cannot play another.

What We Have Been Settling For

Anyone who has tried to have a real conversation with Siri, Alexa, or any other of today's "digital personal assistants" soon realizes that they don't actually comprehend a single word you say to them. They are simply matching input data patterns [...]
Narrow AI's Dark Secrets

Articles about AI are published every day. In the majority of these articles, the term "AI" is used in a very narrow sense: it means applications based on training artificial neural networks, under the control of sophisticated algorithms, to solve very particular problems. Here is the first dark secret: this kind of AI isn't even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge and nothing else that has made us masters of the world around us. Not our clear vision, acute hearing, or subtle motor control; other animals do all of that every bit as well or better. The developers of this technology understand that, and so a term was invented some years ago to distinguish these kinds of programs from real AI: Narrow AI, used in contrast to Artificial General Intelligence (AGI), the kind that processes and creates world knowledge. Here's the second dark secret: the machine learning we have been hearing about isn't learning at all in the usual sense. When a human "learns" how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles have been established well enough to allow one to stay balanced. This “neural learning” is clearly very different from the kind of “cognitive [...]
Recently a neural network was trained to recognize images of a dumbbell, the weight-lifting implement. It did pretty well, except that when programmed to output its composite picture of a dumbbell, it produced a very good picture of the weight-lifting implement with a very recognizable human hand and arm clearly attached, grasping the bar. This means the program would rate a picture of a dumbbell without a person holding it as less likely to contain a dumbbell than one where it was being held. People, of course, would not make this mistake, because they know dumbbells don’t have hands and arms. But in the picture database the system was trained against, more of the images showed humans holding the dumbbell than not. How could the program know? It couldn’t, because ANNs as they exist today and for the foreseeable future (or maybe never) have no capacity to contain knowledge.
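The dumbbell failure is easy to reproduce in miniature: a purely statistical learner estimates how likely a label is from co-occurrence counts in its training set, so any feature that happens to correlate with the label becomes "evidence" for it. The sketch below uses synthetic, hypothetical data (no real image model) to show how "arm present" raises the estimated probability of "dumbbell" simply because the two co-occur in the data:

```python
# Toy illustration of the dumbbell bias: a statistical learner estimates
# P(dumbbell | arm present) from co-occurrence counts alone.
# All data here is synthetic and hypothetical.

# Each training image summarized as (arm_present, label_is_dumbbell)
training_set = [
    (True, True), (True, True), (True, True),  # dumbbells being held (the common case)
    (False, True),                             # dumbbell lying on the floor (rare)
    (True, False),                             # arm alone, no dumbbell
    (False, False),                            # empty scene
]

def p_dumbbell_given_arm(data, arm_present):
    """Estimate P(dumbbell | arm) directly from training-set frequencies."""
    matching = [label for arm, label in data if arm == arm_present]
    return sum(matching) / len(matching)

with_arm = p_dumbbell_given_arm(training_set, True)      # 3 of 4 -> 0.75
without_arm = p_dumbbell_given_arm(training_set, False)  # 1 of 2 -> 0.5
print(with_arm, without_arm)  # 0.75 0.5
```

Nothing in the counts tells the learner that arms are not part of dumbbells; the statistics of the dataset, not knowledge of the world, determine the output.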
[Image: Representation of a neural network]

Artificial Neural Networks & Natural Language

When we explain our Compact Knowledge Model technology and describe its far-reaching implications for Artificial General Intelligence, a common reaction is "but surely Google and the other big tech companies are doing something similar." As we know, Google (and all of the big tech companies) have been making massive investments in the (we think misnamed) "cognitive computing" technology that is now, by common usage, considered almost synonymous with AI. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for things like pattern recognition in photographs - though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize. Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained, under the control of algorithms designed to find certain patterns, on huge databases - in this case, of documents, which from the standpoint of the program are just arrays of numbers (exactly as a photograph is nothing but an array of numbers to such programs). These applications process text databases, but they have no reading comprehension as humans recognize it - no notion whatsoever of the content or meaning of the text. Humans curate the databases to limit the [...]
"Anticipatory Computing"

Recently many applications that self-identify as AI have also been cited as examples of "anticipatory computing," as in this National Public Radio article: "Computers That Know What You Need, Before You Ask." Here is the Wikipedia entry for "anticipatory computing": In artificial intelligence (AI), anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. It is widely considered that anticipation is a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization still echoing the debates from the 1980s about the necessity for an AI of an internal model. When asked, "What do you anticipate would happen if someone jumped off the Empire State Building?" a human would employ their internal model of acceleration due to gravity, the relative frailty of the human body, and the size of the building to predict: "They would impact the pavement at high velocity and be killed." So what for a human is simple common sense is, in the context of computing, asserted to be a whole new branch of Artificial Intelligence - one that, according to the NPR article cited above, is being used to change the way we interact with our technology: "Google Now", which is available on tablets and mobile devices, is an early form of this (anticipatory computing). You can ask it a question like, "Where is the White House?" and get a spoken-word answer. Then, Google Now recognizes any follow-up questions, [...]
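The "internal model" invoked in the Empire State Building example can be written down explicitly with elementary kinematics. The sketch below uses the building's approximate roof height and ignores air resistance (so it somewhat overstates the true speed), but it captures the model-based prediction a human makes instantly:

```python
import math

# The internal model made explicit: free fall from rest, v = sqrt(2 * g * h).
# Height is the approximate roof height of the Empire State Building;
# air resistance is ignored, so the real impact speed would be lower.
g = 9.81      # m/s^2, acceleration due to gravity
height = 381  # m, approximate roof height

impact_speed = math.sqrt(2 * g * height)
print(round(impact_speed))  # about 86 m/s - far beyond what a human body survives
```

The prediction "they would impact the pavement at high velocity and be killed" falls out of a few lines of physics plus knowledge of human frailty - which is exactly the kind of world model today's pattern-matching systems lack.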