In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of artificial neural networks (ANNs) as they try to solve human problems.[i] She points out that the best ANNs we have today are perhaps on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms aren't intelligent. This goes back to what may be called "Aspirational AI," a common phenomenon in the AI community for at least 40 years. It happens like this: someone has a theory about what intelligence is and goes off and writes some code to implement that theory. Even if it does not work, or doesn't do anything that looks like intelligence, it is still considered "AI" because that is what they were aspiring to create.

Calling ANNs AI is like being invited into a hangar to look at a new aircraft design and finding nothing but landing gear. You ask, "I thought you said there was an airplane," and are told, "Yes, there it is; it's just not a very good airplane yet."

We saw another example of this "Aspirational AI" in a recent article in Analytics India Magazine[ii] that listed New Sapience among 10 companies in the Artificial General Intelligence space. They all say they are working on the AGI problem, but we are the only one with anything to show for our efforts: a working prototype that comprehends language in the same sense [...]
Narrow AI's Dark Secrets

Articles about AI are published every day. In the majority of these articles the term "AI" is used in a very narrow sense: it means applications based on training artificial neural networks, under the control of sophisticated algorithms, to solve very particular problems. Here is the first dark secret: this kind of AI isn't even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge and nothing else that has made us masters of the world around us. Not our clear vision, our acute hearing, or our subtle motor control; other animals do all of that every bit as well or better. The developers of this technology understand that, and so a term was invented some years ago to distinguish these kinds of programs from real AI: Narrow AI, used in contrast to Artificial General Intelligence (AGI), the kind that processes and creates world knowledge.

Here's the second dark secret: the machine learning we have been hearing about isn't learning at all in the usual sense. When a human "learns" how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles have been sufficiently established to allow them to stay balanced. This "neural learning" is clearly very different from the kind of "cognitive [...]
Recently a neural network was trained to recognize images of a dumbbell, the weight-lifting implement. It did pretty well, except that when the network was made to output its composite picture of a dumbbell, it produced a very good image of the weight-lifting tool with, clearly attached to it, a very recognizable human hand and arm grasping the bar. This means the program would rate a picture of a dumbbell without a person holding it as less likely to contain a dumbbell than one where it was being held. People, of course, would not make this mistake, because they know dumbbells don't have hands and arms. But in the picture database the system was trained on, more of the images showed a human holding the dumbbell than not. How could the program know? It couldn't, because ANNs as they exist today, and for the foreseeable future (or maybe forever), have no capacity to contain knowledge.
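The mechanism behind the dumbbell mistake is spurious correlation in the training data, and it can be shown without any neural network at all. The toy sketch below (the dataset, features, and numbers are invented for illustration, not taken from the actual experiment) uses a crude co-occurrence statistic as a stand-in for a learned weight: because most "dumbbell" examples also contain an arm, the arm itself ends up counted as evidence of a dumbbell.

```python
# Toy illustration of spurious correlation: a biased dataset in which
# most images containing a dumbbell also contain an arm. Any learner
# that tracks feature/label co-occurrence, ANN or otherwise, will
# assign positive weight to "arm" when predicting "dumbbell".

# Each example: features (has_dumbbell_shape, has_arm), label = 1 if
# the image contains a dumbbell. Counts are hypothetical.
dataset = (
    [((1, 1), 1)] * 80 +   # dumbbell held by a hand (most positives)
    [((1, 0), 1)] * 20 +   # dumbbell lying on its own
    [((0, 1), 0)] * 10 +   # arm with no dumbbell
    [((0, 0), 0)] * 90     # neither
)

def feature_weight(data, index):
    """P(label=1 | feature present) - P(label=1 | feature absent):
    a crude measure of how strongly the feature counts as evidence."""
    present = [label for feats, label in data if feats[index] == 1]
    absent = [label for feats, label in data if feats[index] == 0]
    return sum(present) / len(present) - sum(absent) / len(absent)

w_shape = feature_weight(dataset, 0)  # the dumbbell shape itself
w_arm = feature_weight(dataset, 1)    # the co-occurring arm

print(f"shape weight: {w_shape:.2f}, arm weight: {w_arm:.2f}")
```

The arm receives a large positive weight purely from co-occurrence, so an image lacking an arm scores lower, which is exactly the failure described above. A person, holding the knowledge that dumbbells and arms are distinct things, would never treat the arm as part of the dumbbell.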