Narrow AI


"AI" Today: Reality Check

By | November 6th, 2017 | AGI, Competition, MK, Narrow AI

Narrow AI's Dark Secrets

Articles about AI are published every day. In the majority of these articles, the term "AI" is used in a very narrow sense: it means applications based on training artificial neural networks, under the control of sophisticated algorithms, to solve very particular problems. Here is the first dark secret: this kind of AI isn't even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge, and nothing else, that has made us masters of the world around us. Not our clear vision, our acute hearing, or our subtle motor control; other animals do all of that every bit as well or better. The developers of this technology understand that, and so a term was invented some years ago to distinguish this kind of program from real AI: Narrow AI, used in contrast with Artificial General Intelligence (AGI), the kind that processes and creates world knowledge. Here is the second dark secret: the machine learning we have been hearing about isn't learning at all in the usual sense. When a human "learns" how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles are sufficiently established to keep them balanced. This "neural learning" is clearly very different from the kind of "cognitive [...]

Recognizing a Dumbbell

By | July 4th, 2017 | Competition, Narrow AI

Recently a neural network was trained to recognize images of a dumbbell, the weight-lifting implement. It did pretty well, except that when the network was made to output its composite picture of a dumbbell, it showed a very good picture of the implement, but clearly attached to it was a very recognizable image of a human hand and arm grasping the bar. This means the program would rate a picture of a dumbbell without a person holding it as less likely to contain a dumbbell than one where it was being held. People, of course, would not make this mistake, because they know dumbbells don't have hands and arms. But in the picture database the system was trained on, more of the images showed a human holding the dumbbell than not. How could the program know? It couldn't, because ANNs as they exist today, and for the foreseeable future (perhaps forever), have no capacity to contain knowledge.
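The dumbbell-and-arm effect can be sketched in a few lines. The following toy example is hypothetical (it is not the original experiment, and the feature names and counts are invented for illustration): a simple frequency-based classifier is trained on a biased set of labeled examples in which most dumbbell images also contain an arm, and it ends up scoring a held dumbbell higher than a dumbbell alone.

```python
# Toy illustration of dataset co-occurrence bias (hypothetical data, not the
# original experiment). Each example is (has_dumbbell_shape, has_arm, label).
training = (
    [(1, 1, 1)] * 90 +   # dumbbell held by a person (the common case)
    [(1, 0, 1)] * 10 +   # dumbbell alone (rare in the training data)
    [(0, 1, 0)] * 50 +   # arm but no dumbbell
    [(0, 0, 0)] * 50     # neither
)

def score(features, data):
    """Naive likelihood-ratio score with add-one smoothing:
    product over features of P(feature | dumbbell) / P(feature | no dumbbell)."""
    pos = [x for x in data if x[2] == 1]
    neg = [x for x in data if x[2] == 0]
    s = 1.0
    for i, f in enumerate(features):
        p_pos = (sum(1 for x in pos if x[i] == f) + 1) / (len(pos) + 2)
        p_neg = (sum(1 for x in neg if x[i] == f) + 1) / (len(neg) + 2)
        s *= p_pos / p_neg
    return s

held = score((1, 1), training)   # dumbbell with an arm
alone = score((1, 0), training)  # dumbbell without an arm
print(held > alone)              # the held dumbbell scores higher
```

Nothing in the training signal tells the model that the arm is incidental; because "arm" co-occurs with "dumbbell" in most positive examples, the arm feature raises the score, which is the statistical analogue of the composite picture growing a hand.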