Since the failure of symbolic approaches to AI several decades ago, the term Artificial Intelligence has become synonymous with Deep Learning and Big Data techniques. These applications are based on Artificial Neural Networks (ANNs). Yet it is generally acknowledged that the best ANNs we have today are, at most, on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms are not intelligent.
This goes back to what may be called “aspirational AI,” a phenomenon common in the AI community for at least 40 years. It happens like this: someone has a theory about what intelligence is and goes off and writes some code to implement that theory. Even if the code does not work, or does nothing that looks like intelligence, it is still considered “AI” because that is what its creator was aspiring to build.
Calling today’s applications “Narrow AI” in recognition that they are not intelligent avoids the problem rather than solving it. The assumption is that these applications are points on an upward-rising curve that will someday be general rather than narrow. What is the justification for that?
OpenAI, a company founded with the express aim of achieving AGI, recently revealed its latest ANN-based language model, GPT-3. Yann LeCun, a pioneer of Deep Learning, had this to say about it:
“Trying to build intelligent machines by scaling up language models is like [building] a high-altitude airplane to go to the moon,” he says. “You might beat altitude records, but going to the moon will require a completely different approach.”
More than a decade earlier, David Deutsch, a quantum computation physicist at the University of Oxford, said something similar, though he was speaking about ANNs in general:
“Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough… No Jeopardy answer will ever be published in a journal of new discoveries… What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…”
If there is a way forward from where ANNs are today to Artificial General Intelligence, no one seems to know what it is. Yann LeCun also said:
“Right now, even the best AI systems are dumb, in the way that they don’t have common sense.”

“We don’t even have a basic principle on which to build this. We’re working on it, obviously. We have lots of ideas; they just don’t work that well.”
Despite this, LeCun still believes Deep Learning can do it all, given enough time and effort. Maybe, but at New Sapience we decided to go back to first principles and start again.