Artificial Intelligence: Past, Present, and Future

What we have been waiting for…

AI is surrounded by hype and technical jargon. Strip all that away and it’s pretty simple:

AI gives a machine the ability to understand what we say and the general intelligence to do what we tell it to do.

This is sometimes called Artificial General Intelligence (AGI) to distinguish it from the hype and over-inflated claims of what passes for AI these days. The consensus among industry experts is that there is only a 50% probability it will arrive by the end of this century.

But what if the experts are wrong? What if real AI were just around the corner? That would be the most exciting news of our time, and it would present the greatest ever investment opportunity.

It is time to be excited about Artificial Intelligence.

What we have been settling for…

The Past: Symbolic AI

In the 1980s, people were as excited about AI as they are today, but the technology was completely different.

Researchers sought to put knowledge into machines one fact at a time.

One approach, expert systems, was based on programming rules. Another approach, semantic networks, attempted to break down language (essentially just words) into a computable form.

Both approaches were called “knowledge representation” or, more commonly today, symbolic AI or even Good Old-Fashioned AI (GOFAI).

Symbolic AI did not work

The AI Winter: 1990–2000

Symbolic AI was so over-hyped, and under-delivered so badly, that people became disillusioned with the whole notion of AI for a while.

Today’s Connectionist Approaches

Today’s AI technology, Machine Learning, is radically different from the old days.

ML applications are ubiquitous and already credited with changing the way we live in many ways; clearly the technology is here to stay. Machine Learning’s successes have been so widely publicized that the term has become nearly synonymous with Artificial Intelligence. But “learning” here does not mean acquiring knowledge; rather, it means “training” huge networks of interconnected “artificial neurons” to recognize patterns in vast databases. These approaches are called connectionist AI.
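To make “training to recognize patterns” concrete, here is a deliberately minimal sketch: a single artificial neuron (a classic perceptron) whose weights are nudged, example by example, until it recognizes the logical-AND pattern. This is an illustration of the connectionist idea in miniature, not any production system; real networks use millions of such units.

```python
# A single artificial neuron trained by the perceptron rule:
# adjust weights toward each example until the pattern is learned.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when the neuron is right
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical-AND pattern: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Note that nothing resembling a “fact” is stored anywhere: the learned behavior lives entirely in the numeric weights, which is exactly why such systems recognize patterns without acquiring knowledge in the sense discussed below.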

Even its most ardent practitioners freely admit that connectionist approaches cannot understand our language and lack general intelligence.

“Right now, even the best AI systems are dumb, in the way that they don’t have common sense. We don’t even have a basic principle on which to build this. We’re working on it, obviously. We have lots of ideas, they just don’t work that well.”

Yann LeCun, Director of AI at Facebook

Ultimately, the real challenge is human language understanding – that still doesn’t exist. We are not even close to it…

Satya Nadella, Microsoft CEO

I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

Yoshua Bengio, deep learning pioneer

About OpenAI’s GPT-3 text generator: “Trying to build intelligent machines by scaling up language models is like [building] a high-altitude airplane to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach.”

Yann LeCun, Facebook

Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough…No Jeopardy answer will ever be published in a journal of new discoveries…What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…

David Deutsch, Oxford University

It is generally acknowledged that the best ANNs we have today are maybe on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms are not intelligent.

This goes back to what may be called “aspirational AI,” a phenomenon common in the AI community for at least 40 years. It happens like this: someone has a theory about what intelligence is and goes off and writes some code to implement that theory. Even if it does not work, or does not do anything that looks like intelligence, it is still considered “AI” because that is what they were aspiring to create.

Calling today’s applications “Narrow AI,” in recognition that they are not intelligent, sidesteps the problem. The assumption is that these applications are points on an upward-rising curve that will someday be general rather than narrow. What is the justification for that?

The AI we have been waiting for…


Some researchers still believe Deep Learning can do it all given enough time and effort. Maybe, but at New Sapience we decided to go back to first principles and start again.

Artificial Intelligence from First Principles

What is AI? We know what artificial means, so the root question is: what is intelligence?

As it occurs in nature, intelligence is information processing that goes on inside an organic brain. It is the defining characteristic of human beings; humans are intelligent animals. Humans possess that characteristic uniquely among known species, or at least to such a degree that it might as well be a difference in kind. It is the characteristic that confers upon our species our incomparable control over the natural environment.

Intelligence is characterized by a number of operations such as inference (logic), pattern recognition, and memory utilization. But computers can already do these as well as or better than people. What is missing?

It is knowledge, a very special kind of information structure that functions as an internal model of the external world.

This immediately redefines the entire enterprise of Artificial Intelligence. Computers are already intelligent, but they are ignorant.

Is there something unique about the human brain’s neural architecture that is essential to the creation of knowledge? Why should there be? That is a gratuitous assumption, like assuming a flying machine needs to flap its wings because that is what birds do. The Wright brothers went back to first principles, aerodynamics, and solved the problem far more easily and elegantly than by building an ornithopter.

Can a functional model of the world be designed as a software object structure and processed in software, without emulating a neural processing layer? It turns out it can.
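To give a flavor of what “a model of the world as a software object structure” could look like, here is a minimal, hypothetical sketch: concepts linked by “is-a” relations, with properties inherited down the chain and overridable by more specific facts. The class and relation names are purely illustrative assumptions for this sketch; they are not New Sapience’s actual design.

```python
# A tiny illustrative knowledge structure: concepts linked by "is_a"
# relations, with property inheritance and local overrides.
# Names and design are hypothetical, not the actual New Sapience model.

class Concept:
    def __init__(self, name, is_a=None):
        self.name = name
        self.is_a = is_a        # parent concept, if any
        self.properties = {}    # local assertions override inherited ones

    def set(self, prop, value):
        self.properties[prop] = value

    def get(self, prop):
        # Walk up the is_a chain until an assertion is found.
        node = self
        while node is not None:
            if prop in node.properties:
                return node.properties[prop]
            node = node.is_a
        return None

animal = Concept("animal")
bird = Concept("bird", is_a=animal)
bird.set("can_fly", True)
penguin = Concept("penguin", is_a=bird)
penguin.set("can_fly", False)   # the specific fact overrides the default

print(bird.get("can_fly"))      # True
print(penguin.get("can_fly"))   # False, inherited default overridden
```

Even this toy structure behaves knowledge-like in a way no trained network of weights does: individual facts can be inspected, added, and corrected one at a time, and the model’s answers follow from them by explicit inference rather than statistical pattern matching.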