AI continues to be hugely popular in the media, and while most articles still treat Big Data and Machine Learning as if they were the only game in town, we are starting to see more that recognize these narrow approaches have no clear path to real AI or AGI. Forbes recently published a piece by Rob Toews that takes a big-picture view of AI.

To Understand The Future of AI, Study Its Past

This article divides AI into two opposing philosophies: connectionism and symbolism. From a historical perspective this is reasonable. Connectionism is what he means by today’s AI; symbolism is what is sometimes called “Good Old-Fashioned AI.” Toews provides a good description of it:

“Symbolic AI reached its mainstream zenith in the early 1980s with the proliferation of what were called “expert systems”: computer programs that, using extensive “if-then” logic, sought to codify the knowledge and decision-making of human experts in particular domains. These systems generated tremendous expectations and hype: startups like Teknowledge and Intellicorp raised millions and Fortune 500 companies invested billions in attempts to commercialize the technology.”

I know. I was there. My first company, Talarian Corp, applied real-time expert systems to analyze spacecraft telemetry.

Toews goes on to say, “Expert systems failed spectacularly to deliver on these expectations, due to the shortcomings noted above: their brittleness, inflexibility and inability to learn.”

That’s an interesting way of looking at it. Usually the failure of expert systems is attributed to an inability to scale: it takes so much programming to create a useful application that the effort yields diminishing returns. (Of course, the ability to learn would overcome that.)
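For readers who never used one, a toy sketch of the style Toews describes may help. The rule engine, facts, and rules below are invented for illustration (loosely echoing the telemetry domain above), not taken from any real system:

```python
# A toy forward-chaining rule engine in the "if-then" style of 1980s
# expert systems. All facts and rules here are invented for illustration.

facts = {"battery_voltage": 21.5, "mode": "eclipse"}

# Each rule pairs a hand-written condition with a conclusion to assert.
# Covering one more situation means a human writes one more rule, which
# is where the diminishing returns come from.
rules = [
    (lambda f: f.get("battery_voltage", 99.0) < 22.0,
     ("alert", "low_battery")),
    (lambda f: f.get("mode") == "eclipse",
     ("expect", "reduced_solar_input")),
    (lambda f: f.get("alert") == "low_battery"
               and f.get("expect") == "reduced_solar_input",
     ("action", "shed_noncritical_loads")),
]

# Fire rules repeatedly until no new facts are asserted (naive forward chaining).
changed = True
while changed:
    changed = False
    for condition, (key, value) in rules:
        if condition(facts) and facts.get(key) != value:
            facts[key] = value
            changed = True

print(facts)  # the third rule fires only after the first two have
```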

The gist of the article is that these two philosophies can be combined to overcome the limitations of both. The connectionist approach scales in that it can handle vast amounts of data, which the symbolist approach cannot, while the symbolist approach supports deterministic logic, which the connectionist approach lacks.
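To make the proposed combination concrete, here is a minimal sketch of the hybrid pattern as it is usually framed: a learned model handles perception, and a symbolic layer applies deterministic logic on top. The classifier below is a stub standing in for a trained network, and the labels, threshold, and rules are all invented for illustration:

```python
# A minimal sketch of the hybrid ("neuro-symbolic") pattern: a learned
# model produces labels, and deterministic rules act on them. The
# classifier is a stub standing in for a trained network; the labels,
# threshold, and rules are invented for illustration.

def neural_classifier(raw_input: bytes) -> tuple[str, float]:
    """Connectionist half: maps raw data to a (label, confidence) pair.
    A real system would run a trained network here."""
    return ("stop_sign", 0.93)

def symbolic_policy(label: str, confidence: float) -> str:
    """Symbolic half: auditable if-then logic over the network's output.
    Note that it can only reason about labels a programmer anticipated."""
    if confidence < 0.80:
        return "defer_to_human"
    if label == "stop_sign":
        return "brake"
    return "proceed"

label, confidence = neural_classifier(b"camera frame bytes")
print(symbolic_policy(label, confidence))  # -> "brake"
```

Even in this sketch, the symbolic half only manipulates labels its programmer anticipated in advance, which is worth keeping in mind for what follows.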

The notion is that because the capabilities of these two approaches do not overlap, they correct each other’s limitations. I believe this is a gratuitous assumption. Toews goes on to describe various current projects that, he says, are taking this approach, such as DARPA’s explainable AI initiative. But no solution to the inherent scalability problem of symbolic approaches is described. Further, no distinction is made between the kind of learning that neural networks can do and the kind of cognitive learning that produces knowledge, which neither approach can do. Combining two limited paradigms does not necessarily yield something that actually works.

Of the two approaches, our Machine Knowledge would appear to fall in the symbolic camp, because we do nothing like what neural networks do, while we do support deterministic logic. But this is not the case. We are in a third camp, as yet undreamed of by the rest of the world.

We understand that symbols are the building blocks of processing instructions, and we treat them as such. We process a natural language sentence as an instruction, via transcription and translation, into a structure that resembles and corresponds to objects in the real world but does not represent them the way symbols do. Cognitive knowledge is composed of models, and models are not symbols.

If this distinction seems subtle and difficult to grasp, well, it probably is, which may be why no one else seems to have thought of it. Everyone else is looking in a totally different direction.

Any approach based on representing knowledge with symbols is doomed to failure: symbolic representations are still programs that need more programs to interpret them. That is why they will never scale.

The fact that articles like this are being written is good for us, because they are driven by the recognition that today’s connectionist approach is not going to get us to the goal of real AI. But the solution will come neither from a revival of the “good old days” of AI nor from combining the two paradigms, but from a revolution in how intelligence and knowledge are understood. Something “fresh and strange”[1]: Machine Knowledge.

To understand the future of AI, look to New Sapience.

[1] “Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is singular, as is the moment of creation, and the result is something fresh and strange.”

Peter Thiel, “Zero to One”