The Third Wave of AI

Until now, there have been only two branches of AI.

The First Wave

Symbolic AI was the first wave. It was based on the premise that knowledge could be “represented” as a set of rules that computers could process with logic: if you could add enough rules, you could eventually produce commonsense knowledge of the world and general intelligence. The approach fell out of favor in the late 1980s.

The Second Wave

The second wave is connectionist AI, which is inspired by biological neurons and aims to create networks that can organize information into knowledge without explicit human intervention. Essentially, connectionist AI aims to reach general intelligence through emulation of the human brain. Machine learning is connectionist AI, and it is having its day now – so much so that the term ML is now used interchangeably with AI.

The Third Wave Has Begun

It turns out the symbolic crowd was on the right track after all when it focused not on emulating the human brain, like the connectionists, but on the end product of human cognition: knowledge. But there was a fatal flaw in their approach: the symbols themselves.

The connections between symbols and the things they represent are arbitrary conventions. Our natural language is composed of layer after layer of symbols. It is a communications protocol: instructions for processing symbols to convey knowledge. Knowledge is a model, just as a picture is a model, and “a picture is worth a thousand words.”

We see that knowledge is composed of ideas and complex ideas are aggregates of simpler ones. The inescapable conclusion is that, if you keep decomposing ideas into their components, at some point you get to the end, or rather the beginning. This is the same conjecture that Democritus made about the material world: if you keep breaking things apart, eventually you get to the indivisible pieces he called “atoms.” Knowledge, whether in a human mind or a machine, must be composed of elemental ideas.
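The compositional picture above – complex ideas as aggregates of simpler ones, bottoming out in elemental ideas – can be sketched as a toy data structure. Everything here is illustrative: the class, the example concepts, and their decomposition are this sketch's own invention, not New Sapience's actual representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A toy idea: either elemental (no parts) or an aggregate of simpler concepts."""
    name: str
    parts: tuple = ()

    def is_elemental(self) -> bool:
        return not self.parts

    def atoms(self) -> set:
        """Recursively decompose this idea into its elemental components."""
        if self.is_elemental():
            return {self.name}
        result = set()
        for part in self.parts:
            result |= part.atoms()
        return result

# Hypothetical elemental ideas and one aggregate built from them.
thing = Concept("thing")
animate = Concept("animate")
domestic = Concept("domestic")

animal = Concept("animal", (thing, animate))
pet = Concept("pet", (animal, domestic))

print(sorted(pet.atoms()))  # ['animate', 'domestic', 'thing']
```

However deep the aggregation goes, `atoms()` always terminates at the elemental layer – the "beginning" the paragraph above describes.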

It took two thousand years to discover the number and properties of material atoms and arrange them into the table that illustrates how they combine to create all material substances. Those atoms are invisible and could only be discovered by indirect experiments.

“Cognitive atoms” are accessible to us through direct introspection of our own thoughts. We hypothesized that these elemental ideas, like nature’s elements, exist within a hidden structure that could be discovered, and their rules of combination revealed. We developed a compact information structure analogous to the Periodic Table that we call the “cognitive core.” With this achievement, the field of AI has been transformed into a pure science with clear foundational principles and a well-defined roadmap to greater and greater capabilities.

Using the core, we constructed a model of the commonsense world of sufficient scope to understand the meaning of words in natural language. While this may seem a herculean task, it is in reality well bounded; as with a picture and words, it is a thousand times more compact and scalable than old-fashioned symbolic approaches. The hard part is to visualize reality with the words and linguistic forms that normally populate our thought processes stripped away.

We call a computer endowed with our technology a “sapiens.” Sapiens, like human children, learn by integrating concepts extracted from language into what they already know. For the first time there is a methodology, a cognitive chemistry, that endows computers with practical, scalable knowledge in the same sense that humans have knowledge.

A sapiens is not a chatbot. There is no slicing and dicing of predefined text strings going on; there are no references to real-world things like cats or kitchens, nor to reading or books, in the processing code. All of that is in the model.

Real AI is here at last. New Sapience has created it. This is the most exciting news of our time, maybe of all time.