Computational Knowledge

Technology · August 6, 2017

What We Have Been Waiting For

  • Altaira: “Oh Robbie, make me a new dress!”
  • Robbie: “Yes ma’am, with diamonds and rubies this time?”

For decades we have envisioned a day when we would have wonderful machines that would understand what we ask of them and have the power to do it.

We build machines to extend our own power. The first machines amplified the power of our muscles. Later, with devices like the telescope, we learned to amplify our senses. More recently, we have found ways to amplify some of our brain’s basic cognitive functions with computers.

Computers have always exceeded humans at arithmetic and formal logic. More recently, by imitating some of the brain’s cellular architecture with artificial neural networks, they have made substantial progress in areas like auditory discrimination, image recognition and extracting useful information from large databases. Programs based on these techniques have beaten the best human players at certain games, such as Go and even Jeopardy!

But these achievements leave us wanting even more. They are narrow, not general, in their capabilities. A program “trained” to play one game cannot play another.

Knowledge is a mental model of reality that allows us to envision a world that may or may not come to pass, depending on our actions. The ability to “predict the phenomena” is the cornerstone of the scientific method, one that was already in play when the first human progenitor made the first tool and used it to change its world.

Until now our machines could not do this – not even a little. They process data and information, but the creation of knowledge has been beyond their capability. (Read more about the distinction between data, information and knowledge.)

What We Have Been Settling For

Anyone who has tried to have a real conversation with Siri, Alexa or any of today’s other “digital personal assistants” soon realizes that they don’t actually comprehend a single word you say to them. They are simply matching input data patterns to predefined functions and output patterns. Actual comprehension requires knowledge of what the words mean, and (until now) only human brains have been able to encompass knowledge.

Today the big tech companies are engaged in an expensive arms race to own “conversation as a platform.” They are doing this because they understand that conversational interfaces could completely alter how people interact with their products, and that in turn affects how these companies generate revenue. For example, where would Google be if you could ask your digital assistant to “google” things for you and you never saw the ads?

If any of these programs could be enhanced with even the language comprehension of a first grader and a little genuine knowledge of what a user might want the device to do, they would be vastly “less dumb” than they are now:

  • User: I’m hiding my keys under the mat.
  • Siri: I don’t understand “I’m hiding my keys under the mat.” I could search the web for it.
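To make the point concrete, here is a minimal sketch of this kind of pattern-to-function matching (the patterns and canned responses are invented for illustration, not any vendor’s actual code):

```python
import re

# A toy "assistant": input text is matched against predefined regular
# expressions and routed to canned handler functions. Nothing here
# models what the words mean. (Hypothetical patterns for illustration.)
INTENTS = [
    (re.compile(r"\bweather\b", re.I), lambda m: "Here is today's forecast..."),
    (re.compile(r"\bset (?:an? )?alarm for (.+)", re.I),
     lambda m: f"Alarm set for {m.group(1)}."),
    (re.compile(r"\bplay (.+)", re.I), lambda m: f"Playing {m.group(1)}."),
]

def respond(utterance: str) -> str:
    for pattern, handler in INTENTS:
        match = pattern.search(utterance)
        if match:
            return handler(match)
    # No pattern matched: fall back without any comprehension.
    return f'I don\'t understand "{utterance}." I could search the web for it.'

print(respond("Play some jazz"))                    # matches a pattern
print(respond("I'm hiding my keys under the mat"))  # falls through, as above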

Why Is It Taking So Long?

The AI Winter, 1990–2000

There have been previous attempts to endow computers with knowledge. In the 1980s, a time like now when there was great optimism about AI, two separate approaches appeared to offer great promise: Rule-based Systems (also called Expert Systems) and Semantic Networks. Both failed. These disappointments led to an era of disillusionment that has become known as the AI Winter.

Why did they fail? Over 30 years later, hindsight informs us:

  • Semantic Networks failed because they embedded the data in linguistic structures.
  • Expert Systems failed because they embedded the data in logical structures.

It may be said of both that they were based on an inadequate epistemology. That is, they failed to grasp the underlying structure of knowledge and its independence from both logic and language. The term “knowledge-based systems” was thus a misnomer, as they never achieved knowledge. They could only represent information about the world one datum at a time. They could not scale.
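To see what “one datum at a time” means in practice, here is a toy sketch in the spirit of those systems (the facts and the rule are invented): a semantic network stored as hand-entered triples, plus a single expert-system rule applied by forward chaining.

```python
# A minimal sketch of the two 1980s representations (illustrative only).

# Semantic network: facts embedded in linguistic structures, stored as
# (subject, relation, object) triples -- one hand-entered datum at a time.
semantic_net = {
    ("canary", "is_a", "bird"),
    ("bird", "has", "wings"),
    ("bird", "can", "fly"),
}

# Rule-based (expert) system: facts embedded in logical structures --
# each if/then rule must likewise be hand-crafted, one at a time.
rules = [
    (lambda facts: ("canary", "is_a", "bird") in facts,
     ("canary", "can", "fly")),
]

def infer(facts, rules):
    """Forward-chain: keep applying rules until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(derived) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(semantic_net, rules))
```

Every triple and every rule must be authored by hand; nothing in the program generalizes beyond what was typed in, which is why neither approach could scale to real-world knowledge.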

Doesn’t “Deep Learning” Acquire Knowledge?

No. Not even a little bit.

The confusion arises because the word learning actually has two distinct meanings, as we will see below.

The failure of the so-called knowledge-based systems has left a lasting impression on the technical and academic communities. “Putting knowledge directly into machines will never be practical because there is too much data” has become an unquestioned dictum. The AI Winter continued throughout the 1990s, but with the advent of supercomputers and very large databases another AI technique came into its own: artificial neural networks (ANNs).

Neural networks restored optimism about AI, and massive investments are again being made in “cognitive computing” techniques such as Deep Learning and Big Data. The networks are “trained” by massively iterating pattern recognition algorithms on vast quantities of data. This is a very expensive process, often requiring supercomputers, that has achieved some amazing results. The technology is finding practical applications in many areas, such as recognizing objects in photographs. But even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize.
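As a concrete (if deliberately tiny) illustration of that training loop, here is a two-layer network fitted to the XOR pattern in plain NumPy; the network size, iteration count and implicit learning rate are arbitrary choices for the sketch:

```python
import numpy as np

# A miniature version of the "training" described above: a two-layer
# network repeatedly adjusts its weights to fit input/output patterns
# (XOR here). It ends up with tuned numbers, not knowledge of XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):                     # iterate over the data
    hidden = sigmoid(X @ W1)                 # forward pass
    output = sigmoid(hidden @ W2)
    grad_out = (output - y) * output * (1 - output)        # backpropagate
    grad_hidden = grad_out @ W2.T * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out                # gradient descent updates
    W1 -= X.T @ grad_hidden

print(output.round(3))  # should approach [[0], [1], [1], [0]] if converged
```

After thousands of iterations the weights encode the input/output pattern, but the result is just a table of tuned numbers: the network has no knowledge of what XOR, or a photograph’s subject, actually is.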

In 2017, AlphaGo, a program created by Google DeepMind, beat the human world champion Go player. AlphaGo is now retired. There is nothing else for it to do. There is nothing else it can do.

Does “learning” how to play the game of Go imply the program has acquired knowledge about the game? When humans learn how to ride a bicycle, they do so by practicing until the neural pathways coordinating the interaction of the senses and muscles are sufficiently established to keep them balanced. This “neural learning” is clearly very different from the kind of “cognitive learning” we do in school, which is based on the acquisition and refinement of knowledge. Neural learning cannot be explained and cannot be unlearned, and it produces no abstract knowledge of the world.

Is Recreating The Human Brain Our Only Option?

(Whole brain emulation research projects at Google, DARPA, IBM and elsewhere are described at artificialbrains.com.)

No, in fact:

  1. It is not clear that this is even possible in any meaningful way.
  2. Even if it is possible, we might not be able to understand how it works, and so we could never trust it.

The human brain is composed of neurons, and it creates knowledge, so it is theoretically possible for artificial neural networks to create knowledge. Today, some AGI researchers are pursuing the “whole brain emulation” approach. This requires that you first create a neural network comparable in size and complexity to the human brain and then program it to recapitulate, in some form, human cognitive processes until eventually you have a program that can create world knowledge out of raw data as humans do.

One hope is that intelligence is an emergent property of a vast neural network. This is reminiscent of early science fiction stories postulating that artificial intelligence might spontaneously occur when computer networks reached some sufficient level of complexity. Another hope is that on the road to emulating human brain structures and cognitive processes, researchers might uncover some hidden master algorithm that lies at the root of the human ability to create knowledge.

There are some fundamental difficulties here. Many animals have large, complex neural networks. Pilot whales not only have larger brains than humans; their neocortex, thought to be the seat of intelligence in humans, is also larger. Evolution has created neural networks all over the place, but the ability to create knowledge just once (as far as we know).

Does anyone have any idea how to uncover the hidden neural structures that evolution laid down, layer by layer, over the past four million years as humans, alone in terrestrial nature, mastered knowledge?

In the end, it seems likely that, while these whole brain emulation projects may tell us something about ourselves, they will not succeed in endowing machines with knowledge. They are reminiscent of early attempts to build flying machines. Inspired by birds, people first tried to build ornithopters – machines with flapping wings. They generally shook themselves apart.

In any case there is a much better and faster path to AGI.

An ornithopter

A Contrarian Approach

Many of the inventions that have changed our world happened because it was their time to happen. After the advent of the internal combustion engine, the missing piece needed to solve the problem of heavier-than-air flight (thrust) was in place, and from then on it was just a race to engineer the details. The Wright brothers succeeded because they asked themselves the right question, which was not “How do we build an artificial bird?” but rather “How can we apply the aerodynamics of bird wings to something that can hold itself together?”

Other inventions, like our Modeled Intelligence, come “out of left field,” the result of a series of unforeseeable influences and events, pieces of a puzzle that come together at a certain point in time independent of and often contrary to the technology mainstream.

A contrarian approach is often the result of asking a different question than the mainstream. Our approach to knowledge in machines is based not on the question “How can we emulate the cognitive processes of the human brain?” but “Can we put human knowledge, as we introspect it in our own minds, into a computer?” As it has turned out, we can.

The Wright brothers were good, if self-taught, engineers, and as bicycle mechanics they knew something about building rigid frames from light materials.

Learn More (Cognitively)