Knowledge and Intelligence

Understanding Intelligence

Alan Turing, in his 1950 paper “Computing Machinery and Intelligence,” proposed the following question: “Can machines do what we (as thinking entities) can do?” To answer it, he described his now famous test in which a human judge engages in a natural language conversation via a text interface with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. The Turing Test bounds the domain of intelligence without defining what it is. We recognize intelligence by its results.

John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as “the science and engineering of making intelligent machines.” A very straightforward definition, yet few terms have been more obfuscated by hype and extravagant claims, imbued with both hope and dread, or denounced as fantasy.

Over the succeeding decades, the term has been loosely applied and is now often used to refer to software that does not by anyone’s definition enable machines to “do what we (as thinking entities) can do.” The process by which this has come about is no mystery. A researcher formulates a theory about what intelligence or one of its key components is and attempts to implement it in software. “Humans are intelligent because we can employ logic,” and so rule-based inference engines are developed. “We are intelligent because our brains are composed of neural networks,” and so software neural networks are developed. “We are intelligent because we can reason even under uncertainty,” and so programs implementing Bayesian statistics are created.

It doesn’t matter that none of these approaches ever got even to first base at passing the Turing Test; the term Artificial Intelligence was applied to them from the start, and it stuck. Thus, the field of Artificial Intelligence came into existence while the core concept of “intelligence” itself remained vague and nebulous. People have an intuitive notion that it is about consciousness, self-awareness, and autonomy. As it turns out, these intuitions, like many such intuitions (heavier things fall faster than light ones, and giant rocks don’t fall out of the sky), are wrong.

Going back to Turing, the cogency of his test rests upon the fact that we recognize intelligence when we see it or its results. We know when someone understands what we say to them and that is undeniable proof that intelligence is at work. Let’s step back and examine the process of language comprehension.

One person composes a message and sends it to another, who processes it. We often talk about the meaning of words but, of course, words have no inherent meaning; they are arbitrary symbols assigned to represent things in the world around us or, more properly, the ideas of those things that exist in our minds. The grammar of the message and the form of the words in it encode instructions on how the receiving party should make connections between the ideas corresponding to the words, recreating, or at least approximating, in the receiving mind the meaning that the sending mind wished to convey.

Different languages have completely different vocabularies (word-symbol sets), and grammar varies greatly as well, but people can learn each other’s languages, and translation is possible, because humans all live in the same world and have corresponding ideas of the things experienced in it. Thus, any communication using symbols depends on corresponding sets of common referents for those symbols. These sets of common referents are our ideas and our knowledge about the world. Knowledge is not just a bag of random thoughts but an intricately connected structure that reflects many aspects of the external world it evolved to comprehend. It is, in fact, a model: an internal model of the external world.

People’s world models do vary greatly but the momentous invention/discovery of language has endowed human beings with a means to transmit ideas from one to another, expanding, deepening, correcting, and enriching one another’s models.

It has often been suggested that humans cannot think without language and that language and intelligence are one and the same. It is certainly true that without language humans could not accumulate the knowledge that has resulted in civilization. Without our languages, humans would still be living in nature, and it would be even harder than it is to define the essential difference between Homo sapiens and other closely related species.

It is likely that the capacity to construct ordered world models and the capacity for language both depend upon a capability for symbolic processing, and it is that which lies at the root of the essential difference of humanity. But they are very distinct processes, and the model is a prerequisite for language, not the other way around. A human living alone, separated from its kind without the opportunity to learn a language (the classic “raised by wolves” scenario), will still make alterations to its environment that would be impossible if it could not first have imagined them. Imagination, too, requires the model as a necessary prerequisite.

That fact suggests a thought experiment that further clarifies the essence of intelligence. Imagine traveling through space and discovering a planet with structures on the surface created by an alien life form. Can we tell whether the species was intelligent by looking at the structures? If we see patterns in the structure that repeat in a way that indicates an algorithm at work then, no matter how monumental the structures, they are probably akin to beehives, ant hills, beaver dams, and bird nests, the result of instinctive repetitive behaviors. But if they reflect an individuality clearly based on considerations of utility, environment, and future events, they had to be imagined before they could be constructed and that means the builders had an internal world model. It could be a vastly alien intelligence that evolved under conditions so different from the world our model evolved to represent that we could never really communicate, but the essence of intelligence is the same and is unmistakable.

For now, the only example we have of an intelligent entity is ourselves and it is difficult to abstract the essence of intelligence from the experience of being an intelligent being. What about consciousness, self-awareness, and autonomy? Like intelligence, these things are essential to being a human being but are they one and the same?

The answer is that they may support one another but they are not the same. All these characteristics are “off-line” in a hypnotized person, yet that person can still process language, still access and update their world model. Consciousness is the experience of processing real-time data – other animals do that as well and we do not say they are intelligent because of it. Self-awareness is an examination of an entity’s model of itself. As with language and imagination, the capacity to build an internal model is a prerequisite for self-awareness but not the other way around.

Inseparable from consciousness and self-awareness, humans also experience desires and motivations. One motivation that seems particularly inseparable from the experience of being an intelligent being is the desire to use that intelligence to change the world. The perception that intelligence and the desire to employ it to alter the world are connected in human beings is correct.

Humans evolved intelligence, manual dexterity (hands), and the desire to use the first two to change and control their environment, in parallel. All three are a matched set that has everything to do with being a human being (intelligence and hands won’t help you survive if you don’t have the desire to use them) but they are three separate phenomena.

The association between intelligence and the motivation to alter our environment in humans has led to a common misconception about artificial intelligence. If it is possible to build an artificial intelligence that is as intelligent as a human, it will probably be possible to build one that is more intelligent than a human, and such a superior AI could then build another superior to itself, and so on. This is the so-called “singularity” popularized by Ray Kurzweil and others. People naturally feel that a vastly superior and powerful artificial intelligence must necessarily experience the desire to exercise its power, a fearful (for humans) prospect. But giving a robot hands does not automatically endow it with a desire to pick things up and throw them around, and neither does endowing it with intelligence. Whatever motivations robots are given will be by the design of their builders, not some spontaneous or unintended result of the intelligence their builders also give them.

It is now possible to define, with some clarity, what intelligence is: not human intelligence, not alien intelligence, not artificial intelligence, but the thing itself. And we can do this without the usual appeals to how humans experience the phenomenon of being intelligent beings.

Intelligence is the process through which a computational information processor creates a model within processor memory of external phenomena (knowledge) of sufficient fidelity to predict the behavior of and/or control those phenomena.

The definition has both a qualitative and quantitative aspect. It is in the quantitative aspect that we can precisely relate the ancillary functions of language, imagination, consciousness, and self-awareness that are so closely related to it yet remain distinct from the core concept of model building. How much fidelity does the model have to have, how well must it predict, and how much control over the external world must it enable before we can properly call the system that processes it intelligent?

Albert Einstein is very properly called a genius. Why? He created a model of the world, of the entire space-time continuum in fact, of such unprecedented fidelity that humankind’s ability to predict and control the external world was moved forward by a quantum leap. Now, what if Einstein had been the hypothetical “raised by wolves” person described earlier, without language or culture to support the development of his internal model? Maybe he would have invented a better way to chip flint, but who could he tell about it?

Functional intelligence requires accumulated knowledge, and that requires language. The capacity to monitor and predict the behavior of our external environment requires real-time analysis of incoming information that can be related to our stored model, and that requires consciousness. Control of our environment requires that we be self-motivated, and that is autonomy, which implies goals to be pursued in order to maximize internal utility functions, a process that is meaningless without self-consciousness.

Will artificial intelligence have all these characteristics? Yes, but they won’t spontaneously appear as the result of reaching some level of complexity or as a by-product of some as yet undiscovered master algorithm. Each will have to be purposely designed and integrated into the core processes that support its intelligence, beginning with the capacity to build a high-fidelity world model.

Knowledge and Language

In Humans

True or False: “The Library of Congress is a great repository of knowledge.”

Who would not answer “true” without hesitation? But consider the following thought demonstration.

Suppose Socrates[i] told you he saw a cisticola while on a trip to Africa, and you asked what that might be.

He answered: “A cisticola is a very small bird that eats insects.”

In an instant you know that cisticolas have beaks, wings, and feathers, that they almost certainly can fly, that they have internal organs and mass, and hundreds of other properties that were not contained in the sentence.

Let us step through the articulation process that Socrates went through to create the specification for this new knowledge.

First, he decomposed the concept denoted by the word “cisticola” in his mind into component concepts and selected certain ones that he guessed already existed in your mind. The key one is “bird,” because if you classify cisticolas as birds you will assign them all the properties common to birds, as well as the essential properties and attributes of animals, organisms, and physical objects: a large body of knowledge.

Second, he selected concepts that will help you distinguish cisticolas from other birds: they are comparatively very small, and they are insectivorous.

He now has a parts list for the new concept to be constructed in your mind, drawn from items you already have in your cognitive warehouse: bird, very, small, eating, insects.

Third, he provided the assembly instructions: he chose some connective words (“is,” “a,” “that”) and arranged all the words in a specific grammatical order.

Thus, natural language is a communications protocol between two entities that have pre-existing knowledge. Each sentence is an instruction, not different in essence from a line of code in a computer program.
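The three-step process above can be sketched in code. The following is a minimal, hypothetical illustration (the names `warehouse`, `comprehend`, and the property sets are invented for this sketch, not New Sapience's actual implementation): a small "cognitive warehouse" of pre-existing concepts, where classifying a new concept under an existing one causes it to inherit that concept's properties up the chain.

```python
# A toy "cognitive warehouse": each concept names its parent (is_a) and
# the properties it contributes beyond those inherited from the parent.
warehouse = {
    "physical object": {"is_a": None, "properties": {"has mass"}},
    "organism": {"is_a": "physical object", "properties": {"is alive"}},
    "animal": {"is_a": "organism", "properties": {"has internal organs"}},
    "bird": {"is_a": "animal",
             "properties": {"has beak", "has wings", "has feathers", "can fly"}},
}

def all_properties(concept, kb):
    """Collect a concept's own properties plus everything inherited up the is-a chain."""
    props = set()
    while concept is not None:
        node = kb[concept]
        props |= node["properties"]
        concept = node["is_a"]
    return props

def comprehend(name, parent, distinguishing, kb):
    """Assemble a new concept from pre-existing components, following the
    'instruction' carried by the sentence: classify it, then add the
    distinguishing properties the sender selected."""
    kb[name] = {"is_a": parent, "properties": set(distinguishing)}
    return all_properties(name, kb)

# "A cisticola is a very small bird that eats insects."
known = comprehend("cisticola", "bird", {"very small", "eats insects"}, warehouse)
print(sorted(known))
```

The sentence itself carries only two new facts, yet the assembled concept carries nine properties, because classification under “bird” pulls in everything already stored in the warehouse. That asymmetry is the point of the thought demonstration.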

[Image: A cisticola. A picture is worth a thousand words.]

The process of composing the instruction by the sender is called articulation. The receiver interprets the instruction to assemble a new concept in accordance with the specification expressed by the grammar from pre-existing conceptual components. This process is called language comprehension.

All the books in the Library of Congress are records of such instructions, which can only be converted to knowledge when read by someone whose pre-existing knowledge is sufficiently compatible with that of the authors. Therefore, inescapably, we must conclude that language is information, not knowledge, and that it does not contain knowledge.

In Machines

Today, programs that use human natural language in their inputs or outputs are becoming increasingly popular.

“Products like Amazon Alexa and chatbots are commonly referred to as AIs, but they’re just sophisticated programs built on top of machine learning, natural language processing, and statistical algorithms.”

Luis Perez-Breva, Head of MIT’s Innovation Program

“The biggest misconception that arises is that a chatbot is a bot that converses with a human in the way that another human would converse with a human… this is simply not possible using the current technology.”

https://botpress.io/learn/what-why

The “current technology” referred to in the second quotation is specified in the first: machine learning, natural language processing, and statistical algorithms.

To converse like a human, you must be able to articulate and comprehend as humans do, as described above. Both processes depend on a pre-existing “warehouse” of concepts containing component concepts that can be assembled to create new knowledge. Such a warehouse comprises knowledge and is essentially a model of the world, or of the domain of comprehension. None of the techniques employed by chatbots possesses such a warehouse. The very idea of modeling knowledge in software was abandoned by the “AI” community decades ago as being too difficult.

New Sapience has succeeded in creating such a model in a machine, and our software implements the processes of comprehension and articulation in the same step-by-step sequences described above. For the first time, a computer program can converse with a human in the way that another human would. This opens a broad new market of powerful applications forever beyond the reach of chatbot technology.

 

[i] Compare this thought demonstration with the one in the Platonic dialogue Meno, where Socrates attempts to prove that all learning is memory. Through a conversation, he leads a boy to comprehend the Pythagorean theorem. We assert that the boy did not have knowledge of the theorem in the first place, but he possessed the component concepts needed to create it, and Socrates provided the instructions needed to put them together.