The Hidden Structure

By | June 2nd, 2018 | AGI, Foundations, MK

Discovering the Hidden Structure of Knowledge --- Though we don’t know how the human brain transcends the data-processing layer (where the brain, too, is just processing low-level data coming in from the senses) to reach the realm of knowledge, we can, through introspection, examine the structure of the end product of our thought processes, that is, knowledge itself. What we find is a collection of ideas connected through various relationships that are themselves ideas. While many of these ideas represent specific objects in the real world, that tree, this car, and so forth, many are abstractions: trees, cars. Each idea is connected to many others, some of which define its properties and some its relationships to other ideas. The power of abstract ideas, as opposed to ideas representing particular things, is that they are reusable: they can become components of new ideas. Complex concepts are built out of fundamentals. As the material world is composed of atoms, our knowledge of the world is composed of ideas. The English language has over a million words, each referring to an idea. Without some notion that only a small portion of these ideas are fundamental (atoms) and can be combined only in certain ways, the task of putting knowledge in machines is overwhelming. Democritus is known as the "laughing philosopher." There is speculation that he was laughing at his critics, who clearly had not thought things out as well as [...]
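The structure described above, ideas connected by relationships that are themselves ideas, can be sketched as a tiny graph. This is only an illustrative toy, not New Sapience's actual representation; the class and concept names are our own assumptions for the example.

```python
# Toy sketch: knowledge as a graph of concepts whose relationships
# are also concepts. All names here are hypothetical illustrations.

class Concept:
    def __init__(self, name):
        self.name = name
        self.links = []  # list of (relation Concept, target Concept) pairs

    def relate(self, relation, target):
        self.links.append((relation, target))

# Relationships are concepts too, so they can be inspected and reused.
is_a = Concept("is-a")
has_part = Concept("has-part")

plant = Concept("plant")
tree = Concept("tree")            # an abstraction, reusable as a component
trunk = Concept("trunk")
that_tree = Concept("that tree")  # a particular thing in the world

tree.relate(is_a, plant)
tree.relate(has_part, trunk)
that_tree.relate(is_a, tree)      # the particular reuses the abstraction

print([(r.name, t.name) for r, t in tree.links])
```

The point of the sketch is that "is-a" and "has-part" are nodes like any other, so new complex concepts can be assembled from a small set of fundamental ones.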

How we got here.

By | July 28th, 2017 | AGI, AI, Foundations, MK

The Road To Machine Knowledge The body of epistemological theory and insights that have found practical application in Compact Knowledge Models is the result of over forty years of focused interest and study by New Sapience founder, Bryant Cruse. He first formulated his epistemological theories as an undergraduate at St. John's College in Annapolis in 1972, inspired by the works of Plato, Aristotle, Locke, Hobbes, Descartes, and Kant, as well as the existentialists of the 19th century. Epistemology is generally considered an obscure and esoteric branch of philosophy of interest only to academics who, traditionally, have focused on debate about the truth, belief, and justification of individual assertions. Cruse’s theories, which approach knowledge as an integrated model designed from the standpoint of utility, represent a clear departure from classic epistemological traditions, and his career focus has been oriented toward practical applications rather than academic publications. As a space systems engineer on the operations team for the Hubble Space Telescope in the mid-1980s, Cruse became interested in finding a way to automate the analysis of massive amounts of telemetry data in the ground system computers. This began a path that led him to a residency in AI at the Lockheed Palo Alto research labs, where he became the driving force behind development of the first real-time expert system shell. Rule-based systems proved not to be a practical solution for representing human knowledge, and in 1991 he led a team that succeeded in developing a much more efficient [...]


The Turing Test

By | July 27th, 2017 | AI, Foundations, MK

An Imitation Game The great mathematician and computer scientist Alan Turing proposed his now famous test for Artificial Intelligence in 1950. The test was simple: in a text conversation (then via teletype; today we would say texting) with a person and a machine, if the judge could not reliably tell which was which then, as Turing put it (hedging even here), it would be unreasonable to say the machine was not intelligent. The Turing Test bounds the domain of intelligence without defining what it is. We can only recognize intelligence by its results. However, in the more than 50 years since Turing’s formulation, the term has been loosely applied and is now often used to refer to software that does not, by anyone’s definition, enable machines to “do what we (as thinking entities) can do,” but merely emulates some perceived component of intelligence, such as inference, or some structure of the brain, such as a neural network. Recently the term “Artificial General Intelligence” (AGI) has come into use to refer precisely to the domain as Turing defined it. There are several issues with such a test: the machine would have to be taught how to lie, or the judge would have to be very restricted in what could be talked about; the judgment could be shaded by the judge’s expectations with respect to the current state of the art in AI; and finally, do we really want to build artificial humans [...]

Artificial General Intelligence

By | July 20th, 2017 | AGI, Foundations, MK

Artificial General Intelligence (AGI) AGI, or Artificial General Intelligence, is the quest for software that does have genuine comprehension, recognizable as such by anyone because (in the spirit of the Turing Test) you can hold a general, unscripted conversation with it. Today, outside of our work, AGI research efforts fall into two categories.

Whole brain emulation. The approach is to first create a neural network with the size and complexity of the human brain and then program it to recapitulate, in some form, the human cognitive processes that will eventually result in the production of world knowledge. The assumption here is that intelligence is a kind of emergent property of a vast neural network. We find this assumption extremely doubtful, and there are numerous other problems associated with this approach even should it produce something. Ray Kurzweil, who popularized the idea of an AI “singularity” and is currently VP of Engineering at Google, is pursuing this approach (no doubt with lots of money; he will need it), as are numerous other whole-brain research projects at DARPA, IBM, and elsewhere.

Cognitive algorithms. This approach seeks to discover one or a small number of immensely powerful algorithms that endow the human brain with intelligence, and then reverse-engineer them so that the program will be able to process raw inputs and turn them into real knowledge as humans can do. We call this the magic algorithm approach. Significant [...]

Knowledge and Language

By | July 10th, 2017 | AGI, AI, Foundations, MK

The Library of Congress, a great repository... of something.

Knowledge and Language in Humans The question of the relationship between language and knowledge in the human mind has fascinated philosophers and other deep thinkers since ancient times. One theory is that language is a prerequisite for knowledge and that knowledge cannot exist without it. The common-sense notion is that language contains or records knowledge. True or false: “The Library of Congress is a great repository of knowledge.” Who would not answer “true” without hesitation? But consider the following thought demonstration: suppose Socrates told you he saw a cisticola while on a trip to Africa and you asked what that might be. He answered: “A cisticola is a very small bird that eats insects.” In an instant you know that cisticolas have beaks, wings, and feathers, that they almost certainly can fly, that they have internal organs, that they have mass, and hundreds of other properties that were not contained in the sentence. Let us step through the articulation process that Socrates went through to create the specification for the creation of this new knowledge. First, he decomposed the concept denoted by the word “cisticola” in his mind into component concepts and selected certain ones that he guessed already existed in your mind. The key one is “bird,” because if you classify cisticolas as birds you will assign them all the properties common to all birds, as well as all the essential properties and attributes of animals, organisms, and physical objects; a [...]
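The inheritance step in the cisticola example can be made concrete with a small sketch: classifying the new concept under "bird" pulls in every property stated higher up the chain, none of which appeared in Socrates' sentence. The taxonomy and property sets below are invented for illustration, not an actual New Sapience knowledge model.

```python
# Hedged sketch: why "a cisticola is a bird" yields hundreds of unstated
# properties. Parent links and property sets are illustrative assumptions.

TAXONOMY = {
    "physical object": {"parent": None, "properties": {"has mass"}},
    "organism": {"parent": "physical object", "properties": {"is alive"}},
    "animal": {"parent": "organism", "properties": {"has internal organs"}},
    "bird": {"parent": "animal",
             "properties": {"has beak", "has wings", "has feathers"}},
    # The new concept adds only what the sentence actually said:
    "cisticola": {"parent": "bird",
                  "properties": {"is very small", "eats insects"}},
}

def all_properties(concept):
    """Collect a concept's properties by walking up its parent chain."""
    props = set()
    while concept is not None:
        node = TAXONOMY[concept]
        props |= node["properties"]
        concept = node["parent"]
    return props

print(sorted(all_properties("cisticola")))
```

Two facts were asserted explicitly; the rest arrive for free through classification, which is the leverage the passage describes.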

A New Epistemology

By | July 5th, 2017 | AGI, Foundations, MK

How do we know what we know? If we want to endow machines with knowledge, we had better understand what it is. Epistemology, a term first used in 1854, is the branch of philosophy concerned with the theory of knowledge. It is not much studied in the schools these days, and certainly not in computer science curricula. Traditionally, epistemologists have focused on such concepts as truth, belief, and justification as applied to any given assertion. From that perspective it is not much help, since previous attempts to put knowledge into machines failed because they treated knowledge as just that: a vast collection of assertions (facts or opinions). That is not knowledge; that is data. We need to find an organizing structure for all these facts that will transform them into a road map of the world. Since the dawn of civilization there have been successive descriptions of our world, or reality. The ancients created, as beautifully articulated by the theorems of the Alexandrian mathematician Ptolemy (Claudius Ptolemy, AD 100 - 170), an elegant geometric model of the universe with the earth at the center and everything else travelling around it on perfect circles, at a constant velocity. They had to put circles traveling on other circles to make the model match the actual celestial observations, but it worked![1] Later this model was (what should one say: refuted, replaced, superseded?) by [...]

Models and Metaphors

By | September 5th, 2016 | Foundations

Personal reflections on neural networks, modeled Artificial Intelligence, and the experience of being human. I become more and more excited about the progress we are making, here at New Sapience, in solving the language problem: learning how to build knowledge structures that accurately model the world but are completely independent of languages and linguistics. Our fundamental realization, that language is an encoded communications protocol between entities and does not contain or record knowledge in itself, is hugely helpful in keeping us on the right track. Our biggest challenge is that, as we use introspection to examine our own interior world model, we find ourselves “articulating” that model to ourselves, and so language is always coming back in to cloud the issue. I find myself constantly admonishing our “epistemological engineers” to remember to think in terms of nodes and connectors, not the meaning of words, which can only have meaning in relationship to a model independent of language. As the equivalent reading comprehension level of our sapiens climbs up the human grade levels, it is tempting to think that once it reaches, say, fourth grade, we can “send it to school”: let it read textbooks and eventually the Internet, and it will be able to automatically accumulate arbitrarily large quantities of knowledge. We will certainly be able to do this, and for a long time I believed we would – why not? Interestingly, the farther we go down the road, the [...]

The Third Singularity

By | September 20th, 2015 | AGI, Foundations, MK

The Third Singularity Are Super Artificial Intelligences going to make humanity obsolete? If you’re not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. The eminent theoretical physicist Stephen Hawking said about AI: “it would take off on its own, and re-design itself at an ever increasing rate. Humans who are limited by slow biological evolution, couldn’t compete, and will be superseded.” Visionary entrepreneur and technologist Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” No less than Bill Gates seconded his concern: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned." The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called The Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe they could not create A.I.s even more intelligent; but then those super A.I.s could create A.I.s more intelligent than themselves, and so on ad infinitum, and in no time at all A.I.s would exist as superior to humans in intelligence as humans are to fruit flies. The term Singularity is taken from mathematics, where it refers to a function that becomes undefined at a certain point, beyond which its behavior becomes impossible [...]