How We Got Here

Some inventions, like our sapiens, come “out of left field,” the result of a series of unforeseeable influences and events, pieces of a puzzle that come together at a certain point in time independent of and often contrary to the technology mainstream.

Written by Bryant Cruse

July 28, 2017

The Road To Sapiens

The body of epistemological theory and insights that have found practical application in Compact Knowledge Models is the result of over forty years of focused interest and study by New Sapience founder, Bryant Cruse. He first formulated his epistemological theories as an undergraduate at St. John's College in Annapolis in 1972, inspired by the works of Plato, Aristotle, Locke, Hobbes, Descartes, and Kant, as well as the existentialists of the 19th century.

Epistemology is generally considered an obscure and esoteric branch of philosophy of interest only to academics, who have traditionally focused on debates about the truth, belief, and justification of individual assertions. Cruse's theories, which approach knowledge as an integrated model designed from the standpoint of utility, represent a clear departure from classic epistemological traditions, and his career focus has been oriented toward practical applications rather than academic publications.

As a space systems engineer on the operations team for the Hubble Space Telescope in the mid-1980s, Cruse became interested in finding a way to automate the analysis of massive amounts of telemetry data in the ground system computers. This began a path that led him to a residency in AI at the Lockheed Palo Alto research labs where he became the driving force behind the development of the first real-time expert system shell.

Rule-based systems proved not to be a practical solution for representing human knowledge, and in 1991 he led a team that succeeded in developing a much more efficient methodology by which engineers could specify their knowledge of space systems as a model that could be directly imported into a computer program. This comprehensively solved the telemetry analysis problem and provided another critical piece of the AGI puzzle: how to efficiently put human knowledge acquired through introspection into a computer program.

Putting a detailed knowledge model of the spacecraft in the software made it possible to accurately predict the behavior of the vehicle from thousands of telemetry measurements using very simple processing algorithms. Mission control systems based on this approach ran on PCs at a time when traditional ones required mainframes.
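The approach can be sketched with a toy model-based telemetry check. Everything here, the measurement names, operating modes, and limits, is invented for illustration; the actual spacecraft ground-system software is not public. The point is that once the expected behavior lives in a declarative model, the processing algorithm reduces to a trivial comparison loop.

```python
# Toy sketch of model-based telemetry analysis. All names and limits
# are hypothetical. The "knowledge model" declares each measurement's
# nominal range per operating mode; the algorithm just compares.

# Declarative model: measurement -> mode -> (low, high) nominal range
MODEL = {
    "battery_voltage": {"sunlight": (27.0, 32.0), "eclipse": (24.0, 28.0)},
    "gyro_temp_c":     {"sunlight": (10.0, 40.0), "eclipse": (5.0, 35.0)},
}

def check_telemetry(mode, readings):
    """Return a list of (measurement, value, expected_range) anomalies."""
    anomalies = []
    for name, value in readings.items():
        low, high = MODEL[name][mode]
        if not (low <= value <= high):
            anomalies.append((name, value, (low, high)))
    return anomalies

# A reading that is nominal in sunlight but anomalous in eclipse:
readings = {"battery_voltage": 29.5, "gyro_temp_c": 20.0}
print(check_telemetry("sunlight", readings))  # []
print(check_telemetry("eclipse", readings))   # battery_voltage flagged
```

Adding a new measurement or mode means extending the model, not rewriting the algorithm, which is what keeps the processing cheap enough for modest hardware.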

Cruse noted at this point that there is an inverse relationship between the sophistication of algorithms and that of the data structures they process. You can solve problems using unstructured data if you have very sophisticated algorithms, or you can solve problems using simple algorithms if you have sophisticated information structures.
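A minimal illustration of this trade-off (not New Sapience code, just a generic sketch): the same question answered two ways. Against a flat list of sentences the algorithm must scan and parse; against data that already encodes the relationship, the algorithm collapses to a lookup.

```python
# Illustrating the algorithm/data-structure trade-off with plain lookups.

# 1. Unstructured data: a flat list of sentences. The work of finding
#    the answer lives in the algorithm, which must scan and parse.
facts = [
    "Berlin is the capital of Germany",
    "Paris is the capital of France",
    "Madrid is the capital of Spain",
]

def capital_of_unstructured(country):
    for sentence in facts:
        subject, _, rest = sentence.partition(" is the capital of ")
        if rest == country:
            return subject
    return None

# 2. Structured data: the same knowledge as a mapping. The work now
#    lives in the data structure; the algorithm is a single lookup.
capitals = {"Germany": "Berlin", "France": "Paris", "Spain": "Madrid"}

def capital_of_structured(country):
    return capitals.get(country)

print(capital_of_unstructured("France"))  # Paris
print(capital_of_structured("France"))    # Paris
```

The dictionary version scales better precisely because the structure, not the code, carries the organization, which is the direction a knowledge model pushes to its limit.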

In 2005, Cruse began applying his practical experience in building computer knowledge models to the body of his epistemological theory, formalizing a model of "meta-knowledge," or "knowledge about knowledge." This "epistemological kernel" is a set of organizing principles revealing that the conceptual building blocks of all knowledge come in a number of types, which determine how concepts can be combined to create arbitrarily more sophisticated ones that represent sense rather than nonsense. This is highly analogous to the way types of atoms can be combined only in fixed ways to create different materials. The epistemological kernel is to knowledge representation what the Periodic Table of the Elements is to chemistry. It is the key to making Compact Knowledge Models possible.

New Sapience was founded in 2014 after nearly ten years of R&D to develop a software technology based on the epistemological kernel, Compact Knowledge Models. Today we have a software platform, MIKOS, ready to build the world’s first practical, scalable knowledge-based systems. We now have software that can converse in everyday English and learn the meaning of new words via unscripted question-and-answer dialog or through inference based on the meaning of known words. Natural language comprehension is based on our Cognitive Core, a compact knowledge model that may be the ultimate in sophisticated data structures.

Our success has depended on a unique blend of formal education, philosophic inquiry, and practical engineering experience, a combination that turned out to be exactly the right one for endowing computers with the capability to process human knowledge. The solution to language comprehension, and ultimately to Artificial General Intelligence, could not be found within the boundaries of computer science; it required insights grounded in a much broader perspective. Computer science provides the means to implement those insights.

Of all the pieces of the puzzle, it may be the practical experience, often missing from academic labs and large government or corporate research organizations, that has been crucial to our success. We asked the right question: not "How do we build an artificial human brain?" but "How can we put information into a computer and process it not as data, but as knowledge?"
