The New Sapience Thesis

By | July 7th, 2017 | AGI, AI, MK

Knowledge and Intelligence

Artificial Intelligence has been considered the “holy grail” of computer science since the dawn of computing, though these days, when all kinds of programs are grouped loosely together under the term “AI,” it is necessary to say “real AI” or “Artificial General Intelligence” to indicate we are talking about intelligence in the same sense as human intelligence. Humans are intelligent animals. It is the one attribute that humans possess to so much greater a degree than any other known animal, and it defines us. We define ourselves by our intelligence and the experience of being thinking entities. But who knows what is going on in the minds of other creatures? Pilot whales not only have larger brains than humans; their neocortex, thought to be the seat of intelligence in humans, is also larger. What is truly unique about humans is the end product of our cognitive processes: knowledge. It is knowledge of the world, which allows us to evaluate how different courses of action lead to different results, that has made our species masters of our world. It takes but a moment of reflection to realize that, since the reason we build machines is to amplify our power in the world, the real goal of intelligent machines is not “thinking” in the information-processing sense. Computers can already reason, remember, and analyze patterns superbly; in that sense they are already intelligent, but they are ignorant. Imagine if Einstein lived in Cro-Magnon times. What intellectual [...]

A New Epistemology

By | July 5th, 2017 | AGI, Foundations, MK

How do we know what we know? If we want to endow machines with knowledge, we had better understand what it is. Epistemology, a term first used in 1854, is the branch of philosophy concerned with the theory of knowledge. It is not much studied in the schools these days, and certainly not in computer science curricula. Traditionally, epistemologists have focused on concepts such as truth, belief, and justification as applied to any given assertion. From that perspective the field is not much help, since previous attempts to put knowledge into machines failed precisely because they treated knowledge as just that: a vast collection of assertions (facts or opinions). That is not knowledge; that is data. We need to find an organizing structure for all these facts that will transform them into a road map of the world. Since the dawn of civilization there have been successive descriptions of our world, or reality. The ancients created, as beautifully articulated in the theorems of the Alexandrian mathematician Ptolemy, an elegant geometric model of the universe with the earth at the center and everything else travelling around it on perfect circles at constant velocity. They had to put circles traveling on other circles to make the model match the actual celestial observations, but it worked![1] Later this model was (what should one say: refuted, replaced, superseded?) by [...]

AI at Google

By | September 20th, 2016 | AGI, Competition

Representation of a neural network

Artificial Neural Networks & Natural Language

When we explain our Compact Knowledge Model technology and describe its far-reaching implications for Artificial General Intelligence, a common reaction is “but surely Google and the other big tech companies are doing something similar.” As we know, Google (and all of the big tech companies) have been making massive investments in the (we think misnamed) “cognitive computing” technology that is now considered almost synonymous with AI in common usage. “Cognitive computing” is jargon for artificial neural networks (ANNs). Neural networks are “trained” over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for things like pattern recognition in photographs, though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize. Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases, in this case of documents, which from the standpoint of the program are just an array of numbers (exactly as a photograph is nothing but an array of numbers to such programs). The applications process these text databases, but they have no reading comprehension as humans recognize it, no notion whatsoever about the content or meaning of the text. Humans curate the databases to limit the [...]
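The point that text, like a photograph, reaches such a program only as an array of numbers can be sketched in a few lines. This is a hypothetical toy encoding, not any particular system’s pipeline: each distinct word is assigned an arbitrary integer id, and only those ids are ever handed to the network.

```python
# Toy illustration: before a neural network ever "reads" a sentence,
# the text is converted into an array of integer ids. The network
# operates on these numbers alone; the words' meanings never enter.

def build_vocabulary(corpus):
    """Assign each distinct word an arbitrary integer id, in order seen."""
    vocab = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(sentence, vocab):
    """Replace each word with its id: the only form the model ever sees."""
    return [vocab[w] for w in sentence.lower().split()]

corpus = ["the cat sat on the mat", "the dog sat"]
vocab = build_vocabulary(corpus)
print(encode("the cat sat", vocab))  # prints [0, 1, 2]
```

The ids carry no trace of what “cat” or “sat” means; swapping every word for a different one would leave the arrays statistically identical, which is exactly the limitation described above.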

Models and Metaphors

By | September 5th, 2016 | Foundations

Personal reflections on neural networks, modeled Artificial Intelligence, and the experience of being human. I become more and more excited about the progress we are making here at New Sapience in solving the language problem, that is, learning how to build knowledge structures that accurately model the world but that are completely independent of languages and linguistics. Our fundamental realization, that language is an encoded communications protocol between entities and does not contain or record knowledge in itself, is hugely helpful in keeping us on the right track. Our biggest challenge is that, as we use introspection to examine our own interior world model, we find ourselves “articulating” that model to ourselves, and so language is always coming back in to cloud the issue. I find myself constantly admonishing our “epistemological engineers” to remember to think in terms of nodes and connectors, not the meanings of words, which can have meaning only in relation to a model independent of language. As the equivalent reading comprehension level of our sapiens climbs up the human grade levels, it is tempting to think that once it reaches, say, fourth grade, we can “send it to school”: let it read textbooks and eventually the Internet, and it will be able to automatically accumulate arbitrarily large quantities of knowledge. We will certainly be able to do this, and for a long time I believed we would. Why not? Interestingly, the farther we go down the road, the [...]

The Third Singularity

By | September 20th, 2015 | AGI, Foundations, MK

Are super artificial intelligences going to make humanity obsolete? If you’re not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. The eminent theoretical physicist Stephen Hawking said about AI: “it would take off on its own, and re-design itself at an ever increasing rate. Humans who are limited by slow biological evolution, couldn’t compete, and will be superseded.” Visionary entrepreneur and technologist Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” No less than Bill Gates seconded his concern: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe they could not create A.I.s even more intelligent; but then those super A.I.s could create A.I.s more intelligent than themselves, and so on ad infinitum, and in no time at all A.I.s would exist as superior to humans in intelligence as humans are to fruit flies. The term Singularity is taken from mathematics, where it refers to a function that becomes undefined at a certain point, beyond which its behavior becomes impossible [...]

Knowledge and Intelligence

By | September 20th, 2015 | AGI, MK

Understanding Intelligence

Alan Turing, in his 1950 paper “Computing Machinery and Intelligence,” proposed the following question: “Can machines do what we (as thinking entities) can do?” To answer it, he described his now-famous test, in which a human judge engages in a natural language conversation via a text interface with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, the machine is said to pass the test. The Turing Test bounds the domain of intelligence without defining what it is. We recognize intelligence by its results. John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as “the science and engineering of making intelligent machines.” A very straightforward definition, yet few terms have been more obfuscated by hype and extravagant claims, imbued with both hope and dread, or denounced as fantasy. Over the succeeding decades, the term has been loosely applied and is now often used to refer to software that does not, by anyone’s definition, enable machines to “do what we (as thinking entities) can do.” The process by which this has come about is no mystery. A researcher formulates a theory about what intelligence, or one of its key components, is and attempts to implement it in software. “Humans are intelligent because we can employ logic,” and so rule-based inference engines are developed. “We are intelligent because our brains are composed of neural networks,” and so software neural networks are [...]

Assessing AI

By | September 16th, 2015 | MK

Measuring Language Comprehension

How intelligent will our sapiens become? For the first time in the history of computing, the language comprehension of a software technology can be measured with tools designed to assess human comprehension. We are already finding that such tools can be usefully applied to assess our technology’s increasingly sophisticated language comprehension. The performance level of a sapiens is determined solely by the scope and fidelity of its world model. There is no limit to how well the world can be modeled, as the history of human knowledge attests. However, the computational bandwidth and memory capacity of an individual human brain are forever bounded in ways computer technology is not. We expect the baseline language comprehension to climb quickly through the grade levels, continuing to college, graduate levels, and beyond. Such a notion has been inconceivable for any other approach because, without world models, they have no language comprehension to measure and no thoughts to articulate. Since its beginnings in the 1980s, the AI community has been rife with hyperbole and vague claims of programs that “think like humans,” but always without measurable results. We believe that era is now in the past. With quantifiable comprehension, we foresee that New Sapience’s Machine Knowledge will demonstrate breakthrough potential in a field of machine-human interface applications that is essentially unlimited compared to the technologies currently available.

Bloom’s Taxonomy of Learning

Bloom’s Taxonomy provides an important framework teachers use to focus on higher-order thinking. [...]

“Anticipatory Computing”

By | July 20th, 2015 | AI, Competition

"Anticipatory Computing" Recently many applications that self-indentify as AI have also been cited as examples of “anticipatory computing,” as in this National Public Radio article: “Computers That Know What You Need, Before You Ask” Here is the Wikipedia entry for “Anticipatory Computing:” In artificial intelligence (AI), anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. It is widely considered that anticipation is a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization still echoing the debates from the 1980s about the necessity for AI for an internal model. When asked: “What do you anticipate would happen if someone jumped off the Empire State Building?” A human would employ their internal model of acceleration due to gravity, the relative frailty of the human body and the size of the building to predict: “They would impact the pavement at a high velocity and be killed.” So what for a human is simple common sense, in the context of computing is asserted to be a whole new branch of Artificial Intelligence, one that, according to the NPR article cited above, is being used to change the way we interact with our technology: “Google Now”, which is available on tablets and mobile devices, is an early form of this (anticipatory computing). You can ask it a question like, "Where is the White House?" and get a spoken-word answer. Then, Google Now recognizes any follow-up questions, [...]
