About admin


The Turing Test

By | 2017-08-07T09:41:27+00:00 July 27th, 2017|Foundations|

An Imitation Game

The great mathematician and computer scientist Alan Turing proposed his now famous test for Artificial Intelligence in 1950. The test was simple: in a text conversation (then via teletype; today we would say texting) with a person and a machine, if the judge could not reliably tell which was which, then, as Turing put it (hedging even here), it would be unreasonable to say the machine was not intelligent. The Turing Test bounds the domain of intelligence without defining what it is. We can only recognize intelligence by its results.

However, over the more than 50 years since Turing's formulation, the term has been loosely applied and is now often used to refer to software that does not, by anyone's definition, enable machines to "do what we (as thinking entities) can do," but rather merely emulates some perceived component of intelligence, such as inference, or some structure of the brain, such as a neural network. Recently the term "Artificial General Intelligence" (AGI) has come into use to refer precisely to the domain as Turing defined it.

There are several issues with such a test: the machine would have to be taught how to lie, or the judge would have to be very restricted in what could be talked about; the judgment could be shaded by the judge's expectations with respect to the current state of the art in AI; and finally, do we really want to build artificial humans [...]

Knowledge And Language

By | 2017-07-27T11:54:48+00:00 July 20th, 2017|Foundations|

Knowledge and Language

A common misconception about the relationship of knowledge and language is that the latter contains the former. Certainly, we describe the Library of Congress as a great repository of knowledge. But consider what is going on when people use language to share knowledge. Our concepts are composed of simpler concepts, so in order to convey a concept from one person to another we first mentally break it down into its component parts. For example, to convey the idea of "homepod" we break it down into component concepts, represented here by their English word referents:

age: new
device: smart speaker
manufacturer: Apple

This is a parts list of concepts which the speaker guesses will already exist in the listener's mind, or at least reasonably similar concepts with the same word referents. If one or more parts are lacking, full communication is not possible. If all the components are well understood, it may be obvious how they connect to form the new concept. However, more complex concepts require assembly instructions to indicate how the parts are to be connected to each other: grammar. We wrap the parts list in grammar, plus a few connection words like articles and copulas, and say: "A homepod is the new smart speaker from Apple." As a child learns language, they learn that what follows the word "is" are properties of what precedes it, and that, by its location and form, "new" modifies the term "smart speaker" which, following the [...]
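The parts-list-plus-grammar mechanism described above can be sketched as a tiny data structure. This is purely illustrative (the dictionary keys and the `describe` helper are hypothetical names, not part of any real system):

```python
# Hypothetical sketch: a new concept as a "parts list" of simpler concepts,
# each represented by its English word referent.
homepod = {
    "age": "new",
    "device": "smart speaker",
    "manufacturer": "Apple",
}

def describe(term, parts):
    """Wrap the parts list in minimal grammar (an article, a copula)
    to convey the new concept to a listener."""
    return f"A {term} is the {parts['age']} {parts['device']} from {parts['manufacturer']}."

print(describe("homepod", homepod))
# → A homepod is the new smart speaker from Apple.
```

The sentence only succeeds if the listener already holds concepts matching each value in the dictionary; the grammar supplies nothing but the assembly instructions.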

A New Epistemology

By | 2017-08-07T13:15:06+00:00 July 20th, 2017|Modelled Intelligence|

How do we know what we know? If we want to endow machines with knowledge, we had better understand what it is. Epistemology, a term first used in 1854, is the branch of philosophy concerned with the theory of knowledge. It is not much studied in the schools these days, and certainly not in computer science curricula. Traditionally, epistemologists have focused on such concepts as truth, belief, and justification as applied to any given assertion. From that perspective it is not much help, since previous attempts to put knowledge into machines failed because they treated knowledge as just that: a vast collection of assertions (facts or opinions). That is not knowledge; that is data. We need to find an organizing structure for all these facts that will transform them into a road map of the world.

Since the dawn of civilization there have been successive descriptions of our world, or reality. The ancients created, as beautifully articulated by the theorems of the Alexandrian mathematician Ptolemy, an elegant geometric model of the universe with the earth at the center and everything else travelling around it in perfect circles at a constant velocity. They had to put circles traveling on other circles to make the model match the actual celestial observations, but it worked![1]

[Figure: Claudius Ptolemy, AD 100 - 170; the Ptolemaic system; a sextant]

Later this model was (what should one say: refuted, replaced, superseded?) by Newton [...]
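The circles-traveling-on-circles construction can be written down directly: the planet rides a small circle (the epicycle) whose center rides a larger circle (the deferent) around the Earth. A minimal sketch, with purely illustrative radii and angular speeds rather than Ptolemy's historical values:

```python
import math

def ptolemaic_position(t, R=1.0, r=0.3, w_def=1.0, w_epi=5.0):
    """Geocentric position at time t of a planet on an epicycle of radius r
    (angular speed w_epi) whose center travels a deferent circle of radius R
    (angular speed w_def) around the Earth at the origin.
    All parameter values are illustrative, not historical."""
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    return x, y

# At t = 0 the two circles start aligned, so the planet sits at R + r from Earth.
print(ptolemaic_position(0.0))
# → (1.3, 0.0)
```

Summing enough such circular terms can reproduce remarkably complex apparent motions, including retrograde loops, which is exactly why the model matched observations for so long.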


Artificial General Intelligence (AGI)

By | 2017-07-27T11:56:34+00:00 July 20th, 2017|AGI, Foundations|

Artificial General Intelligence (AGI)

AGI, or Artificial General Intelligence, is the quest for software that has genuine comprehension, comprehension that would be recognizable as such by anyone because (in the spirit of the Turing Test) you can hold a general, unscripted conversation with it. Today, outside of our work, AGI research efforts fall into two categories.

Whole brain emulation
The approach is that you first create a neural network with the size and complexity of the human brain and then program it to recapitulate, in some form, the human cognitive processes that will eventually result in the production of world knowledge. The assumption here is that intelligence is a kind of emergent property of a vast neural network. We find this assumption extremely doubtful, and there are numerous other problems associated with this approach, even should it produce something. Ray Kurzweil, who popularized the idea of an AI "singularity" and is currently VP of Engineering at Google, is pursuing this approach (no doubt with lots of money; he will need it). The project at Google and numerous other whole-brain research projects at DARPA, IBM, and elsewhere are described elsewhere.

Cognitive algorithms
This approach seeks to discover one or a small number of immensely powerful algorithms that endow the human brain with intelligence, and then reverse engineer them such that the program will be able to process raw inputs and turn them into real knowledge as humans can do. We call this the magic algorithm approach. Significant [...]

AI at Google

By | 2017-07-27T11:56:52+00:00 September 20th, 2016|AGI, Competition|

[Figure: Representation of a neural network]

Artificial Neural Networks & Natural Language

As we know, Google (and all of the big tech companies) have been making massive investments in the (we think misnamed) "cognitive computing" technology that is now considered almost synonymous with AI by common usage. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for things like pattern recognition in photographs, though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize.

Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases, in this case of documents, which from the standpoint of the program are just an array of numbers (exactly as a photograph is nothing but an array of numbers to such programs). The applications process these text databases, but they have no reading comprehension as humans recognize it: no notion whatsoever of the content or meaning of the text. Humans curate the databases to limit the search scope and design "training" algorithms to identify patterns in the numbers; numbers, because text in computers is represented in ASCII, a numerical representation. Once trained, and training may take days, weeks or longer [...]
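The point that text is nothing but numbers to such a program can be seen directly. A minimal illustration in Python, using Unicode code points (which coincide with ASCII for plain English text):

```python
# A program sees a document not as words but as a sequence of numeric codes.
text = "reading comprehension"
codes = [ord(c) for c in text]  # each character as its numeric code point

print(codes[:7])
# → [114, 101, 97, 100, 105, 110, 103]
```

A pattern-matching system operates on arrays like `codes` (or on token IDs derived from them); nothing in the numbers themselves carries the meaning of the phrase.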

The Third Singularity

By | 2017-07-27T12:06:35+00:00 September 20th, 2015|AGI, Foundations|

The Third Singularity

Are super artificial intelligences going to make humanity obsolete? If you're not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. The eminent theoretical physicist Stephen Hawking said about AI: "it would take off on its own, and re-design itself at an ever increasing rate. Humans who are limited by slow biological evolution, couldn't compete, and will be superseded." Visionary entrepreneur and technologist Elon Musk said: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful." No less than Bill Gates seconded his concern: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe they could not create A.I.s even more intelligent; but then those super A.I.s could create A.I.s more intelligent than themselves, and so on ad infinitum, and in no time at all A.I.s would exist as superior to humans in intelligence as humans are to fruit flies. The term Singularity is taken from mathematics, where it refers to a function that becomes undefined at a certain point, beyond which its behavior becomes impossible [...]

Knowledge and Intelligence

By | 2017-07-27T11:49:52+00:00 September 20th, 2015|AGI, AI, Foundations, Modelled Intelligence|

Understanding Intelligence

Alan Turing, in his 1950 paper "Computing Machinery and Intelligence," proposed the following question: "Can machines do what we (as thinking entities) can do?" To answer it, he described his now famous test, in which a human judge engages in a natural language conversation via a text interface with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, the machine is said to pass the test. The Turing Test bounds the domain of intelligence without defining what it is. We recognize intelligence by its results.

John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as "the science and engineering of making intelligent machines." A very straightforward definition, yet few terms have been more obfuscated by hype and extravagant claims, imbued with both hope and dread, or denounced as fantasy. Over the succeeding decades, the term has been loosely applied and is now often used to refer to software that does not, by anyone's definition, enable machines to "do what we (as thinking entities) can do."

The process by which this has come about is no mystery. A researcher formulates a theory about what intelligence, or one of its key components, is and attempts to implement it in software. "Humans are intelligent because we can employ logic," and so rule-based inference engines are developed. "We are intelligent because our brains are composed of neural networks," and so software neural networks are [...]

“Cognitive Computing”

By | 2017-07-27T11:58:02+00:00 September 20th, 2015|AI, Competition|

The Problem with Cognitive Computing

Nearly all of the technologies today that describe themselves as AI, in both the Narrow and General categories, are characterized as Cognitive Computing, defined as: "the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works." This approach has become so mainstream that some have suggested the term Cognitive Computing has replaced the term Artificial Intelligence as a branch of computer science. If so, New Sapience is not in the AI space at all, since considerations of human brain structure and function play no role in our technology.

There are fundamental problems with the Cognitive Computing approach, which are not lost on some of its foremost practitioners. Jerome Pesenti, vice president of the Watson team at IBM, is quoted in an article in "The Platform" titled "The Real Trouble With Cognitive Computing": "When it comes to neural networks, we don't entirely know how they work, and what's amazing is that we're starting to build systems we can't fully understand. The math and the behavior are becoming very complex and my suspicion is that as we create these networks that are ever larger and keep throwing computing power to it, .... (it) creates some interesting methodological problems." He goes on to say, "The non-messy way to develop would be to create one big knowledge model, as with the semantic web, [...]

NLP: Other Approaches

By | 2017-07-27T11:59:35+00:00 September 20th, 2015|Competition, Narrow AI, Uncategorized|

Natural Language Processing: Narrow AI Approaches

Most of the work being done today in the AI field is categorized as Narrow AI. "Narrow AI" is distinct from Artificial General Intelligence: these are techniques that do interesting and sometimes useful things but clearly exhibit nothing like genuine comprehension of language, even when they occasionally output a correct or useful response to an input query. All the "Big Data" and "Deep Learning" type applications (most based on pattern matching and statistical approaches inspired by neural networks) are in this category. This is also the only category where you find actual products, like SIRI or Watson, or even interesting demos.

Apple's SIRI (and similar offerings from Google, Microsoft and Amazon)
SIRI, in the words of its developers, is a "text based front-end to a search engine" with some extensions for voice commands into other applications. It is not a comprehension technology and does not understand a single word of the language it processes. In marked contrast to Apple's other much admired technology, we believe SIRI is more likely to be made fun of than made use of. SIRI's responses to the same inputs given to our Alpha sapiens in the dialog presented previously dramatically illustrate just how limited it actually is.

IBM's Watson
IBM is devoting much advertising hype to possible applications for its Jeopardy-winning program, yet like SIRI, we believe it processes language simply as patterns and uses statistical probabilities to "guess" the response to [...]

Assessing AI

By | 2017-07-27T13:02:45+00:00 September 16th, 2015|Modelled Intelligence|

Measuring Language Comprehension

How intelligent will our sapiens become? For the first time in the history of computing, the language comprehension of a software technology can be measured with tools designed for people. We expect human language comprehension tools to be useful for assessing our technology's increasing language comprehension at regular intervals. The performance level of a Modeled Intelligence is determined solely by the scope and fidelity of its world model. There is no limit to how well the world can be modeled, as the history of human knowledge attests. However, the computational bandwidth and memory capacity of an individual human brain is forever bounded in ways computer technology is not.

We expect the baseline language comprehension to climb quickly through the grade levels, continuing to college, graduate levels, and beyond. Such a notion has been inconceivable for any other approach because, without world models, they have no language comprehension to measure and no thoughts to articulate. Since its beginnings in the 1980s, the AI community has been rife with hyperbole and vague claims of programs that "think like humans," but always without measurable results. We believe that era is now in the past. With quantifiable comprehension, we foresee that New Sapience's Modeled Intelligence will demonstrate a breakthrough potential to move into a field of machine-human interface applications that is essentially unlimited compared to the technologies currently available.

Bloom's Taxonomy of Learning
Bloom's Taxonomy provides an important framework teachers use to focus on higher-order thinking. By [...]
