About admin

So far admin has created 13 blog entries.

New Sapience 101

By admin | December 4th, 2017 | Competition, Modelled Intelligence

New Sapience 101 is a short course in AI as seen from the New Sapience perspective. It is the best way to understand how our approach to Artificial General Intelligence fits into the larger landscape of AI as it is generally understood today. A must-read for prospective investors! Use the link below to view or download a PDF file: New Sapience 101

“AI” Today: Reality Check

By admin | November 6th, 2017 | AGI, Competition, Narrow AI

Narrow AI's Dark Secrets

Articles about AI are published every day, and in the majority of them the term "AI" is used in a very narrow sense: it means applications based on training artificial neural networks, under the control of sophisticated algorithms, to solve very particular problems. Here is the first dark secret: this kind of AI isn't even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge, and nothing else, that has made us masters of the world around us. Not our clear vision, acute hearing, or subtle motor control; other animals do all of that every bit as well or better. The developers of this technology understand that, and so a term was invented some years ago to distinguish these kinds of programs from real AI: Narrow AI, used in contrast to Artificial General Intelligence (AGI), the kind that processes and creates world knowledge.

Here is the second dark secret: the machine learning we have been hearing about isn't learning at all in the usual sense. When a human "learns" how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles have been sufficiently established to allow them to stay balanced. This "neural learning" is clearly very different from the kind of "cognitive [...]
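
To make concrete what "training" amounts to in this Narrow AI sense, here is a minimal sketch, entirely our own illustration and not anyone's production system: a single artificial neuron whose weights are nudged numerically until its outputs fit a fixed pattern (the logical AND function). Nothing resembling knowledge of the world is involved at any point.

```python
# A minimal sketch (hypothetical, not New Sapience code) of what "training"
# means in Narrow AI: a single artificial neuron adjusting numeric weights
# until its outputs fit a fixed pattern -- here, the logical AND function.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the entire "world" this program will ever see.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0          # weights and bias, tuned blindly by the algorithm
rate = 0.5                 # learning rate

for _ in range(10000):     # thousands of iterations over the same four facts
    for (x1, x2), target in samples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target             # how far off the numbers are
        w1 -= rate * err * x1          # nudge each weight downhill
        w2 -= rate * err * x2
        b  -= rate * err

for (x1, x2), _ in samples:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

After ten thousand passes over the same four examples, the neuron reproduces AND; it has fitted a curve, not understood conjunction.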

The Turing Test

By admin | July 27th, 2017 | Foundations

An Imitation Game

The great mathematician and computer scientist Alan Turing proposed his now famous test for Artificial Intelligence in 1950. The test was simple: in a text conversation (then via teletype; today we would say texting) with a person and a machine, if the judge could not reliably tell which was which, then, as Turing put it (hedging even here), it would be unreasonable to say the machine was not intelligent. The Turing Test bounds the domain of intelligence without defining what it is. We can only recognize intelligence by its results. However, in the more than six decades since Turing's formulation, the term has been loosely applied and is now often used to refer to software that does not by anyone's definition enable machines to "do what we (as thinking entities) can do," but rather merely emulates some perceived component of intelligence, such as inference, or some structure of the brain, such as a neural network. Recently the term "Artificial General Intelligence" (AGI) has come into use to refer precisely to the domain as Turing defined it. There are many issues with such a test: the machine would have to be taught how to lie, or the judge would have to be very restricted in what could be talked about; the judgment could be shaded by the judge's expectations with respect to the current state of the art in AI; and finally, do we really want to build artificial humans [...]
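
For readers who prefer to see the protocol spelled out, here is a schematic of the imitation game as described above. It is illustrative only: the respondent functions are hypothetical stand-ins, and building a machine contestant worth testing is, of course, the whole problem.

```python
# A schematic of the imitation game. Illustrative only: the respondent
# functions below are hypothetical stand-ins.
import random

def human_reply(prompt: str) -> str:
    # In a real test, a person at another teletype would answer.
    return input(f"(relay to the human) {prompt}\n> ")

def machine_reply(prompt: str) -> str:
    # Placeholder contestant; making this convincing is the hard part.
    return "That is an interesting question."

def imitation_game(rounds: int = 5) -> str:
    """The judge converses over text with two unlabeled respondents
    and must decide which one is the machine."""
    contestants = [human_reply, machine_reply]
    random.shuffle(contestants)                 # hide which label is which
    respondents = dict(zip("AB", contestants))
    for _ in range(rounds):
        question = input("Judge, ask a question:\n> ")
        for label, reply in respondents.items():
            print(f"{label}: {reply(question)}")
    return input("Judge: which respondent is the machine, A or B?\n> ")
```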

Knowledge And Language

By admin | July 20th, 2017 | Foundations

Knowledge and Language

A common misconception about the relationship of knowledge and language is that the latter contains the former. Certainly, we describe the Library of Congress as a great repository of knowledge. But consider what is actually going on when people use language to share knowledge. Our concepts are composed of simpler concepts, so in order to convey a concept from one person to another we first mentally break it down into its component parts. For example, to convey the idea of "HomePod" we break it down into component concepts, represented here by their English word referents:

age: new
device: smart speaker
manufacturer: Apple

This is a parts list of concepts which the speaker guesses will already exist in the listener's mind, or at least reasonably similar concepts with the same word referents. If one or more parts are lacking, full communication is not possible. If all the components are well understood, it may be obvious how they connect to form the new concept. However, more complex concepts require assembly instructions to indicate how the parts are to be connected to each other: grammar. We wrap the parts list in grammar, plus a few connection words like articles and copulas, and say: "A HomePod is the new smart speaker from Apple." As a child learns language, they learn that what follows the word "is" names properties of what precedes it, and that the location and form of "new" modify the term "smart speaker," which, following the [...]
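
The parts-list idea can be made concrete in a few lines. The sketch below is our own illustration (not the New Sapience implementation); it simply checks whether every component concept in the speaker's parts list already exists in a toy model of the listener's mind.

```python
# A toy illustration of a concept as an assembly of simpler concepts.
# The "parts list" mirrors the HomePod example from the text.
listener_mind = {
    "new": "recently made or introduced",
    "smart speaker": "a speaker with a built-in voice assistant",
    "Apple": "the consumer-electronics company",
}

homepod = {                      # the new concept, assembled from parts
    "age": "new",
    "device": "smart speaker",
    "manufacturer": "Apple",
}

# Communication succeeds only if every component concept already exists
# (in some reasonably similar form) in the listener's mind.
missing = [part for part in homepod.values() if part not in listener_mind]
if missing:
    print("Full communication is not possible; unknown parts:", missing)
else:
    print("A HomePod is the new smart speaker from Apple.")
```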

A New Epistemology

By admin | July 20th, 2017 | Modelled Intelligence

How do we know what we know? If we want to endow machines with knowledge, we had better understand what it is. Epistemology, a term first used in 1854, is the branch of philosophy concerned with the theory of knowledge. It is not much studied in the schools these days, and certainly not in computer science curricula. Traditionally, epistemologists have focused on such concepts as truth, belief, and justification as applied to any given assertion. From that perspective it is not much help, since previous attempts to put knowledge into machines failed precisely because they treated knowledge as just that: a vast collection of assertions (facts or opinions). That is not knowledge; that is data. We need to find an organizing structure for all these facts that will transform them into a road map of the world.

Since the dawn of civilization there have been successive descriptions of our world, or reality. The ancients created, as beautifully articulated by the theorems of the Alexandrian mathematician Ptolemy, an elegant geometric model of the universe with the earth at the center and everything else travelling around it in perfect circles, at a constant velocity. They had to put circles traveling on other circles to make the model match the actual celestial observations, but it worked![1]

[Images: Claudius Ptolemy, AD 100-170; the Ptolemaic system; a sextant]

Later this model was (what should one say: refuted, replaced, superseded?) by Newton [...]
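
The distinction between a heap of assertions and an organizing structure can be shown in miniature. The sketch below is our own toy example, not a New Sapience artifact: the same three facts, first as a bare list, then connected into a structure that can actually be navigated.

```python
# A heap of assertions is data; the same assertions connected into one
# navigable structure begin to act like a road map of the world.
facts = [
    ("Earth", "orbits", "Sun"),
    ("Mars", "orbits", "Sun"),
    ("Moon", "orbits", "Earth"),
]

# A bare list can only be scanned; nothing links one fact to another.
# Organized into a graph, the facts support traversal.
model = {}
for subject, relation, obj in facts:
    model.setdefault(obj, []).append(subject)

def orbiters(body, graph):
    """All bodies whose motion ultimately centers on `body`."""
    found = []
    for satellite in graph.get(body, []):
        found.append(satellite)
        found.extend(orbiters(satellite, graph))
    return found

print(orbiters("Sun", model))   # ['Earth', 'Moon', 'Mars']
```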

AGI

By admin | July 20th, 2017 | AGI, Foundations

Artificial General Intelligence (AGI)

AGI, or Artificial General Intelligence, is the quest for software that has genuine comprehension, comprehension that would be recognizable as such by anyone because (in the spirit of the Turing Test) you can hold a general, unscripted conversation with it. Today, outside of our work, AGI research efforts fall into two categories.

Whole brain emulation. The approach is to first create a neural network with the size and complexity of the human brain and then program it to recapitulate, in some form, the human cognitive processes that will eventually result in the production of world knowledge. The assumption here is that intelligence is a kind of emergent property of a vast neural network. We find this assumption extremely doubtful, and there are numerous other problems associated with this approach, even should it produce something. Ray Kurzweil, who popularized the idea of an AI "singularity" and is currently a VP of Engineering at Google, is pursuing this approach (no doubt with lots of money; he will need it). The project at Google and numerous other whole-brain research projects at DARPA, IBM, and other places are described at artificialbrains.com.

Cognitive algorithms. This approach seeks to discover one or a small number of immensely powerful algorithms that endow the human brain with intelligence and then reverse engineer them, such that the program will be able to process raw inputs and turn them into real knowledge as humans can do. We call this the magic algorithm approach. Significant [...]

AI at Google

By admin | September 20th, 2016 | AGI, Competition

[Image: representation of a neural network]

Artificial Neural Networks & Natural Language

As we know, Google (like all of the big tech companies) has been making massive investments in the (we think misnamed) "cognitive computing" technology that is now considered almost synonymous with AI in common usage. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. It is a very expensive process, but one that works reasonably well for things like pattern recognition in photographs, though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize.

Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases, in this case of documents, which from the standpoint of the program are just arrays of numbers (exactly as a photograph is nothing but an array of numbers to such programs). The applications process these text databases, but they have no reading comprehension as humans recognize it: no notion whatsoever of the content or meaning of the text. Humans curate the databases to limit the search scope and design "training" algorithms to identify patterns in the numbers; numbers because text in computers is represented in ASCII, a numerical encoding. Once trained, and training may take days, weeks, or longer [...]
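
The point about representation is easy to demonstrate. The snippet below (our illustration) shows what a sentence looks like from the program's standpoint: an array of numbers and nothing more.

```python
# What a natural-language processing program actually receives: numbers.
# (ASCII codes shown here, as in the text; modern systems typically use
# Unicode, but the principle is identical.)
sentence = "Time flies like an arrow"
codes = [ord(ch) for ch in sentence]
print(codes)   # begins [84, 105, 109, 101, 32, 102, ...]

# Any pattern a training algorithm finds is a statistical regularity in
# arrays like this one; the program has no notion of time, flies, or arrows.
```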

The Third Singularity

By admin | September 20th, 2015 | AGI, Foundations

The Third Singularity

Are super artificial intelligences going to make humanity obsolete? If you're not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. The eminent theoretical physicist Stephen Hawking said about AI: "it would take off on its own, and re-design itself at an ever increasing rate. Humans who are limited by slow biological evolution, couldn't compete, and will be superseded." Visionary entrepreneur and technologist Elon Musk said: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful." No less than Bill Gates seconded his concern: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe they could not create A.I.s even more intelligent; but then those super A.I.s could create A.I.s more intelligent than themselves, and so on ad infinitum, and in no time at all A.I.s would exist as superior to humans in intelligence as humans are to fruit flies. The term Singularity is taken from mathematics, where it refers to a function that becomes undefined at a certain point, beyond which its behavior becomes impossible [...]
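
For the mathematically inclined, the borrowed sense of the term can be stated exactly; the standard textbook example is a function with a single point where it is undefined and near which it grows without bound:

```latex
% f(x) = 1/(x - a) has a singularity at x = a: the function is undefined
% there, and its values grow without bound as x approaches a.
\[
  f(x) = \frac{1}{x-a}, \qquad
  \lim_{x \to a^{-}} f(x) = -\infty, \qquad
  \lim_{x \to a^{+}} f(x) = +\infty .
\]
```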

Knowledge and Intelligence

By admin | September 20th, 2015 | AGI

Understanding Intelligence

Alan Turing, in his 1950 paper "Computing Machinery and Intelligence," proposed the following question: "Can machines do what we (as thinking entities) can do?" To answer it, he described his now famous test, in which a human judge engages in a natural language conversation via a text interface with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, the machine is said to pass the test. The Turing Test bounds the domain of intelligence without defining what it is. We recognize intelligence by its results.

John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as "the science and engineering of making intelligent machines." A very straightforward definition, yet few terms have been more obfuscated by hype and extravagant claims, imbued with both hope and dread, or denounced as fantasy. Over the succeeding decades, the term has been loosely applied and is now often used to refer to software that does not by anyone's definition enable machines to "do what we (as thinking entities) can do." The process by which this has come about is no mystery. A researcher formulates a theory about what intelligence, or one of its key components, is and attempts to implement it in software. "Humans are intelligent because we can employ logic," and so rule-based inference engines are developed (a minimal sketch of one follows below). "We are intelligent because our brains are composed of neural networks," and so software neural networks are [...]
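
"Humans are intelligent because we can employ logic" leads, in practice, to systems like the minimal forward-chaining inference engine sketched here; the rules and facts are toy examples of our own, not any particular product's knowledge base.

```python
# A minimal forward-chaining inference engine: repeatedly fire any rule
# whose premises are all known, until nothing new can be derived.
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal", "socrates is a man"}, "socrates will die"),
]
facts = {"socrates is a man"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)       # derive a new assertion
            changed = True

print(facts)
# facts now include 'socrates is mortal' and 'socrates will die'
```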

“Cognitive Computing”

By admin | September 20th, 2015 | AI, Competition

The Problem with Cognitive Computing

Nearly all of the technologies today that describe themselves as AI, in both the Narrow and General categories, are characterized as Cognitive Computing, defined as: "the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works." This approach has become so mainstream that some have suggested the term Cognitive Computing has replaced the term Artificial Intelligence as a branch of computer science. If so, New Sapience is not in the AI space at all, since considerations of human brain structure and function play no role in our technology.

There are fundamental problems with the Cognitive Computing approach, and they are not lost on some of its foremost practitioners. Jerome Pesenti, vice president of the Watson team at IBM, is quoted in an article in The Platform titled "The Real Trouble With Cognitive Computing": "When it comes to neural networks, we don't entirely know how they work, and what's amazing is that we're starting to build systems we can't fully understand. The math and the behavior are becoming very complex and my suspicion is that as we create these networks that are ever larger and keep throwing computing power to it ... (it) creates some interesting methodological problems." He goes on to say, "The non-messy way to develop would be to create one big knowledge model, as with the semantic web, [...]
