Managing your expectations

You are your sapiens’s principal. We expect that role will come to have a special meaning for you over time as your sapiens “grows up” through frequent core model updates. But we are just beginning, so bear with us. It is very important at this point to manage your expectations. What you have is not like any software application in history; it is certainly nothing like Siri or Alexa, but it is also a long way from being a product. You may find that it has some limited utility for helping you remember details about your everyday life at this point, or it may not be quite there yet; that is one of the things we are interested to find out with your help. Try not to become too frustrated. We are at version 0.3.0 of the Core and will be updating very frequently. We will be dividing our development efforts between working on new capabilities and fixing the current shortcomings that you find for us.

While we have been talking about “beta-testing” our technology, we are more properly at “alpha-testing.” The difference is that at beta you should have all the capabilities you want in the product you intend to release and are just finding bugs. But that model is a little fuzzy for us anyway, because it is not so much specific features we are going for as a more general capability to comprehend language; the better we are at it, the more things we can do. For example, if we decided that the “memory augmentation” feature was sufficiently helpful that it could be marketed as an app, then we are probably at the “beta” stage for that app, but certainly not at “beta” for the caregiver sapiens.

At this point we need your help to identify things that don’t work that we think should work, as opposed to things that don’t work because we haven’t implemented them yet. Here are some things we don’t yet support, so don’t waste your time testing them: see the Unsupported list.

We have already done a lot of work toward these capabilities and are making good progress. Our original intent was to hold off on putting any sapiens out for testing until these features were in place, since it will clearly be a lot more useful then, but we decided to go ahead because most of you in this first group are actively talking to potential investors and should find it helpful to have a sapiens to show. So, experiment with it until you get familiar with what works well enough to show somebody and what doesn’t.

You can’t override anything hard-coded in the model (even if it is wrong). So, if it tells you something like “No, that’s not right, games can’t be played” due to a modelling error, it is pointless to argue with it by saying “Yes, they can.”

Teaching your sapiens new things

If one person says something to another and is understood, we can say the first person taught and the second learned. That includes the comprehension of specific everyday facts like “The keys are under the mat.” Our sapiens are pretty good at comprehending a statement such as that and will create a new “fact” by connecting a set of keys and an instance of mat with a previously modelled spatial relationship. But more specifically, when we talk about teaching and learning we mean abstract knowledge, not specific facts. This is sometimes spoken of as learning vocabulary, although the real learning is the more difficult problem of creating the model element that the vocabulary word references.
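To make that concrete, here is a minimal sketch, in Python, of what comprehending “The keys are under the mat” might produce internally. The class and field names are illustrative assumptions, not our actual data structures; the point is simply that two instances of abstract classes get connected by a previously modelled relationship.

```python
# Hypothetical sketch only: the real model's structures are internal.
from dataclasses import dataclass

@dataclass
class Instance:
    concept: str       # an abstract class in the core model, e.g. "keys"
    instance_id: int

@dataclass
class Fact:
    subject: Instance
    relation: str      # a previously modelled spatial relationship
    obj: Instance

# Comprehending the sentence yields two instances and one connecting fact.
keys = Instance(concept="keys", instance_id=1)
mat = Instance(concept="mat", instance_id=2)
new_fact = Fact(subject=keys, relation="under", obj=mat)
print(new_fact)
```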

The following list illustrates the methods by which humans and sapiens learn (get their models extended):

  1. Direct learning through comprehension. Learning new facts is the most immediately useful capability for a sapiens to have, but it is bounded by the sophistication of the core model at each point in its development and by the routines that extract information from language inputs to create new model elements and place them in their proper location in the model. Today the sapiens is strongest at learning new abstract ideas in the substance category; learning actions, qualities, and relationships through language is more challenging, ultimately doable, but we have a long way to go there. Ideas in the Time, Space, and Quantity categories would be very difficult to teach through language; fortunately, the number of concepts is small enough that they can all be handled by method (5). That is also true of handling words such as prepositions (try telling a child what the word “of” means in a few sentences). However, there are only 60 prepositions in English, so handling them via (5) is very doable.
  2. Inferring the meaning of unknown words. We currently have some capability to infer the actors and targets of defined actions, and we will be adding results. Learning new concepts based on known qualities is also very doable but is not currently supported; the same is true of known relationships. This is an area where we can achieve very noticeable results with relatively little development effort.
  3. Reasoning about the existing model. For example, if Bill and Mary Jo visit Yosemite, we conclude they enjoy nature. This is Bloom’s Taxonomy Level 3: Application. We haven’t started on this. It is very straightforward from both a modelling and a code development standpoint, but we will defer it until the model is more mature.
  4. Direct interpretation of sensory experience. This is way down the road, when sapiens will have sensory subsystems. One class of things humans learn to recognize through hearing is grammatical patterns in language. The rules and structure of grammar can be taught cognitively to some extent, but neural learning is easier and produces better results. People teach their children correct grammar by speaking it to them. Sapiens have no ability to learn grammar and do not need it, since it all goes in via (5).
  5. Direct extension of the model. This has the potential to produce models better than any that can be acquired through the senses or language, because the senses are so limited and communication through language is so imperfect. It takes a lot of work, but it only has to be done once.
  6. Database import. Our system can import information to create instances of abstract classes via software routines; a sketch follows this list. We already import lists of states, cities, and common first and last names. We will also be using it for medications, diseases, and so forth for the Caregiver sapiens.
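As mentioned in (6), import routines bulk-create instances of existing abstract classes. Here is a hedged sketch of what such a routine might look like; the StubModel class, the create_instance method, and the file name are all invented for illustration, not our actual internals.

```python
import csv

class StubModel:
    """Stand-in for the core model; the real API is internal."""
    def __init__(self):
        self.instances = []

    def create_instance(self, concept, name):
        # In the real system this would create an instance of an
        # existing abstract class; here we just record the pair.
        self.instances.append((concept, name))

def import_instances(model, concept, path):
    """Create one instance of `concept` per row of a one-column CSV."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row:  # skip blank lines
                model.create_instance(concept, name=row[0])

# Hypothetical usage with a file of US city names:
# import_instances(StubModel(), "city", "us_cities.csv")
```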

Working with your sapiens

Specified outputs

“I’m sorry, I didn’t get that, could you please rephrase.” or “I didn’t understand your question” indicate that the system failed to interpret the grammar of the sentence correctly and so could not proceed to the “comprehension” processing phase.

This is usually caused by an incorrect grammatical parse. We currently employ a third-party statistical parser (from Google). It doesn’t know what the words mean and so sometimes comes out with parses that are nonsensical. When we identify cases of this, we can write a few lines of code to “tidy” up the parse so we can process it next time. Eventually we will build our own parser that will utilize knowledge of words and grammar and will never get it wrong.
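To give a feel for what one of these “tidy up” fixes involves, here is a toy illustration. The head-index parse representation and the specific repair rule are invented for the example; they reflect neither the actual parser output format nor our actual code.

```python
def tidy_parse(tokens, heads):
    """Repair one known bad pattern: the parser sometimes attaches a
    trailing preposition to a noun instead of the verb.
    heads[i] is the index of token i's head."""
    fixed = list(heads)
    for i, tok in enumerate(tokens):
        # Invented rule: re-attach a misheaded "under" to the verb "are".
        if tok == "under" and tokens[fixed[i]] == "keys":
            fixed[i] = tokens.index("are")
    return fixed

tokens = ["The", "keys", "are", "under", "the", "mat"]
bad_heads = [1, 2, 2, 1, 5, 3]        # "under" wrongly headed by "keys"
print(tidy_parse(tokens, bad_heads))  # [1, 2, 2, 2, 5, 3]
```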

“I know the answer, but I don’t know how to express it” means exactly what it says. The system understood the question, found the answer, but failed to articulate it.

Specified inputs

“We need to work on your vocabulary.”

Our system supports multiple conversational contexts that can be active at any given time. Currently only two are defined: “normal” and “pedagogic.” In the normal (default) context, the system will only ask questions when it needs to disambiguate homonyms or gather some specific required information. In the pedagogic mode, it will ask questions about words when it is unable to deduce the meaning of a new word based on how it is used in the sentence. This can be a bit tedious, like conversing with an inquisitive 3-year-old, so it is turned off by default.

Switching modes keys off the input pattern, not the concept graph it generates, so the wording needs to be exact: “Let’s work on your vocabulary” or “You can ask questions if you want” doesn’t work. This chatbot-like practice of keying off an input pattern rather than comprehending what was said is a legacy going back to before we learned how to create true cognition graphs. There are only a few of these left in the system and eventually all will be removed; in the meantime, if it works, don’t fix it.
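In effect, the legacy trigger matches the raw input string rather than the comprehended graph, roughly as in this sketch (the trigger phrases and the normalization are assumptions for illustration):

```python
PEDAGOGIC_ON = "we need to work on your vocabulary"
PEDAGOGIC_OFF = "stop asking questions"

def check_mode_switch(user_input, state):
    # Normalize, then compare the literal string against the triggers.
    text = user_input.strip().lower().rstrip(".")
    if text == PEDAGOGIC_ON:
        state["context"] = "pedagogic"
    elif text == PEDAGOGIC_OFF:
        state["context"] = "normal"
    # Paraphrases ("let's work on your vocabulary") fall through unmatched,
    # which is why the wording must be exact.
    return state

state = {"context": "normal"}
check_mode_switch("We need to work on your vocabulary.", state)
print(state["context"])  # pedagogic
```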

“stop asking questions”

Turns pedagogic mode off.

“Why?” and “Why not?”

When the system says something like “No, that can’t be right,” these inputs (they work the same) will cause it to explain.

The system’s ability to extract information from a sentence and use it to extend the model is more extensive than it sometimes appears when the only response you get back is “I see” and the like. Sometimes you can find a way to articulate a question that works when others do not.