You are your sapiens’ principal. We expect that term will come to have a special meaning for you over time as your sapiens “grows up” through frequent core model updates. But we are just beginning, so bear with us. It is very important at this point to manage your expectations. What you have is not like any software application in history; it is certainly nothing like Siri or Alexa, but it is also a long way from being a product. You may find that it has some limited utility for helping you remember details about your everyday life, or maybe it is not quite there yet; that is one of the things we are interested to find out with your help. Try not to become too frustrated. We are at version 0.3.0 of the Core and will be updating very frequently. We will be dividing our development efforts between working on new capabilities and fixing the current shortcomings that you find for us.
While we have been talking about “beta-testing” our technology, we are more properly at “alpha-testing.” The difference is that at beta you should already have all the capabilities you want in the product you intend to release and are just finding bugs. But that model is a little fuzzy for us anyway, because it is not so much specific features we are going for as a more general capability to comprehend language; the better we are at it, the more things we can do. For example, if we decided that the “memory augmentation feature” was sufficiently helpful that it could be marketed as an app, then we are probably at the “beta” stage for that app, but certainly not at “beta” for the caregiver sapiens.
At this point we need your help to identify things that don’t work but that we think should work, as opposed to the things that don’t work because we haven’t implemented them yet. Here are some things we don’t yet support, so don’t waste your time testing them:

Unsupported
We have already done a lot of work toward these capabilities and are making good progress. Our original intent was to hold off on putting any sapiens out for testing until these features were in place, because they will clearly make it much more useful. We decided to go ahead anyway because most of you in this first group are actively talking to potential investors and should find having a sapiens to show helpful. So, experiment with it until you become familiar with what works well enough to show somebody and what doesn’t.
You can’t override anything hard-coded in the model (even if it is wrong). So, if it tells you something like “No, that’s not right, games can’t be played” due to a modelling error, it is pointless to argue with it by saying “Yes, they can.”
If one person says something to another and is understood, we can say the first person taught and the second learned. That includes the comprehension of specific everyday facts like “The keys are under the mat.” Our sapiens are pretty good at comprehending a statement such as that and will create a new “fact” by connecting a set of keys and an instance of mat with a previously modelled spatial relationship. More specifically, though, when we talk about teaching and learning we mean abstract knowledge, not specific facts. This is sometimes spoken of as learning vocabulary, although the real learning is the more difficult problem of creating the model element that the vocabulary word references.
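To make the “keys under the mat” example concrete, here is a minimal sketch of how such a comprehended fact might be represented as a link between entity instances and a previously modelled relationship. All class and field names here are illustrative assumptions, not the system’s actual internal model.

```python
# Illustrative sketch only: a comprehended fact connecting entity instances
# via a previously modelled relationship. Names are hypothetical.
class Instance:
    def __init__(self, concept):
        self.concept = concept  # the modelled concept, e.g. "keys" or "mat"

class Fact:
    def __init__(self, subject, relation, obj):
        self.subject = subject    # an Instance (the keys)
        self.relation = relation  # a modelled relationship, e.g. "under"
        self.obj = obj            # an Instance (the mat)

    def __repr__(self):
        return f"{self.subject.concept} {self.relation} {self.obj.concept}"

# "The keys are under the mat." creates a new fact connecting a set of
# keys and an instance of mat with the spatial relationship "under".
keys = Instance("keys")
mat = Instance("mat")
fact = Fact(keys, "under", mat)
print(fact)  # keys under mat
```

The point is only that comprehension produces a structured link between previously modelled elements, not free text.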
The table illustrates the methods by which humans and sapiens learn (get their models extended).
“I’m sorry, I didn’t get that, could you please rephrase?” or “I didn’t understand your question” indicate that the system failed to interpret the grammar of the sentence correctly and so could not proceed to the “comprehension” processing phase.
This is usually caused by an incorrect grammatical parse. We currently employ a third-party statistical parser (from Google). It doesn’t know what the words mean and so sometimes comes out with parses that are nonsensical. When we identify cases of this, we can write a few lines of code to “tidy” it up so we can process it next time. Eventually we will build our own parser that will utilize knowledge of words and grammar and will never get it wrong.
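A “tidy” pass over a bad parse might look something like the following sketch. The parse representation (word, part-of-speech pairs using Penn Treebank tags) and the specific correction rule are invented for illustration; real parsers produce richer trees, and this is not the actual code.

```python
# Hypothetical sketch of a post-parse "tidy" rule applied to a statistical
# parser's output. A parse is shown as (word, POS-tag) pairs.

def tidy(parse):
    """Apply hand-written corrections to known-bad parses."""
    fixed = list(parse)
    for i, (word, tag) in enumerate(fixed):
        # Invented example rule: a parser that doesn't know meanings might
        # tag "keys" as a verb (VBZ); force it to a plural noun (NNS)
        # whenever it follows a determiner (DT).
        if word == "keys" and tag == "VBZ" and i > 0 and fixed[i - 1][1] == "DT":
            fixed[i] = (word, "NNS")
    return fixed

bad = [("the", "DT"), ("keys", "VBZ"), ("are", "VBP"),
       ("under", "IN"), ("the", "DT"), ("mat", "NN")]
print(tidy(bad))  # "keys" re-tagged as a plural noun (NNS)
```

Each such rule handles one nonsensical parse pattern we have identified, which is why the fixes accumulate case by case until a knowledge-driven parser replaces them.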
“I know the answer, but I don’t know how to express it” means exactly what it says. The system understood the question, found the answer, but failed to articulate it.
“We need to work on your vocabulary.”
Our system supports multiple conversational contexts that can be active at any given time. Currently only two are defined, normal and “pedagogic.” In the normal (default) context the system will only ask questions when it needs to disambiguate homonyms or gather some specific required information. In the “pedagogic” mode it will ask questions about words when it is unable to deduce the meaning of a new word from how it is used in the sentence. This can be a bit tedious, like conversing with an inquisitive 3-year-old, so it is turned off by default.
Switching modes keys off the input pattern, not the concept graph it generates, so the phrase needs to be exact: “Let’s work on your vocabulary” or “You can ask questions if you want” will not work. This chatbot-like practice of keying off an input pattern rather than comprehending what was said is a legacy going back to before we learned how to create true cognition graphs. Only a few of these remain in the system and eventually all will be removed; in the meantime, if it works, don’t fix it.
“stop asking questions”
Turns pedagogic mode off.
“Why?” and “Why not?”
When the system says something like “No, that can’t be right,” these inputs (they work the same) will cause it to explain.
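Because these mode-switching commands key off the literal input pattern rather than a comprehended graph, their behaviour amounts to a lookup table of exact phrases. The following is a purely illustrative sketch of that legacy mechanism, not the system’s actual implementation; the handler names and normalisation steps are assumptions.

```python
# Illustrative sketch of exact-pattern command dispatch. Anything that is
# not a literal match (after trivial case/punctuation normalisation) falls
# through to normal comprehension processing.

state = {"pedagogic": False}

COMMANDS = {
    "we need to work on your vocabulary": lambda: state.update(pedagogic=True),
    "stop asking questions":              lambda: state.update(pedagogic=False),
}

def dispatch(utterance):
    """Return True if the utterance matched a hard-coded pattern."""
    key = utterance.strip().strip(".").lower()
    handler = COMMANDS.get(key)
    if handler:
        handler()
        return True
    return False

dispatch("We need to work on your vocabulary.")  # exact match: mode turns on
print(state["pedagogic"])                        # True
dispatch("Let's work on your vocabulary")        # paraphrase: no match
```

This is why paraphrases fail: there is no comprehension step, only a string comparison.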
The system’s ability to extract information from a sentence and use it to extend the model is more extensive than it sometimes appears when the only response you get back is “I see” and the like. Sometimes you can find a way to articulate a question that works when others do not.