The Natural Language Comprehension Core, a B2B Product

The Natural Language Comprehension Core (NLCC) endows any technology with the capability to converse with its users in their native language. It is a B2B offering designed to be embedded in third-party products that have, or would be enhanced by, a conversational interface.

The Problem It Solves

People need to be able to talk with their technology, yet none of the “conversational interfaces” on the market (Apple’s Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, Facebook Messenger, and so on) can actually converse. In fact, they do not comprehend a single word you say to them.
Embedding our NLCC in third-party products such as these will, for the first time, enable people to explain their wants, needs, and desires to a machine and expect a sensible response.
Our technology now surrounds us wherever we go, and interfacing with it via visual displays, keyboards, and pointing devices has become increasingly problematic. The big tech companies have recognized this and are engaged in an expensive arms race to own “conversation as a platform.” They understand that conversational interfaces have the potential to completely alter how people interact with their products, which in turn affects how these companies generate revenue. For example, where is Google if you can ask your digital assistant to google things for you and never see the ads?
So far, these companies are on a level playing field. The narrow AI tools at their disposal, lacking the capacity to create or process knowledge, are ill suited to language comprehension, and progress has been painfully slow. Even today, after millions of dollars in investment, the proliferating crop of natural language digital personal assistants are as likely to be made fun of as made use of.
Computational Knowledge, even at this early stage, represents a quantum leap in the ability of machines to understand what humans are saying, and the first of the big tech companies to license it will win the race to own the conversation (unless or until we license it to the rest).

Product Description

The Natural Language Comprehension Core (NLCC) is a product designed to be embedded in third-party products, exactly as it is embedded in our own line of Sapiens products. If the customer’s product already has a natural language interface, the NLCC will take input from it, and the customer can decide whether to forward all inputs to the NLCC or only those their current system does not handle elegantly. We envision that in the majority of cases the capability to have a short discussion with the user, to determine what it is they are actually after, will vastly enhance the user experience.
Once the user’s intent has been determined and the relevant information has been gathered, the NLCC will forward that information to the customer’s back end for execution.
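The embedding flow described above can be sketched in code. This is a minimal, hypothetical illustration only: the class names, the `converse` method, and the clarification mechanism are all assumptions for the sake of the example, not the actual NLCC API.

```python
# Hypothetical sketch of the NLCC embedding flow. All names here
# (Comprehension, StubNLCC, Backend, converse, execute) are illustrative
# assumptions, not the real product's interfaces.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Comprehension:
    """Result of one conversational turn with the comprehension core."""
    intent: Optional[str] = None
    parameters: dict = field(default_factory=dict)
    question: Optional[str] = None  # set when the core needs clarification

    @property
    def needs_clarification(self):
        return self.question is not None


class StubNLCC:
    """Stand-in for the comprehension core: resolves intent or asks back."""
    def converse(self, text):
        if "thermostat" in text and "degrees" in text:
            return Comprehension(intent="set_temperature",
                                 parameters={"value": 68})
        # Ambiguous input: hold a short discussion to pin down intent.
        return Comprehension(question="Which device did you mean?")


class Backend:
    """The customer's existing back end, which executes resolved intents."""
    def execute(self, intent, parameters):
        return f"executed {intent} with {parameters}"


def handle_input(nlcc, backend, text):
    """Route input through the core; hand off to the back end once resolved."""
    result = nlcc.converse(text)
    if result.needs_clarification:
        return result.question  # continue the dialogue with the user
    return backend.execute(result.intent, result.parameters)
```

The key design point is the routing decision: the host product forwards input to the core, which either answers with a clarifying question (the "short discussion") or with a fully resolved intent ready for execution.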


At 41.4 million monthly unique users, Apple’s Siri is the most popular virtual personal assistant. However, between May 2016 and May 2017, Siri lost 7.3 million monthly users, or about 15% of its total U.S. user base, according to data from researcher Verto Analytics.

As stated above, no company has a more vested interest in being at the forefront of conversational interfaces than Google: if you can ask your digital assistant to retrieve information for you and don’t have to search for it yourself, you don’t see Google’s ads.

Alexa would become smart enough to ask you: “Were you talking to me? Did you really mean to order an elephant just now?”

Facebook Messenger would be much better at connecting people, as Mark Zuckerberg intends, if it could actually hold a conversation with them.

Opportunities of specific relevance to Microsoft include enhancing the Cortana Intelligence Suite; enabling interactive “Intelligent Search” in Bing; and, over time, integration at the computing-device or even OS level, with Office at the application level, with IoT initiatives (from individual devices up through Smart Building systems), and with HoloLens, allowing it to interact with technicians and advisors in a more integrated fashion.

IBM has been marketing Watson very heavily, as though the company’s future depends on it. In spite of Watson’s high-profile success in becoming the world Jeopardy! champion in 2011, stochastic approaches to mining text databases remain very problematic. No matter how good the training algorithms, these approaches still return matches with high-probability ratings that a human can see at once make no sense. This requires a great deal of manual back-end work writing rules to filter out silly responses, which makes fielding applications very expensive. The word is that Watson is not making IBM any money. Integrating Watson with our NLCC to analyze the output of its algorithms could be just what Watson needs.

MIKOS Module Architecture

The MIKOS module consists of the Language Comprehension Core, a bi-directional text interface (which may be enhanced with speech recognition/synthesis), and one or more application interfaces to back-end devices or services.
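The modular composition above can be illustrated with a short sketch. This is a hypothetical rendering only: the class names (`MIKOSModule`, `ApplicationInterface`, `LightInterface`) and the `dispatch`/`route` methods are assumptions made for the example, not the actual MIKOS interfaces.

```python
# Hypothetical sketch of the module composition: a core with pluggable
# application interfaces to back-end devices or services. Names are
# illustrative assumptions, not the real MIKOS architecture.
from abc import ABC, abstractmethod


class ApplicationInterface(ABC):
    """Adapter to one back-end device or service."""
    @abstractmethod
    def dispatch(self, intent, parameters):
        ...


class LightInterface(ApplicationInterface):
    """Example adapter for a lighting device."""
    def dispatch(self, intent, parameters):
        return f"light: {intent} {parameters}"


class MIKOSModule:
    """Comprehension core plus text I/O and pluggable application interfaces."""
    def __init__(self):
        self.interfaces = {}

    def register(self, name, interface):
        self.interfaces[name] = interface

    def route(self, target, intent, parameters):
        # Text arrives via the bi-directional interface (possibly fronted by
        # speech recognition), is comprehended by the core, and the resolved
        # intent is routed to the matching back-end adapter.
        return self.interfaces[target].dispatch(intent, parameters)
```

Keeping the application interfaces as pluggable adapters is what allows the same comprehension core to be embedded in very different host products.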

Features/Use Cases

New Sapience technology, with the capability to actually understand what a human is saying to it, represents a quantum step forward in solving the human/machine interface problem. However, understanding what the person wants doesn’t mean you understand how to deliver it, and that is precisely the real problem with connected systems: the more sophisticated the systems and the more you connect them, the more complex the operations.
While the superior language comprehension capabilities of New Sapience’s technology may make it seem like a super-chatbot compared to Siri or Alexa, it is not essentially a natural language technology. It can understand natural language because it has internal knowledge of what the words mean, which in turn depends on its internal model of the everyday world, a model constructed using the company’s proprietary knowledge modeling technology.
This modeling approach was inspired by previous work used to successfully monitor and control spacecraft and launch vehicles, the ultimate connected systems. Endowing a sapiens with knowledge of the control and operation of household devices in general, combined with the ability to import technical data for each type of device, will result in a connected home that works “as if by magic.”