People need to be able to talk with their technology – but none of the “conversational interfaces” such as Apple’s Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, or Facebook Messenger can actually converse. In fact, they do not comprehend a single word you say to them.
Our technology now surrounds us wherever we go. Interfacing with it via visual displays, keyboards, and pointing devices has become increasingly problematic. The big tech companies have recognized this and are currently engaged in an expensive arms race to own “conversation as a platform.” They are doing this because they understand that conversational interfaces could completely alter how people interact with their products – and that, in turn, affects how these companies generate revenue. For example, where would Google’s profit be if you could ask your digital assistant to google things for you without ever seeing the ads?
So far, these companies are on a level playing field. The narrow AI tools at their disposal, lacking any capacity to create or process knowledge, are ill suited to language comprehension, and progress has been painfully slow. Even today, after millions of dollars in investment, the proliferating crop of natural language digital personal assistants is as likely to be made fun of as made use of.
Embedded in third-party products such as those above, our Conversational Interface will, for the first time ever, enable people to explain their wants, needs, and desires to a machine and expect a sensible, reasoned response.
Even at this early stage, Modelled Intelligence represents a quantum leap in the ability of machines to understand what humans are saying, and the first of the big tech companies to license it will win the race to own the conversation (unless or until we license it to the rest).
Far too many people lack companionship, especially young children and the elderly. “If only I had someone to talk to” is an increasingly common complaint.
Our sapiens, as a personal companion, will be as attentive, eager to please, and loyal as a Labrador retriever – but more than that, it will be a talking Labrador retriever.
The sapiens’ capacity to comprehend human language is so unprecedented that people often think of Modelled Intelligence as natural language technology. In fact, MI is about representing and processing knowledge in the machine. It is the sapiens’ internal model of the everyday world that allows it to comprehend and converse – and extending that model with specialized or technical knowledge enables our technology to tackle an ever-increasing range of problems.
Hiring people to apply knowledge is very expensive.
Our sapiens, as automated knowledge workers, will perform many of these same jobs for pennies per hour, 24/7, with their capability and utility increasing over time.
Consider how much general intelligence, beyond being able to talk to customers to determine their needs, wants, and desires, is actually required of people in first-tier knowledge-worker jobs such as bank teller, receptionist, or call-center agent. Consider also that people tend to find these jobs boring: they demand a level of focus, attention to detail, and memory that humans find challenging (but sapiens will not).
When combined with specialist knowledge-model extensions, the number of jobs that sapiens can perform as well as or better than people will increase dramatically and move up into the professions. As Modelled Intelligence is interfaced with synergistic narrow AI techniques such as image recognition, auditory discrimination, and machine motor control, sapiens-enhanced robots will perform more and more of the jobs that humans would rather not do.
Too many tasks performed by humans are tedious, difficult, dirty or dangerous.
Our sapiens, as real AI in mechanical bodies, will liberate humanity from danger and drudgery.