MK endows computers with a model of reality compatible with human knowledge and applies computers’ already superb reasoning and memory-processing capacities to extend and apply that model, potentially equaling or surpassing human performance across a vast range of application domains.
Our focus is to put knowledge, the most powerful end product of human cognitive processes, in computers.
The “Brain” is the software engine that interprets our MICA knowledge processing language.
The MIKOS engine is a multi-threaded Ruby program. The JRuby implementation, itself written in Java, was chosen to enable the import of third-party libraries and to let our OS run on top of a variety of standard operating systems. The engine’s modular design supports “plugging in” multiple data interfaces, which can handle information ranging from simple commands and data to conversational natural language.
The engine integrates with a third-party, highly available, multi-data-model NoSQL database that stores the knowledge model. The engine supports multiple application instances on a single engine instance and has recently been tested on Amazon Web Services (AWS). At runtime, the engine’s primary function is to interpret MICA, our reasoning and query language, which acquires, processes, and extends knowledge.
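The “plug-in” data-interface design described above can be sketched in a few lines of Ruby. This is an illustrative sketch only, not the actual MIKOS API: the names Engine, CommandInterface, register, and ingest are all assumptions made for the example.

```ruby
# Hypothetical sketch of a pluggable data-interface design: the engine
# accepts any object that responds to #receive and routes incoming
# information through it. All names here are illustrative.

class Engine
  def initialize
    @interfaces = {}
  end

  # "Plug in" an interface under a name.
  def register(name, interface)
    @interfaces[name] = interface
  end

  # Route raw input through the named interface.
  def ingest(name, raw)
    @interfaces.fetch(name).receive(raw)
  end
end

# A trivial command interface: splits "verb arg ..." strings.
class CommandInterface
  def receive(raw)
    verb, *args = raw.split
    { verb: verb, args: args }
  end
end

engine = Engine.new
engine.register(:commands, CommandInterface.new)
engine.ingest(:commands, "define dog")  # => { verb: "define", args: ["dog"] }
```

A conversational natural-language interface would plug in the same way, differing only in how its `receive` method turns the incoming stream into structured input for the engine.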
The “Intelligence” is a set of MICA procedures and functions that reason about the model.
Programs and subroutines written in our proprietary MICA language provide the “intelligence” for an application. MICA routines convert incoming information into knowledge to extend the run-time knowledge model or, as a result of reasoning about that knowledge, perform tasks such as converting knowledge into an outgoing stream of information.
The MICA language is itself a multi-threaded, object-oriented language with a full suite of programming constructs, including flow control, procedures, functions, and macros. MICA is fully capable of supporting the intellectual functions we perceive in our own minds when acquiring, extending, and refining knowledge, such as inference, inductive and deductive reasoning, pattern recognition, and memory management.
The “Knowledge” is an integrated directed-graph structure residing in a NoSQL database. It is independent of the engine and the MICA code.
All applications of our technology have a world model at their core. Models are information structures that exist independently of the MICA routines that process them. Models are designed to represent the reality of the problem domain, not the way people talk about it, and are therefore not tied to any human language or to linguistic considerations at all. The model’s data structure is a unique modified directed graph in which edges not only connect nodes but may themselves have outgoing and incoming edges.
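A graph in which edges are first-class elements can be sketched by making Edge a kind of graph element in its own right. The sketch below is an assumption-laden illustration of the idea, not the actual model structure: the classes Element, Node, and Edge and the example labels are all invented for the example.

```ruby
# Hypothetical sketch of a "modified directed graph" in which an edge
# may itself be the source or target of other edges.

class Element
  attr_reader :label, :out_edges, :in_edges

  def initialize(label)
    @label = label
    @out_edges = []   # edges leaving this element
    @in_edges  = []   # edges arriving at this element
  end
end

# A Node is a plain graph element.
class Node < Element; end

# An Edge is also an Element, so edges can connect to other edges.
class Edge < Element
  attr_reader :from, :to

  def initialize(label, from, to)
    super(label)
    @from = from
    @to = to
    from.out_edges << self
    to.in_edges << self
  end
end

# Example: a relationship, then an annotation on the relationship itself.
dog    = Node.new("dog")
mammal = Node.new("mammal")
is_a   = Edge.new("is-a", dog, mammal)

certain = Node.new("certain")
Edge.new("confidence", is_a, certain)  # an edge whose source is an edge

puts is_a.out_edges.map(&:label)  # the "is-a" edge has its own outgoing edge
```

Allowing edges to carry edges of their own lets statements about relationships (confidence, provenance, temporal scope) live in the graph itself rather than in code.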
To build a model, the modeller must first understand the domain to be modelled. Then, through introspection of that knowledge, core concepts and their relationships are identified and, using our knowledge editor, represented in the system. Representing concepts about the world or a domain as they are, rather than as we talk about them, takes some getting used to and has no analog in describing something in human language or in writing a computer program. We call this unique technique Epistemological Engineering (EpE).
The Modelling Process
Whether in a human mind or in a machine, most world or domain knowledge is composed of concepts that are themselves composed of simpler concepts, and so on, until we get down to the core or atomic concepts. The most fundamental concepts are “meta-knowledge” concepts: those that relate to the nature and structure of knowledge itself. The discovery/design and incorporation of these concepts into our “epistemological kernel” represents an interdisciplinary breakthrough combining insights from computer science with an original theory of epistemology.
To build a model, the modeller need understand nothing of epistemology, since the epistemological kernel is already built in. More importantly, no knowledge of algorithm design or of programming of any kind is required. The modeller need only understand the domain to be modelled; through introspection of that knowledge, core concepts and their relationships are identified and, using our knowledge editor, incorporated into the application.
Core models are surprisingly compact, depending on the domain. Even the Common-Sense Core model that New Sapience is developing to support comprehension of everyday human language has fewer than 3,000 core world-knowledge concepts. This is not too surprising, since a human five-year-old who has just learned to read and is ready to begin formal education is estimated to have a working vocabulary of around 2,500 words. It is also consistent with computational studies showing that 80 percent of all the words on the Internet are drawn from the same 2,000 words.
Most, but not all, applications will require some custom MICA programming that specifies how the system interprets incoming information to extend or modify the original core model. The routines, or “reasoners,” that support general natural-language comprehension are built in. Applications may require custom reasoners to apply knowledge to solve specific problems. Here again, because the knowledge being reasoned about resides entirely in the model and never in the code, the lines of code are drastically reduced compared with conventional programming. Many reasoners operate at a level of abstraction that allows the system to solve classes of problems, with the specifics instantiated at runtime.
As the product advances, more abstract reasoners will be built in, so that the out-of-the-box product will have a successively higher “AIQ” with each new upgrade.