Mission Control for the 21st Century

It is no accident that the term “rocket science” has come to denote the epitome of technical difficulty.  The specifications for the first vehicles capable of reaching orbit were a quantum leap beyond the technology of the time.  In those early days, satellites and the vehicles that carried them truly were “science experiments” that pushed the boundaries of scientific knowledge and engineering practice.

While there are excellent reasons to be conservative in an industry where the cost of a single failure can run into millions, even billions, of dollars, the science-project mentality increases both cost and, much more importantly, operational risk.

In the past, spacecraft were one-of-a-kind and so expensive that it didn’t matter if you threw away the launch vehicle or needed a control center filled with experts to operate it.
But for today’s growing space fleets, keeping control centers full of humans in the loop is becoming increasingly impractical.
In the new space era led by commercial companies like SpaceX, the industry is on the threshold of constellations of tens of thousands of spacecraft and aiming for launch cadences approaching that of airline operations.

The solution has always been to throw more people at the problem, a practice now firmly ingrained in the space community from manufacturing to operations. Today most space missions still rely on human-in-the-loop control centers filled with highly trained experts analyzing information and data in real time.

The idea of an “expert system,” a computer program that embodies some domain of human expertise to operate complex systems or to analyze and diagnose complex problems, has been a dream of computer science for over 50 years.  Despite early optimism about technologies such as rule-based inference engines going back to the 1980s, software that emulates human logic and decision-making is, today, still highly code-intensive and expensive to design, develop, and maintain.

What is needed is a technology that can efficiently migrate expert knowledge that now resides only in the minds of human experts into the control system itself.  Fortunately, such a technology already exists. New Sapience has it today. It is called Machine Knowledge.

A machine endowed with the New Sapience Cognitive Core is more than a mere computer; it is a thinking machine, which we call a sapiens. A sapiens can reason about incoming information such as telemetry with human-like expertise but with a more-than-human ability to handle high-bandwidth data.

Sapiens can be programmed to emulate human operational roles such as Range Control Officer, Mission Specialist, or Flight Engineer, augmenting or standing in place of their human counterparts.  Science fiction no longer: New Sapience has the technology to build entities comparable to HAL 9000 from 2001.

Problem solved.

Mission Operations Sapiens

A sapiens is a digital entity that combines computers’ bandwidth for processing data and information with a human-like ability to comprehend the world and communicate.

Computationally efficient, sapiens can reside in the ground system or on-board computers.

The sapiens processes engineering telemetry to recognize the current state and status of the system and of all subsystems, down to the smallest instrumented components.

The sapiens controls the system by executing procedures that embed digital commands in modelled state-transitions, automatically verifying that each command results in the expected state changes.
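The verify-as-you-command pattern described above can be sketched in a few lines. This is a minimal illustration, not the actual MIKOS interface; the names (`Transition`, `execute_procedure`, the command mnemonics) are hypothetical.

```python
# Sketch: a procedure is a list of commands, each paired with the state
# transition it is expected to cause. After every command, the controller
# re-derives the state from telemetry and verifies the expected change.
from dataclasses import dataclass

@dataclass
class Transition:
    command: str      # digital command mnemonic (illustrative)
    from_state: str   # state required before the command is issued
    to_state: str     # state expected after the command executes

def execute_procedure(procedure, current_state, send, read_state):
    """Run each step, verifying the commanded transition occurred."""
    for step in procedure:
        if step.from_state != current_state:
            raise RuntimeError(f"precondition failed: in {current_state}, "
                               f"expected {step.from_state}")
        send(step.command)            # issue the digital command
        current_state = read_state()  # re-derive state from telemetry
        if current_state != step.to_state:
            raise RuntimeError(f"{step.command} did not reach {step.to_state}")
    return current_state
```

Because verification is built into execution, a failed transition halts the procedure at the exact step that misbehaved instead of silently continuing.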

Unlike chatbots, sapiens process statements in natural language in a manner functionally equivalent to the way humans do; that is, they can converse with humans in the same sense that humans converse with each other.

They are able to do this because they incorporate Machine Knowledge: abstract commonsense knowledge that is independent of language. This internal model, not of language but of the real world, is the key to genuine comprehension of language. Sapiens know what words mean.

Sapiens FE: “Aborting countdown due to premature start on engine 6. Initiating Shutdown Procedure.”

Unlike HAL 9000, Sapiens FEs come with built-in safeguards that make them 100% reliable.

Role-Models are plugin modules that give the sapiens knowledge and behaviors that emulate human roles.

  • Spaceport Control Officer
  • Range Control Officer
  • Dispatcher
  • Mission Director
  • Flight Engineer
  • Science Officer

Building Your Sapiens FE

The New Sapience MIKOS platform for sapiens customization provides an extremely powerful, cost-effective way to extend the built-in model with spacecraft and ground-system design and operational knowledge.

  • MK models feature abstraction: classes of spacecraft can be modelled and then instantiated for each actual vehicle
  • Spacecraft are modelled as a hierarchical array of subsystems
    • The lowest-level subsystems are devices with instrumentation
    • The model specifies all known states (nominal or anomalous) for each system, along with state transitions and state topologies
    • At run time, the state reasoner recognizes states as patterns of telemetry values at the device level and propagates them up the system hierarchy
    • Digital commands are mapped onto defined state transitions, permitting automatic state-to-state control at any subsystem level
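The recognition-and-propagation step in the list above can be sketched as two small classifiers. The state names, telemetry keys, and pattern structures below are illustrative assumptions, not the actual MK model format.

```python
# Sketch: a device state is a pattern over telemetry values; a higher-level
# system state is a pattern over its subsystems' states. Recognition runs
# at the device level and the results propagate up the hierarchy.

def classify_device(telemetry, patterns):
    """Match telemetry readings against per-state predicate patterns."""
    for state, predicate in patterns.items():
        if predicate(telemetry):
            return state
    return "ANOMALOUS"  # no known pattern matched

def classify_system(subsystem_states, patterns):
    """Higher-level states are defined by patterns of subsystem states."""
    for state, required in patterns.items():
        if all(subsystem_states.get(k) == v for k, v in required.items()):
            return state
    return "ANOMALOUS"
```

Anything that matches no known pattern surfaces as anomalous, which is what lets the system flag off-nominal behavior it was never explicitly programmed to expect.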

Models of complex systems, especially purpose-engineered technology such as spacecraft, are treated as hierarchical assemblies in which each discrete component is regarded as a finite state machine. Using our MIKOS platform to develop a space mission control application begins with modelling the type of spacecraft to be operated. The models are developed directly by systems engineers who understand the spacecraft and its operation, since the process is essentially a matter of transferring that knowledge from the human mind to the sapiens “mind.”

States (both nominal and anomalous) of components at the base of the hierarchy are modelled as patterns of EU-converted (engineering-unit) values from the instrumentation on those components. The states of higher-level systems are defined by patterns of subsystem states. The state topology associated with each component defines the sequence of transitions that must be followed to move from one state to another.
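One way to read a state topology is as a graph over which the required transition sequence can be computed automatically. This sketch assumes a simple `{state: {command: next_state}}` structure, an illustration rather than the actual model representation.

```python
# Sketch: breadth-first search over a state topology finds the shortest
# command sequence that takes a component from its current state to a goal
# state, or reports that no legal sequence exists.
from collections import deque

def transition_path(topology, start, goal):
    """Return the shortest command sequence from start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, commands = queue.popleft()
        if state == goal:
            return commands
        for command, nxt in topology.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, commands + [command]))
    return None  # goal unreachable from start
```

Encoding legal transitions explicitly means an operator (human or sapiens) can request a goal state and let the system derive the commanding sequence, rather than scripting it by hand.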

Experience has shown that developing spacecraft models with this approach is extremely efficient, as space systems engineers naturally think in terms of systems of systems as they break down an extremely complex vehicle into cognitively digestible components.  Moreover, the formal exercise of defining states and state topologies has been shown to leave the engineers with a better understanding of the system than they started with.

Integrating a new spacecraft class into the off-the-shelf sapiens world model is extremely straightforward, since the model comes with a generic spacecraft and several built-in subclasses, as well as data models for telemetry and digital commands.  Parameter definitions for individual spacecraft are imported into the model via a database import utility.

Extraordinarily little custom software is required, as the logic to recognize and propagate states and to command state transitions is built into the system. The exceptions are interfaces to the customer’s flight and/or ground-system hardware for data monitoring and control, and interfaces for operations personnel, assuming the application is not designed to be completely autonomous.

Human interfaces can be graphical status displays provided by the customer but driven by model execution, or specified by the customer and provided by New Sapience. In addition to graphical displays, conversational interfaces in plain English are also supported.

New Sapience has a long legacy of applying AI to Space Operations

The road to Machine Knowledge started years ago in the Hubble Space Telescope Mission Operations Center, when Bryant Cruse, an ex-Navy pilot turned space systems engineer, decided there had to be a better way to fly the spacecraft than looking at numbers.

The problem, as he saw it, was to find a way to capture engineers’ expert knowledge in software that could be applied to comprehend the telemetry stream in real time. That led him first to the knowledge-based AI technology of the time, rule-based expert systems. But off-the-shelf “shells” were too slow to keep up with a real-time telemetry stream. He became the moving force behind a program at Lockheed’s AI Center to develop the world’s first real-time expert system technology and later co-founded a startup, Talarian, to commercialize it.

The Conestoga Launch Control system was developed by Altair Aerospace, the second space operations company founded by New Sapience founder, Bryant Cruse.

Altair developed an object-oriented software engine that compiled a state model of the spacecraft from a specification created directly by space operations engineers using a simple XML-like file format. No programming skills and no paradigm shift (such as recasting knowledge as inference chains) were required.
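As an illustration of the compile-a-spec-into-a-state-model idea: the spec syntax below is invented for this example (it is not Altair's actual format), but it shows how an engineer-authored markup file can be turned directly into a machine-usable state topology with no programming.

```python
# Sketch: compile an XML-like device specification into a
# {state: {command: next_state}} topology. The tag and attribute names
# are hypothetical, chosen only to make the example self-contained.
import xml.etree.ElementTree as ET

SPEC = """
<device name="valve">
  <state name="CLOSED"><transition command="OPEN_CMD" to="OPEN"/></state>
  <state name="OPEN"><transition command="CLOSE_CMD" to="CLOSED"/></state>
</device>
"""

def compile_spec(xml_text):
    """Parse a device spec into (device_name, state topology)."""
    root = ET.fromstring(xml_text)
    topology = {
        st.get("name"): {tr.get("command"): tr.get("to")
                         for tr in st.findall("transition")}
        for st in root.findall("state")
    }
    return root.get("name"), topology
```

The point of a declarative format like this is that the engine, not the engineer, owns the runtime logic; the engineer only states what the device's states and legal transitions are.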

Putting large amounts of expertise into the system was relatively easy (acquiring the expertise in the first place was hard; that’s why they call it rocket science), and it ran extremely fast, since the basic intelligence was a simple state pattern-matching algorithm. While other ground systems ran on mainframes, this one ran on PCs.

It was widely recognized as the most highly automated launch control system ever built. It performed spectacularly and was delivered for less than 10% of the cost of conventional systems.

Altair also developed a control system for Final Analysis Inc. of Lanham, Maryland, for a constellation of low-Earth-orbit spacecraft that provided tracking for transponders on trucks and shipping containers. A control center was constructed in the company’s home office, complete with graphical displays of vehicle state and status, but it was mostly left unmanned except for special operations or anomalies.

The Aqua Model-Based Advisor

  • A fully automated, cloud-based system monitoring the integrity of Aqua’s on-board computers
  • Alerted mission operations personnel in near real time via email and pagers
  • Capable of detecting transient anomalies lasting only seconds
  • Automatically generated browser-based displays of actionable information rather than raw datasets

Contact Us

Launch your sapiens