A New Science of Artificial Intelligence

The founding event of artificial intelligence as a field is generally considered to be the Dartmouth Summer Research Project on Artificial Intelligence in 1956, where the term itself was coined. The proposal for the conference states:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

For centuries, alchemists labored to produce more valuable materials from more basic ones. They were inspired by a different conjecture, the one first made by Democritus in the fifth century BC: if you successively break something into smaller and smaller pieces, eventually you will get to indivisible pieces, which he called atoms.

They were on the right track, but without a fundamental science or body of theory about what atoms are and how they interact, they could only try things to see what would happen. Sometimes things did happen, like changing cinnabar into liquid mercury, or maybe the lab blew up. But they never changed lead into gold, which was the whole point.

Since 1956, AI researchers have been trying things “to see what would happen”, and interesting things have come along. But the goal of creating an artificial general intelligence, which is the whole point, remains elusive. There is no agreement even about what it is, let alone a coherent roadmap on how to achieve it.

Researchers don’t agree on what intelligence is. They don’t agree on what sentience is. And they don’t agree on what consciousness is. They don’t even agree to what extent, if any, these things need to be understood, or whether they are fundamental to the enterprise.

When I listen to AI researchers talk about these phenomena, I am reminded of medieval scholastics arguing about how many angels can dance on the head of a pin. Their logic is impeccable, but their premises are vague and subjective.

Naturally, within this vacuum of established scientific theory, wildly diverging views exist. Perhaps the most extreme is held by former Google engineer Blake Lemoine, who stated his belief that a large language model (LaMDA) was sentient and deserving of human rights. AI researcher and entrepreneur Gary Marcus responded:

“We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous.”

Recently, Marcus asked Lemoine via Twitter whether he thought the latest LLM, Galactica, which had just attracted so much derision (deservedly, I think), might also be sentient. Lemoine did not think so, but at the end of a surprisingly civil exchange, given their differences, Marcus summed up the state of the entire AI community perfectly:

“We are living in a time of AI Alchemy; on that we can agree.”

Alchemy was around for thousands of years before it was superseded by the science of Chemistry, but in the end, the transformation happened very rapidly with the discovery/invention of the Periodic Table of the Elements.

Suddenly we knew which materials were elemental and which were composites, and we could predict which elements would combine and which would not. An elegant classification schema gave us the key to understanding the vast universe of materials and their properties.

At New Sapience we have laid the groundwork for what we believe is a new science of Artificial Intelligence or, more precisely, Synthetic Intelligence. To achieve this, we looked in a completely different direction from where the entire AI community has been looking.

“What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…”

Rather than attempt to emulate natural human intelligence, we studied what it creates, knowledge, in order to engineer it, to synthesize it. David Deutsch, who wrote those words, is a quantum computation physicist at Oxford, not an AI researcher, but the most prescient observations often come from outside the mainstream. He is correct: at New Sapience, we are more about epistemology than neuroscience.

Our journey has been a stunning recapitulation of the transformation of Alchemy into Chemistry. We too began with the conjecture of Democritus but transposed it from the material to the intellectual. Complex ideas are composed of simpler ones, and if you keep breaking them down, eventually you must get to the “atoms.”

We have identified and classified about 150 “atoms of thought” into a two-dimensional array called the Cognitive Core. Now we know which concepts are elemental and which are composites, and we understand how they combine to make sense rather than nonsense.
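
To make the idea concrete, here is a minimal sketch in Python. It is purely illustrative and is not New Sapience's actual implementation: the names (Concept, CognitiveCore), the two classification axes, and the composition rule are all assumptions invented for this example. It shows only the shape of the claim, that concepts divide into elemental and composite, and that composition can be checked against a fixed inventory of elements.

```python
# Toy model only: every name here (Concept, CognitiveCore, the sample
# axes "entity"/"process" and "concrete"/"abstract") is hypothetical
# and is NOT New Sapience's actual design.

from dataclasses import dataclass


@dataclass(frozen=True)
class Concept:
    """A concept is either elemental (no parts) or a composite."""
    name: str
    parts: tuple = ()  # empty tuple marks an elemental "atom of thought"

    @property
    def is_elemental(self) -> bool:
        return not self.parts


class CognitiveCore:
    """A two-dimensional registry of elemental concepts.

    The two axes are stand-ins; the real classification dimensions
    are not described in the text, so these are assumptions.
    """

    def __init__(self):
        self._grid = {}  # (axis1, axis2) -> Concept

    def register(self, axis1: str, axis2: str, concept: Concept) -> None:
        self._grid[(axis1, axis2)] = concept

    def compose(self, name: str, *parts: Concept) -> Concept:
        # Toy "sense vs. nonsense" rule: a composite is well formed
        # only if every elemental part has been registered in the core.
        known = set(self._grid.values())
        for part in parts:
            if part.is_elemental and part not in known:
                raise ValueError(f"unknown elemental concept: {part.name}")
        return Concept(name, parts)


# Usage: register two "atoms of thought", then compose them.
core = CognitiveCore()
thing = Concept("physical-thing")
change = Concept("change")
core.register("entity", "concrete", thing)
core.register("process", "abstract", change)

event = core.compose("event", thing, change)
print(event.name, event.is_elemental)  # event False
```

The point of the sketch is the periodic-table analogy above: once the elemental inventory is fixed, composing concepts becomes a checkable operation rather than guesswork.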

This elegant classification schema has given us the key to a science of knowledge, and the solid foundation needed to engineer synthetic intelligences. We call them sapiens.

Already our sapiens:

  • Learn through language comprehension
  • Understand the contextual meaning of language
  • Have common sense
  • Can explain their reasoning
  • Learn by reasoning about existing knowledge
  • Distinguish between perceptions, feelings, and thoughts

Our breakthroughs in epistemology have led to a science of knowledge, and a new science leads to new engineering disciplines.

At New Sapience we are practical ontologists and applied epistemologists.

Our Cognitive Core is the equivalent of the discovery of the arch. Once people discovered they could stack stones in such a way as to cross a stream, they had a direct roadmap to bridges, aqueducts, and the Pantheon.

As I talk with young people excited about AI, it soon becomes evident that few have any real interest in data science per se; they study it because, until now, it has been offered as the only game in town to reach what truly excites them: a vision where thinking machines work side by side with their human counterparts to build a world of unlimited productivity and unleash human potential.

That world is now within reach. Join us, and become an epistemological engineer.