The possibility of building machines that are intelligent in the sense that we perceive ourselves to be intelligent has always been accompanied by a very reasonable concern: how can we make sure our creations will always serve us and not the other way around?
People familiar with New Sapience know we are one of the few companies that claim to be making progress towards Artificial General Intelligence (AGI). OpenAI is another. Many of our followers and investors want to get our take on the current controversy surrounding OpenAI’s ChatGPT and other Large Language Models (LLMs).
These issues cannot be fairly dealt with in a couple of Twitter posts, so here is a concise roadmap to the controversy and how it looks from the New Sapience perspective.
Data science has many useful applications in the area of pattern recognition, but never forget that ML algorithms themselves only compile statistics; they are mindless, and there is nothing like intelligence in them. Yet these algorithms are being used every day to decide who gets a loan, who gets a job, and who gets admitted to schools. There are even cases of people being falsely arrested. When ML is used to make decisions that affect people’s lives, it can only treat them as statistics. Are you okay with that?
It happens like this: someone has a theory about what intelligence is and develops some software to implement it. Even if it does not work or doesn’t do anything that looks like intelligence, it is still considered “AI” because that is what they were aspiring to create.
Complex ideas are aggregates of simpler ones. The inescapable conclusion is that, if you keep decomposing ideas into their components, at some point you get to the end, or rather the beginning. This is the same conjecture that Democritus made about the material world: if you keep breaking things apart, eventually you get to the indivisible pieces he called “atoms.”
New Sapience began with a simple thesis: the quickest way to create a thinking machine is to give it something to think about. The symbolic crowd was on the right track when they focused, not on emulating the human brain like the connectionists, but on the end product of human cognition: knowledge. But there was a fatal flaw in their approach: the symbols themselves.
“Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough.”
“What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…”
David Deutsch, quantum computation physicist at the University of Oxford
Futurist Ray Kurzweil popularized the idea of the AI Singularity: the moment when AIs first equal and then far surpass humans in intelligence. But the advent of sapiens illustrates that AI is driven by knowledge, not computing power. A revolution in how humans acquire, share, and use knowledge will indeed produce a singularity. But not for the first time.