The Chatbot Controversy

I don’t know anyone who is not blown away by how human-like the output of ChatGPT and the other latest large language models (LLMs) is, myself included. Without knowing ahead of time, no one would even suspect the output was generated by a computer program. These models are amazing achievements in computer science and computational statistics.

Over the last several months since ChatGPT was released, app developers, venture-backed startups, and all the Big Tech companies have joined the rush to capitalize on the perceived potential of this technology to revolutionize our economy. Goldman Sachs analysts estimate that automation by generative AI could impact 300 million full-time jobs globally. [i]

But this week, in an open letter citing potential risks to society, Elon Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing AI systems more powerful than GPT-4.

From the letter:

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

It would seem that answering these questions with a resounding “No!” is a no-brainer. However, there are layers of meaning and underlying assumptions here that need to be untangled. Foremost is that the dangers and potential of large language models (which are narrow AI) are fundamentally different from those of the envisioned AGI (which is general AI). The premise of the letter is that LLMs are the precursor to AGI and that the transformation of one into the other is happening very rapidly.

Here is the New Sapience point-of-view, sentence by sentence:

“Contemporary AI Systems Are Now Becoming Human-Competitive At General Tasks”

Many people think so, but despite ChatGPT’s dazzling ability to generate human-like language, more than a little caution is warranted.

AI has been promising productivity increases for decades, yet those gains have not arrived. Since 2005, billions of dollars have been poured into machine learning applications. Nonetheless, labor productivity has grown at an average annual rate of just 1.3 percent, lower than the 2.1-percent long-term average rate from 1947 to 2018. [ii]

The optimism that these language models are going to become human-competitive may stem from confusion between what a thing appears to be and what it is underneath. People are now defining AGI as a machine exhibiting human-level performance on any task presumed to require human intelligence, no matter how narrow the context.

Language models are commonly said to generate text with human-level or better performance. But humans do not generate text; they speak and write via an entirely different process. GPT-4 reportedly scored around the 90th percentile on the Uniform Bar Exam. Does this imply lawyers need to fear for their jobs? It may be true that text generated by an LLM and a paragraph written by a human contain the same words in the same order. Is this enough to conclude that LLMs are intelligent and ready to replace human knowledge workers?

When humans write they are communicating, endeavoring to convey an idea or opinion in their own mind to other people. It is fundamental to understanding the current controversy that being able to create a clever and compelling illusion of a thing, in this case, the illusion of intelligence and mind that people experience when reading text generated by LLMs, is not in any sense evidence that you are closer to achieving the reality.

When is it ever?

Underneath, the process of text generation is fundamentally different from writing and speaking. Given a text input, a “prompt,” LLMs string together sequences of words in statistically relevant patterns based on what humans have written across enormous sets of text. But the whole time LLMs interact with a human, they possess nothing resembling knowledge or comprehension. They have no idea what they are talking about. Indeed, they have no ideas at all. They are mindless algorithms. Yet the illusion is so slick that people are taken in.
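The principle can be illustrated with a deliberately crude sketch (ours, not an actual LLM): a bigram model that strings words together based purely on which words follow which in a tiny corpus. Real LLMs use enormous neural networks and vastly more context, but the underlying idea is the same in spirit: predict the next token from preceding tokens, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy corpus; an LLM's training set is billions of times larger.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which words have followed each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt_word, length=5, seed=0):
    """Continue a 'prompt' by repeatedly sampling a statistically
    likely next word. The model has no idea what it is saying."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits is drawn from patterns in its training text; it cannot say anything its corpus did not statistically license, and it has no notion of cats, mats, or truth.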


“Should We Let Machines Flood Our Information Channels With Propaganda And Untruth?”

There are two distinct parts to this. The first is straightforward, and is the problem of “bots,” programs designed to appear as humans and flood social media channels with a particular message or opinion which may be false or destructive propaganda.

AI scientist Gary Marcus has raised the alarm about how LLMs could push this problem to society-harming levels:

“The problem is about to get much, much worse. Knockoffs of GPT-3 are getting cheaper and cheaper, which means that the cost of generating misinformation is going to zero, and the quantity of misinformation is going to rise—probably exponentially.”

We agree this is a very concerning problem that LLMs exacerbate by the very compelling nature of the illusion they create. It should be noted that IF we were talking about real AI here, with genuine knowledge of the world and the ability to use natural language to communicate, this issue would not be the problem it is with LLMs, but that discussion is outside the scope of this article.

The second issue here is what constitutes propaganda and untruth. We live in extremely partisan times when propaganda and untruths proliferate even without help from bots. AI bias is a hot issue.

This issue needs clarity. First, LLMs do not have biases; human beings do. LLMs just mindlessly generate text. But when people detect human bias in that text, they object as if the chatbot were a person with opinions of its own. They are shooting the messenger: a chatbot can only string together words consistent with the bulk of the text in its training dataset, and if there is statistical bias there, it will inevitably be reflected.

Elon Musk and others have said LLMs are biased in favor of the political left and that this needs to be corrected. If true, where might such a bias come from? Who writes all the text in those datasets? Journalists? In 2014, the year of the last survey, only 7% of journalists identified as Republicans. Academics? A survey in 2007 concluded that only 9% of college professors were conservative. We live in a time when people are deeply divided almost exactly in half by the numbers. Half of the people write a lot, contributing to the training datasets; the other half, not so much. So, perhaps it’s not intentional bias at all. Chatbots can only reflect what is in their training dataset. If you don’t like what chatbots are constrained to say, perhaps you shouldn’t talk to them.

Those who would like to salvage the situation talk about using human curators to cull the apparent bias in chatbots. This is both expensive and time-consuming, and when one side’s deeply held belief is the other side’s untruth, who gets to decide? People (Elon Musk) have already been offended by choices made by chatbot “moderators” to repress certain outputs.  

In any case, when we acknowledge that chatbots are controlled by unthinking algorithms and have no opinions or biases this question simply becomes: “Should we let people flood our information channels with propaganda and untruth?” We should not, if we can stop it.

Should We Automate Away All The Jobs, Including The Fulfilling Ones?

Would you hire someone or create a robot to take your vacation for you? Of course not. But new technology has been obsoleting jobs since the beginning of the industrial revolution. Usually, the first to go are the difficult, dangerous, and tedious ones. But now we are talking about knowledge workers.

But here too, it is not the fulfilling jobs that LLMs threaten. Keep in mind that the generated content is ultimately derived from what other people have already written, again and again. Whose jobs does that threaten? Consider the unfortunate reporter stuck writing a 27,000th story about Friday’s high school baseball game, next week’s weather, or yesterday’s school board meeting. Consider the executive assistant who needs to turn the boss’s few terse statements into a smooth letter or bullet points into formal minutes, or the paralegal preparing repetitive briefs and filings where much of the content is boilerplate.

The use of LLMs as a writing aid and research assistant (autocomplete on steroids) is perhaps the least problematic use case for them. But will they really replace jobs or just change them? Perhaps the time saved writing prose will be spent scrubbing the generated text for bias and misinformation. Again, all the digital technology since 2005 has changed the way we work without making us more productive.


Should We Develop Nonhuman Minds That Might Eventually Outnumber, Outsmart, Obsolete, And Replace Us?

LLMs are dangerous, but no one thinks they will take over the world anytime soon. It is misleading to conflate them with imagined future super-human intelligence as the open letter does. Machine Learning pioneer Yann LeCun called LLMs the off-ramp on the road to AGI. We, together with a growing number of other experts in this field, agree. There is no evidence and no roadmap by which LLMs, which mindlessly order words into statistically likely sequences, will at some point magically transform into thinking machines with knowledge of the world and comprehension of language. So, pausing their development is irrelevant to this issue.

But fear of advanced Artificial General Intelligence has been expressed by several really smart people. The notion, called the ‘Singularity,’ is that if we create super-human AIs, they would be able to create AIs superior to themselves, and so on, until there were AIs as superior to humans as we are to earthworms, and they will “supersede us,” as Stephen Hawking delicately put it, or maybe just kill us all, as others have said.

Here again, there are hidden assumptions. First, these experts apparently assume (and this is the prevailing opinion) that AGI will be achieved at some point in the near future using the current methodologies rather than through a radical departure from them. The current methodology is to keep mimicking features of human intelligence until some master algorithm or other processing technique is found, probably built on, or at least incorporating, artificial neural networks.

AI has been called today’s ‘alchemy’. People are just trying things out to see what will happen because they don’t have a fundamental science of what intelligence is, either in human brains or in machines. Machine learning algorithms on artificial neural networks are already notoriously difficult to understand and explain. [iii] If AGI is ever stumbled upon this way, then some fear about what we are doing is justified, just like the alchemists needed a healthy fear because sometimes the lab blew up. But a healthy fear is one thing and predictions of doomsday are something else.

In any case, current experience shows that caution is obviously needed even with narrow AI. From where we stand today, it is not clear whether LLMs are a great breakthrough or “the lab blowing up.”

From the New Sapience point of view, it seems highly unlikely that AGI will ever be achieved using these current methodologies. It is so difficult to build an imitation brain when we have so little understanding of how the natural brain operates. In any case, we believe synthetic intelligence, our radical departure from the practice of imitating natural intelligence, will supersede the traditional approach long before we need to worry about it creating dangerous AGIs.

The second underlying fear of the Singularity results from a failure of epistemology (the theory of knowledge itself). It is the belief that intelligence is something that can be increased without limit. Where does this come from? It sounds more like magic than science. Maybe humans are as intelligent as it gets in our corner of the universe, and AI is a technique that can amp it up some, but not so far that we can no longer get our minds around our own creations.

From our perspective, practical intelligence is directly proportional to the quality and quantity of knowledge available for solving problems and predicting results. So knowledge and the intelligence that acquires and applies it go hand in hand. The greater the quantity and quality of knowledge available, the easier it is to extend that knowledge. At New Sapience, we are creating synthetic knowledge for computers curated from human minds. Our epistemology holds that practical reality for humans is the intersection of human cognitive and perceptual apparatus and whatever it is they interact with. This means that no matter how much knowledge our sapiens are given or create themselves, no matter how sophisticated the information processing routines that we call intelligence they attain, they are working in a reality that is distinctly and forever (for practical purposes) human-centric.

The third Singularity assumption is purely anthropomorphic. Humans evolved intelligence so they could adapt their environments to fit their own needs, the ultimate survival skill. But intelligence would be impotent unless, along with it, humans had also evolved the motivation to use it to control things. People who fear AGI appear to assume that the need to control is inseparable from intelligence. So the more powerful the AI, the greater its control needs, and thence humans lose out. There is no reason to assume this. If AIs are designed using deterministic methods such as New Sapience is using, rather than resulting from a lab accident, they will be designed to do what we tell them and not have some uncontrollable lust to take over the world.

Should We Risk Loss Of Control Of Our Civilization?

Relax everyone, New Sapience has this covered.


An Alternative Proposal

We agree that LLMs are dangerous, not because they are intelligent, but because the illusion that they are intelligent is so good that people are misled; and this will lead to mistakes, some of which will be serious. Again, this is not about AGI. The problem with LLMs is not that they are giant minds, but that they are slick illusions of intelligence while having no minds at all.

The letter’s proposal to back off LLMs is not unreasonable but is highly unlikely to happen. There are vast sums of money at stake and none of the signers of the open letter appear to be executives of the companies that are cashing in on this technology or hope to.

The industry won’t police itself, and forgive me for being skeptical that governments will be able to sort this out in a useful way within a reasonable timeframe.

Here is an alternative proposal. Artificial Intelligence as a branch of computer science was effectively born in 1956 at the conference at Dartmouth where the term ‘artificial intelligence’ was first coined. The call for the conference states:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Basically, the call was to imitate features of human intelligence. Try things out and see what happens: Alchemy.

After 67 years, it is time to reboot the discipline of Artificial Intelligence. Let’s have a new conference, a second founding. This time, let’s start from first principles and lay down the foundations of a new science of intelligence, one that defines what intelligence is, irrespective of whether it resides in a biological brain or a machine. While we are at it, we can define with some precision, and for the first time, what should be called Artificial Intelligence and what should not, instead of the current practice of using it as a bucket term to hype every innovation in computer science.

Such a conference would be an ideal place for the AI community to discuss the ethical and practical uses of innovative technology in general, but most especially technology created in our pursuit of long-awaited thinking machines.

[i] Generative AI Could Affect 300 Million Full-Time Jobs, Goldman Sachs

[ii] The U.S. productivity slowdown: an economy-wide and industry-level analysis, Monthly Labor Review, U.S. Bureau of Labor Statistics

[iii] “When it comes to neural networks, we don’t entirely know how they work, and what’s amazing is that we’re starting to build systems we can’t fully understand.” Jerome Pesenti, VP of AI at Meta

Aspirational AI

In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of Artificial Neural Network (ANN) AI algorithms as they try to solve human problems. [i]

She points out that the best ANNs we have today are maybe on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms aren’t intelligent.

Calling ANNs AI is like being invited into a hangar to look at a new aircraft design but finding nothing but landing gear. You ask: “I thought you said there was an airplane.” And are told: “Yes, there it is – it is just not a very good airplane yet.”

We saw another example of this “Aspirational AI” in a recent article in Analytics India magazine [ii] that listed New Sapience among 10 companies in the Artificial General Intelligence space. They all say they are working on the AGI problem, but we are the only one with something to show for our efforts: a working prototype that comprehends language in the same sense as humans do. The others aspire to reach the same goal but have no cogent theory for accomplishing it; like the medieval alchemists, they mix this and that together to see what might result.

It is also evident that these other “AGI” companies continue to focus on ANNs and look to the human brain as their inspiration. This general fixation was mentioned in a recent article in the Wall Street Journal titled, “The Ultimate Learning Machines” which describes DARPA’s latest big AI project: Machine Common Sense. [iii]

The ultimate learning machines, we are told in the WSJ article, are human babies, because they are far superior at pulling patterns out of vast amounts of data (in this case, the data that comes into the brain through the senses) compared to what “AI” researchers can achieve with artificial neural networks. A human brain compared to a worm brain? It is not surprising that babies are better.

But infants are totally incapable of learning that “George Washington was the first president of the United States.” However, a five-year-old can learn that easily. Assuming infants to be the best learners presupposes a single path to common sense knowledge that must be based on running algorithms in neural networks because the human brain is a neural network. But somewhere between infancy and early childhood, the human brain acquires an ability to learn in a way that is vastly different from the kind of neural learning, like recognizing faces, that infants do.

AI today (exclusive of what we are doing at New Sapience) has been called a one-trick pony because of its fixation with neural networks and the brain. We stand by our earlier comparison that this approach is similar to the people (prior to the Wright Brothers) who tried to build aircraft that flapped their wings like birds because, after all, birds were the best flyers in the universe, hence this was the only way to accomplish the goal. History proved that was not true.

The process of transformation that an infant goes through to become a 5-year-old with the capacity to learn abstract ideas through language comprehension is quite amazing. The idea that you could start with an artificial neural network of the complexity of a worm brain and somehow program it to recapitulate the functionality that millions of years of natural evolution have endowed a human infant’s brain with seems – well, ambitious.

We have found a better way. From the article:

“In the past, scientists unsuccessfully tried to create artificial intelligence by programming knowledge directly into a computer.”

We have succeeded where others have failed by understanding that functional knowledge is an integrated model with a unique hidden structure, not just an endless collection of facts and assertions. At New Sapience we are giving computers the commonsense world model and language comprehension of the five-year-old. We don’t need to know how the brain works to create the end product – because we know how computers work.

Today, if you tell a “sapiens” created by New Sapience, “My horse read a book,” it will reply something like: “I have a problem with what you said; horses can’t read.” If you ask why, it will tell you: “Only people can read.” This is machine common sense, and we are already there.
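The flavor of such a world-model check can be conveyed with a greatly simplified sketch. This is our own illustration, not New Sapience’s actual implementation: the ontology, capability sets, and function names here are invented for the example. The point is that the objection comes from curated structured knowledge, not from text statistics.

```python
# A tiny ontology: each concept names its parent class.
is_a = {"horse": "animal", "person": "animal", "book": "artifact"}

# Capabilities attached to concepts in the model, not learned from text.
can_do = {"person": {"read", "write", "speak"}, "animal": {"eat", "move"}}

def ancestors(concept):
    """Yield the concept and every class above it in the ontology."""
    while concept is not None:
        yield concept
        concept = is_a.get(concept)

def check(subject, verb):
    """Return an objection if the world model says the subject
    cannot perform the verb; otherwise return None."""
    allowed = set().union(*(can_do.get(c, set()) for c in ancestors(subject)))
    if verb not in allowed:
        return f"I have a problem with what you said: {subject}s can't {verb}."
    return None

print(check("horse", "read"))   # the model objects
print(check("person", "read"))  # None: no objection
```

Because “read” is attached to “person” but not to “animal” or “horse,” the sketch objects to a reading horse while accepting a reading person, and the “why” is directly inspectable in the model, unlike the opaque statistics of an LLM.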

[i] Ted Talk: The danger of AI is weirder than you think.

[ii] 9 Companies Doing Exceptional Work in AGI

[iii] The Ultimate Learning Machines



Artificial Neural Networks

Narrow AI’s Dark Secrets

Articles about AI are published every day. The term “AI” is used in a very narrow sense in the majority of these articles: it means applications based on training artificial neural networks under the control of sophisticated algorithms to solve very particular problems.

Here is the first dark secret: this kind of AI isn’t even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge and nothing else that has made us masters of the world around us; not our clear vision, our acute hearing, or our subtle motor control, for other animals do all that every bit as well or better. The developers of this technology understand that, and so a term was coined some years ago to distinguish this kind of program from real AI: “narrow AI,” in contrast to Artificial General Intelligence (AGI), the kind that processes and creates world knowledge.

Here’s the second dark secret: the machine learning we have been hearing about isn’t learning at all in the usual sense. When humans “learn” how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles have been sufficiently established to stay balanced. This “neural learning” is clearly very different from the kind of “cognitive learning” we do in school, which is based on the acquisition and refinement of knowledge. Neural learning cannot be explained and cannot be unlearned; no abstract knowledge of the world is produced. A circus bear can ride a bike, but we don’t say it is intelligent because of that.

The third dark secret: We don’t understand how the sophisticated algorithms that control the training of these networks actually work. This fact is probably at the root of the fear that Artificial Intelligence may someday escape human control.

But if narrow AI is not real AI why is it considered AI at all? It is because of the hope that someday these narrow techniques may be extended to become the real thing and real AI is a very exciting, world-changing prospect. That makes these current efforts more glamorous to the general public, easier to hype, and easier to attract funding. But the hype has gone too far and has engendered a growing expectation that real AI is just around the corner and we had better be prepared for its civilization-changing effects.

Today, the AI community is starting to back-pedal big time. We are seeing a growing admission coming from both the big tech companies and academia that the hope that these techniques can be evolved into real AI is, if not totally forlorn, certainly not so imminent as the general public and the media have been led to believe.

Will the Future of AI Learning Depend More on Nature or Nurture?

Yann LeCun, a computer scientist at NYU and director of Facebook Artificial Intelligence Research:
“None of the AI techniques we have can build representations of the world, whether through structure or through learning, that are anywhere near what we observe in animals and humans”

Facebook’s head of AI wants us to stop using the Terminator to talk about AI

“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do.”
“… in terms of general intelligence, we’re not even close to a rat.”
“The crucial piece of science and technology we don’t have is how we get machines to build models of the world.”
“The step beyond that is common sense, when machines have the same background knowledge as a person.”

Inside Facebook’s Artificial Intelligence Lab

“Right now, even the best AI systems are dumb, in the way that they don’t have common sense.”
“We don’t even have a basic principle on which to build this. We’re working on it, obviously. We have lots of ideas, they just don’t work that well.”

Why Google can’t tell you if it will be dark by the time you get home — and what it’s doing about it
Emmanuel Mogenet, head of Google Research Europe:

  • “But coming up with the answer is not something we’re capable of because we cannot get to the semantic meaning of this question. This is what we would like to crack.”
  • He explained that Google needs to try and build a model of the world so that computers know things like …
  • “I’ll be honest with you, I believe that solving language is equivalent to solving general artificial intelligence. I don’t think one goes without the other. But it’s a different angle of attack. I think we’re going to push towards general AI from a different direction.”

Microsoft CEO says artificial intelligence is the ‘ultimate breakthrough’
Satya Nadella, Microsoft CEO

“We should not claim that artificial general intelligence is just around the corner,”
“We shouldn’t over-hype it.”
“Ultimately, the real challenge is human language understanding – that still doesn’t exist. We are not even close to it…”

“The Real Trouble With Cognitive Computing”
 Jerome Pesenti, former vice president of the Watson team at IBM.

“When it comes to neural networks, we don’t entirely know how they work, and what’s amazing is that we’re starting to build systems we can’t fully understand.  The math and the behavior are becoming very complex and my suspicion is that as we create these networks that are ever larger and keep throwing computing power to it, …. (it) creates some interesting methodological problems.”

Calm down, Elon. Deep learning won’t make AI generally intelligent
Mark Bishop, professor of cognitive computing and a researcher at the Tungsten Centre for Intelligent Data Analytics (TCIDA) at Goldsmiths, University of London:

It’s this lack of understanding of the real world that means AI is more artificial idiot than artificial intelligence. It means that the chances of building artificial general intelligence are quite low, because it’s so difficult for computers to truly comprehend knowledge, Bishop told The Register.

The Dark Secret at the heart of AI.
Joel Dudley leads the Mount Sinai AI team.

“We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Creative Blocks, Aeon Magazine
David Deutsch, quantum computation physicist at the University of Oxford:

“Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough.”
“No Jeopardy answer will ever be published in a journal of new discoveries.”
“What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…”