AI Today: Imitation Intelligence

Cutting Through the Hype

Hype, confusion, intentional misinformation, and fear-mongering about AI have never run deeper. Not since the dot-com boom and meltdown have we seen such a spectacle of honest technical aspiration so entangled with money and greed.

For over 60 years AI researchers have tried to imitate features of natural human intelligence without defining what it is. In the absence of fundamental definitions, all one can do is try things and hope to stumble on a solution.

Many approaches have been tried, including rule-based systems, semantic networks, and the latest fad, machine learning.

But none of these approaches have given us thinking machines that can build models of the world, the key capability that is the hallmark of human intelligence. Not even close.

“The crucial piece of science and technology we don’t have is how we get machines to build models of the world.”

Yann LeCun

Machine Learning Pioneer, VP of AI at Meta

Why are people so susceptible to chatbot illusions?

Language is a protocol by which thoughts and ideas can be communicated from one mind to another. Communication would not be possible without the assumption of common perceptions and core concepts of the world. But people don’t need proof that this commonality exists. The assumption of commonality is a built-in aspect of our psychology called Theory of Mind.

Theory of Mind explains anthropomorphism – the attribution of human traits, emotions, or intentions to non-human entities – since any sensory or verbal cue that resembles a human cue invokes our full mental model of another person.

In 1964 Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory created ELIZA, the first chatbot. He had no intention of fooling anyone and was shocked that many early users became convinced that the program had intelligence and understanding.

This phenomenon, now known as the ELIZA Effect, is anthropomorphism applied to machines, a clear result of human theory of mind.

Today’s chatbots, based on Large Language Models, have no more intelligence and understanding than ELIZA. They are stochastic parrots. But they are very good parrots, and the illusion is so compelling that even people who understand they are talking with a parrot find it takes an act of will to overcome the ELIZA Effect.

“We are living in a time of AI Alchemy; on that we can agree.”

Gary Marcus

AI Researcher

AI Alchemy

Researchers have insisted on calling every attempt “AI” until the term has become a bucket for whatever aspirational technique is currently in vogue. But as might be expected, imitation without understanding is more likely to produce something that resembles intelligence without actually being intelligent. In other words, an illusion of intelligence. The field today, like medieval alchemy, is as much magical thinking as science.

For centuries alchemists labored to change lead into gold.

Alchemists were inspired by the conjecture, first made by Democritus in the fifth century BC, that if you successively break something into smaller and smaller pieces, eventually you will get to the indivisible pieces that he called atoms.

They were on the right track. But without a fundamental science or body of theory about what atoms are and how they interact, they could only try things to see what would happen. Sometimes things did happen, like changing cinnabar into liquid mercury. Or maybe the lab blew up. But they never changed lead into gold, which was the whole point.

Since 1956, AI researchers have likewise been trying things “to see what would happen,” and interesting things have come along. But the goal of creating an artificial general intelligence, which is the whole point, remains elusive. There is no agreement even about what it is, let alone a coherent roadmap for how to achieve it.

Machine Learning

Today the technical community has embraced Machine Learning (ML) so strongly that it’s considered synonymous with AI. The technique has many useful applications, but misconceptions about it and its potential are widespread.

It is a misnomer to call machine learning AI in the first place. “Data science” is the correct term. Computers have always been good at processing data and information. But intelligence requires knowledge (models of the commonsense world), which data science has not achieved and for which it has no roadmap.

Machine Learning: How It Works

Machine Learning is a software technology based on a programming paradigm called Artificial Neural Networks (ANNs), inspired by biological neurons. Each artificial neuron takes inputs and produces a single output, which can be sent to many other neurons. The inputs can be data values from datasets such as images or documents, or they can be the outputs of other neurons. The network “learns” by repeatedly adjusting the numeric weights on its connections until the values in the final output neurons accomplish the task, such as recognizing an object in an image.
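To make the paradigm concrete, here is a minimal sketch of a single artificial neuron, written in Python for illustration. The input values, weights, and bias are invented; a real network chains thousands or millions of these elements together and tunes the weights automatically during training.

import math

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs passed through
    # a nonlinear "activation" function (here, a sigmoid).
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # squashed to (0, 1)

# Invented example: three input values (say, pixel intensities) feed one neuron.
# "Training" means nudging weights like these, over many examples, until the
# network's final output neurons score well on the task.
print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))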

This kind of learning is analogous to what we see in humans when, through repeated training, we master a skill such as riding a bicycle by strengthening neural pathways in the brain and body. Although humans do learn this way, it is misleading (hype) to equate it with the style of learning that is the hallmark of human intelligence, since many animal species learn neurally. (A bear can be trained to ride a bike.)

The second style is cognitive or intellectual learning: extending an internal model of the world with new knowledge. Until now, only humans could learn cognitively. But we are no longer alone, as sapiens also learn in this way.

ML Uses

Artificial neural networks are a powerful technique for finding statistical patterns in large collections of data. When we read about AI discovering new drugs or solving problems in plasma physics, it often sounds as if the applications themselves are the scientists, formulating theories and proving theorems. That is journalistic hype. These data science applications, however useful and even groundbreaking, are computers doing what computers have long excelled at: crunching numbers and compiling statistics.

ML Abuses

ANNs had been around for decades, but the vast datasets and giant computers required to do useful things with them did not arrive until well after the turn of the century. The growth of online commerce in the same timeframe powered the rise of today’s Big Tech companies. Huge revenues could be generated by matching consumer needs and desires with product and service offerings. The key to this is gathering and analyzing consumer data. ANNs proved ideal for this purpose and have become the engines that are spinning Big Tech’s massive repositories of user data into advertising gold.

But as it has turned out, society is paying a bitter price for the relentless pursuit of advertising clicks that machine learning is powering. Algorithms track users’ online behavior and feed people more content to keep them engaged and generate more clicks. This has led to device addiction, depression, and rising suicide rates, especially among teens.

Even more destructive to society is that people are as likely to become fixated on what outrages them as on what they agree with. Thus, the algorithms mindlessly and relentlessly serve up more and more radicalism and rabid partisanship. Journalists and activists, who depend on the algorithms for readership, in turn produce more and more “clickbait” content.

“These algorithms have brought out the worst in us. They’ve literally rewired our brains so that we’re detached from reality and immersed in tribalism.”

Tim Kendall, former director of monetization at Meta

In the name of automation and cost savings, organizations are replacing human judgement with machine learning algorithms to make decisions that affect individual people’s lives. Algorithms are deciding who gets into college, who gets a home loan, even who gets arrested and who goes free.

Machine learning algorithms have no judgement and no knowledge. They just mindlessly calculate statistics. Since when has it been okay to reduce all the complexity of individual characteristics, aspirations, and unique abilities to a statistic?

“There are three kinds of lies: lies, damned lies, and statistics.”

Mark Twain

“AI chatbots are good at saying what an answer should sound like, which is different from what an answer should be.”

Rodney Brooks

Roboticist

LLM Chatbots: Dangerous Illusions

Today, ChatGPT and the other so-called Large Language Models (LLMs) are the most compelling (and dangerous) illusions yet. LLM chatbots create an illusion of communication. But communication is the transfer of ideas from one mind to another. LLMs cannot communicate because they have no idea what the words in their inputs or outputs mean. They have no ideas at all, nor minds to contain them.

How LLMs Work

Large Language Models sequence words in accordance with statistical probabilities calculated by processing vast numbers of documents written by humans for humans. What LLMs give us is not truth, but something that generally looks like the truth even when it’s false, as it often is.
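To see how far pure word statistics can go, here is a minimal sketch of the principle in Python, shrunk down to word pairs (a bigram model). The toy corpus is invented for illustration; real LLMs condition on long contexts with billions of learned weights, but the core move, counting patterns and then sampling from them, is the same.

import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word (wrapping around
# so every word has at least one recorded follower).
follower_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follower_counts[prev][nxt] += 1

def next_word(prev):
    # Sample a next word in proportion to how often it followed `prev`.
    words, counts = zip(*follower_counts[prev].items())
    return random.choices(words, weights=counts)[0]

word, text = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    text.append(word)
print(" ".join(text))  # plausible word order; no meaning anywhere in the process

Nothing in this loop represents cats, mats, or sitting; it only tracks which word shapes tend to follow which. Scaling the same idea up does not change its nature.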

1) IF THE ILLUSION IS GOOD ENOUGH, IT'S THE SAME

ChatGPT and the other Large Language Models (LLMs) output text, even entire documents, that can be mistaken for something written by humans. If that output meets your needs, fine, but read it carefully. Text from generative AI is not what it appears to be. It is not a communication of thoughts and ideas from another mind. There is no other mind; the algorithm has collected statistics on the order of words from the documents in its training datasets and strings new patterns of words together on the basis of those probabilities.

The people who wrote the documents in the training set were communicating ideas to other people, but the bots have no idea what those words mean nor of what the word patterns they generate might mean to you. They have no ideas, period.

A rhinestone is an attractive illusion of a diamond, but you can’t use one as a drill bit.

2) LLMs MAKE STUFF UP, BUT THAT CAN BE FIXED

It’s anthropomorphic to say LLMs “make stuff up”; that implies they have a mind to make stuff up with. They do not.

They have algorithms that can reliably generate text that “sounds right” but no capability to check accuracy. In fact, the so-called hallucinations are inherent in the technology. When processing a prompt, the algorithm must generate more text that flows statistically from what it has already generated. So, once it happens to generate an inaccuracy, that inaccuracy becomes part of its “baseline,” and from there it will generate more and more falsehoods.

That is consistent with what people see when they ask chatbots to generate longer texts. The first paragraph is pretty good, mimicking what people in the training set said about the topic; by the second paragraph it starts to confuse the topic with statistically similar material; and by the third it is generating complete fabrications.
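A back-of-the-envelope calculation makes the drift concrete. Suppose, generously and purely for illustration, that each successive passage a model generates is accurate with probability 0.9 provided everything before it was accurate. Even under that optimistic, simplified assumption (real errors also corrupt the context for every step that follows), the chance that a whole text stays accurate decays quickly with length:

# Invented per-step accuracy, for illustration only.
def prob_all_accurate(p_step, n_steps):
    # Chance that every one of n_steps generated passages is accurate,
    # assuming (optimistically) independent per-step accuracy.
    return p_step ** n_steps

for n in (1, 3, 5, 10):
    print(n, round(prob_all_accurate(0.9, n), 3))
# 1 0.9, 3 0.729, 5 0.59, 10 0.349 -- longer outputs, less reliable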

“This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.”

Emily Bender

Director of the University of Washington’s Computational Linguistics Laboratory.

3) LLMs ALREADY HAVE HUMAN-LEVEL PERFORMANCE

You hear this all the time, as a direct statement or implied, as when a chatbot gets high scores on multiple-choice tests like the Law Boards. Doing well on standardized tests is no different than when the “Watson” algorithm beat Ken Jennings at Jeopardy! in 2011 by matching answer patterns with question patterns, without having any comprehension of what the words meant.

There is a fundamental fallacy behind all these “human-level” claims, and behind statements that chatbots have one cognitive skill or another. For example: that a chatbot has more knowledge than a person because no person could ever read all the books in its training dataset. The assertion is absurd. Chatbots process text and extract statistical patterns; they do not read words as humans do, decoding syntax and semantics to assemble pre-existing concepts into new configurations, that is, learning cognitively.

The fundamental fallacy is to assume that similar results have similar causes. A human produces the correct answer because the human understands the question and possesses the knowledge that answers it; so, the reasoning goes, if a chatbot comes out with the same answer, it must understand and know as well.

The assumption is patently false and representative of the kind of magical thinking that permeates AI today. In fact, it is explicitly magical thinking: it is the magical “Law of Similarity” that supposedly makes Voodoo dolls effective.

4) LLMs ARE BRINGING US CLOSER TO AGI

The term AI originally referred to endowing machines with humans’ unique ability to solve problems and change the world around them through the application of knowledge and reason. Over the years many techniques were developed. None came near the goal, but some found narrower applications. The term “narrow AI” was invented to refer to these, and “Artificial General Intelligence” (AGI) was coined to refer to the original sense.

Today the term AGI has become a topic of public interest and concern as never before. OpenAI released ChatGPT to the public last year and ignited a firestorm of controversy that continues to grow every day. The company was founded and lavishly financed with the explicit charter to create AGI. What they have given us is generative AI chatbots.

These chatbots generate text that sounds remarkably like human language and have some utility for document preparation. But the technology, as implemented by OpenAI and others, is generating vast volumes of text that are polluting the Internet with falsehoods and toxic statements while running roughshod over people’s privacy and intellectual property.

Hardly the AGI we were promised. If this is a great advance and not yet another attempt that will end up in the narrow AI category, what is the roadmap forward? Sam Altman, OpenAI’s CEO, who has become the human face of generative AI, has admitted that “the age of giant AI models is already over” and that they must improve in other ways. What other ways?

Meanwhile, Altman has become embroiled in another controversy, one that resulted in his recent short-lived dismissal from OpenAI. Amazingly, it is not about whether generative AI is leading to AGI but whether it is leading there too fast. On one side, the “Effective Accelerationists,” including Altman, say “full speed ahead,” while the other side, the “Effective Altruists” (dubbed doomers by the accelerationists), fear that we are rushing toward creating a super AI that will pose an existential threat to humanity.

But as journalist Cory Doctorow recently put it:

“This ‘AI debate’ is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we’ll get a locomotive.”

Machine learning pioneer Yann LeCun commented in the same vein several years ago:

“Trying to build intelligent machines by scaling up language models is like [building] a high-altitude airplane to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach…. On the highway towards Human-Level AI, Large Language Model is an off-ramp.”

5) LLMs KNOW THINGS THAT NO ONE HAS TOLD THEM

Again, the statement is anthropomorphic at best. Knowledge is an internal model of the world. LLMs don’t have one, so they don’t know things. This myth proceeds from the belief that the output text of chatbots can contain information that wasn’t in the training set. Why not? They are generating word sequences based on statistics without any understanding. Much of it will match what is in the dataset; some will not. Of the latter, some will turn out to be false, the so-called hallucinations, and some correct.

When humans know things no one told them, it is because they used their intelligence to draw original conclusions from what they already knew. Here again, the “similar results imply similar causes” fallacy is at work.

6) LLMs ARE SELF-EVOLVING

This is a common plot device from science fiction. It typically goes like this: hapless humans attach one more unit of memory or additional processors to a large computer system, and it spontaneously “wakes up” and rapidly reprograms itself to become a super-intelligent entity.

Except it’s not science; it’s fantasy. Just because artificial neural networks are crudely modelled on biological neurons does not mean they are alive. In any case, organisms evolve because the mechanisms are built into them, and evolution proceeds one small step at a time over millions of years; organisms do not transform into something entirely different overnight.

No doubt people hyping the rapid advent of AGI, without having a clue about how to bring it about, find this fantasy very appealing.
