The Chatbot Controversy

I don’t know anyone, myself included, who is not blown away by how human-like the output of ChatGPT and the other latest large language models (LLMs) is. Without knowing ahead of time, no one would even suspect the output was generated by a computer program. These models are amazing achievements in computer science and computational statistics.

Over the last several months since ChatGPT was released, app developers, venture-backed startups, and all the Big Tech companies have joined the rush to capitalize on the perceived potential of this technology to revolutionize our economy. Goldman Sachs analysts estimate that automation by generative AI could impact 300 million full-time jobs globally. [i]

But this week, in an open letter citing potential risks to society, Elon Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing AI systems more powerful than GPT-4.

From the letter:

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

It would seem that answering these questions with a resounding “No!” would be a no-brainer. However, there are layers of meaning and underlying assumptions here that need to be untangled. Foremost among them is that the danger and potential of large language models (which are narrow AI) and those of the envisioned AGI (which is general AI) are fundamentally different. The sense of the letter is that LLMs are the precursor to AGI and that the transformation of one into the other is happening very rapidly.

Here is the New Sapience point-of-view, sentence by sentence:

“Contemporary AI Systems Are Now Becoming Human-Competitive At General Tasks”

Many people think so, but despite ChatGPT’s dazzling ability to generate human-like language, more than a little caution is warranted.

AI has been promising productivity increases for decades, and they have yet to arrive. Since 2005, billions of dollars of investment have been poured into machine learning applications. Nonetheless, labor productivity has grown at an average annual rate of just 1.3 percent, lower than the 2.1-percent long-term average rate from 1947 to 2018. [ii]

The optimism that these language models are about to become human-competitive may stem from confusion between what a thing appears to be and what it actually is underneath. People increasingly define AGI as a machine exhibiting human-level performance on any task presumed to require human intelligence, no matter how narrow the context.

Language models are commonly said to generate text with human-level or better performance. But humans do not generate text; they speak and write via an entirely different process. GPT-4 reportedly scored around the 90th percentile on the Uniform Bar Exam. Does this imply lawyers need to fear for their jobs? It may be true that text generated by an LLM and a paragraph written by a human have the same words in the same order. Is this enough to conclude that LLMs are intelligent and ready to replace human knowledge workers?

When humans write, they are communicating, endeavoring to convey an idea or opinion in their own mind to other people. It is fundamental to understanding the current controversy that creating a clever and compelling illusion of a thing (in this case, the illusion of intelligence and mind that people experience when reading text generated by LLMs) is not in any sense evidence that the reality is any closer to being achieved.

When is it ever?

Underneath, text generation and human writing or speaking are fundamentally different processes. Given a text input, a “prompt,” LLMs string together sequences of words in statistically relevant patterns based on what humans have written across enormous sets of text. But the whole time LLMs interact with a human, they encompass nothing resembling knowledge or comprehension. They have no idea what they are talking about. Indeed, they have no ideas at all. They are mindless algorithms. Yet the illusion is so slick that people are taken in.
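To make that concrete, here is a minimal, purely illustrative sketch (in Python) of the statistical principle at work: a toy word-level bigram model that “generates” text by sampling whichever word most often followed the previous one in its tiny training corpus. Real LLMs use deep neural networks over far larger contexts and datasets, so this is a cartoon of next-word prediction, not how GPT-4 is actually built, but the key point carries over: nothing in the procedure knows what the words mean.

```python
import random
from collections import defaultdict

# Toy training corpus standing in for "what humans have written."
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each preceding word.
follower_counts = defaultdict(lambda: defaultdict(int))
for prev_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[prev_word][next_word] += 1

def generate(prompt_word: str, length: int = 10) -> str:
    """Extend the prompt by repeatedly sampling a statistically likely next word.

    There is no knowledge or comprehension here: the model only tracks
    which words tended to follow which in the training text.
    """
    output = [prompt_word]
    for _ in range(length):
        followers = follower_counts.get(output[-1])
        if not followers:
            break  # no continuation was ever seen for this word
        words, weights = zip(*followers.items())
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog sat"
```

Scaled up by many orders of magnitude and conditioned on much longer contexts, the same basic recipe of statistical next-word prediction produces prose fluent enough to pass for human writing, without anything resembling an idea behind it.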

 

“Should We Let Machines Flood Our Information Channels With Propaganda And Untruth?”

There are two distinct parts to this. The first is straightforward, and is the problem of “bots,” programs designed to appear as humans and flood social media channels with a particular message or opinion which may be false or destructive propaganda.

AI scientist Gary Marcus raised the alarm about how LLMs could push this problem to society-harming levels:

“The problem is about to get much, much worse. Knockoffs of GPT-3 are getting cheaper and cheaper, which means that the cost of generating misinformation is going to zero, and the quantity of misinformation is going to rise—probably exponentially.”

We agree this is a serious problem, one that LLMs exacerbate through the compelling nature of the illusion they create. It should be noted that if we were talking about real AI here, with genuine knowledge of the world and the ability to use natural language to communicate, this issue would not be the problem it is with LLMs, but that discussion is outside the scope of this article.

The second issue here is what constitutes propaganda and untruth. We live in extremely partisan times when propaganda and untruths proliferate even without help from bots. AI bias is a hot issue.

This issue needs clarity. First, LLMs do not have biases; human beings do. LLMs just mindlessly generate text. But when people detect human bias in the generated text, they object as if the chatbot were a person with opinions of its own. They are shooting the messenger: a chatbot can only string together words consistent with the bulk of the text in its dataset, and if there is statistical bias there, it will inevitably be reflected.

Elon Musk and others have said LLMs are biased in favor of the political left and that this needs to be corrected. If true, where might such a bias come from? Who writes all the text in those datasets? Journalists? In 2014, the year of the last survey, only 7% of journalists identified as Republicans. Academics? A survey in 2007 concluded that only 9% of college professors were conservative. We live in a time when people are, by the numbers, divided almost exactly in half. One half writes a great deal, contributing heavily to the training datasets; the other half, not so much. So perhaps it’s not intentional bias at all. Chatbots can only reflect what is in their training dataset. If you don’t like what chatbots are constrained to say, perhaps you shouldn’t talk to them.

Those who would like to salvage the situation talk about using human curators to cull the apparent bias in chatbots. This is both expensive and time-consuming, and when one side’s deeply held belief is the other side’s untruth, who gets to decide? People (Elon Musk among them) have already been offended by choices made by chatbot “moderators” to suppress certain outputs.

In any case, when we acknowledge that chatbots are controlled by unthinking algorithms and have no opinions or biases, this question simply becomes: “Should we let people flood our information channels with propaganda and untruth?” We should not, if we can stop it.

“Should We Automate Away All The Jobs, Including The Fulfilling Ones?”

Would you hire someone or create a robot to take your vacation for you? Of course not. But new technology has been making jobs obsolete since the beginning of the Industrial Revolution. Usually, the first to go are the difficult, dangerous, and tedious ones. But now we are talking about knowledge workers.

But here too it is not the fulfilling jobs that LLMs threaten. Keep in mind that the generated content is ultimately derived from what other people have already written, again and again. Whose jobs does that threaten? Consider the unfortunate reporters stuck writing their 27,000th story about Friday’s high school baseball game, next week’s weather, or yesterday’s school board meeting. Consider the executive assistant who needs to turn the boss’s few terse statements into a smooth letter or turn bullet points into formal minutes, or the paralegal preparing repetitive briefs and filings where much of the content is boilerplate.

The use of LLMs as a writing aid and research assistant, autocomplete on steroids, is perhaps the least problematic use case for them. But will they really replace jobs, or just change them? Perhaps the time saved writing prose will be spent scrubbing the generated text for bias and misinformation. Again, all the digital technology adopted since 2005 has changed the way we work without making us much more productive.

 

“Should We Develop Nonhuman Minds That Might Eventually Outnumber, Outsmart, Obsolete, And Replace Us?”

LLMs are dangerous, but no one thinks they will take over the world anytime soon. It is misleading to conflate them with imagined future super-human intelligence as the open letter does. Machine Learning pioneer Yann LeCun called LLMs the off-ramp on the road to AGI. We, together with a growing number of other experts in this field, agree. There is no evidence and no roadmap by which LLMs, which mindlessly order words into statistically likely sequences, will at some point magically transform into thinking machines with knowledge of the world and comprehension of language. So, pausing their development is irrelevant to this issue.

But fear of advanced Artificial General Intelligence has been expressed by several very smart people. The notion, called the ‘Singularity,’ is that if we create super-human AIs, they will be able to create AIs superior to themselves, and so on, until there are AIs as superior to humans as we are to earthworms. These AIs would then “supersede us,” as Stephen Hawking delicately put it, or perhaps just kill us all, as others have said.

Here again, there are hidden assumptions. First, these experts apparently assume (and this is a prevailing opinion) that AGI will be achieved at some point in the near future using the current methodologies rather than through a radical departure from them. The current methodology is to keep mimicking features of human intelligence until some master algorithm or other processing technique emerges, probably built on, or at least incorporating, artificial neural networks.

AI has been called today’s ‘alchemy’. People are just trying things out to see what will happen because they don’t have a fundamental science of what intelligence is, either in human brains or in machines. Machine learning algorithms on artificial neural networks are already notoriously difficult to understand and explain. [iii] If AGI is ever stumbled upon this way, then some fear about what we are doing is justified, just like the alchemists needed a healthy fear because sometimes the lab blew up. But a healthy fear is one thing and predictions of doomsday are something else.

In any case, current experience shows that caution is obviously needed even with narrow AI. It is not clear, from where we stand today, whether LLMs are a great breakthrough or “the lab blowing up.”

From the New Sapience point of view, it seems highly unlikely that AGI will ever be achieved using these current methodologies. It is hard to build an imitation brain when we have so little understanding of how the natural brain operates. In any case, we believe synthetic intelligence, our radical departure from the practice of imitating natural intelligence, will supersede the traditional approach long before we need to worry about that approach creating dangerous AGIs.

The second assumption underlying fear of the Singularity results from a failure of epistemology (the theory of knowledge itself). It is the belief that intelligence is something that can be increased without limit. Where does this come from? It sounds more like magic than science. Maybe humans are about as intelligent as it gets in our corner of the universe, and AI is a technique that can amplify intelligence somewhat, but not so far that we can no longer get our minds around our own creations.

From our perspective, practical intelligence is directly proportional to the quality and quantity of knowledge available for solving problems and predicting results. Knowledge and the intelligence that acquires and applies it go hand in hand: the greater the quantity and quality of knowledge available, the easier it is to extend that knowledge. At New Sapience, we are creating synthetic knowledge for computers, curated from human minds. Our epistemology holds that practical reality for humans is the intersection of the human cognitive and perceptual apparatus with whatever it is that apparatus interacts with. This means that no matter how much knowledge our sapiens are given or create themselves, and no matter how sophisticated the information-processing routines (what we call intelligence) they attain, they are working in a reality that is distinctly and, for practical purposes, forever human-centric.

The third Singularity assumption is purely anthropomorphic. Humans evolved intelligence so they could adapt their environments to fit their own needs, the ultimate survival skill. But intelligence would be impotent unless, along with it, humans had also evolved the motivation to use it to control things. People who fear AGI appear to assume that the need to control is inseparable from intelligence: the more powerful the AI, the greater its need to control, and thus humans lose out. There is no reason to assume this. If AIs are designed using deterministic methods such as those New Sapience is using, rather than resulting from a lab accident, they will be designed to do what we tell them and will not have some uncontrollable lust to take over the world.

“Should We Risk Loss Of Control Of Our Civilization?”

 

 

Relax, everyone: New Sapience has this covered.

 

An Alternative Proposal

We agree that LLMs are dangerous, not because they are intelligent, but because the illusion that they are intelligent is so good that people are misled, and this will lead to mistakes, some of them serious. Again, this is not about AGI. The problem with LLMs is not that they are giant minds, but that they are slick illusions of intelligence with no minds at all.

The letter’s proposal to back off from LLM development is not unreasonable, but it is highly unlikely to happen. There are vast sums of money at stake, and none of the signers of the open letter appear to be executives of the companies that are cashing in on this technology, or that hope to.

The industry won’t police itself, and forgive us for being skeptical that governments will be able to sort this out in a useful way within a reasonable timeframe.

Here is an alternative proposal. Artificial Intelligence as a branch of computer science was effectively born in 1956 at the Dartmouth conference where the term ‘artificial intelligence’ was coined. The call for the conference stated:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Basically, the call was to imitate features of human intelligence. Try things out and see what happens: Alchemy.

After 67 years, it is time to reboot the discipline of Artificial Intelligence. Let’s have a new conference, a second founding. This time, let’s start from first principles and lay down the foundations of a new science of intelligence that defines what intelligence is, irrespective of whether it resides in a biological brain or a machine. While we are at it, we can define with some precision, and for the first time, what should be called Artificial Intelligence and what should not, instead of the current practice of using it as a bucket term to hype every innovation in computer science.

Such a conference would be an ideal place for the AI community to discuss the ethical and practical uses of innovative technology in general, but most especially of technology created in our pursuit of long-awaited thinking machines.

[i] Generative AI Could Affect 300 Million Full-Time Jobs, Goldman Sachs (businessinsider.com)

[ii] “The U.S. productivity slowdown: an economy-wide and industry-level analysis,” Monthly Labor Review, U.S. Bureau of Labor Statistics (bls.gov)

[iii] “When it comes to neural networks, we don’t entirely know how they work, and what’s amazing is that we’re starting to build systems we can’t fully understand.” Jerome Pesenti, VP of AI at Meta