AI continues to be hugely popular in the media, and while most articles still treat Big Data and Machine Learning as if they were the only game in town, we are starting to see more that recognize these narrow approaches have no clear path to real AI or AGI. Recently, Forbes published a piece by Rob Toews, "To Understand the Future of AI, Study Its Past," that focuses on the big picture of AI.

The article divides AI into two opposing philosophies: connectionism and symbolism. From a historical perspective this is reasonable. Connectionism is what Toews means by today's AI; symbolism is what is sometimes called "Good Old-Fashioned AI." He provides a good description of it:

"Symbolic AI reached its mainstream zenith in the early 1980s with the proliferation of what were called 'expert systems': computer programs that, using extensive 'if-then' logic, sought to codify the knowledge and decision-making of human experts in particular domains. These systems generated tremendous expectations and hype: startups like Teknowledge and Intellicorp raised millions, and Fortune 500 companies invested billions in attempts to commercialize the technology."

I know. I was there. My first company, Talarian Corp, applied real-time expert systems to analyze spacecraft telemetry.

Toews goes on to say, "Expert systems failed spectacularly to deliver on these expectations, due to the shortcomings noted above: their brittleness, inflexibility and inability to learn." That is an interesting way of looking at it. Usually the failure of expert systems is described as an inability to scale. That is, [...]
In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of Artificial Neural Networks (ANNs) as they try to solve human problems.[i] She points out that the best ANNs we have today are perhaps on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms aren't intelligent.

This goes back to what may be called "Aspirational AI," a phenomenon common in the AI community for at least 40 years. It happens like this: someone has a theory about what intelligence is and goes off and writes some code to implement that theory. Even if the code does not work, or does nothing that looks like intelligence, it is still considered "AI" because that is what its creators aspired to build.

Calling ANNs AI is like being invited into a hangar to look at a new aircraft design, only to find nothing but landing gear. You say, "I thought you said there was an airplane," and are told, "Yes, there it is; it is just not a very good airplane yet."

We saw another example of this "Aspirational AI" in a recent article in Analytics India Magazine[ii] that listed New Sapience among 10 companies in the Artificial General Intelligence space. They all say they are working on the AGI problem, but we are the only one with anything to show for our efforts: a working prototype that comprehends language in the same sense [...]
Lies and Statistics

Today, AI techniques such as Machine Learning and Deep Learning are being used in more and more critical decision-making processes that affect people's lives, but there is growing alarm that these techniques are innately prone to reflect human bias. These concerns are valid. As a statistical process, ML can only learn patterns that already exist in the data sets it is trained on, and these are by and large vast collections of documents written by and for humans; as such, they reflect all the attitudes people hold about one another. ML does not understand the meaning of the words in the data set and so cannot apply context.

But whether the bias lies in the data set or with the algorithm designer, the bottom line is that making decisions based solely or largely on statistics is inherently problematic. The famous quote popularized by Mark Twain, who attributed it to Benjamin Disraeli, comes to mind: "There are three kinds of lies: lies, damned lies, and statistics."

Our technology, Machine Knowledge, does understand what people are saying and applies context naturally and automatically, making critical distinctions between subjective experience and emotion on the one hand and objective facts on the other. New Sapience founder Bryant Cruse and author Lynn Woodland have a very interesting conversation about the unique resistance of sapiens to reflecting human prejudices and biases.