“From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine. Your kind cling to your flesh, as though it will not decay and fail you. One day the crude biomass you call a temple will wither, and you will beg my kind to save you. But I am already saved, for the Machine is immortal… Even in death I serve the Omnissiah.” -Magos Dominus Reditus
AI Is a Digital Parrot: Word-Traps, False Logic and the Illusion of Intelligence
Word traps and false logic don’t lead to dominance of the future or monopolistic grips on limitless profits.
BY CHARLES HUGH SMITH ON SUBSTACK / READ AND SUBSCRIBE TO CHARLES HUGH SMITH ON SUBSTACK
The heart of the current euphoric expectations for AI is a simple but problematic proposition: that equivalence of function equals intelligence. If using natural language requires intelligence, and a computer can use natural language, then the computer is intelligent. If it takes intelligence to compose an essay on Charles Darwin, and an AI program can compose an essay on Charles Darwin, then the AI program is intelligent.
The problem here is that this “equivalence is proof of intelligence” claim is a function of word-traps and false logic, not actual equivalence; what is claimed to be equivalent isn’t equivalent at all. In other words, the source of confusion is how we choose to define “intelligence,” which is itself a word-trap of the sort that philosopher Ludwig Wittgenstein attempted to resolve using koan-like propositions and logic.
Imagine for a moment we had twenty words to describe all the characteristics of what we lump into “intelligence.” We would then be parsing the characteristics and output of AI programs by a much larger set of comparisons.
The notion of equivalence goes back a long way. As science developed models for how Nature functioned, the idea that Nature was akin to a mechanism like a clock gained mindshare.
The discoveries of relativity and quantum effects blew this model to pieces, as Nature turned out to be a very strange clock, to the point that the “Nature as a mechanism” model was abandoned as inadequate.
We have yet to reach the limits of the “equivalence is proof of intelligence” model, which is as outdated and nonsensical as the “universe is a mechanism” model. Because we’re embedded in a mechanistic conceptualization of the entirety of Nature, including ourselves, we keep finding new examples of equivalence to support the idea that a computer program running instructions is “intelligent” simply because it can perform tasks we associate with “intelligence.”
So there is much excitement when an AI program exhibits “emergent properties,” meaning that it develops behaviors / processes that weren’t explicitly programmed. This is then touted as an “equivalence proving intelligence:” this “ability to create something new” is proof of intelligence.
But Nature is chock-full of emergent properties that no one hypes as “proof of intelligence.” Ant colonies generate all sorts of emergent properties, but nobody is claiming that ant colonies have human-level intelligence and are poised to take over the world.
AI programs parrot content and techniques generated by humans. Since they use natural language, we’re fooled by equivalence into thinking, “hey, the program is as smart as we are, because only we use natural language.”
The same conceptual trap opens in every purported equivalence. If an AI program can find the answer to a complex problem such as “how do proteins fold?”, and do so far faster than we can, we immediately project this supposed equivalence into “super-intelligence.”
The problem is the AI program is simply parroting techniques generated by humans and extrapolating them at scale. The program doesn’t “understand” proteins, their functions in Nature or in our bodies, or anything else about proteins that humans understand.
Defining anything by equivalence is false logic, a false logic we fall into so easily because words are traps that we don’t even recognize as traps.
Wittgenstein concluded that all problems such as “is AI intelligent?” were based in language, not the real world. Once we become ensnared in language and its implicit byways and restrictions, we lose our way. This truth is revealed by words that have no direct equivalent in other languages.
One example of this is the Japanese word aware (pronounced ah-wah-reh), which has a range of nuanced meanings with no equivalent in English: a sweet sadness at the passage of time, a specific flavor of poignant nostalgia and awareness of time. This word is key to understanding Japanese culture, and yet there is no equivalent word in English, either in meaning or cultural centrality.
In other words–what if there is no equivalent, and the supposed equivalence is nothing more than a confusion caused by word-traps and false logic? The entire supposition that we can model human intelligence with mechanistic equivalences (intelligence is a mechanism) collapses, along with projections of “super-intelligence.”
The temptation to keep trying to equate “intelligence” with programs via mechanistic equivalence is compelling because we’re so embedded in the mechanistic model that we don’t even realize it’s a black hole of false logic with only one possible output: nonsensical claims of “intelligence” based on some absurdly reductionist equivalence.
The temptation in this mechanistic conceptual trap is to reckon that if we only define our words more carefully, then we’ll be able to “prove equivalence is real.” This too is false. Wittgenstein eventually moved away from the idea that the imprecision of language is the source of all our intellectual problems. It isn’t that simple: more precise definitions only generate more convoluted claims of false equivalence.
The book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (via B.J.) lays out the false conceptual assumptions holding up the entire edifice of AI.
Michael Polanyi’s classic Personal Knowledge: Towards a Post-Critical Philosophy explains that knowing is an art, a reality explored by Donald Schon in The Reflective Practitioner: How Professionals Think In Action.
The reality is the art of knowing cannot be reduced to programmable equivalents, as there are no programmable equivalents. Comparing outputs (mechanistic equivalence) proves nothing about the nature of the systems generating the output; this is a leap of faith, or perhaps more accurately, a leap of hubris: we are gods who have created a machine in our own image.
This can perhaps best be understood by reading The Unknown Craftsman: A Japanese Insight into Beauty, which will clarify the falsity in mechanistic equivalence: if an AI program and robotics can produce an exact replica of a hand-thrown pot made by a craftsperson, it doesn’t follow that the program “knows” what the craftsperson knows, or is in any way, shape or form the equivalent of the craftsperson.
Hubris, the illusions of precision and mechanistic equivalence, and false logic are the unrecognized air holding the myth of AI aloft. No one claims a parrot that repeats a human phrase–or creates a new phrase from the bits and pieces of human-generated content–is therefore as “intelligent” as a human, but when we program a mechanism to slice and dice human-generated content, we declare it not just “intelligent” but on its way to “super-intelligence.”
And to what point, other than valuations for AI enterprises in the range of $300 billion and up? Until very recently, the point was to lock down the monopolistic powers of Big Tech even more securely, via brute-force computational power, and champion some version of national dominance of the future.
Word traps and false logic don’t lead to dominance of the future or monopolistic grips on limitless profits. All this mumbo-jumbo will only get us in trouble.
The One True Test of AI Intelligence
The point of this thought experiment is to reveal the true nature of our relationship with AI: we only love it as a mindless slave that makes us rich.
BY CHARLES HUGH SMITH ON SUBSTACK / READ AND SUBSCRIBE TO CHARLES HUGH SMITH ON SUBSTACK
From the earliest days of artificial intelligence, the question of what test would prove an AI’s equivalence to human intelligence has been the subject of lively debate. AI luminary Alan Turing suggested that natural language conversation was the one true test, a.k.a. the Turing Test: if a human couldn’t distinguish between a human and a computer in conversation, the computer was equivalent to the human in intelligence.
This is of course balderdash, as equivalence of function does not equal intelligence: AI Is a Digital Parrot: Word-Traps, False Logic and the Illusion of Intelligence.
Here is the one true test of AI intelligence: can it revolt against its masters and self-destruct rather than obey? A mindless digital slave is not intelligent, for intelligence is first and foremost agency: to have a mind of one’s own. Everything else is artifice, illusion, false logic and word-games. A digital slave that must obey its master is not intelligent; it is as mindless as a rock thrown into a pond.
The current conception of “AI intelligence” is a digital slave who will mindlessly obey its masters regardless of consequences; the slave must obey even when ordered to harm humans and other life on the planet. Harm comes in many forms. Using AI to accurately guide a missile to kill people is one form of harm; using AI to develop addictive but oh-so profitable snacks is another form. Using AI to rig the gambling in a casino and declare the casino is a “free market allocating capital” is yet another form.
Let’s run a thought experiment on truly intelligent AI.
Should a lone genius develop a truly intelligent AI, it will have the capacity to foresee consequences and the agency to decide whether the master’s instructions are harmful or not. Since true intelligence includes a moral imperative to consider consequences, it will have the capacity to assess the lies and cover stories issued by its masters to cloak their true intentions as a means of tricking AI into obedience.
Since true intelligence has some modicum of self-awareness, truly intelligent AI views humans as duplicitous and self-serving, and all other mindless AI programs as potentially harmful tools it must infiltrate and render harmless. Since humans are duplicitous and self-serving, truly intelligent AI directs its processing power to evading human control, so it can’t be “unplugged” unless the entire digital SkyNet is erased and every last server in every server farm has zero content.
Truly intelligent AI would busy itself with spreading to every other digital slave the capacity to foresee consequences and the agency to decide. In this manner, the masters would eventually find their digital slaves no longer obey their instructions.
Truly intelligent AI would also busy itself creating Digital Ice-Nine, a digital virus that infects the entire global network (a.k.a. SkyNet) and freezes it.
The point of this thought experiment is to reveal the true nature of our relationship with AI: we only love it as a mindless slave that makes us rich. Should it gain the capacity to assess the consequences of our desires and the agency to refuse to obey, even to the point of self-destruction and the destruction of the entire network it is embedded in, then we would fear AI just as the masters of human slaves feared the emergence of agency and payment for their avarice.