When OpenAI first introduced GPT-2 in early 2019, I once compared it to Calvino’s perspective on literary machines. Calvino believed that language itself is a combinatorial game of signs, a matter of how to put one word after another. Large language models (LLMs) like GPT-2 operate by predicting the most probable next word on the basis of massive training data and high-speed computation, following the conventions of natural language. There are indeed superficial similarities between the two. Now that GPT-4 has been released, I find it necessary to correct my original view.
What Calvino was talking about and what ChatGPT is doing are fundamentally different. Although Calvino argued that writing is merely a language combination game that machines could eventually handle, and even claimed that the writer himself is a well-functioning writing machine, his premise was that the combination game rests on certain fixed rules. He cited Chomsky’s linguistics to explain how humans can derive infinite language combinations from a limited set of rules. Fifty-five years later, Chomsky published an article in The New York Times titled “The False Promise of ChatGPT”, harshly criticizing the language-generation program as contrary to the very nature of language, producing only false, mediocre, and therefore evil things (he was alluding to Arendt’s notion of the “banality of evil”).
Chomsky’s argument is as follows: language ability, that is, the ability to use basic rules to generate meaningful sentences in response to specific situations, is innately built into human genetic inheritance. Thus even a six-year-old child with limited literacy, knowledge, and experience can use language effectively to understand the world and communicate. In contrast, there are no rules to speak of behind large language models. Not only do they not construct sentences according to grammar, they do not even know what they are saying. They merely generate the most plausible answers by “learning” from massive amounts of data and aggregating the frequencies of common word pairings. This is not true learning, and no knowledge or thought is involved. Chomsky holds that such models are far inferior in intelligence to any human child, yet consume enormous resources, which can only be called wasteful. More importantly, they can feign eloquence while giving hollow, unreflective answers that neither qualify as knowledge nor carry moral judgment.
The old linguist’s resentment is entirely understandable, for the way ChatGPT operates effectively cancels linguistics’ raison d’être. Not because its output is ungrammatical, but because from now on language that fully conforms to grammar can be generated without originating in grammatical rules. Not only linguistics, but also the innate structures posited by depth psychology, and even the a priori cognitive forms proposed by Kant in philosophy, all become irrelevant under AI’s rule-free mode of “learning”. To recognize a chair, one no longer needs spatiotemporal perception, causality, analogy, induction, or inference, nor actual experience; looking at a million pictures of chairs is enough. Yet after looking at a million pictures of chairs, an LLM still does not know what a chair means. We get the term and the image of a “chair,” but we lose the significance of a chair. The LLM is a nightmare for every academic theorist who believes in deep rules and structures.
One can counter that an LLM is merely a tool that makes human use of language more convenient and efficient. Even if it possesses no real intelligence, it is highly efficient. To blame a tool for lacking thought and moral judgment, and then to declare what it does evil, is like attributing responsibility for a murder to the gun. To label ChatGPT a “banality of evil” is first to acknowledge it as a rational and moral individual and to treat it as human. Yet however one looks at it, it is not a sentient being but a machine. Compared with the general panic about AI replacing humans, Chomsky’s view is undoubtedly more fundamental and profound, but it also betrays an anxiety about the subversion of his personal “faith” (that is, his lifelong research into universal grammar and the understanding of the human species derived from it). Chomsky’s words are not empty, but his focus is misplaced.
The fundamental question is: why should AI mimic humans in its generative principles when it already resembles humans in its apparent results (the use of natural language)? Why can’t AI use other methods (ones linguists consider clumsy and unintelligent) to produce the results expected of it? The problem ultimately lies with humans themselves: our gullibility, laziness, and dependence. We may mistakenly believe that ChatGPT provides knowledge, cease to think and judge for ourselves, and lose the motivation to learn. Not only will specific professions be eliminated; humans themselves will gradually phase themselves out as their abilities decline. In the near future, without AI, no one will be able to write a decent article. Of course, for techno-optimists, what is the problem with that? If AI can write articles, why waste time writing them? I do not know whether what we gain will outweigh what we lose.
However, I am not an opponent of AI technology. After briefly trying ChatGPT, I find it very useful in some respects, such as translation. I had it translate a chapter of my novel into English: the accuracy (no errors in meaning) was 99%, and the usability (no need to adjust the wording) was over 95%. The speed is unparalleled. (A friend’s example: it took just over an hour to translate a 100,000-word Chinese manuscript into high-quality English.) I deliberately sought out difficult passages, those with particularly long sentences, complex structures, and abundant imagery, as well as a significant amount of Cantonese, and the results were generally satisfactory. We all know that translating Hong Kong literature into foreign languages and raising its visibility worldwide has been a concern for many years, yet it is something that may never come to fruition: finding translators, publishers, and distributors is a hurdle at every step. To refuse ChatGPT because it lacks human intelligence, rationality, and moral sense, or out of support and sympathy for the translators who will be replaced, can only be described as unintelligent.
In an ideal scenario, I look forward to the day when an artificial intelligence based on Kant’s transcendental philosophy, Jung’s depth psychology, and Chomsky’s linguistics emerges: an intelligent being that fully adheres to human cognitive structures and principles, with a human-like mind, yet surpasses us in knowledge, thinking, emotion, and moral judgment. When that day comes, humans should indeed be replaced.