Notes on consciousness. (XI) Why large language models are not conscious

The current discussions about AI are plagued with anthropomorphism. Of course, the name “artificial intelligence” probably has something to do with it. A conscious artificial intelligence might sound like science fiction, but a sentient statistical model certainly sounds a bit more bizarre. In this post, I want to address the following question: are large language models (LLMs), such as ChatGPT, or more generally deep learning models, conscious? (Perhaps just a little bit?)

The first thing to realize is that the question of what it takes for something to be conscious is a very old one, which has been the subject of considerable work in the past. The opinions and arguments we read today are not original. Very often, they unknowingly repeat common arguments that have been discussed at length in the past. It might seem to the general public, or to AI engineers, that the spectacular successes of modern AI programs should shed new light on these old questions, but that is not really the case. These spectacular programs already existed 50 years ago, just as thought experiments, which for logical arguments makes no difference at all. For example, in 1980, Searle articulated an argument against the idea that a program that seems to behave like a human would necessarily have a mind. The argument was based on a thought experiment in which an average English-speaking person applies the operations of a sophisticated program that takes a question written in Chinese as input and outputs an answer in Chinese. Searle argues that while that person is able to have a conversation in Chinese by mechanically applying rules, she does not understand Chinese. Therefore, he concludes, understanding cannot be identified with the production of acceptable answers. Now such programs exist, and their existence changes nothing about either the argument or the objections that were raised at the time. All these discussions were precisely based on the premise that such systems would exist.

This is an important preamble: a lot has been said and written about consciousness, what it means, what it takes for something to be conscious, and so on. Answers to these questions are not to be found in current AI research but in the philosophy literature, and whether a program produces convincing answers to questions, or can correctly categorize images, bears very little relevance to any of these discussions. A corollary is that AI engineering skills provide no particular authority on the subject of AI consciousness, and indeed, as far as I can tell, many arguments we hear in the AI community on this subject tend to be rather naïve, or at least not original and with known weaknesses.

Another important remark is that, while there is a lot of literature on the subject, there is no consensus whatsoever in the specialized community on these questions. Views on consciousness range from “consciousness does not exist” (eliminativism/illusionism) to “everything is conscious” (panpsychism(s)). In my view, there is no consensus because there is currently no convincing theory of consciousness (some say that there can be no theory of consciousness, which is yet another proposition on which there is no consensus). There are good reasons for this state of affairs, which I have scratched the surface of here and there (and others have too, obviously).

This remark is important because I have seen it become an argument for why a particular artefact might be conscious: we don’t know what consciousness is exactly, or what it takes for something to be conscious, therefore we cannot rule out the possibility that something in particular is conscious.

However, this is a fallacy. We do not know what it takes for an entity to be conscious, but we certainly do know that some entities cannot be conscious, if the meaning we give to this notion is to bear even a vague resemblance to its usual meaning.

Now, I will make a few general points against the idea that LLMs, or more generally formal deep neural networks, are conscious, by discussing the concept of a “conscious state” (i.e., “what it feels like now”).

Once we remove the buzzwords, such as “intelligence”, “learning”, etc., a modern deep learning model is essentially a massively parallel differentiable program. In effect, it is a tensor calculator, whose state is updated by a series of iterations.
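To make this concrete, here is a minimal sketch of my own (in Python/NumPy, with made-up shapes and random weights, standing in for no actual model) of what such a tensor calculator boils down to: matrix products and elementwise nonlinearities applied to a vector of numbers, one layer after another.

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary, made-up weights for two layers; in a real model these are learned.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(4, 8))

def forward(x):
    # Run the calculator: a matrix product and an elementwise nonlinearity per layer.
    h = np.maximum(0, W1 @ x)       # layer 1: linear map followed by a ReLU
    return np.maximum(0, W2 @ h)    # layer 2: the same kind of operation again

x = rng.normal(size=16)   # the input: just a vector of numbers
print(forward(x))         # the output: another vector of numbers

There is nothing more to it than this, repeated at a much larger scale.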

The default assumption in the computational view of mind is that a mental state is something like a program state. But a conscious state is a very particular kind of state. First of all, a conscious state has a subject (the entity that is conscious) and an object (what it is about). Both relate to well-known topics in philosophy of mind, namely the unity of consciousness and the intentionality of consciousness.

When we wonder whether something is conscious, that thing is typically an organism, or a machine, or even, if you adopt panpsychism, a rock or an atom. But we could consider many other combinations of molecules and ask whether they are conscious. How about the entity made of a piece of my finger (still attached to my body) plus two coins in a fountain in Rome? This seems absurd, but why? There is a reason: two objects that do not interact at all have the same properties whether we look at them one by one or together. New properties, over and above those of the individual elements, can only arise if there is some kind of interaction between the elements. So, if neither my finger nor any of the coins is conscious, then the combination is not conscious either. Thus, we can say that a necessary condition for a set of components to constitute a subject is that there is some causal integration between the components. This is actually the basis of one particular theory of consciousness, Integrated Information Theory (which I have criticized here, there and there, essentially because a necessary condition is not a sufficient condition). If a deep learning network is conscious, then which layer is the subject?

I leave this tricky question hanging there to address more critical ones (but in case you want to dig, look up these keywords: autonomy of the living, biological organism, biological agency). One is the question of intentionality, the fact that a conscious state is about something: I am conscious of something in particular. A large language model is a symbol-processing system. The problem is that it is humans who give meaning to the symbols. The program is fed with sequences of bits and outputs sequences of bits. If one asks an LLM “What is the color of a banana?” and the LLM replies “yellow”, does the program understand what a banana is? Clearly, it does not have the visual experience of imagining a banana. It has never seen a banana. It does not experience the smell or touch of a banana. All it does is output some code, say 0100, which we have decided stands for the word “yellow”, when we feed it a particular stream of numbers. But the particular numbers are totally arbitrary: we could assign a different sequence of numbers to each of the words, and we would still interpret the computer’s output in exactly the same way, even though the program would now have different inputs and outputs. So, if the program has a conscious experience, that experience is about nothing in particular: therefore, there is no conscious experience at all.
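To make this arbitrariness concrete, here is a toy illustration of my own (a stand-in lookup “model” with two made-up vocabularies; a real LLM computes its input-output mapping with tensor algebra, but the point is the same): under two different numerical codings of the same words, the program receives and produces different numbers, yet we read off exactly the same answer.

# The integers a language model manipulates are arbitrary labels; only the
# human-chosen decoding gives them meaning.
vocab_a = {"what": 0, "is": 1, "the": 2, "color": 3, "of": 4, "a": 5,
           "banana": 6, "yellow": 7}
vocab_b = {w: 100 - i for w, i in vocab_a.items()}  # a different, equally valid coding

def encode(sentence, vocab):
    return [vocab[w] for w in sentence.split()]

def decode(token, vocab):
    return next(w for w, i in vocab.items() if i == token)

def model(tokens, vocab):
    # A stand-in "model": a pure mapping from an input sequence of numbers to an output number.
    answers = {tuple(encode("what is the color of a banana", vocab)): vocab["yellow"]}
    return answers[tuple(tokens)]

for vocab in (vocab_a, vocab_b):
    out = model(encode("what is the color of a banana", vocab), vocab)
    print(out, "->", decode(out, vocab))  # different numbers, same interpreted answer

Whether the answer comes out as 7 or as 93 makes no difference to us, because the decoding step, which is where the meaning lives, is ours and not the program’s.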

This is known as the “symbol grounding problem”, again a well-known problem (which I have examined in the context of “neural codes”). Many people consider that a necessary ingredient has to do with embodiment, that is, the idea that the machine has to have a body interacting with the world (but again, careful: necessary does not mean sufficient). This seems to be Yann LeCun’s position, for example. Again, it is not at all clear what it takes for something to be conscious, but it is clear that there can be no conscious experience at all unless that experience is somehow about something, and so the symbol grounding problem must be addressed.

These are well-known problems in philosophy of mind. The final point I want to discuss here is more subtle. It relates to the notion of “state”. In computationalism (more generally, functionalism), a conscious state is a sort of program state. But there is a big confusion here. Both terminologies use the word “state”, but the meanings are completely different. A state of mind, an experience, is not at all a “state” in the physical sense of pressure, volume and so on, that is, the configuration of a system. Consider a visual experience. Can it be the case that to see something is to have the brain in a certain state? For example, some neurons in the inferotemporal cortex fire when a picture of Jennifer Aniston is presented: is the visual experience of Jennifer Aniston the same as the active state of those neurons? If the answer is yes, then why should you have the experience, rather than those neurons? If not, then perhaps the neurons trigger the experience, but then we need to assign the experience to the state of some downstream neurons, and we face an infinite regress.

The issue is that an experience is simply not a physical state; to treat it as such is a category error. To see this, consider the following thought experiment, which I have called the “Bewitched thought experiment” (quoting myself):

In the TV series Bewitched, Samantha the housewife twitches her nose and everyone freezes except her. Then she twitches her nose and everyone unfreezes, without noticing that anything happened. For them, time has effectively stopped. Was anyone experiencing anything during that time? According to the encoding view of conscious experience, yes: one experiences the same percept during the entire time, determined by the unchanging state of the brain. But this seems wrong, and indeed in the TV series the characters behave as if there had been no experience at all during that time.

It would seem bizarre that people experience something while their brain state is held fixed. That is because to experience is an activity, not a thing. Therefore, a system cannot experience something just by virtue of being in some state. The vector of activation states of a neural network is not a conscious “state”. To call it one is not even false: it is a category error. So, when a deep network outputs a symbol that we associate with Jennifer Aniston, it does not actually see Jennifer Aniston. Having the output “neuron” in a particular state is not an experience, let alone an experience of seeing Jennifer Aniston.
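A trivial sketch of my own (with an arbitrary vector size and file name) makes the point concrete: the “state” of a network is a configuration, an array of numbers that can be frozen on disk and thawed later, with nothing happening to it in between.

import numpy as np

# An activation vector: the kind of "state" a deep network is in at a given moment.
state = np.maximum(0, np.random.default_rng(1).normal(size=128))

np.save("frozen_state.npy", state)        # Samantha twitches her nose: freeze

# ...arbitrarily later: twitch again and unfreeze...
thawed = np.load("frozen_state.npy")
assert np.array_equal(state, thawed)      # the configuration is exactly as it was

Nothing experiences anything while the file sits on the disk, and by the same token nothing experiences anything merely by virtue of the array holding particular values.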

All these remarks, which tend to show that the application of tensor calculus does not produce conscious experience, can be perplexing because it is hard to imagine what else could possibly produce conscious experience. Current “artificial intelligence” is our best shot so far at mimicking consciousness, and I am saying that it is not even a little bit conscious. So, what else could consciousness be then?

My honest answer is: I don’t know. Will we know someday? I don’t know either. Would it be possible, hypothetically, to build a conscious artifact? Maybe, but I don’t know (why not? I don’t know; how, then? I don’t know either). I wish I knew, but I don’t find it particularly bizarre that some things remain unexplained and unclear. It doesn’t mean that there is nothing interesting to be said about consciousness.

But in any case, I want to point out that the “what else” argument is self-defeating. Asking “what else?” is just admitting that you have no logical argument that proves your proposition. It is not an argument, but rather an admission of failure (which is fine).

Anyway: I don’t know what it takes for something to be conscious, but I think we can be fairly confident that LLMs or deep networks are not conscious.
