Notes on consciousness. (VIII) Is a thermostat conscious?

A theory of consciousness initially proposed by David Chalmers (in his book The Conscious Mind) is that consciousness (or experience) is a property of information-processing systems. It is an additional property, not logically implied by physical laws; a new law of nature. The theory was later formalized by Giulio Tononi as Integrated Information Theory (IIT), based on Shannon’s mathematical concept of information. One important feature of this theory is that it is a radical form of panpsychism: it assigns consciousness (to different degrees) to virtually anything in the world, including a thermostat.

The Bewitched thought experiment

I have criticized IIT previously on the grounds that it fails to define in a sensible way what makes a conscious subject (e.g. a subsystem of a conscious entity would be another conscious entity, so your brain would produce an infinite number of minds). But here I want to comment specifically on the example of the thermostat, an interesting example brought up by Chalmers in his book. The reasoning is as follows: a human brain is conscious; a mouse brain is probably conscious, but to a somewhat lower degree (for example, no self-consciousness). As we go down the scale of information-processing systems, the system might be less and less conscious, but why would there be a definite threshold for consciousness? Why would a billion neurons be conscious but not a million? Why would a million neurons be conscious but not one thousand? And how about just one neuron? How about a thermostat? A thermostat is an elementary information-processing system with just two states, so maybe, Chalmers argues, the thermostat has a very elementary form of experience.

To claim that a thermostat is conscious defies intuition, but I would not follow Searle in insisting that the theory must be wrong because it assigns consciousness to things that we wouldn’t intuitively think are conscious. As I argued in a previous post, to claim that biology tells us that only brains are conscious is to use a circular argument. We don’t know whether anything other than a brain is conscious, and since consciousness is subjective, deciding whether anything is conscious is going to involve some theoretical aspects. Nonetheless, I am skeptical that a thermostat is conscious.

I propose to examine the Bewitched thought experiment. In the TV series Bewitched, Samantha the housewife twitches her nose and everyone freezes except her. Then she twitches her nose again and everyone unfreezes, without noticing that anything happened. For them, time has effectively stopped. The question is: was anyone experiencing anything during that time? To me, it is clear that no one can experience anything if time is frozen. In fact, that whole period has not existed at all for the conscious subject. It follows that a substrate with a fixed state (e.g. hot/cold) cannot experience anything, because time is effectively frozen for that substrate. Experience requires a flow of time, a change in structure through time. I leave it open whether the interaction of the thermostat with the room might produce experience for that coupled system (see below for some further thoughts).

What is “information”?

In my view, the fallacy in the initial reasoning is to put the thermostat and the brain on the same scale, that scale being the set of information-processing systems. But as I have argued before (mostly following Gibson’s arguments), it is misleading to see the brain as an information-processing system. The brain can only be seen to transform information of one kind into information of another kind by an external observer, because the very concept of information is something that makes sense to a cognitive/perceptual system. The notion of information used by IIT is Shannon information, a notion from communication theory. This is an extrinsic notion of information: for example, neural activity is informative about objects in the world in the sense that properties of those objects can be inferred from neural activity. But this is totally unhelpful for understanding how the brain, which only ever gets to deal with neural signals and not things in the world, sees the world (see this argument in more detail in my paper Is coding a relevant metaphor for the brain?).

Let’s clarify with a concrete case: does the thermostat perceive temperature? The thermostat can be in different states depending on temperature, but from its perspective, there is no temperature. There are changes in state that seem to be unrelated to anything else (there is literally nothing else for the thermostat). One could replace the temperature sensor with some other sensor, or with a random number generator, and there would be literally no functional change in the thermostat itself. Only an external observer can link the thermostat’s state to temperature, so the thermostat cannot possibly be conscious of temperature.
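This point can be made concrete with a minimal sketch (the update rule, threshold and sensor names here are my own illustrative choices, not any real thermostat’s specification): the thermostat’s internal dynamics are one and the same function whether it is fed a temperature sensor or a random bit, so nothing intrinsic to the thermostat refers to temperature.

```python
import random

def thermostat_step(signal_is_cold):
    """The thermostat's entire functional life: map a binary input to a binary action."""
    return signal_is_cold  # heat exactly when the signal reads "cold"

# External observer's reading 1: the input comes from a temperature sensor.
def temperature_sensor(temp, threshold=19.0):
    return temp < threshold

# External observer's reading 2: the input comes from a random bit generator.
def random_sensor(rng):
    return rng.random() < 0.5

rng = random.Random(0)
temps = [18.0, 18.5, 19.5, 20.0, 18.2]

actions_from_temp = [thermostat_step(temperature_sensor(t)) for t in temps]
actions_from_rng = [thermostat_step(random_sensor(rng)) for _ in temps]

# In both runs the thermostat applies the identical function to a binary
# stream; "temperature" appears only in the observer's labels, never in
# the thermostat's own dynamics.
```

Swapping the sensor changes what the external observer can infer from the state, but changes nothing in the thermostat itself, which is the point of the argument.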

Thus, Shannon’s notion of information is inappropriate for understanding consciousness. Instead of extracting information in the sense of communication theory, what the brain might do is build models of sensory (or sensorimotor) signals from its subjective perspective, in the same way as scientists make models of the world with observations (= sensory signals) and experiments (= actions). But this intrinsic notion of information, which corresponds for example to laws of physics, is crucially not Shannon’s notion of information. And it is also not the kind of information that a thermostat deals with.

This inappropriate notion of information leads to what is in my view a rather absurd quantitative scale of consciousness, according to which entities are more or less conscious along a graded scale (phi). Differences in consciousness are qualitative, not quantitative: there is dreaming, being awake, being self-conscious or not, etc. These are not different numbers. This odd analog scale arises because Shannon information is counted in bits. But information in the sense of knowledge (science) is not counted in bits; there are different kinds of knowledge, with different structures and relations between them.
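As a toy illustration of this point (my own example, not one from IIT): Shannon’s measure assigns about one bit per symbol to a perfectly lawful alternating signal and to a lawless random one alike; the bit count is blind to exactly the structure that distinguishes them.

```python
import random
from collections import Counter
from math import log2

def entropy_bits(seq):
    """Shannon entropy of the empirical symbol distribution, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

periodic = [0, 1] * 50  # perfectly lawful: strict alternation
rng = random.Random(1)
lawless = [rng.randrange(2) for _ in range(100)]  # no law at all

# Both streams score (about) one bit per symbol: the bit count erases
# the difference between a simple law and pure noise.
```

The alternating stream obeys a law a subject could discover and exploit; the random one obeys none; Shannon’s per-symbol count cannot tell them apart.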

Subjective physics of a thermostat

But let us not throw away Chalmers’ interesting thought experiment just yet. Let us ask, following Chalmers: what is it like to be a thermostat? We will examine this not with Shannon’s unhelpful notion of information but with what I have called “subjective physics”: the laws that govern sensory signals and their relations to actions, from the perspective of the subject. These laws define my world from a functional viewpoint. Let’s say I am a conscious thermostat; a homunculus inside the thermostat. All I can observe is a binary signal. Then there is a binary action that I can take, which for an external observer corresponds to turning on the heat. What kind of world does that make for me? If I am a scientist homunculus, what kind of laws about the world can I infer?

If I am a conventional thermostat, then the action is automatically triggered when the signal is in a given state (“cold”). After some time, the binary signal switches, and so does the action. So in fact there is an identity between signal and action, which means that all I really observe is just one binary signal, switching on and off, probably with some kind of periodicity. This is the world I might experience as a homunculus inside the thermostat (note that experiencing the periodicity requires memory, which a normal thermostat doesn’t have). In a way, I am a “locked-in” thermostat: I can make observations, but I cannot freely act.

Let’s say that I am not locked-in and have a little more free will, so that I can decide whether to act (heat) or not. Then my world is a little more interesting: my action can trigger a switch of the binary signal after some latency (again requiring some memory), and when I stop, the binary signal switches back, after a time that depends on how long my previous action lasted. So here I have a world that is much more structured, with relatively complex laws which, in a way, define the concept of “temperature” from the perspective of the thermostat.
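These laws can be sketched in a toy simulation (the room model and the gain and leak constants are illustrative assumptions of mine): an acting homunculus who observes only the binary signal could discover that acting flips the signal after a latency, and that the signal flips back after a delay that grows with how long the action lasted.

```python
def room_step(temp, heating, gain=0.5, leak=0.3):
    """Toy room dynamics: heating raises the temperature, otherwise it leaks away."""
    return temp + gain if heating else temp - leak

def run(actions, temp=18.0, threshold=19.0):
    """Return the binary signal stream the homunculus observes for a chosen action stream."""
    signals = []
    for heating in actions:
        temp = room_step(temp, heating)
        signals.append(temp < threshold)  # True = the signal reads "cold"
    return signals

# Heat for 6 steps, then stop for 10: the signal flips off after a latency,
# and flips back on after a delay that depends on how long we heated.
signals = run([True] * 6 + [False] * 10)
latency = signals.index(False)     # heating steps before the signal flips
relapse = signals[6:].index(True)  # idle steps before it flips back
```

Heating longer (e.g. `run([True] * 10 + [False] * 20)`) makes the relapse delay longer, which is exactly the kind of lawful structure that could play the role of “temperature” for the subject, even though the subject never observes temperature itself.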

So if a thermostat were conscious, then we would have a rough idea of the kind of world it might experience (although not what it would feel like), and even in this elementary example, these experiences cannot be measured in bits; and that is leaving aside the fact that a thermostat is not conscious anyway.

What does Gödel's theorem mean?

Gödel's theorem is a result in mathematical logic, often stated as showing that "there are true things that cannot be proved". It is sometimes used to comment on the limits of science, or on the superiority of human intuition. Here I want to clarify what this theorem means and what its epistemological implications are.

First, this phrasing is rather misleading. It makes the result sound almost mystical. If you phrase the result differently, avoiding the potentially confusing reference to truth, it is not that mystical anymore. Here is how I would phrase it: you can always add an independent axiom to a consistent finite system of axioms (one rich enough to express arithmetic). This is not an obvious mathematical result, but I wouldn't think it defies intuition.

Why is this equivalent to the first phrasing? If the additional axiom is independent of the set of axioms, then it cannot be proved from them (by definition). Yet as a logical proposition it has to be either true or false. So either it is true, or its negation is true, but it cannot be proved. What is misleading in the first phrasing is that the statement "there are true things" is contextual. I can start from a set of axioms and add one, and that new one will be true (since it is an axiom). Instead I could add its negation, and then that one will be true. That the proposition is true is not a universal truth, as the phrasing "there are true things" would suggest. It is true in a particular mathematical world, and you can consider another one where it is not true. Famous examples are Euclidean and non-Euclidean geometries, which are mutually inconsistent sets of axioms.
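This "add an independent axiom" move can be checked mechanically in a toy setting (propositional logic, deliberately far weaker than the arithmetic theories Gödel's theorem actually concerns; the encoding below is my own): a proposition is independent of a set of axioms exactly when both the proposition and its negation can be consistently added.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force check: does some truth assignment make every clause true?
    A clause is a set of literals: positive int i means variable i, -i its negation."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# One axiom over variables 1 (A) and 2 (B): "A or B".
axioms = [{1, 2}]

# "A" is independent of the axioms: adding A stays consistent, and so does
# adding not-A. Hence neither A nor its negation is provable from the axioms;
# each choice just picks a different consistent "mathematical world".
independent = satisfiable(axioms + [{1}], 2) and satisfiable(axioms + [{-1}], 2)
```

By contrast, a system that already settles A (say, axioms containing both A-forcing clauses) leaves no room for such an addition, which is what "complete" means in the next paragraph.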

So, what Gödel's theorem says is simply that no finite system of axioms of this kind is complete, in the sense that you can always add an independent axiom without making the system inconsistent.

What are the epistemological implications? It does not mean that there are things that science cannot prove: laws of physics are not proved by deduction anyway. They are hypothesized and empirically tested, and all laws are provisional. Nevertheless, the theorem does raise some deep philosophical questions, which have to do with reductionism. I am generally critical of reductionism, more specifically of methodological reductionism, the idea that a system can be understood by understanding the elements that compose it. For example: understand neurons and you will understand the brain. I think this view is wrong, because it is the relations between neurons, at the scale of the organism, that make a brain. The right approach is systemic rather than reductionist. Many scientists frown at criticisms of reductionism, but this is only because they confuse methodological and ontological reductionism. Ontological reductionism means that reality can be reduced to a small number of types of things (e.g. atoms) and laws, and that everything can be understood in these terms. For example, the mind can in principle be understood in terms of the interactions of the atoms that constitute the brain. Most scientists seem to believe in ontological reductionism.

Let us go back now to Gödel's theorem. An interesting remark made by the theoretical biologist Robert Rosen is that Gödel's theorem makes ontological reductionism implausible to him. Why? The theorem says that, whatever system of axioms you choose, it is always possible to add one that is independent. Let us say we have agreed on a small set of fundamental physical laws, with strong empirical support. To establish each law, we postulate it and test it empirically. At a macroscopic level, scientists postulate and test all sorts of laws. How can we claim that any macroscopic law necessarily derives from the small set of fundamental laws? Gödel's theorem says that there are laws that you can express but that are independent of the fundamental laws. This means that there are laws that can only be established empirically, not formally, in fact just like the set of fundamental laws themselves. Of course, it could be the case that most of what matters to us is captured by a small set of laws. But maybe not.