Notes on consciousness. (IX) Why Integrated Information Theory fails

Integrated Information Theory (IIT) is a theory of consciousness that assigns a certain degree of consciousness (called Phi) to any system. Philosophically, it is a kind of property dualism: mental states are not reduced to states of matter, but assigned to arrangements of matter that have certain formal properties.

One of the most criticized aspects of the theory is that it embraces panpsychism: for example, it assigns consciousness to elementary systems such as a photodiode. However, the objection is easy to bypass: a theory does not have to produce intuitive claims. It is certainly not intuitive that the passing of time depends on how fast you move, yet it seems to be true.

But in the case of relativity, the counter-intuitive prediction can be tested empirically (and it has been). In the case of IIT, it cannot: there is no independent way to assess whether a photodiode is conscious. This is the fundamental epistemological issue with trying to explain a subjective phenomenon: there is no other way to directly establish the reality of the phenomenon than to experience it yourself. You will never have first-hand evidence that your stomach is another conscious entity in your body.

However, there are other ways to assess a theory, for example by exhibiting incoherences, contradictions, or logical fallacies.

Before I begin, I want to point out that property dualism is much closer to materialism than to religious kinds of dualism. In IIT, mental states are really properties of matter, just not physical properties. This means that mental states depend only on the arrangement of matter: the physical system is not just a host for the mind, as in Cartesian dualism. In Buddhism, the mind can reincarnate in another body, and bring memories with it. This is impossible with property dualism, because mental states are an intrinsic property of the system.

It is precisely this point that is inconsistent with the formulation of IIT. The central concept that I will discuss is information. IIT builds on the classical notion of information as differentiation: if a state can take values A or B, then observing A is informative, because it could have been B. If we assign equal probabilities to the two alternatives, then observing the state brings exactly 1 bit of information. Information is the reduction of uncertainty. This is why a binary photodiode exposed to a visual scene can get no more than 1 bit of information, while you would get much more information from the same scene: it could have been any of so many other scenes.
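To make the arithmetic behind the “1 bit” claim explicit, here is the standard Shannon entropy calculation for a binary state with two equiprobable values (writing H for entropy and p for the assigned probabilities; this notation is not in the original text, and it is textbook information theory rather than IIT’s full Phi measure):

```latex
H(X) = -\sum_{x \in \{A,B\}} p(x)\,\log_2 p(x)
     = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2}
     = 1 \text{ bit}.
```

A state that could have been any of N equiprobable alternatives would instead be worth log2(N) bits, which is why the same scene is supposed to carry far more information for you than for the photodiode.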

At first sight, this seems like a reasonable characterization of information, and IIT takes it as a self-evident axiom. However, there is a big problem here, one that will lead to major inconsistencies. State A is informative because it excludes possibility B. But this assumes that the entity being informed is aware that B was a possibility. How could that be, if it has never encountered B before and has no cognitive resources to imagine that B could exist? The reduction in uncertainty is relative to prior knowledge, but what is the prior knowledge of a photodiode?

The confusion, of course, is that the reduction of uncertainty occurs for the observer who formalized the range of possibilities, but it is ascribed to the system itself. There is some magic operating here, where we ascribe information and uncertainty to a physical system, without ever defining what it means for a physical system to have knowledge.

Let us be more concrete. Suppose the photodiode starts its life covered by a box, but then the experimenter removes the box and puts it back from time to time. In terms of information, the state of the photodiode will hold some information about light, or, if it consists of a sensor and a detector, about its previous state. According to IIT, this will make the photodiode minimally conscious (in the latest version of IIT, the photodiode must have some recurrent connectivity, e.g. a reciprocal connection between a sensor and a detector, but this makes no difference to the present discussion).

But consider the very beginning of the photodiode’s life. It has always been in the dark, never exposed to light. In terms of information, the state is still informative because it excludes the possibility of light, but what does that mean for the photodiode that has never seen the light?

Now consider another photodiode. It is physically exactly the same photodiode, but this time the box permanently covers it. Thus, there is only one possible state for that photodiode. According to IIT, this photodiode is not conscious. Yet, it is physically identical to the photodiode at the beginning of the previous scenario.
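To express the contrast in the same terms as above: for the permanently covered photodiode, the observer’s formalization assigns probability 1 to the dark state, so the entropy of its state is zero (again a sketch of the underlying notion of information rather than the full Phi computation, but Phi inherits the same dependence on the assumed range of possibilities):

```latex
H(X) = -\,1 \cdot \log_2 1 \;-\; 0 \cdot \log_2 0 \;=\; 0 \text{ bits},
```

using the usual convention that 0 log2 0 = 0. The device and its physical state are the same in both scenarios; what differs is only the probability assignment made by the observer.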

Thus, we have exhibited two situations where the same physical system in the exact same state is assigned two different states of consciousness. This contradicts the claim that consciousness is an intrinsic property of the system.

The problem arises from the fact that consciousness is defined not from what presently happens in the system, but from what could happen, and it is very unclear what “could” means. “Could” the box collapse and expose the photodiode to light? Would that make the photodiode conscious? From that event onward? Or from the start of the photodiode’s life? Or from the moment when that event “could” happen? What exactly does it mean that an event that did not happen “could” have happened? (If what were the case?)

Let us now turn to brains. According to Tononi, “IIT predicts that a particular brain area can contribute to experience even if it is inactive, but not if it is inactivated.” Specifically, he contrasts the case of a person seeing a gray picture with the case where a colored picture is presented but color-sensitive neurons are lesioned. In the first case, the person perceives the picture as gray. In the second case, there is no experience of color at all. The difference in experience occurs because in the first case, the color-sensitive neurons could have been in a different state, but not in the second case.

“Could have” is very problematic here, because it does not refer to the present state of the system, but to hypothetical states.

Suppose the color area of the brain is not lesioned but reversibly inactivated by cooling, for a random duration. In this case, according to IIT, the person should be color-conscious, because at any time the neurons could have been in a different state.

An important clarification about these thought experiments: the subject does not know what the experimenter is doing. Thus, from the viewpoint of the subject, all that happens is that the neurons are inactive, and the cause of this inactivity leaves no trace in the subject. Yet, according to IIT, mental states depend on that cause. A gray picture: color-conscious. Lesioning: color-unconscious. Reversible cooling: color-conscious. Irreversible cooling with a refrigerating device permanently implanted in the brain: color-unconscious. The fridge breaks down: color-conscious. The fridge might break down, but actually doesn’t: color-conscious from the time of cooling.

In summary, IIT fails to consistently assign consciousness to a system, because the definition is based on hypotheticals, which by definition are not instantiated by the system. Deep down, the trouble arises from the circularity of defining information as a reduction of uncertainty. Uncertainty refers to a prior state of knowledge, but what knowledge means for the subject is never defined. In practice, the knowledge underlying the reduction of uncertainty is the knowledge of the observer, who formalizes the scenarios and quantifies the probabilities of events that the system itself has never experienced.

 

This post is an elaboration of the following commentary: Does the present moment depend on the moments not lived?