Notes on consciousness. (IX) Why Integrated Information Theory fails

Integrated Information Theory (IIT) is a theory of consciousness that assigns a certain degree of consciousness (called Phi) to any system. Philosophically, it is a kind of property dualism: mental states are not reduced to states of matter, but assigned to arrangements of matter that have certain formal properties.

One of the most criticized aspects of the theory is that it embraces panpsychism: for example, it assigns consciousness to elementary systems such as a photodiode. However, the objection is easy to bypass: a theory does not have to produce intuitive claims. It is certainly not intuitive that the passing of time depends on how fast you move, yet it seems to be true.

But in the case of relativity, the counter-intuitive prediction can be tested empirically (and it has been). In the case of IIT, it cannot: there is no independent way to assess whether a photodiode is conscious. This is the fundamental epistemological issue with trying to explain a subjective phenomenon: there is no other way to directly establish the reality of the phenomenon than to experience it yourself. You will never have first-hand evidence that your stomach is another conscious entity in your body.

However, there are other ways to assess a theory, for example by exhibiting incoherencies, contradictions or logical fallacies.

Before I begin, I want to point out that property dualism is much closer to materialism than to religious kinds of dualism. In IIT, mental states are really properties of matter, just not physical properties. This means that mental states depend only on the arrangement of matter: the physical system is not just a host for the mind, as in Cartesian dualism. In Buddhism, the mind can reincarnate in another body, and bring memories with it. This is impossible with property dualism, because mental states are an intrinsic property of the system.

It is precisely this point that is inconsistent with the formulation of IIT. The central concept that I will discuss is information. IIT builds on the classical notion of information as differentiation: if a state can take values A or B, then observing A is informative, because it could have been B. If we assign equal probabilities to the two alternatives, then observing the state brings exactly 1 bit of information. Information is the reduction of uncertainty. This is why a binary photodiode exposed to a visual scene can get no more than 1 bit of information, while you would get much more information from the same scene: it could have been so many other scenes.
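To make the arithmetic concrete, here is a minimal Python sketch of this notion of information as reduction of uncertainty (the number of alternative scenes for the human observer is of course arbitrary and purely illustrative):

```python
from math import log2

def self_information(n_states: int) -> float:
    """Bits gained by observing one outcome among n equally likely alternatives."""
    return log2(n_states)

print(self_information(2))      # binary photodiode: 1.0 bit
print(self_information(2**20))  # an observer who could have seen ~a million scenes: 20.0 bits
```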

At first sight, this seems like a reasonable characterization of information, and IIT takes it as a self-evident axiom. However, there is a big problem here, one that will lead to major inconsistencies. State A is informative because it excludes possibility B. But this assumes that the entity that gets informed is aware that B was a possibility. How could that be possible if it has never encountered B before, and has no cognitive resources to imagine that B could exist? The reduction in uncertainty is relative to prior knowledge, but what is the prior knowledge of a photodiode?

The confusion, of course, is that the reduction of uncertainty occurs for the observer who formalized the range of possibilities, but it is ascribed to the system itself. There is some magic operating here, where we ascribe information and uncertainty to a physical system, without ever defining what it means for a physical system to have knowledge.

Let us be more concrete. Suppose the photodiode starts its life covered by a box, but then the experimenter removes the box and puts it back from time to time. In terms of information, the state of the photodiode will hold some information about light, or, if it consists of a sensor and a detector, about its previous state. According to IIT, this will make the photodiode minimally conscious (in the latest version of IIT, the photodiode must have some recurrent connectivity, e.g. a reciprocal connection between a sensor and a detector, but this makes no difference to the present discussion).

But consider the very beginning of the photodiode’s life. It has always been in the dark, never exposed to light. In terms of information, the state is still informative because it excludes the possibility of light, but what does that mean for the photodiode that has never seen the light?

Now consider another photodiode. It is physically exactly the same photodiode, but this time the box permanently covers it. Thus, there is only one possible state for that photodiode. According to IIT, this photodiode is not conscious. Yet, it is physically identical to the photodiode at the beginning of the previous scenario.

Thus, we have exhibited two situations where the same physical system in the exact same state is assigned two different states of consciousness. This contradicts the claim that consciousness is an intrinsic property of the system.
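To see the tension numerically, here is a small Python sketch (the probabilities are assigned by the observer who sets up each scenario; they are illustrative, not part of IIT's formalism). The photodiode is in exactly the same physical state in both scenarios, in the dark with its output off, yet the uncertainty being "reduced" differs because it lives in the observer's description:

```python
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Same physical situation in both cases: the photodiode sits in the dark, output = OFF.

# Scenario 1: the observer knows the box is removed half of the time, so p(light) = 0.5.
prior_uncertainty_1 = entropy([0.5, 0.5])   # 1 bit; observing OFF "reduces" it by 1 bit

# Scenario 2: the observer knows the box is never removed, so p(light) = 0.
prior_uncertainty_2 = entropy([1.0])        # 0 bits; observing OFF reduces nothing

print(prior_uncertainty_1, prior_uncertainty_2)  # 1 bit vs. 0 bits
```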

The problem arises from the fact that consciousness is defined not from what happens presently with the system, but from what could happen, and it is very unclear what “could” might mean. “Could” the box collapse and expose the photodiode to light? Would that make the photodiode conscious? From that event? Or from the start of the photodiode’s life? Or from the moment when that event “could” happen? What does it mean exactly that an event that does not happen “could” have happened? (if what?)

Let us now turn to brains. According to Tononi, “IIT predicts that a particular brain area can contribute to experience even if it is inactive, but not if it is inactivated.” Specifically, he contrasts the case of a person seeing a gray picture with the case where a colored picture is presented but color-sensitive neurons are lesioned. In the first case, the person perceives the picture as gray. In the second case, there is no experience of color at all. The difference in experience occurs because in the first case, the color-sensitive neurons could have been in a different state, but not in the second case.

“Could have” is very problematic here, because it does not refer to the present state of the system, but to hypothetical states.

Suppose the color area of the brain is not lesioned but reversibly inactivated by cooling, for a random duration. In this case, the person should be color-conscious, because at any time, the neurons could have been in a different state.

An important clarification about these thought experiments: the subject does not know what the experimenter is doing. Thus, from the viewpoint of the subject, all that happens is that the neurons are inactive, and the cause of this inactivity leaves no trace in the subject. Yet, according to IIT, mental states depend on that cause. If it is a gray picture: color-conscious. Lesioning: color-unconscious. Reversible cooling: conscious. Irreversible cooling with a refrigerating device permanently implanted in the brain: unconscious. The fridge breaks down: conscious. The fridge might break down, but actually doesn’t: conscious from the time of cooling.

In summary, IIT fails to consistently assign consciousness to a system, because the definition is based on hypotheticals, which by definition are not instantiated by the system. Deep down, the troubles arise from the circularity of the definition of information as reduction of uncertainty. Uncertainty refers to a prior state of knowledge, but the notion of knowledge for the subject is never defined. In practice, the knowledge underlying the reduction of uncertainty is the knowledge of the observer who formalizes scenarios and quantifies probabilities of events that the system itself has never lived.

 

This post is an elaboration of the following commentary: Does the present moment depend on the moments not lived?

 

8 thoughts on “Notes on consciousness. (IX) Why Integrated Information Theory fails”

  1. Dear Romain,
    Thank you for engaging with IIT! There are many noteworthy points in your post, but for the moment let me just highlight what I think is already addressed/resolved within the formal framework of IIT.
    The first point regards background conditions, as in your photodiode example. Precisely because of the issues that you point out, we assume the background conditions are fixed in their current state when we evaluate the causal information structure of a candidate system. It does not matter whether the cooling of the color neurons is reversible or not. Right now they are inactivated, which means that nothing within the system could possibly change their state. This further implies that two systems with different background conditions are not actually the same physical system. An interesting point here is that a balance of excitation and inhibition within the system helps to keep the neurons within the brain responsive even if their current state has been caused by external inputs to the system (see Fig. 22 in the IIT 3.0 paper).

    A second point I want to clarify is that the system doesn't have to "know" about its possible states and yet there is no magic involved. Here is a short excerpt from my 2017 FQXi essay (https://arxiv.org/pdf/1705.10854.pdf): "... the system itself does not ‘have’ this intrinsic information. Just by ‘processing’ information, a system cannot evaluate its own constraints. This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states. Any memory the system has about its past states has to be physically instantiated in its current cause-effect structure. While a system can have mechanisms that, by being in their current state, constrain other parts of the system, these mechanisms cannot ‘know’ what their inputs mean. In the same sense, a system of mechanisms in its current state does not ‘know’ about its cause-effect structure; instead, the cause-effect structure specifies what it means to be the system in a particular state. Intrinsic meaning thus cannot arise from ‘knowing’, it must arise from ‘being’."

    Information in IIT is not just a reduction of uncertainty. It is causal and based on the notion that a mechanism ultimately *is* its input-output function. An AND gate, for example, is an AND gate because it switches on if and only if both of its inputs are ON. Otherwise it would not be an AND gate. So these causal/informational constraints are just a description of what there is, mechanistically. Crucially, IIT does assume that everything physical is ultimately just a TPM (transition probability matrix), and of course this assumption can be debated.
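    As a concrete illustration of this "mechanism as input-output function" view, here is a minimal Python sketch of an AND gate written as a transition probability matrix (the row ordering is just one common convention, not necessarily the one used in the IIT papers):

    ```python
    import numpy as np

    # A deterministic AND gate with two inputs (A, B), written as a TPM:
    # each row is a current input state, the entry is P(output = ON at the next step).
    # The gate *is* this table: change any entry and it is no longer an AND gate.
    and_gate_tpm = np.array([
        [0.0],  # (A=0, B=0)
        [0.0],  # (A=1, B=0)
        [0.0],  # (A=0, B=1)
        [1.0],  # (A=1, B=1)
    ])
    print(and_gate_tpm)
    ```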
    Best regards,
    Larissa

    • Dear Larissa,

      Thank you for taking the time to respond.

      Regarding the first point, I think it would be helpful if you could say, for each scenario, whether the entity is conscious or not, and why.

      Regarding the second point: this is precisely my point. Formally, information is used in a syntactic sense that has nothing to do with information in the common sense, but the "axiom of information" is justified by appealing to the common sense of information. When it is posited as an obvious fact that experience is informative, I suppose that it is not meant that it is informative for someone other than the person having that experience.

      Best regards,
      Romain

      • Dear Romain,
        (I just saw your reply. Thank you!)

        First point (background conditions):
        First the brain and inactive/inactivated color areas. Here is your assessment corrected: If it is a gray picture: color-conscious. Lesioning: color-unconscious. Reversible cooling: UNconscious (!). Irreversible cooling with a refrigerating device permanently implanted in the brain: unconscious. The fridge breaks down: conscious. The fridge might break down, but actually doesn’t: UNconscious (!) from the time of cooling.

        All that matters for deciding whether the system can be conscious is whether—given the background conditions RIGHT NOW—the system itself could change the state of the neurons. Whenever they are cooled or lesioned, that is not possible.

        For the photodiode, it depends on the details of how the photodiode is implemented. A simple photodiode that always switches on when it sees light and is always off when it doesn't actually has Phi = 0 (with box, without box, doesn't matter). The internal state of the photodiode has to be able (in principle, but not necessarily in practice) to influence its own next state. This is why in the IIT 3.0 paper, in Fig. 19, the photodiode was constructed with two inputs and feedback. Still, it is actually only in the background state 01 (which is shown in (A)) and the equivalent state 10 that this photodiode has Phi > 0. For input states (11) or (00), Phi would be 0. (We did not emphasize that point back then to avoid confusion, but now I think we could have prevented confusion by being more explicit.) It is only given input state 01/10 that the internal state of nodes DP matters for the next state of DP. This is a necessary requirement for Phi > 0. The more complex system in Fig. 22 (IIT 3.0 paper) has inhibitory and excitatory connections and can thus have Phi > 0 across a large range of input states (maybe all, I don't exactly remember).
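        To make this dependence on the current input state concrete, here is a sketch of how one could check it numerically with the PyPhi package. The network below is a made-up toy (an external input I driving two reciprocally connected nodes D and P), not the network of Fig. 19, and the calls (pyphi.Network, pyphi.Subsystem, pyphi.compute.phi) are assumed to behave as in PyPhi's documentation; the point is only that Phi is evaluated for the system in its current state, with the input frozen as a background condition.

        ```python
        import numpy as np
        import pyphi

        # Hypothetical 3-node toy network (not the one of Fig. 19 in the IIT 3.0 paper):
        # node 0 = external input I (holds its state), node 1 = D, node 2 = P.
        # Update rule assumed here: next D = I OR P, next P = D.
        # State-by-node TPM, rows in little-endian order over (I, D, P).
        tpm = np.array([
            [0, 0, 0],  # current (0,0,0)
            [1, 1, 0],  # current (1,0,0)
            [0, 0, 1],  # current (0,1,0)
            [1, 1, 1],  # current (1,1,0)
            [0, 1, 0],  # current (0,0,1)
            [1, 1, 0],  # current (1,0,1)
            [0, 1, 1],  # current (0,1,1)
            [1, 1, 1],  # current (1,1,1)
        ])
        cm = np.array([
            [1, 1, 0],  # I -> I (holds its state), I -> D
            [0, 0, 1],  # D -> P
            [0, 1, 0],  # P -> D
        ])
        network = pyphi.Network(tpm, cm=cm, node_labels=("I", "D", "P"))

        # Phi of the internal pair {D, P}, with I treated as a fixed background condition,
        # evaluated for several (reachable) current states of the whole network.
        for state in [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)]:
            subsystem = pyphi.Subsystem(network, state, (1, 2))  # nodes D and P
            print(state, pyphi.compute.phi(subsystem))
        ```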
        ---------------------------
        Second point:
        Yes, I think we both are exactly on the same page with respect to the formal/common sense distinction of information. However, the axiom of information does not try to capture *knowledge* of the person having that experience. It's not even necessarily a direct introspective claim. It's easier to see in combination with the composition axiom: experience is structured, it has many related parts. Now the information axiom merely says that the experience is what it is because of the specific way it is, having the structure and parts that it has. It follows that a different arrangement of those parts or different parts would lead to (many many) different experiences.
        A good way of thinking about the axioms is that they merely describe what it means to be something, or "a thing", plus intrinsicality (crucially). So maybe a way to express it is that experience is informative, period. Not to the person having that experience, just in absolute terms.
        In terms of the postulates, the metaphysical stance of IIT is that there really is an ultimate set of degrees of freedom that define what anything is. Given that assumption there truly is observer-independent causal information. As I said in my first reply, this is of course something that can be debated. But I hope you see that we are not making the trivial mistake you assume we are making of confusing information with knowledge.
        I hope we can continue this conversation online or offline.
        Best,
        Larissa

  2. What is consciousness? Here is my suggestion:
    On the ontological level the question is very simple to answer: Consciousness is a property of the brain, just as the heartbeat is a property of the heart. Asking here why we have consciousness makes no sense. Nor does it make sense to ask how it is possible for matter to become thoughts. Since the brain is part of the organism, this is answered much earlier, namely with the emergence of life. Self-organizing life produces working methods with which the relationship to the environment as well as to the inner world is regulated. First the working method called protein formation in unicellular organisms, then later bioelectricity in nervous systems. Ultimately, the brain is nothing more than a navigation system. The theater of consciousness, the totality of all thinking and feeling, ultimately results from the bundling of various sensory stimuli. No wonder that it flashes and sparkles in the brain and creates the impression that you are dealing with your own spiritual world. And this experience is subjective and individual, so the much-cited hard problem of consciousness does not exist.
    It is much more interesting how thinking emerges from diverse and initially diffuse stimuli, i.e. how structures or order are formed. Tononi made a contribution to this with his integrated information theory [1] by ultimately making the second law of thermodynamics or Shannon's information theory applicable to neuroscience. According to this, a living system has a lot of built-in information with which entropy is exported and order is established, which is what characterizes life in general. It's a general theory of consciousness. Since he makes no distinction between animate and inanimate nature, he exposes himself to the charge of reductionism.
    It becomes even more interesting when one tries to describe the mechanism that produces this order, which can be called thought. Structures - we speak of patterns [2] - are created, compared, activated. If we were to save each perception individually as a pattern, the capacity of the brain would quickly be exhausted. Rather, the brain seeks common ground and reduces the complex patterns. The best way to imagine it is this: A pattern, e.g. that of a tree, corresponds to a rasterized sheet of paper with different numbers between 1 and 2 in the boxes. A slightly different tree has the numbers between 1.5 and 2.5 and so on. If you put the sheets on top of each other and add the numbers vertically in the boxes, you get different sums. The points of the patterns of all trees are reflected in these summed numbers. Topologically, there are mountains and valleys. Mountains and valleys are thus reductions of many superimposed points. All mountains and valleys are connected to one another, but only the mountains communicate with one another. The communication of the mountains is different from that of all points. Of course this communication runs through the valleys (how else could it work), but it is impulse patterns that respond to external and internal impulses. The impulse patterns are used to compare coarse-grained properties, i.e. the mountain peaks and, if necessary, a little 'down the mountain'. So we have two topologies, the physiological and one that is also physiological, but is in a superposition [not quantum mechanically] or forms a supersystem. Both topologies are physiological, the upper one appears psychological. The topologies form the totality of all perceptual stimuli.
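    As a toy illustration of this superposition idea, here is a small Python sketch (the 8x8 grids and the "top decile" threshold are arbitrary choices made only for illustration): several slightly different patterns are stacked and summed box by box, and the cells with the largest sums play the role of the "mountains".

    ```python
    import numpy as np

    # Several slightly different "tree" patterns: 2-D grids of numbers.
    rng = np.random.default_rng(0)
    base = rng.random((8, 8))                      # a prototype pattern
    patterns = [base + 0.1 * rng.random((8, 8))    # variants of the prototype
                for _ in range(20)]

    summed = np.sum(patterns, axis=0)              # superimpose and add box by box

    # "Mountains": the cells whose summed values are largest (here, the top decile).
    mountains = summed > np.quantile(summed, 0.9)
    print(np.argwhere(mountains))                  # coordinates of the shared features
    ```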
    Patterns of the same class of objects (e.g. tree) are superimposed and form a topology in which the 'mountains' contain the typical features of all patterns; this coarseness, in contrast to mere perception, is what makes consciousness possible in the first place. If required, each individual pattern can be activated in detail. Without this property of the neural system, every life situation would have to be stored on its own.
    This is exactly the point at which thinking, i.e. consciousness, emerges out of mere perception, and ordered structures out of chaotic stimuli. Possibly the 'mountains' form attractors that create a micro-readiness potential that reacts to corresponding impulse patterns. The 'mountains' could also be called neural micro-hotspots that communicate with each other.
    Such a supersystem works (not 'is') holographically. Overall, it forms a virtual holographic overall system, which gains relative causal power compared to the physiological system, since it has a higher information density because it contains the summed points of all patterns.
    It doesn't work in binary, so not with YES / NO, but with MAYBE. It works relatively, fuzzy, adaptive.

    The totality of this supersystem forms the I or consciousness. It has a causal effect and corresponds to the top-down relationship that E. Hoel showed in his causal emergence theory [3].

    The specifics of human consciousness result from a different context. Here, the electrochemical language of the brain encodes social meanings that are available as concrete and abstract terms, are culturally stored and thus represent an inexhaustible reservoir that allows infinite combinations of meanings to be created.

    Two things are added: 1. a functional architecture - not identical to the brain morphology, but dependent on it - in which the superstructure I is reflected in social norms (Freud's superego) as well as in the internal status, and evaluates both; and 2. the development of abstract linguistic thinking in the form of assimilation and accommodation according to Piaget [4], in which the linguistically coded patterns are used until the I is able to compress them into new abstractions (example: 1 + 1 + 1 + 1 + 1 → 5x1).

    Let us summarize: Consciousness does not have to be proven, it is already there. It arises in connection with metastructures, which are formed in the course of necessarily selective perception. Metastructures are therefore part of consciousness and apply to all central nervous systems. So it is not that we have consciousness and now have to consider - in a very dualistic way - how I or will comes about. Both are part of consciousness as a property of the brain. The brain does not produce decisions that I then consciously perceive at some point, as Libet thinks [5], but it is the integrative I that produces decisions, sometimes consciously, sometimes unconsciously.
    With these metastructures as physiological functions, the mind-body problem has been solved.
    Consciousness then arises as a function of a change of state, namely from state A (input of chaotic stimuli) to state B (coarse-grained structuring) within the framework of the difference between the two. The metastructures created with B have a controlling effect, since their information density is higher, because the coarse-grained points of all superimposed compatible patterns add up in them.
    The physiological precipitation of such topologies or metastructures would have to be investigated.
    Possibly an even more general principle is hidden behind the metastructures: life forms extrapolations on the respective regulatory level (cell, cell structure, organ, nervous system) through the constant difference between the states of the respective work logic, which at the same time show the logic of development and space of possibility.
    The emergence of controlling metastructures would be an integral property of all life. Development (evolution) does not arise from endogenous coincidences, but from the utilization of spaces of possibility that result from the interplay of organism and environment, as a kind of differential to maintain dynamic equilibrium.
    The model of consciousness described here has three advantages: It describes consciousness ontologically unambiguously, it describes consciousness on an operational level and it resolves the dualism of body and mind.
    How realistic is the model? In any case, a model has to solve the mind-body problem, and this one does so in a non-dualistic way. It must therefore be able to explain the phenomenon of conscious decision-making physiologically if the mental is not to be an entity of its own that is detached from the physical. The model fulfills this requirement.
    The creation of such a metastructure must be explainable in connection with perception and its processing. Ultimately, it must be possible to show how order structures arise from sensory stimuli. The model fulfills both of these.
    The task that remains is to find such structures [6,7].
    1. G. Tononi et al., Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 2016.
    2. https://www.spektrum.de/lexikon/psychologie/mustererkennen/10194
    3. Erik P. Hoel et al., Can the macro beat the micro? Integrated information across spatiotemporal scales. Neuroscience of Consciousness, 2016, Oxford University Press.
    4. J. Piaget: The child's worldview. dtv/Klett-Cotta, Munich, 1978.
    5. Benjamin Libet: Mind Time. The Temporal Factor in Consciousness. Harvard University Press, Cambridge, MA, 2004.
    6. Ji N., Madan G., Fabre G., Dayan A., Baker C.N., Nwabudike I., Flavell (2020) A neural circuit for flexible control of persistent behavioral states. Dryad Digital Repository. https://doi.org/10.5061/dryad.3bk3j9kh3
    7. Ryoma Hattori, Takaki Komiyama, Context-dependent persistency as a coding mechanism for robust and widely distributed value coding. Neuron, 2021. https://doi.org/10.1016/j.neuron.2021.11.001

  3. Dear Romain, Wolfgang, and Larissa,

    I have read, re-read, and intuitively examined every one of the 195 Sutras of Patanjali's Yoga practice aimed at differentiating consciousness and the spectrum thereof. To be able to supplant Patanjali's methods and their outcome, I have examined IIT by Tononi, Orchestrated Objective Reduction by Sir Roger Penrose, David Chalmers's Hard Problem, Tegmark's Perceptronium idea, Aaronson's Reductio ad Absurdum, Lanza's Biocentrism, and John Searle's Biology of Consciousness.

    What I find is hypotheticals after hypotheticals after hypotheticals. I should emphasise that there has to be an electromagnetics of consciousness in a universe that highlights itself with electromagnetic light, coming down to humans with maximally evolved brains who are part of an Earth with a magnetic field. To me, consciousness shapes our perceptions by the Quantum Field of Biocentrism. Consciousness creates reality and life itself. It appears that consciousness and perceptions are integral to the workings of our universe, created by a conscious observer.

    The very fact that there is a gradation of consciousness across birds, mammals, and humans tells me that it is intrinsic to the biology (neuroscience) of our brain. Even in humans, it evolves as a newborn baby grows. Perceptions evolve in time. Our DNA has some degree of consciousness and possesses the means to differentiate it further.

  4. It's a strange relationship between AI and neuroscience. It seems like they're constantly talking past each other. AI wants to simulate consciousness with artificial neural networks, neuroscience takes the analogy of artificial networks to explain the human brain. A circular argument. One seems to forget that there is a machine on one side and an organ made of flesh and blood on the other.
    Metaphors for the brain existed before computers. When the ancient Egyptians built the first irrigation systems, an analogy was made with humans. From then on, people spoke of canal systems that cross the body; with industrialization, the machine came into focus.
    It is entirely conceivable to give machines human-like consciousness. For this, however, it is necessary to name the principle of human thinking, or better: the principle by which life generally 'works', and that means self-organization, more precisely: self-organization 2.0. However, if it is not to remain an empty phrase, this term has to be put into concrete terms. And one has to distinguish it from such processes in inanimate nature*. Only then can it be worked out how the brain creates order from chaotic stimuli, and how these structures produce blurring through superimposition, which is extremely important for survival. The YES/NO of AI contrasts with the MAYBE of human thinking. Finally, the specific functional architecture plays a role, at the center of which is the I as a control unit. After all, it is less about the aspect of computing and more about that of adaptation. And this presupposes both subject and object.
    -----------------
    * Self-organization 2.0 includes self-reproduction including self-repair as well as the genesis of metastructures (see above) as control units of internal and external regulation.
    And one shouldn't forget: mathematics doesn't explain anything, it only describes, it codes like a foreign language. Explanations must be made one step earlier, through philosophy or methodology.

  5. It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  6. Before you recreate something, you have to understand how it works. In neuroscience, however, one does not understand how consciousness arises. Renowned people then wonder whether their smartphone might be conscious. In AI, one tries to put individual parts together and hopes that at some point consciousness will suddenly emerge. Consciousness cannot be explained in terms of physics, but only in terms of biology. But only when the methodology of biology has been renewed. And only then can one try to recreate human intelligence by machine.
    Using a phenomenological approach, analyzing the elements that constitute a system is important, but not enough. It's about recognizing the principle that underlies how it works. This is much more difficult in the realm of life than in inanimate nature, because there states can be relatively easily reduced to the elements. Analysis and reduction are the same there. Quite different with all living things: Here analysis and reduction should not be confused, because between a state A and a state B there are complex reaction cycles. If you reduce these to their elements, you mentally destroy them. You have to reduce to the principles. Thus any superficial analogy is out of the question.
    But do we need human-like machine intelligence? Wouldn't it be safer for us to just build smart idiots?
    The only reason I would like to build machine human intelligence is to see if my model of consciousness is correct.
