What is computational neuroscience? (XXX) Is the brain a computer?

It is sometimes stated as an obvious fact that the brain carries out computations. Computational neuroscientists sometimes see themselves as looking for the algorithms of the brain. Is it true that the brain implements algorithms? My point here is not to answer this question, but rather to show that the answer is not self-evident, and that it can only be true (if at all) at a fairly abstract level.

One line of argument is that the models of the brain we find in computational neuroscience (neural network models) are algorithmic in nature, since we simulate them on computers. And wouldn’t it be a sort of vitalistic claim to say that neural networks cannot, in principle, be simulated on a computer?

There is an important confusion in this argument. At a low level, neural networks are modelled biophysically as dynamical systems, in which the temporality corresponds to the actual temporality of the real world (as opposed to the discrete temporality of algorithms). Mathematically, these are typically differential equations, possibly hybrid systems (i.e. coupled by timed pulses), in which time is a continuous variable. Such models can of course be simulated on a computer using discretization schemes: for example, we choose a time step and compute the state of the network at time t+dt from the state at time t. This algorithm, however, implements a simulation of the model; it is not the model that implements the algorithm. The discretization is nowhere to be found in the model. The model itself, being a continuous-time dynamical system, is not algorithmic in nature. It is not described as a discrete sequence of operations; only the simulation of the model is algorithmic, and different algorithms can simulate the same model.
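
To make the distinction concrete, here is a minimal sketch (in Python, with made-up parameters) of an Euler simulation of a continuous-time neural model, a leaky integrate-and-fire neuron. The time step dt belongs to the simulation, not to the model: two values of dt are two different algorithms approximating the same model.

```python
def simulate_lif(dt, T=0.1, tau=0.01, I=2.0, threshold=1.0, v_reset=0.0):
    """Euler simulation of a leaky integrate-and-fire model:
    tau dv/dt = -v + I, with a spike and a reset when v crosses the threshold.
    The time step dt is a property of the simulation, not of the model."""
    v = 0.0
    spike_times = []
    for i in range(int(T / dt)):
        v += dt * (-v + I) / tau          # Euler discretization of the ODE
        if v >= threshold:                # threshold crossing: emit a spike
            spike_times.append(i * dt)
            v = v_reset
    return spike_times

# Two different discretizations are two different algorithms
# simulating the same continuous-time model:
print(simulate_lif(dt=1e-4)[:3])
print(simulate_lif(dt=1e-5)[:3])
```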

If we put this confusion aside, then the claim that neural networks implement algorithms becomes much less obvious. It means that trajectories of the dynamical system can be mapped to the discrete flow of an algorithm. This requires: 1) identifying states with representations of some variables (for example stimulus properties, or symbols); 2) identifying trajectories from one state to another with specific operations. In addition, for the algorithmic view to be of any use, there should be a sequence of operations, not just one operation (i.e., describing the output as a function of the input is not an algorithmic description).

A key difficulty in this identification is temporality: the state of the dynamical system changes continuously, so how can this be mapped to discrete operations? A typical approach in neuroscience is to consider not states but properties of trajectories. For example, one would consider the average firing rate in a population of neurons in a given time window, and the rate of another population in another time window. The relation between these two rates in the context of an experiment would define an operation. As stated above, a sequence of such relations would have to be identified in order to qualify as an algorithm. But this mapping seems possible only within a feedforward flow; coupling poses a much greater challenge for an algorithmic description. No known nervous system, however, has a feedforward connectome.
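
As a toy illustration of this kind of mapping (all spike times and population sizes below are made up), one "operation" would be the relation between the average rate of one population in an early window and the rate of another population in a later window:

```python
import numpy as np

def window_rate(spike_times, t_start, t_stop, n_neurons):
    """Average firing rate (spikes/s per neuron) of a population in a time window."""
    spike_times = np.asarray(spike_times)
    n_spikes = np.sum((spike_times >= t_start) & (spike_times < t_stop))
    return n_spikes / (n_neurons * (t_stop - t_start))

# Hypothetical spike times (in seconds) recorded from two populations of 100 neurons:
pop_A = [0.012, 0.015, 0.030, 0.041, 0.055]   # "input" population
pop_B = [0.120, 0.131, 0.135, 0.150]          # "output" population

# One "operation": the relation between the rate of A in an early window
# and the rate of B in a later window.
rate_A = window_rate(pop_A, 0.0, 0.1, n_neurons=100)
rate_B = window_rate(pop_B, 0.1, 0.2, n_neurons=100)
print(rate_A, rate_B)
```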

I am not claiming here that the function of the brain (or mind) cannot possibly be described algorithmically. Probably some of it can be. My point is rather that a dynamical system is not generically algorithmic. A control system, for example, is typically not algorithmic (see the detailed example in Tim van Gelder, "What might cognition be, if not computation?"). Thus a neural dynamical system can only be seen as an algorithm at a fairly abstract level, which can probably address only a restricted subset of its function. It could be that control, which also attaches function to dynamical systems, is a more adequate metaphor for brain function than computation. Is the brain a computer? Given the rather narrow applicability of the algorithmic view, the reasonable answer should be: quite clearly not (maybe part of cognition could be seen as computation, but not brain function in general).

What is computational neuroscience? (XXIX) The free energy principle

The free energy principle is the theory that the brain manipulates a probabilistic generative model of its sensory inputs, which it tries to optimize by either changing the model (learning) or changing the inputs (action) (Friston 2009; Friston 2010). The “free energy” is related to the error between predictions and actual inputs, or “surprise”, which the organism wants to minimize. It has a more precise mathematical formulation, but the conceptual issues I want to discuss here do not depend on it.
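
For reference, the formulation usually given (e.g. in Friston 2010; the notation below is mine) is roughly the following, where x is the sensory input, v the hidden causes, m the generative model and q the recognition density:

```latex
\text{surprise} = -\log p(x \mid m), \qquad
F = \mathbb{E}_{q(v)}\!\left[\log q(v) - \log p(x, v \mid m)\right]
  = D_{\mathrm{KL}}\!\left(q(v)\,\|\,p(v \mid x, m)\right) - \log p(x \mid m)
  \;\ge\; -\log p(x \mid m)
```

Minimizing F with respect to q (perception and learning) tightens this upper bound on surprise, while acting to change x can reduce the surprise itself; none of the conceptual issues discussed here depend on these details.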

Thus, it can be seen as an extension of the Bayesian brain hypothesis that accounts for action in addition to perception. It shares the conceptual problems of the Bayesian brain hypothesis, namely that it focuses on statistical uncertainty, inferring the variables of a model (called “causes”), when the challenge is to build and manipulate the structure of the model. It also shares issues with the predictive coding concept, namely that there is a conflation between a technical sense of “prediction” (expectation of the future signal) and a broader sense that is more ecologically relevant (if I do X, then Y will happen). In my view, these are the main issues with the free energy principle. Here I will focus on an additional issue that is specific to the free energy principle.

The specific interest of the free energy principle lies in its formulation of action. It resonates with a very important psychological theory called cognitive dissonance theory. That theory says that you try to avoid dissonance between facts and your system of beliefs, by either changing the beliefs in a small way or avoiding the facts. When there is a dissonant fact, you generally don’t throw away your entire system of beliefs: rather, you alter the interpretation of the fact (think of political discourse or, in fact, scientific discourse). Another strategy is to avoid the dissonant facts: for example, to read newspapers that tend to share your opinions. So there is some support in psychology for the idea that you act so as to minimize surprise.

Thus, the free energy principle acknowledges the circularity of action and perception. However, it is quite difficult to make it account for a large part of behavior. A large part of behavior is directed towards goals; for example, to get food and sex. The theory anticipates this criticism and proposes that goals are ingrained in priors. For example, you expect to have food. So, for your state to match your expectations, you need to seek food. This is the theory’s solution to the so-called “dark room problem” (Friston et al., 2012): if you want to minimize surprise, why not shut off stimulation altogether and go to the closest dark room? Solution: you are not expecting a dark room, so you are not going there in the first place.

Let us consider a concrete example to show that this solution does not work. There are two kinds of stimuli: food, and no food. I have two possible actions: to seek food, or to sit and do nothing. If I do nothing, then with 100% probability, I will see no food. If I seek food, then with, say, 20% probability, I will see food.

Let’s say this is the world in which I live. What does the free energy principle tell us? To minimize surprise, it seems clear that I should sit: I am then certain to see no food. No surprise at all. The proposed solution is that I have a prior expectation of seeing food. So to minimize surprise, I should put myself in a situation where I might see food, i.e., seek food. This seems to work. However, if there is any learning at all, then I will quickly observe that the probability of seeing food is actually 20%, and my expectations should be adjusted accordingly. I will also observe that between two food expeditions, the probability of seeing food is 0%. Once this has been observed, surprise is minimal when I do not seek food. So, I die of hunger. It follows that the free energy principle does not survive Darwinian competition.
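
To make the arithmetic explicit, here is a minimal sketch (in Python, with the numbers of the example above) of the expected surprise of each policy once the true probabilities have been learned:

```python
import math

def expected_surprise(probs):
    """Expected surprise (Shannon entropy, in bits) of the outcome distribution:
    E[-log2 p(outcome)]."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Once the model has been learned, the outcome distributions are:
sit  = [1.0]          # no food with probability 1
seek = [0.2, 0.8]     # food with probability 0.2, no food with probability 0.8

print(expected_surprise(sit))   # 0.0 bits: sitting is perfectly unsurprising
print(expected_surprise(seek))  # ~0.72 bits: seeking food is more surprising
```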

Thus, either there is no learning at all and the free energy principle is just a way of calling predefined actions “priors”; or there is learning, but then it doesn’t account for goal-directed behavior.

The idea of acting so as to minimize surprise resonates with some aspects of psychology, like cognitive dissonance theory, but it does not constitute a complete theory of mind, except possibly of the depressed mind. See for example the experience of flow (as in surfing): you seek a situation that is controllable but sufficiently challenging that it engages your entire attention; in other words, you voluntarily expose yourself to a moderate amount of surprise, and in any case certainly not a minimal amount of surprise.

Draft of chapter 6, Spike initiation with an initial segment

I have just uploaded an incomplete draft of chapter 6, "Spike initiation with an initial segment". This chapter deals with how spikes are initiated in most vertebrate neurons (and also some invertebrate neurons), where there is a hotspot of excitability close to a large soma. This situation has a number of interesting implications which make spike initiation quite different from the situation investigated by Hodgkin and Huxley, that of stimulating the middle of an axon. Most of the chapter describes the theory that I have developed to analyze this situation, called "resistive coupling theory" because the axonal hotspot is resistively coupled to the soma.

The chapter is currently unfinished, because a few points require a little more research, which we have not finished. The presentation is also a bit more technical than I would like, so this is really a draft. I wanted nonetheless to release it now, as I have not uploaded a chapter for a while and it could be some time before the chapter is finished.

What is computational neuroscience? (XXVIII) The Bayesian brain

Our sensors give us incomplete, noisy, and indirect information about the world. For example, estimating the location of a sound source is difficult because in natural contexts, the sound of interest is corrupted by other sound sources, reflections, etc. Thus it is not possible to know the position of the source with certainty. The ‘Bayesian coding hypothesis’ (Knill & Pouget, 2004) postulates that the brain represents not the most likely position, but the entire probability distribution of the position. It then uses those distributions to do Bayesian inference, for example when combining different sources of information (say, auditory and visual). This would allow the brain to infer the most likely position optimally. There is indeed some evidence for optimal inference in psychophysical experiments, although there is also some contradictory evidence (Rahnev & Denison, 2018).
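
As a concrete illustration of the kind of computation this hypothesis has in mind, here is a minimal sketch (in Python, with made-up numbers) of optimal combination of two independent Gaussian cues, say an auditory and a visual estimate of source azimuth:

```python
import numpy as np

def combine_gaussian_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Optimal (Bayesian) combination of two independent Gaussian cues, assuming
    a flat prior: the posterior is Gaussian, its precision is the sum of the cue
    precisions and its mean is the precision-weighted average of the cue means."""
    w_a, w_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    mu_post = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma_post = np.sqrt(1.0 / (w_a + w_v))
    return mu_post, sigma_post

# Hypothetical estimates of source azimuth (degrees): the auditory cue is noisier
# than the visual one, so the combined estimate is pulled towards the visual cue.
print(combine_gaussian_cues(mu_a=10.0, sigma_a=8.0, mu_v=4.0, sigma_v=2.0))
```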

The idea has some appeal. The problem is that, by framing perception as a statistical inference problem, it focuses on the most trivial type of uncertainty, statistical uncertainty. This is illustrated by the following quote: “The fundamental concept behind the Bayesian approach to perceptual computations is that the information provided by a set of sensory data about the world is represented by a conditional probability density function over the set of unknown variables”. Implicit in this representation is a particular model, for which the variables are defined. Typically, one model describes a particular experimental situation. For example, the model would describe the distribution of auditory cues associated with the position of the sound source. Another situation would be described by a different model; for example, a situation with two sound sources would require a model with two variables. Or if the listening environment is a room whose size might vary, then we would need a model with the dimensions of the room as variables. In any of these cases, where we have identified and fixed the parametric sources of variation, the Bayesian approach works fine, because we are indeed facing a problem of statistical inference. But this framework does not fit real-life situations. In real life, perceptual scenes have variable structure, and that structure is what corresponds to the model in statistical inference (there is one source, or two sources, we are in a room, the second source comes from the window, etc.). The perceptual problem is therefore not just to infer the parameters of the model (the dimensions of the room, etc.), but also the model itself, its structure. Thus, it is not possible in general to represent an auditory scene by a probability distribution on a set of parameters, because the very notion of a parameter already assumes that the structure of the scene is known and fixed.

Inferring the parameters of a known statistical model is relatively easy. What is really difficult, and is still challenging for machine learning algorithms today, is to identify the structure of a perceptual scene: what constitutes an object (object formation), how objects are related to each other (scene analysis). These fundamental perceptual processes do not exist in the Bayesian brain. This touches on two very different types of uncertainty: statistical uncertainty, variations that can be interpreted and expected in the framework of a model; and epistemic uncertainty, where the model itself is unknown (the difference has been famously explained by Donald Rumsfeld).

Thus, the “Bayesian brain” idea addresses an interesting problem (statistical inference), but it trivializes the problem of perception, by missing the fact that the real challenge is epistemic uncertainty (building a perceptual model), not statistical uncertainty (tuning the parameters): the world is not noisy, it is complex.

Is a thermostat conscious?

A theory of consciousness initially proposed by David Chalmers (in his book The Conscious Mind) is that consciousness (or experience) is a property of information-processing systems. It is an additional property, not logically implied by physical laws; a new law of nature. The theory was later formalized by Giulio Tononi into Integrated Information Theory (IIT), based on Shannon’s mathematical concept of information. One important feature of this theory is that it is a radical form of panpsychism: it assigns consciousness (to different degrees) to virtually anything in the world, including a thermostat.

The Bewitched thought experiment

I have criticized IIT previously on the grounds that it fails to define in a sensible way what makes a conscious subject (e.g. a subsystem of a conscious entity would be another conscious entity, so for example your brain would produce an infinite number of minds). But here I want to comment specifically on the example of the thermostat. It is an interesting example brought up by Chalmers in his book. The reasoning is as follows: a human brain is conscious; a mouse brain is probably conscious, but to a somewhat lower degree (for example, no self-consciousness). As we go down the scale of information-processing systems, the system might be less and less conscious, but why would there be a definite threshold for consciousness? Why would a billion neurons be conscious but not a million? Why would a million neurons be conscious but not one thousand? And how about just one neuron? How about a thermostat? A thermostat is an elementary information-processing system with just two states, so maybe, Chalmers argues, the thermostat has a very elementary form of experience.

To claim that a thermostat is conscious defies intuition, but I would not follow Searle in insisting that the theory must be wrong because it assigns consciousness to things that we wouldn’t intuitively think are conscious. As I argued in a previous post, to claim that biology tells us that only brains are conscious is to use a circular argument. We don’t know whether anything other than a brain is conscious, and since consciousness is subjective, deciding whether anything is conscious is going to involve some theoretical considerations. Nonetheless, I am skeptical that a thermostat is conscious.

I propose to examine the Bewitched thought experiment. In the TV series Bewitched, Samantha the housewife twitches her nose and everyone freezes except her. Then she twitches her nose again and everyone unfreezes, without noticing that anything happened. For them, time has effectively stopped. The question is: was anyone experiencing anything during that time? To me, it is clear that no one can experience anything if time is frozen. In fact, that whole time has not existed at all for the conscious subject. It follows that a substrate with a fixed state (e.g. hot/cold) cannot experience anything, because time is effectively frozen for that substrate. Experience requires a flow of time, a change in structure through time. I leave it open whether the interaction of the thermostat with the room might produce experience for that coupled system (see below for some further thoughts).

What is “information”?

In my view, the fallacy in the initial reasoning is to put the thermostat and the brain on the same scale, the scale of information-processing systems. But as I have argued before (mostly following Gibson’s arguments), it is misleading to see the brain as an information-processing system. The brain can only be seen to transform information of one kind into information of another kind by an external observer, because the very concept of information is something that makes sense to a cognitive/perceptual system. The notion of information used by IIT is Shannon information, a notion from communication theory. This is an extrinsic notion of information: for example, neural activity is informative about objects in the world in the sense that properties of those objects can be inferred from neural activity. But this is totally unhelpful for understanding how the brain, which only ever gets to deal with neural signals and not with things in the world, sees the world (see this argument in more detail in my paper Is coding a relevant metaphor for the brain?).

Let’s clarify with a concrete case: does the thermostat perceive temperature? The thermostat can be in different states depending on temperature, but from its perspective, there is no temperature. There are changes in state that seem to be unrelated to anything else (there is literally nothing else for the thermostat). One could replace the temperature sensor with some other sensor, or with a random number generator, and there would be literally no functional change in the thermostat itself. Only an external observer can link the thermostat’s state with temperature, so the thermostat cannot possibly be conscious of temperature.

Thus, Shannon’s notion of information is inappropriate for understanding consciousness. Instead of extracting information in the sense of communication theory, what the brain might do is build models of sensory (sensorimotor) signals from its subjective perspective, in the same way as scientists make models of the world with observations (= sensory signals) and experiments (= actions). But this intrinsic notion of information, which corresponds for example to laws of physics, is crucially not Shannon’s notion of information. And it is also not the kind of information that a thermostat deals with.

This inappropriate notion of information leads to what is in my view a rather absurd quantitative scale of consciousness, according to which entities are more or less conscious along a graded scale (phi). Differences in consciousness are qualitative, not quantitative: there is dreaming, being awake, being self-conscious or not, etc. These are not different numbers. This odd analog scale arises because Shannon information is counted in bits. But information in the sense of knowledge (science) is not counted in bits; there are different kinds of knowledge, with different structures and different relations between them.

Subjective physics of a thermostat

But let us not throw away Chalmers’ interesting thought experiment just yet. Let us ask, following Chalmers: what does it feel like to be a thermostat? We will examine this not with Shannon’s unhelpful notion of information but with what I have called “subjective physics”: the laws that govern sensory signals and their relations to actions, from the perspective of the subject. This will define my world from a functional viewpoint. Let’s say I am a conscious thermostat, a homunculus inside the thermostat. All I can observe is a binary signal. Then there is a binary action that I can take, which for an external observer corresponds to turning on the heat. What kind of world does that make for me? Let’s say I am a scientist homunculus: what kind of laws about the world can I infer?

If I’m a conventional thermostat, then the action will be automatically triggered when the signal is in a given state (“cold”). After some time, the binary signal will switch and so will the action. So in fact there is an identity between signal and action, which means that all I really observe is just the one binary signal, switching on and off, probably with some kind of periodicity. This is the world I might experience, as a homunculus inside the thermostat (note that to experience the periodicity requires memory, which a normal thermostat doesn’t have). In a way, I’m a “locked-in” thermostat: I can make observations, but I cannot freely act.

Let’s say that I am not locked-in and have a little more free will, so that I can decide whether to act (heat) or not. Then my world is a little more interesting: my action can trigger a switch of the binary signal after some latency (again requiring some memory), and when I stop, the binary signal switches back after a time that depends on how long my previous action lasted. So here I have a world that is much more structured, with relatively complex laws which, in a way, define the concept of “temperature” from the perspective of the thermostat.
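
Here is a toy simulation (in Python; the room dynamics and all parameters are made up) of these two thermostats. The homunculus only ever sees the pairs (signal, action); the room and its temperature exist for it only as the lawful relation between the two.

```python
import random

def thermostat_world(policy, T0=18.0, threshold=20.0, n_steps=200, dt=1.0):
    """Toy room dynamics, hidden from the homunculus: heating warms the room,
    otherwise it relaxes towards the outside temperature. The homunculus only
    observes a binary signal ('cold' or not) and its own binary action."""
    T, outside = T0, 15.0
    log = []
    for t in range(n_steps):
        signal = T < threshold              # the only observable: a binary signal
        action = policy(t, signal)          # the only possible intervention
        log.append((signal, action))
        # Hidden physics of the room (not available to the homunculus):
        T += dt * (1.0 * action - 0.1 * (T - outside))
    return log

# A "locked-in" thermostat: the action is simply a copy of the signal, so the
# homunculus effectively observes one binary variable switching on and off.
locked_in = thermostat_world(lambda t, signal: signal)

# A thermostat with some "free will": it heats at random, and can observe that
# the signal switches off some time after it starts heating, and back on
# some time after it stops.
free = thermostat_world(lambda t, signal: random.random() < 0.5)
```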

So if a thermostat were conscious, we would have a rough idea of the kind of world it might experience (although not of what that experience feels like), and even in this elementary example, these experiences cannot be measured in bits; and this is setting aside the fact that, in my view, a thermostat is not conscious anyway.

What does Gödel's theorem mean?

Gödel's theorem is a result in mathematical logic, which is often stated as showing that “there are true things that cannot be proved”. It is sometimes used to comment on the limits of science, or on the superiority of human intuition. Here I want to clarify what this theorem means and what its epistemological implications are.

First, this phrasing is rather misleading. It makes the result sound almost mystical. If you phrase the result differently, avoiding the potentially confusing reference to truth, it is not that mystical anymore. Here is how I would phrase it: you can always add an independent axiom to a finite system of axioms (provided the system is consistent and rich enough to express arithmetic). This is not an obvious mathematical result, but I wouldn't think it defies intuition.

Why is this equivalent to the first phrasing? If the additional axiom is independent of the set of axioms, then it cannot be proved from them (by definition). Yet as a logical proposition it has to be either true or false. So either it is true, or its negation is true, but neither can be proved. What is misleading in the first phrasing is that the statement “there are true things” is contextual. I can start from a set of axioms and add one, and that new one will be true (since it is an axiom). Instead I could add its negation, and then that one will be true. That the proposition is true is not a universal truth, as the phrasing “there are true things” would suggest. It is true in a particular mathematical world, and you can consider another one in which it is not true. Famous examples are Euclidean and non-Euclidean geometries, which are mutually inconsistent sets of axioms.

So, what Gödel's theorem says is simply that no finite system of axioms is complete, in the sense that you can always add one without making the system inconsistent.

What are the epistemological implications? It does not mean that there are things that science cannot prove. Laws of physics are not proved by deduction anyway: they are hypothesized and empirically tested, and all laws are provisional. Nevertheless, it does raise some deep philosophical questions, which have to do with reductionism. I am generally critical of reductionism, but more specifically of methodological reductionism, the idea that a system can be understood by understanding the elements that compose it. For example: understand neurons and you will understand the brain. I think this view is wrong, because it is the relations between neurons, at the scale of the organism, that make a brain. The right approach is systemic rather than reductionist. Many scientists frown at criticisms of reductionism, but this is only because they confuse methodological and ontological reductionism. Ontological reductionism means that reality can be reduced to a small number of types of things (e.g. atoms) and laws, and that everything can be understood in these terms. For example, the mind can in principle be understood in terms of interactions of the atoms that constitute the brain. Most scientists seem to believe in ontological reductionism.

Let us go back now to Gödel's theorem. An interesting remark made by the theoretical biologist Robert Rosen is that Gödel's theorem makes ontological reductionism implausible to him. Why? The theorem says that, whatever system of axioms you choose, it will always be possible to add one that is independent. Let us say we have agreed on a small set of fundamental physical laws, with strong empirical support. To establish each law, we postulate it and test it empirically. At a macroscopic level, scientists postulate and test all sorts of laws. How can we claim that any macroscopic law necessarily derives from the small set of fundamental laws? Gödel's theorem says that there are laws that you can express but that are independent of the fundamental laws. This means that there are laws that can only be established empirically, not formally, in fact just like the set of fundamental laws. Of course it could be the case that most of what matters to us is captured by a small set of laws. But maybe not.

A brief critique of predictive coding

Predictive coding is becoming a popular theory in neuroscience (see for example Clark 2013). In a nutshell, the general idea is that brains encode predictions of their sensory inputs. This is an appealing idea because superficially, it makes a lot of sense: functionally, the only reason why you would want to process sensory information is if it might impact your future, so it makes sense to try to predict your sensory inputs.

There are substantial problems in the details of predictive coding theories, for example the arbitrariness of the metric by which you judge that your prediction matches the sensory inputs (what is important?), or the fact that predictive coding schemes encode noise as well as signal. But I want to focus on two more fundamental problems. One has to do with “coding”, the other with “predictive”.

It makes sense that brains anticipate. But does it make sense that brains code? Coding is a metaphor of a communication channel, and this is generally not a great metaphor for what the brain might do, unless you fully embrace dualism. I discuss this at length in a recent paper (Is coding a relevant metaphor for the brain?) so I won’t repeat the entire argument here. Predictive coding is a branch of efficient coding, so the same fallacy underlies its logic: 1) neurons encode sensory inputs; 2) living organisms are efficient; therefore brains must encode efficiently. (1) is trivially true in the sense that one can define a mapping from sensory inputs to neural activity. (2) is probably true to some extent (evolutionary arguments). So the conclusion seems to follow. Critiques of efficient coding have focused on the “efficient” part: maybe the brain is not that efficient after all. But the error is elsewhere: living organisms are certainly efficient, but it doesn’t follow that they are efficient at coding. They might be efficient at surviving and reproducing, and it is not obvious that this entails coding efficiency (see the last part of the above-mentioned paper for a counter-example). So the real strong assumption is there: that the main function of the brain is to represent sensory inputs.

The second problem has to do with “predictive”. It makes sense that an important function of brains, or in fact of any living organism, is to anticipate (see the great Anticipatory Systems by Robert Rosen). But to what extent do predictive coding schemes actually anticipate? First, in practice, these are generally not prediction schemes but compression schemes, in the sense that they do not tell us what will happen next but what happens now. This is at least the case for the classic Rao & Ballard (1999) model. Neurons encode the difference between the expected input and the actual input: this is compression, not prediction. A sort of prediction is used in order to compress: other neurons (in higher layers) produce predictions of the inputs to those neurons, but the term prediction is used in the sense that the inputs are not known to the higher-layer neurons, not in the sense that the “prediction” occurs before the inputs. Thus the term “predictive” is misleading because it is not used in a temporal sense.
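
To see why this is compression rather than temporal prediction, here is a minimal sketch (in Python; this is only the core idea, not the actual Rao & Ballard model): a higher layer holds an estimate of the current input, the lower layer transmits only the residual, and the estimate is updated from that residual. Nothing about the future is involved.

```python
import numpy as np

def predictive_coding_step(x, estimate, lr=0.1):
    """One step of a toy 'predictive coding' scheme: the lower layer encodes the
    residual between the current input x and the higher layer's 'prediction'
    (its running estimate of x), and the higher layer updates its estimate from
    that residual. This compresses the present signal; it does not anticipate
    the future."""
    residual = x - estimate           # what the lower layer transmits
    estimate = estimate + lr * residual
    return residual, estimate

# A slowly varying input: the residuals are small, so they are cheaper to transmit
# than the raw signal, which is the point of compression (not of prediction).
estimate = 0.0
for x in np.sin(np.linspace(0, np.pi, 20)):
    residual, estimate = predictive_coding_step(x, estimate)
    print(round(x, 3), round(residual, 3))
```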

However, it is relatively easy to imagine how predictive coding might be about temporal predictions, although the neural implementation is not straightforward (delays, etc.). So I want to make a deeper criticism. I started by claiming that it is useful to predict sensory inputs. I am taking this back (I can, because I said it was superficial reasoning). It is not useful to know what will happen. What is useful is to know what might happen, depending on what you do. If there is nothing you can do about the future, what is the functional use of predicting it? So what is useful is to predict the future conditionally on a set of potential actions. This is about manipulating models of the world, not representing the present.

The substrate of consciousness

Here I want to stir some ideas about the substrate of consciousness. Let us start with a few intuitive ideas: a human brain is conscious; an animal brain is probably conscious; a stone is not conscious; my stomach is not conscious; a single neuron or cell is not conscious; the brainstem or the visual cortex is not a separate conscious entity; two people do not form a single conscious entity.

Many of these ideas are in fact difficult to justify. Let us start with single cells. To see the problem, think first of organisms that consist of a single cell, for example bacteria or ciliates. In this video, an amoeba engulfs and then digests two paramecia. At some point, you can see the paramecia jumping all around as if they were panicking. Are these paramecia conscious, do they feel anything? If I did not know anything about their physiology or size, my first intuition would be that they do feel something close to fear. However, knowing that these are unicellular organisms and therefore do not have a nervous system, my intuition is rather that they are not actually conscious. But why?

Why do we think a nervous system is necessary for consciousness? One reason is that organisms to which we ascribe consciousness (humans and animals, or at least some animals) all have a nervous system. But it’s a circular argument, which has no logical validity. A more convincing reason is that in humans, the brain is necessary and sufficient for consciousness. A locked-in patient is still conscious. On the other hand, any large brain lesion has an impact on conscious experience, and specific experiences can be induced by electrical stimulation of the brain.

However, this tends to prove that the brain is the substrate of my experience, but it says nothing about, say, the stomach. The stomach also has a nervous system: it receives sensory signals and controls muscles. If it were conscious, I could not experience it, by definition, since you can only experience your own consciousness. So it could also be, just as for the brain, that the stomach is necessary and sufficient for the consciousness of a gut mind: perhaps if you stimulate it electrically, it triggers some specific experience. As ridiculous as it might sound, I cannot discard the idea that the stomach is conscious just because I don’t feel that it is conscious; I will need arguments of a different kind.

I know I am conscious, but I do not know whether there are other conscious entities in my body. Unfortunately, this applies not just to the stomach, but more generally to any other component of my body, whether it has a nervous system or not. What tells me that the liver is not conscious? Imagine I am a conscious liver. From my perspective, removing one lung, or a foot, or a large part of the visual cortex, has no effect on my conscious experience. So the fact that the brain is necessary and sufficient for your conscious experience does not rule out that some other substrate is necessary and sufficient for the conscious experience of another entity in your body. Now I am not saying that the question of liver consciousness is undecidable, only that we will need more subtle arguments than those presented so far (discussed later).

Let us come back to the single cell. Although I feel that a unicellular organism is not conscious because it doesn’t have a nervous system, so far I have no valid argument for this intuition. In addition, it turns out that Paramecium, like many other unicellular organisms including (at least some) bacteria, is an excitable cell with voltage-gated channels, structurally very similar to a neuron. So perhaps it has some limited form of consciousness after all. If this is true, then I would be inclined to say that all unicellular organisms are also conscious, for example bacteria. But then what about a single cell (e.g. a neuron) in your body: is it conscious? One might object that a single cell in a multicellular organism is not an autonomous organism. To address this objection, I will go one level below the cell.

Eukaryotic cells (e.g. your cells) have little energy factories called mitochondria. It turns out that mitochondria are in fact bacteria that were engulfed by cells a very long (evolutionary) time ago. They have their own DNA, but they now live and reproduce inside cells. This is a case of endosymbiosis. If mitochondria were conscious before they lived inside cells, why would they have lost consciousness when they started living inside cells? So if we think bacteria are conscious, then we must admit that we have trillions of conscious entities in the cells of our body, not counting the bacteria in our digestive system. The concept of an autonomous organism is an illusion: any living organism depends on interactions with an ecosystem, and that ecosystem might well be a cell or a multicellular organism.

By the same argument, if we think unicellular organisms are conscious, then single neurons should be conscious, as well as all single cells in our body. This is not exclusive of the brain being conscious as a distinct entity.

A plausible alternative, of course, is that single cells are not conscious, although I have not yet proposed a good argument for this alternative. Before we turn to a new question, I will let you contemplate the fact that bacteria can form populations that are tightly coupled by electrical communication. Does this make a bacterial colony conscious?

Let us now turn to another question. We can imagine that a cell is somehow minimally conscious, and that at the same time a brain forms a conscious entity of a different nature. Of course it might not be true, but there is a case for that argument. So now let us consider two people living their own life on opposite sides of the planet. Can this pair form a new conscious entity? Here, there are arguments to answer negatively. This is related to a concept called the unity of consciousness.

Suppose I see a red book. In the brain, some areas might respond to the color and some other areas might respond to the shape. It could then be that the color area experiences redness, and the shape area experiences bookness. But I, as a single conscious unit, experience a red book as a whole. Now if we consider two entities that do not interact, then there cannot be unified experiences: somehow the redness and the bookness must be put together. So the substrate of a conscious entity cannot include parts that do not interact with the rest. Two separated people cannot form a conscious entity. But this does not rule out the possibility that two closely interacting people might form a conscious superentity. Again, I do not believe this is the case, but we need new arguments to rule it out.

Now we finally have something a little substantial: a conscious entity must be made of components in interaction. From this idea follow a few remarks. First, consciousness is not a property of a substrate, but of the activity of a substrate (see a previous blog post on this idea). For example, if we freeze the brain in a particular state, it is not conscious. This rules out a number of inanimate objects (rocks) as conscious. Second, interactions take place in time. For example, it takes some time, up to a few tens of ms, for an action potential to travel from one neuron to another. This implies that a 1 ms time window cannot enclose a conscious experience. The “grain” of consciousness for a human brain should thus be no less than a few tens of milliseconds. In the same way, if a plant is conscious, then that consciousness cannot exist on a short timescale. This puts a constraint on the kind of experiences that can be ascribed to a particular substrate. Does consciousness require a nervous system? Maybe it doesn’t, but at least for large organisms, a nervous system is required to produce experiences on a short timescale.

I want to end with a final question. We are asking what kind of substrate gives rise to consciousness. But does consciousness require a fixed substrate? After all, the brain is dynamic. Synapses appear and disappear all the time, and all the proteins are renewed regularly. The brain is literally a different set of molecules and a different structure from one day to the next. But the conscious entity remains. Or at least it seems so. This is what Buddhists call the illusion of self: contrary to your intuition, you are not the same person today as ten years ago; the self has no objective permanent existence. However, we can say that there is a continuity in conscious experience. That continuity, however, does not rely on a fixed material basis but more likely on some continuity of the underlying activity. Imagine for example a fictional worm that is conscious, but whose substrate of consciousness is local: at some point it is produced by the interaction of neurons at some particular place of the nervous system, then that activity travels along the worm’s nerve cord. The conscious entity remains and does not feel like it is travelling; it is simply grounded on a dynamic substrate.

Now I don’t think that this is true of the brain (or of the worm); rather, long-range synchronization probably has something to do with the generation of a global conscious entity. However, it is conceivable that different subsets of neurons, even though they might span the same global brain areas, are involved in conscious experience at different times. In fact, this is even plausible. Most neurons don’t fire much, perhaps a few Hz on average. But one can certainly have a definite conscious experience over a fraction of a second, and that experience can thus only involve the interaction of a subset of all neurons. We must conclude that the substrate of consciousness is actually not fixed but involves dynamic sets of neurons.

A summary of these remarks. I certainly have raised more questions than I have answered. In particular, it is not clear whether a single cell or a component of the nervous system (stomach, brainstem) is conscious. However, I have argued that: 1) any conscious experience requires the interaction of the components that produce it, and this interaction takes place in time; 2) the set of components that are involved in any particular experience is dynamic, despite the continuity in conscious experience.

Tip for new PIs: always do administrative work at the last minute, or later

This is a tip that has taken me years to really grasp, and I still haven't fully internalized it. I don't like to work at the last minute. If I have something to do and I don't do it, then it stays in the back of my mind until I do it. So, especially for boring tasks like administrative work, I like to get rid of it as soon as possible. That's a mistake. I'm speaking from my experience in France, so maybe it doesn't apply so much elsewhere. The reason it's a mistake is that what you are required to do changes all the time, so the later you do it, the less work you will have to do.

Every new politician seems to want to add a new layer of bureaucracy, regardless of their political orientation, so the amount of administrative work you are required to do as a scientist keeps growing, and it doesn't seem to converge. But setting up new rules and regulations in a complex bureaucratic monster is not easy, so the monster often outputs nonsensical forms and requirements. One example in France is the evaluation of labs (HCERES), whose role is unclear and changing. The amount of redundancy and the absurdity of some requirements are abysmal. For example, you are required to fill in a SWOT diagram, to select and list 20% of all your “outputs”, but also to list each one of them in another form, etc. Because many of the requirements are vague and nonsensical, any organization that deals with them takes some time to converge to a clear set of rules issued to the labs. I have written my evaluation document about four times because of the changing instructions.

Another recent example is the new evaluation system set up by INSERM (the national medical research institution). Someone there (an external consulting company?) apparently decided that an online CV with fields to fill in, instead of an uploaded text, would be more convenient. So for example you have to insert, one by one in web forms, the list of all the journals you have reviewed for in your entire career, and how many papers you have reviewed for each of them. You need to insert the list of all the students you have supervised in your entire career, with names and exact dates, etc., all one by one in separate fields. Imagine that for senior PIs. Guess what: one week before the deadline, the requirement to fill in that CV was lifted for most scientists because of the many complaints (a bit too late for most of them, including me). About a quarter of them still have to do it, but the message says that the format of the CV will change next year since it was not good, so all the work will basically be for nothing.

So here is my conclusion and tip: bureaucracy is nonsense, and don't assume otherwise; just set aside some time at the deadline to do the required work, whatever it might have become by then (and it might have disappeared).

Project: Binaural cues and spatial hearing in ecological environments

I previously laid out a few ideas for future research on spatial hearing:

  1. The ecological situation and the computational problem.
  2. Tuning curves.
  3. The coding problem.

This year I wrote a grant that addresses the first point. My project was rather straightforward:

  1. To make binaural recordings with ear mics in real environments, with real sound sources (actual sounding objects) placed at predetermined positions. This way we obtain distributions of binaural cues conditioned on source direction, capturing the variability due to context.
  2. To measure human localization performance in those situations.
  3. To try to see whether a Bayesian model can account for these results, and possibly for previous psychophysical results (a minimal sketch of such a model is given after this list).
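
As a forward reference for point 3, the kind of model I have in mind could look like the following sketch (in Python; all numbers are placeholders, and in the actual project the likelihoods would come from the measured cue distributions):

```python
import numpy as np

def localize(observed_itd, directions, itd_mean, itd_std, prior=None):
    """Toy Bayesian estimate of source direction from a single binaural cue (ITD).
    itd_mean[d] and itd_std[d] summarize the distribution of ITDs for each
    candidate direction d; in the real project, these would be estimated from
    the ecological recordings."""
    if prior is None:
        prior = np.ones(len(directions)) / len(directions)
    # Gaussian likelihood of the observed ITD for each candidate direction
    likelihood = np.exp(-0.5 * ((observed_itd - itd_mean) / itd_std) ** 2) / itd_std
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return directions[np.argmax(posterior)], posterior

# Placeholder cue statistics: ITD (in microseconds) grows with azimuth, and would
# be more variable in reverberant contexts (larger itd_std).
directions = np.linspace(-90, 90, 37)               # azimuth in degrees
itd_mean = 700 * np.sin(np.radians(directions))     # idealized ITD curve
itd_std = np.full_like(itd_mean, 50.0)              # context-dependent spread

print(localize(observed_itd=300.0, directions=directions,
               itd_mean=itd_mean, itd_std=itd_std)[0])
```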

The project was preselected but unfortunately not funded. I probably won't resubmit it next year, except perhaps as a collaboration. So here it is for everyone to read: my grant application, "Binaural cues and spatial hearing in ecological environments". If you like it, if you want to do these recordings and experiments, please do so. I am interested in the results, but I am happy for someone else to do it. Please contact me if you would like to set up a collaboration or discuss the project. I am especially interested in the theoretical analysis (i.e., the third part of the project). Our experience in the lab is primarily on the theoretical side, but also in signal analysis, and we have also done a number of binaural recordings and some psychophysics.