Connectomics bashing (I) - Can we understand the brain by measuring its structure?

I have been asked to debate connectomics next summer, against a supporter of the connectomics enterprise. I am starting this series of posts to develop some arguments.

Connectomics is the systematic, large-scale mapping of the connections in the nervous system (the “connectome”), and their analysis. It involves the development of new technologies and a considerable amount of financial and human resources. Let me briefly clarify my personal position. Recently there was a debate between Sebastian Seung, an advocate of connectomics, and Tony Movshon, a skeptic of the whole enterprise. I would also qualify myself as a skeptic of connectomics, but it turns out that I ended up leaning towards Seung. Seung considers that we won't be able to understand much about the brain if we don't even know how neurons are wired. Movshon argues that detailed information about the connectome is not necessary, because in his view only statistical information is relevant, as only that information is reproducible across individuals. Simply put, Movshon is a computationalist in the vein of David Marr (“implementation doesn't matter”), while Seung is a classical connectionist. I certainly strongly disagree with Movshon's view that detailed structure is noise or irrelevant. But I am also skeptical about Seung's hopes. Seung considers that studying the connectome will give us the keys to the computations that the brain is doing. I will explain in this post why I don't think this will happen. But first, I want to mention that I do not consider connectomics a useless enterprise. It will probably produce a useful resource. It might even bring some conceptual changes, for example by exposing the limits of classical connectionism. What I disagree with is the general claim that knowing the detailed structure of the brain is the main limiting factor in our basic understanding of the brain.

To a large extent, my criticism is a criticism of overly reductionist accounts of biological systems. So I will start this series of posts with a general question: can we understand the brain by measuring its structure? (Replace “understand” with “simulate” for a criticism of the Human Brain Project.)

In his book, Seung sums up the connectionist view in one sentence: “You are your connectome”. Indeed, in connectionism, memories and computations (input-output operations) are stored in the pattern of connections between neurons, and in their strengths. There is indeed quite a bit of evidence for activity-dependent changes in synaptic connections, in relation to learning. There is also a large body of theoretical work that develops this idea. But of course, we also know of other aspects of structure in neural networks that change and subsequently affect their dynamics. For example, the expression of ionic channels. Or their phosphorylation. Or changes in the morphology of dendrites and axons, or in conduction delays. There are also ephaptic connections, which do not involve synapses at all. There might be long-term changes in glia too. Essentially, every bit of structure in the brain seems to be plastic. Of course this is not news to connectionists. The counter-argument is that connections are what matter most.

In a similar vein, supporters of bottom-up approaches to simulating the brain would argue: if you simulate the most important aspects of the brain in enough detail, then there is no reason why your simulation should not resemble the activity of the brain. This reductionist logic seems superficially convincing. If one points out that the simulation misses a crucial element, then it can simply be replied that the element can be incorporated into the simulation. Problem solved.

The general problem is that any description of structure involves ignoring a number of elements. Thus the reductionist logic implicitly assumes that the whole (the brain) can be approximated by an appropriate subset of structural elements, and that as you add more elements, the approximation “converges” to the real thing.

Now I will give a simple example that exposes the fallacy. Consider a membrane, made of a structure of lipids. Or just consider a table tennis ball. The membrane is topologically equivalent to a sphere, in that it defines an inside and an outside. In particular, it traps molecules inside it (assuming the membrane is not permeable to them). Make a hole and the set of lipids loses this property. Now here is the problem: if you consider a subset of the membrane, even one that comprises almost all of the membrane, then you get something that is not a membrane. As long as there is a hole, it is not a membrane. A membrane is not defined by the fact that it is composed of lipids; it is not even defined by the fact that it is composed of elements that are bound to their neighbors: it is defined by the global property that it has no hole. In that sense, you cannot approximate a membrane by a subset of a membrane. To understand the membrane functionally, you need to understand its global logic and how the elements participate in that logic. Therefore, if the relationship between brain structure and brain function is of this kind, then measuring the brain's structure will not be sufficient. It will in fact miss the important stuff, which is the organizational logic (say, the plasticity rules or developmental rules).

Secondly, to understand what a membrane is, you would probably not find it very useful to measure every element of its structure with an electron microscope. Rather, you might try to see what happens if you make a hole in the structure. Or you might try to build a membrane with some other type of element. Or you might study very small membranes. In this example, everything is important, in the sense that you cannot remove something and leave the rest unchanged without making a hole; but nothing is important in particular, in the sense that all elements are identical and you could make a membrane with any subset of the elements by changing the structure a little bit (i.e., binding the elements around a hole together). Maybe ephaptic interactions are of this kind: they might have no specific function, but certainly if you remove them without adapting the rest of the structure, the neural networks will have different dynamics, perhaps introducing some pathological imbalance.

Thirdly, to relate to Movshon's argument: the fact that a diversity of structures have the property of being a membrane (e.g., membranes of different sizes, or made of different basic elements) does not mean that structure is random. Quite the opposite is true here: structure is highly organized at a detailed level, otherwise there would be holes. It just means that function results from the organization of the structure, from relations between structural elements, and those relations could be instantiated in different ways.

Now is it the case that the problem of the connectome is of this kind? A classical counter-argument to connectomics is the case of C. elegans. That worm has 302 neurons and its connectome was completely mapped in the 1980s. While it is certainly a useful resource in that field, it is fair to say that we still do not understand much about how that nervous system works. One classical reply is that in C. elegans it is notoriously difficult to do electrophysiology, so we do not even know the sign of the connections (excitation or inhibition). Fair enough. But let us say we knew the sign. Would that solve the problem? Let us move to the mammalian cortex. A pyramidal cell has on the order of 10,000 synapses. Is it possible to infer the dynamics of a cortical network from the connections? In fact, it has been shown that in pyramidal cells there is a tight dynamic balance between excitatory and inhibitory input, and that without it neurons would fire regularly, instead of irregularly as they do. It has also been found that there are mechanisms that maintain this balance (synaptic and homeostatic plasticity). But by just looking at the connectome, you would not see that. You would see that a pyramidal cell has 4 times more excitatory inputs than inhibitory inputs. Measuring synaptic strengths would not be sufficient either, because the balance must hold with very high precision, and it must hold in the in vivo situation, taking into account all elements of structure. This is just one example, but the point is that such principles of organization do not emerge from just looking at the structure (in this case it would not even help); they emerge from directly investigating those principles as such.
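To illustrate how sensitively the dynamics depend on the precision of this balance, here is a minimal sketch (my own toy model, not from any connectomic data: the integrate-and-fire neuron, the 4:1 input ratio and all parameter values are illustrative assumptions). When inhibition exactly cancels the mean excitatory drive, firing is fluctuation-driven and irregular; detuning the balance by a few percent changes the firing rate and regularity dramatically, even though the connectome (the counts of synapses) is identical in all cases:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lif(balance, T=10.0, dt=1e-4):
    """Leaky integrate-and-fire neuron with Ne excitatory and Ni inhibitory
    Poisson inputs; 'balance' scales inhibition (1.0 = exact mean balance)."""
    tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
    Ne, Ni, rate = 4000, 1000, 5.0        # 4:1 ratio of E to I synapses
    we = 0.02                             # excitatory synaptic weight
    wi = balance * we * Ne / Ni           # cancels the mean E drive at balance=1
    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        ne = rng.poisson(Ne * rate * dt)  # excitatory input spikes this step
        ni = rng.poisson(Ni * rate * dt)  # inhibitory input spikes this step
        v += dt * (v_rest - v) / tau + we * ne - wi * ni
        if v >= v_thresh:                 # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset
    isi = np.diff(spikes)
    cv = isi.std() / isi.mean() if len(isi) > 1 else float("nan")
    return len(spikes) / T, cv

for balance in (1.0, 0.95, 0.85):         # exact balance, then 5% and 15% off
    r, cv = simulate_lif(balance)
    print(f"balance={balance:.2f}: rate={r:5.1f} Hz, CV of ISI={cv:.2f}")
```

The connectivity is the same in all three runs; only the precise weighting differs, which is exactly the kind of quantity an anatomical reconstruction would struggle to pin down.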

In summary, the limiting factor in our understanding of the brain is not so much a more detailed description of structure, but rather a deeper understanding of the organizational principles of the nervous system, and this won't be provided by electron microscopy.

Notes on consciousness. (VI) The hard problem of consciousness explained with sex

By using the word “sex” in the title, I am obviously trying to increase the number of my readers. But not only that. I noticed that many people, in particular scientists, do not seem to get why there is a problem at all. Philosophers like to refer to the experience of color, and the question is then: why is it that red feels like red, and not like something else, and why does it feel like anything at all? Why is it not just information, as it is for a computer; why does the experience have a particular quality? Somehow this example does not speak to everyone, so I came up with another example: why is sex associated with pleasure, and in particular, why is male ejaculation generally associated with orgasm?

You will hear several kinds of explanation. One might tell you: your body secretes some particular molecule at that moment, or some particular neuron gets excited, and that is what produces the sensation of pleasure. But of course, this explanation just pushes the problem a little further, by replacing “ejaculation” with “molecule” or “neuron”. Why is it that a particular molecule produces pleasure? The molecule certainly does not provide pleasure to a Petri dish when it is outside the brain.

Another kind of explaining away is the functionalist or behaviorist view: if everything appears the same from an external observer's point of view, then it is the same; there is nothing more than what can be seen. In other words, male orgasm is the act of ejaculating, end of story. But sex does not fit this view so well. First, it is well known that female orgasm can be faked; second, male orgasm can be dissociated from ejaculation (in tantric sex).

And finally there is the evolutionary explanation: we feel pleasure because it motivates us to have sex, which is necessary for reproduction. But the logic is flawed: we only need a mechanism that makes us have sex; there is no intrinsic reason why that mechanism should be accompanied by any feeling. Why are we not reproducing machines without feelings?

Here comes the example. The axolotl is a kind of salamander that retains its larval traits throughout its adult life. It is basically a cute baby salamander. Nevertheless, it is able to reproduce. The way it reproduces is interesting (probably not different from other salamanders, but cuter). The male deposits a sort of jelly full of spermatozoa on the floor. Then, later, the female comes and inserts the jelly into her belly. After a few days, the female produces eggs.

Now the big question: does the male axolotl have an orgasm when it deposits its jelly on the floor?

Notes on consciousness. (V) 4 key questions about consciousness and the mind-body problem

It is fair to say that we have little idea about how neural activity gives rise to consciousness, and about the relationship between neural activity and conscious states (i.e., what you are experiencing). This is the mind-body problem. In my opinion, there has been relatively little fundamental progress on this question because it has been addressed mainly within the computationalist framework (i.e., in terms of information processing), which is quite inappropriate for this question (this is partly Chalmers' criticism). So below I am listing a number of unanswered questions on this matter, which I believe require a very different kind of approach. First of all, let me remark that because being conscious is always being conscious of something, understanding consciousness is largely about understanding perception at the phenomenal level (perception in the broadest sense, e.g., perceiving your thoughts).

1) How can perception be stable?

Why is it that a pure tone feels like a stable percept, when 1) the acoustic wave is time-varying, and 2) the activity of neurons everywhere in the brain is dynamic? The same can be said of all the senses; in vision, the eyes move at high frequency even when fixating an object, and there is no visual percept if they are forced to be still. More generally: if there is a mapping between states of the brain and percepts, then why is it that percepts are not changing all the time?
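The pure tone case can be made concrete with a toy computation (a sketch of my own; the tone frequency and window size are arbitrary choices): the waveform is different at every sample, yet the dominant frequency computed over any analysis window is identical, an example of a stable quantity carried by a time-varying signal:

```python
import numpy as np

fs, f0 = 44100, 440.0                 # sampling rate (Hz), tone frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
wave = np.sin(2 * np.pi * f0 * t)     # the acoustic wave varies at every sample

# The dominant frequency is the same in every window: a candidate for the
# invariant behind a stable percept.
for start in range(0, fs - 2048, fs // 4):
    window = wave[start:start + 2048]
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    peak = freqs[np.abs(np.fft.rfft(window)).argmax()]
    print(f"window at {start / fs:.2f} s: dominant frequency = {peak:.1f} Hz")
```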

A thought experiment. Imagine the state of the brain is held fixed: someone scratches her nose, and time is stopped. Would you still experience something? Any conscious experience seems to require a change, not just a state. This suggests that the relevant mapping is actually not from brain states to percepts, but from brain activity to percepts. This immediately raises a problem, because a conscious state can be defined at any point in time, but it is not obvious that brain activity can be (as this would reduce activity to state). This is not a fatal problem, though, for there is a precedent in physics: a gas is composed of individual particles, but the pressure of a gas at a given instant cannot be defined as a function of the state of the particles at that moment, because pressure corresponds to the force exerted by the particles impacting a surface. It might be that the relation between neural activity and conscious states is of a similar kind to the relation between mechanics and thermodynamics.
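The gas analogy can itself be simulated (a minimal sketch of my own, in arbitrary units): at any single instant the force on the wall is either zero or a large spike, and only the momentum transfer averaged over a time window defines a stable pressure.

```python
import numpy as np

rng = np.random.default_rng(1)

# N particles bouncing elastically in a 1D box [0, L]; pressure on the right
# wall is momentum transferred per unit time, not a property of one snapshot.
N, L, m, dt, steps = 1000, 1.0, 1.0, 1e-3, 5000
x = rng.uniform(0, L, N)
v = rng.normal(0, 1, N)

transfer = np.zeros(steps)  # momentum given to the right wall at each step
for step in range(steps):
    x += v * dt
    hit_right = x > L
    hit_left = x < 0
    transfer[step] = 2 * m * np.abs(v[hit_right]).sum()  # elastic reflection
    v[hit_right | hit_left] *= -1
    x = np.clip(x, 0, L)

# At a single instant the "force" is zero or a spike...
print("momentum transfer in the first step:", transfer[0])
# ...but averaged over a window it converges to a stable pressure.
print("time-averaged force on the wall:", transfer.mean() / dt)
```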

Two more thought experiments. 1) Record the firing of all neurons in the brain, then play it back on a set of unconnected light-emitting diodes: does that set of diodes have the same experience? 2) (Adapted from Chalmers.) Randomly replace every other neuron in the brain by an artificial neuron that interacts with the other neurons in exactly the same way as the neuron it replaces: would there be a conscious experience? My personal answers would be: (1) no and (2) yes, and this suggests to me that the right substrate to look at is not neural activity as a state (e.g., the firing rates of all neurons) but neural activity as an interaction between neurons.


2) What is time for a conscious brain?

A fundamental property of consciousness is its unity: a single conscious entity sees, hears and thinks. If the visual and auditory areas were independent and, say, each controlled speech, then one conscious entity would report visual experience and another conscious entity would report auditory experience. It could not be a single conscious entity, since the two relevant parts are physically disconnected. Thus the unity of consciousness requires an interdependence between all the elements that compose it. This is, as I understand it, the issue addressed by a number of biological theories of consciousness, for example Edelman's “reentrant loops” or Tononi's integrated information theory.

However, as far as I know, there is another crucial aspect to this problem, which is the unity of consciousness, or lack of it, in time. There is no general unity of consciousness across time: two things that happen, say, one minute apart produce distinct percepts, not a single one. Clearly, consciousness is dynamic. But the big question is: how can there be a unique conscious state at any given moment in time, when all the elements of the conscious network interact with some delay (since they are physical elements), typically of a few milliseconds? And what is time for such a network? Imagine there is a (physical) visual event at time t1 and an auditory event at time t2. At what time do they occur for the network, given that they are sensed at different times by its different elements? And why is it that electrical activity changes on a millisecond timescale in the brain, while conscious states seem to change at a much slower rate?


3) How can there be an intrinsic relation between neural activity and percepts?

Why is it that a particular pattern of neural activity produces the experience of redness? Most biological explanations are of this kind: I experience redness because when some red object is presented, neurons fire in that specific way. This is the coding perspective. The problem with the coding perspective is of course: who decodes the code? Ultimately, this kind of explanation is strongly dualist: it is implicitly assumed that, at some point, neural activity is transformed into the experience of redness by some undetermined process that must be of a very different nature.

I would like to point out that proposals in which perception lies in the interaction between the organism and the environment (e.g. the sensorimotor theory) do not solve this problem either. I can close my eyes and imagine something red. It could be that redness corresponds to a particular way in which visual inputs change when I move my eyes or the surface, a change that I am anticipating or imagining, but this does not explain what is intrinsically red about the pattern of neural activity occurring now. If we cannot explain it without referring to what happened before, then we are denying that the pattern of neural activity itself determines experience, and again this is a strongly dualist view.

A thought experiment. Consider two salamanders, each of which has only one neuron, which is both a sensory and a motor neuron; say, its firing produces a particular movement. The salamanders are very similar, but their visual receptors are tuned to different wavelengths: in the first salamander, the neuron reacts to red stimuli; in the second, the neuron reacts to blue stimuli. What might happen in terms of visual experience when the neuron fires? Does the first salamander see red and the second see blue? If we think that neural activity alone determines experience, then the two salamanders should in fact experience exactly the same thing – and in this case, that conclusion is also independent of the sensorimotor contingencies.


4) What is the relationship between the structure of experience and the structure of neural activity?

Subjective experience is highly structured. There might be some dispute about how rich it actually is, but it is at least as rich as what you can describe with words. A striking fact about language is that the meaning of sentences is carried not only by the words but also by the relations between them, i.e., the syntax. For example, a visual scene is composed of objects with spatial relations between them, and with attributes (a red car in front of a small house). In fact, there must be more to it than syntax; there must also be semantics: if neural activity completely determines subjective experience, it must specify not only that there is a car, but also what a car is. A useful notion in the psychology of perception is the concept of “affordance”, introduced by James Gibson: the affordance of an object is what it allows you to do (e.g. a car affords driving). Affordances are potentialities of interaction, and they give some meaning (rather than labels) to perceptual objects. This brings an inferential structure to experience (if I did that, this would happen).

This stands in sharp contrast with the central perceptual concept in neuroscience, the notion that “cell assemblies” represent particular percepts. A cell assembly is simply a set of neurons, and their co-activation represents a particular percept (say, a particular face). Let us say that one neuron represents “red” and another represents “car”; then the assembly of the two neurons represents the red car. The problem with this concept is that it is very poorly structured. It cannot represent relations between objects, for example. This type of representation is known as the “bag-of-words” model in language processing: a text is represented by its set of words, without any syntactic relationship, and clearly the meaning of the text is quite degraded. The concept of cell assembly is simply too unstructured to represent experience.
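The degradation can be shown in a couple of lines (a minimal sketch; the example sentences are my own): two sentences with opposite meanings have exactly the same bag-of-words representation, because word order, the syntax, is discarded:

```python
from collections import Counter

s1 = "the dog bites the man"
s2 = "the man bites the dog"

# Bag-of-words: a text reduced to its multiset of words.
# Opposite meanings, identical representations.
print(Counter(s1.split()) == Counter(s2.split()))  # True
```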

If we are looking for a mapping between neural activity and percepts, then 1) we must find a way to define some structure on neural activity, and 2) the mapping must preserve that structure (in mathematical terms, we are looking for a morphism, not a simple mapping).
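To state the requirement in a formula (my own formalization, not from the original text): if $R$ is a relation between elements of neural activity and $R'$ a corresponding relation between percepts, a structure-preserving map $f$ must satisfy

```latex
\[
f : A \to P, \qquad
a_1 \, R \, a_2 \;\Longrightarrow\; f(a_1) \, R' \, f(a_2)
\quad \text{for all } a_1, a_2 \in A,
\]
% where A is the set of activity patterns and P the set of percepts.
% A simple mapping need only assign some f(a) to each a, with no such
% constraint on relations.
```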

I can summarize this discussion by pointing out that to make progress on the mind-body problem, there are two crucial steps: 1) to understand the articulation between physical time and the time of consciousness, 2) to understand the articulation between the structure of neural activity and the structure of phenomenal experience.

Notes on consciousness. (IV) The phenomenal content of neural activity

This post is about the mind-body problem. Specifically, what is the relationship between the activity of the brain and the phenomenal content of conscious experience? It is generally thought that experience is somehow produced by the electrical activity of neurons. The caricatural example of this idea is the concept of the “grandmother cell”: a neuron that lights up when you think of your grandmother, or conversely, a neuron whose activation triggers the experience of, say, the vision of your grandmother's face. The less caricatural version is the concept of cell assemblies, where the single cell is replaced by a set of neurons. There are variations around this theme, but basically the idea is that subjective experience is produced by the electrical activity of neurons. There actually is some experimental evidence for this idea, coming from the electrical stimulation of the brains of epileptic patients (read any book by Oliver Sacks). Electrical stimulation is used to locate the epileptic focus in those patients, and depending on where the electrode is in the brain, stimulation can trigger various types of subjective experience. Epileptic seizures themselves can produce such experiences, for example the auditory experience of hearing a specific piece of music. Migraines can also trigger perceptual experiences (called “auras”), in particular visual hallucinations. So there is some support for the idea of a causal relationship between neural activity and subjective experience.

The obvious question, of course, is: why? At this moment, I have no idea why neural activity should produce any conscious experience at all. We do not believe that the activity of the stomach causes any subjective experience for the stomach, nor that the activity of any other set of cells does, including cardiac cells, which also have an electrical activity (though of course, maybe we are wrong to hold this belief).

I propose to start with a slightly more specific question: why does neural activity cause subjective experience of a particular quality? Any conscious experience is an experience of something (a property called intentionality in philosophy), for example the vision of your grandmother's face. Why is it that a particular spatio-temporal pattern of activity in a neural network produces, for that neural network, the experience of seeing a face? One type of answer is to say that this particular pattern has been associated with the actual visual stimulus of the face, i.e., it “encodes” the face, and so the meaning of those neurons lighting up is the presence of that visual stimulus. This is essentially the “neural coding” perspective. But there is a big logical problem here. What if the visual stimulus is not present, but the neurons that “encode” the face light up either naturally (memory, dream) or by electrode stimulation? Why would that produce a visual experience rather than anything else? If experience is produced by neural activity alone, then it should not matter what external stimulus might cause those neurons to fire, or what happened in the past to those neurons, or even what world the neurons live in, but only which neurons fire now. Which neurons fire now should entirely determine, by itself, the content of subjective experience. Again, the problem with the neural coding perspective is that it is essentially dualist: at some stage, there is some other undefined process that “reads the code” and produces subjective experience. The problem we face here is that the firing of neurons itself must intrinsically specify the experience of seeing a face, independent of the existence of an outside world.

I will try to be more specific, with a very simple example. Imagine there is just one neuron, and two stimuli in the world, A and B. Now suppose, by conditioning or simply by anatomical assumption, that stimulus A makes the neuron fire. A neural coder would say: this neuron codes for stimulus A, and therefore this neuron's firing causes the experience of A. But you could also consider a different situation, perhaps a different organism or the same organism conditioned in a different way, in which stimulus B, and not A, makes the neuron fire. If neural activity is what causes subjective experience, then this neuron's firing should produce exactly the same experience in both cases, even though different stimuli cause it to fire. This example can be vastly generalized, and the implication is that any two patterns of neural activity that are identical up to a permutation of neurons should produce the same subjective experience for that set of neurons.
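To make the generalization explicit (a toy sketch of my own; the "patterns" are arbitrary binary vectors): if neurons are interchangeable units, a pattern and its relabeled copy are indistinguishable by any property that ignores neuron identity, so a theory in which firing alone fixes experience must assign them the same experience:

```python
import numpy as np

rng = np.random.default_rng(0)

pattern_a = rng.integers(0, 2, 10)   # which of 10 neurons fire under stimulus A
relabel = rng.permutation(10)        # swap the identities of the neurons
pattern_b = pattern_a[relabel]       # the "same" activity on relabeled neurons

# Every identity-free property (number of active neurons, firing statistics,
# and so on) is preserved by the relabeling:
print(pattern_a.sum() == pattern_b.sum())                      # True
print(np.array_equal(np.sort(pattern_a), np.sort(pattern_b)))  # True
```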

As if all this were not puzzling enough, I will now end with a disturbing thought experiment. Imagine we measure the entire pattern of neural activity of someone experiencing the vision of his grandmother. Then we build a set of blinking red lights, one for each neuron, programmed to light up at the same times as the neurons did. The red lights do not even need to be connected to each other. The electrical activity of this set of lights is then the same as the activity of the neural network. Therefore, by the postulate that electrical activity is what causes subjective experience, the set of lights should experience the sight of the grandmother, with the impression of being the grandson. Would it?