I have been asked to debate on the subject of connectomics next summer, against a supporter of the connectomics enterprise. I am starting this series of posts to develop some arguments.
Connectomics is the systematic, large-scale mapping of the connections in the nervous system (the “connectome”) and their analysis. It involves the development of new technologies and a considerable amount of financial and human resources. Let me briefly clarify my personal position. Recently there was a debate between Sebastian Seung, an advocate of connectomics, and Tony Movshon, a skeptic of the whole enterprise. I would also describe myself as a skeptic of connectomics, but it turns out that I ended up leaning towards Seung. Seung considers that we won't be able to understand much about the brain if we don't even know how neurons are wired. Movshon argues that detailed information about the connectome is not necessary, because in his view only statistical information is relevant, since only that information is reproducible across individuals. Simply put, Movshon is a computationalist in the vein of David Marr (“implementation doesn't matter”), while Seung is a classical connectionist.

I strongly disagree with Movshon's view that detailed structure is noise or irrelevant. But I am also skeptical about Seung's hopes. Seung considers that studying the connectome will give us the keys to the computations the brain performs. I will explain in this post why I don't think this will happen. But first, I want to make clear that I do not consider connectomics a useless enterprise. It will probably produce a useful resource. It might even bring some conceptual changes, for example by exposing the limits of classical connectionism. What I disagree with is the general claim that knowing the detailed structure of the brain is the main limiting factor in our basic understanding of it.
To a large extent, my criticism is a criticism of overly reductionist accounts of biological systems. So I will start this series of posts with a general question: can we understand the brain by measuring its structure? (Replace “understand” with “simulate” for a criticism of the Human Brain Project.)
In his book, Seung sums up the connectionist view in one sentence: “You are your connectome”. Indeed, in connectionism, memories and computations (input-output operations) are stored in the pattern of connections between neurons and their strengths. There is quite a bit of evidence for activity-dependent changes in synaptic connections, in relation to learning, and a large body of theoretical work that develops the idea. But of course, we also know of other aspects of structure in neural networks that change and subsequently affect their dynamics: for example, the expression of ionic channels, or their phosphorylation, or changes in the morphology of dendrites and axons, or in conduction delays. There are also ephaptic interactions, which do not involve synapses at all. There might be long-term changes in glia too. Essentially, every bit of structure in the brain seems to be plastic. Of course this is not news to connectionists. The counter-argument is that connections are what matter most.
In a similar vein, supporters of bottom-up approaches to brain simulation would argue: if you simulate the most important aspects of the brain in enough detail, then there is no reason why your simulation should not resemble the activity of the brain. This reductionist logic seems superficially convincing. If one points out that the simulation misses a crucial element, the reply is simply that the element can be incorporated into the simulation. Problem solved.
The general problem is that any description of structure involves ignoring a number of elements. The reductionist logic thus implicitly assumes that the whole (the brain) can be approximated by an appropriate subset of structural elements: as you add more elements, the approximation “converges” to the real thing.
Now I will give a simple example that exposes the fallacy. Consider a membrane, made of an assembly of lipids. Or just consider a table tennis ball. The membrane is topologically equivalent to a sphere, in that it defines an inside and an outside. In particular, it traps molecules inside it (assuming the membrane is not permeable to them). Make a hole and the set of lipids loses this property. Now here is the problem: if you consider a subset of the membrane, even one that comprises almost all of it, you get something that is not a membrane. As long as there is a hole, it is not a membrane. A membrane is not defined by the fact that it is composed of lipids; it is not even defined by the fact that it is composed of elements bound to their neighbors: it is defined by the global property that it has no hole. In that sense, you cannot approximate a membrane by a subset of a membrane. To understand the membrane functionally, you need to understand its global logic and how the elements participate in that logic. Therefore, if the relationship between brain structure and brain function is of this kind, then measuring structure will not be sufficient. It will in fact miss the important stuff, which is the organizational logic (say, the plasticity rules or developmental rules).
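The membrane argument can be made concrete with a toy computation (my own illustration, not a real biophysical model): represent the membrane as a closed ring of blocked cells on a grid, and test the global containment property with a flood fill from the outside. The property holds or fails as a whole; remove any single element and it is gone.

```python
# Toy illustration of a global property: a "membrane" is a set of blocked
# cells on a grid; containment means a flood fill from the border cannot
# reach the inside. One missing cell (a hole) destroys the property.
from collections import deque

def leaks(blocked, size=7, inside=(3, 3)):
    """Flood-fill from the grid border; return True if it reaches `inside`."""
    seen, queue = set(), deque()
    for i in range(size):  # seed the flood with all free border cells
        for cell in [(0, i), (size - 1, i), (i, 0), (i, size - 1)]:
            if cell not in blocked and cell not in seen:
                seen.add(cell)
                queue.append(cell)
    while queue:
        x, y = queue.popleft()
        if (x, y) == inside:
            return True
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append(nxt)
    return False

# A closed square "membrane" of 8 cells around the centre cell (3, 3)
ring = {(x, y) for x in range(2, 5) for y in range(2, 5)} - {(3, 3)}
print(leaks(ring))              # False: the intact membrane traps the inside
print(leaks(ring - {(2, 3)}))   # True: remove one element and it leaks
```

Note that no cell of the ring is special: any one of them can be the hole, and any of them could be replaced. What matters is only the relational property of closure.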
Secondly, to understand what a membrane is, you would probably not find it very useful to measure every element of its structure with an electron microscope. Rather, you might see what happens if you make a hole in the structure, or try to build a membrane from some other type of element, or study very small membranes. In this example, everything is important, in the sense that you cannot remove something and leave the rest unchanged without making a hole; but nothing is important in particular, in the sense that all elements are identical and you could make a membrane with any subset of them by changing the structure a little (i.e., binding the elements around a hole together). Maybe ephaptic interactions are of this kind: they might have no specific function, but certainly if you removed them without adapting the rest of the structure, the neural networks would have different dynamics, perhaps introducing some pathological imbalance.
Thirdly, to relate this to Movshon's argument: the fact that a diversity of structures have the property of being a membrane (e.g. membranes of different sizes, or made of a different basic element) does not mean that structure is random. Quite the opposite is true here: structure is highly organized at a detailed level, otherwise there would be holes. It just means that function results from the organization of the structure, from relations between structural elements, and that those relations could be instantiated in different ways.
Now, is the problem of the connectome of this kind? A classical counter-argument to connectomics is the case of C. elegans. That worm has 302 neurons, and its connectome was completely mapped in the 1980s. While it is certainly a useful resource in that field, it is fair to say that we still don't understand much about how that nervous system works. One classical reply is that electrophysiology is notoriously difficult in C. elegans, so we don't even know the sign of the connections (excitatory or inhibitory). Fair enough. But suppose we did know the sign. Would that solve the problem? Let us move to the mammalian cortex. A pyramidal cell has on the order of 10,000 synapses. Is it possible to infer the dynamics of a cortical network from the connections? In fact, it has been shown that in pyramidal cells there is a tight dynamic balance between excitatory and inhibitory input, without which neurons would fire regularly instead of irregularly, as they actually do. It has also been found that there are mechanisms that maintain this balance (synaptic and homeostatic plasticity). But by just looking at the connectome, you would not see that. You would see that a pyramidal cell has about 4 times more excitatory than inhibitory inputs. Measuring synaptic strengths would not be sufficient either, because the balance must be achieved with very high precision, in the in vivo situation, taking into account all elements of structure. This is just one example, but the point is that such principles of organization do not emerge from merely looking at the structure (in this case it would not even help); they emerge from directly investigating those principles as such.
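A minimal numerical sketch makes the point (the numbers are toy values of my own choosing, not measurements): the 4:1 count ratio is what a connectome would show, but the balance lives in the precise tuning of synaptic strengths, where even a small error leaves a large net drive.

```python
# Toy sketch of excitation/inhibition balance (illustrative numbers only).
n_exc, n_inh = 8000, 2000    # the ~4:1 ratio a connectome would reveal
rate = 5.0                   # presynaptic firing rate (Hz), assumed
w_exc = 0.125                # excitatory synaptic weight (arbitrary units)

# Balance requires inhibitory weights tuned so the mean drives cancel,
# i.e. inhibitory synapses four times stronger than excitatory ones:
w_inh = w_exc * n_exc / n_inh    # = 0.5

mean_drive = n_exc * rate * w_exc - n_inh * rate * w_inh
print(mean_drive)    # 0.0: the mean cancels; only fluctuations remain,
                     # which is what produces irregular firing

# A mere 5% error in inhibitory strength, invisible in the wiring
# diagram, leaves a net drive comparable to either input alone:
print(n_exc * rate * w_exc - n_inh * rate * w_inh * 0.95)    # ~250
```

The wiring diagram is the same in both cases; what distinguishes the balanced regime from the pathological one is a quantitative relation that must be maintained by plasticity mechanisms, not a feature of the graph.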
In summary, the limiting factor in our understanding of the brain is not so much the lack of a more detailed description of structure as the lack of a deeper understanding of the organizational principles of the nervous system, and that will not be provided by electron microscopy.