Back from a panel discussion on connectomics

I just came back from a panel discussion on connectomics between Moritz Helmstaedter and myself, at the Donders summer school on neural metrics. I will share my presentation when I figure out how to upload a 50 MB file to this website! (there are a few videos) In my presentation, I essentially point out that connectivity, especially anatomical connectivity (as opposed to functional connectivity), generally tells you very little about function. In particular, it doesn't help you distinguish between general theories of nervous system function (say, liquid state machines or attractor networks), because those theories could easily accommodate very different types of connectivity (as long as, say, network connectivity is recurrent).

What came up in reaction to that remark is classical Popperianism. That is, the notion that a scientific theory should aim for critical experiments, experiments that could immediately disqualify that theory. So theories of brain function ought to make falsifiable predictions about connectivity, and if they don't they hardly qualify as scientific theories (the term “laziness” was used).

I have two remarks. First of all, the idea that scientific theories are theories that are falsifiable by critical experiments, and that scientific progress is essentially the performing of those experiments, is an idea that dates back to Popper's 1934 book, and a few things have been written since then. Apparently many scientists are philosophically stuck in the 1930s. Thomas Kuhn's historical analysis shows that it is rarely the case that science progresses in this way. Of course it happens sometimes, but it's not the generic case. There are good reasons for that, which have been analyzed by philosophers such as Imre Lakatos. The basic remark is that a scientific theory is one that is falsifiable (the “demarcation criterion”), yes, but in practice it is also one that is falsified. There are always countless observations that do not fit the theoretical framework; those are either ignored, or the theory is amended with ad hoc assumptions, which might later be explained in a more satisfactory way (e.g. the feather falls more slowly than the hammer because of some other force, let's call it “friction”). So it is very rare that a single experiment can discard a broad theory, because the outcome can often be accommodated by the theory. This can seem like a flaw in the scientific discovery process, but it's not: it's unavoidable if we are dealing with the complexity of nature; an experimental outcome can be negative because the theory is wrong, or because, say, there might be a new planet that we didn't know about (“let's call it Pluto”). This is why science progresses through the long-term interaction of competing theories (what Lakatos calls “research programs”), and this is why insisting that scientific theories should produce critical experiments is a fundamental epistemological error. Anyone who has spent a little time in research must have noticed that most hypothesis-driven papers actually test positive predictions of theories, the success of which they interpret as support for those theories.

The second remark is that, nonetheless, there is a bit of truth in the claim that theories of neural network function are difficult to confront with experiments. Certainly they are not very mature. I wouldn't say it is out of laziness, though. It is simply a very difficult task to build meaningful theories of the brain! But it is absolutely not true that they are insufficiently constrained for lack of data. Not only are they constrained, but I do not know of any such theory that is not immediately falsified by countless observations. There is not a single model of brain function that comes close to accounting for the complexity of animal behavior, let alone of physiological properties. How many theories in systems neuroscience are actually about systems, i.e. about how an organism might interact with an ecological environment, as opposed to describing responses of some neurons to some stimuli, interpreted as “code”? The biggest challenge is not to distinguish between different theories that would all account for current data (none does), but to build at least one that could qualify as a quantitative theory of brain function.

Importantly, if this diagnosis is correct, then our efforts should rather be spent on developing theories (by this I mean broad, ambitious theories) than on producing yet more data when we have no theoretical framework to make use of them. This will be difficult as long as the field lives in the 1930s when it comes to epistemology, because any step towards an ambitious theory will be a theory that is falsified by current data, especially if we produce much more data. Can you make a scientific career by publishing theories that are empirically wrong (but interesting)? As provocative as it might sound, I believe you should be able to, if we ever want to make progress on the theory of brain function – isn't that the goal of neuroscience?

Connectomics bashing (III) - Connectomics vs genomics

In my first post, I argued that 1) to understand a complex organized system, one needs to understand the organization rather than have a detailed description of the structure, and 2) an incomplete or imprecise measurement of structure, as is necessarily the case with any physical measurement, does not in general convey the functionality of the system. In his reply, Bernhard Englitz rightly argued that knowledge of the connectome is nonetheless useful. I do not disagree with that claim, and I do believe that connectomics is certainly a great neuroanatomical tool. So the question is: what can the knowledge of the connectome offer, and what can it not?

First of all, a quick comment. Bernhard complained that connectomics is often portrayed as an extreme claim. I was only referring to published texts by supporters of connectomics. One is Seung's book, Connectome. Admittedly, this is a popular science book, so I agree that the claims in it may be overstated for the sake of communication. But there are also articles in peer-reviewed journals which, I think, make quite similar claims, for example “Structural neurobiology: missing link to a mechanistic understanding of neural computation”, Nature Reviews Neuroscience (2012), by Denk, Briggman and Helmstaedter. The basic claim in that text and in Seung's book is that there is a strong relationship between structure (specifically, the connectome) and function, and implicitly that connectionism provides an adequate account of neural function. For example, both the paper above and Seung's book envisage the reading of memories from the connectome as a distinct possibility (I wouldn't say that they make a strong claim, but they certainly consider it a reasonable possibility). I wouldn't necessarily qualify this view as extreme, but I simply do not think that the structure-function relationship is so strong in this case.

In this post, I want to address a particular comparison that is often used to emphasize the potential of connectomics: the comparison between connectomics and genomics, in particular the human genome project. This comparison is also used by other large initiatives, for example the Human Brain Project. There is no doubt that the human genome project was useful, and that sequencing entire genomes is a very useful tool in genomics. But what about the relationship between structure and function? How do you know, for example, the function of a particular gene that you have sequenced? You would know by manipulating the gene (e.g. knock-out) and looking at functional changes in the organism; or you would examine a large database of genomes and look for correlations between that gene and function (for example pathologies). In both cases, function is observed in the organism; it is not derived from the knowledge of the genome. As far as I know, we don't know how to predict the function of a gene from its DNA sequence. So even if there were a one-to-one relationship between structure (DNA) and function, the knowledge of the structure would not tell us what that relationship is, and so it would not tell us the function. In addition to this, we know that the relationship between structure and function is not one-to-one in the case of the genome: this is what epigenetics is all about (and so the example of genomics is completely in line with the arguments of my first post).

So, if the structure-function relationship is as strong in connectomics as in genomics, then 1) the connectome itself will provide little direct insight into brain function, 2) insight might come from correlating connectomes with the function of brains (in a yet to be specified way), and 3) the connectome will not determine brain function. In particular, point (3) makes it quite unlikely that memories can be inferred from the connectome. I would also like to point out that a complete comparison with genomics regarding point (2) requires the possibility not only to measure the connectome but also to finely manipulate it and observe functional changes in living organisms. I do not see how current connectomics technologies (electron microscopy) could make this possible. There is a critical limitation, at least for the foreseeable future, which is that the connectome can only be measured in dead organisms, in contrast with DNA sequencing; this greatly limits the possibilities of connectome manipulation or of diagnosis based on the (detailed) connectome.

Finally, I want to point out that the structure-function relationship is likely to be weaker in connectomics than in genomics. First, there is a fundamental difference: DNA is a discrete structure (4 bases), while the connectome is not, if you consider synaptic strength. So it should be possible to measure the graph of connectivity exactly, in the same way as you can sequence DNA, but the extended connectome (with synaptic strengths or delays) can only be measured with limited precision. A second, probably more serious difference is that, while there is a concept of gene that has some functional unity and corresponds to a well-identified portion of the genome, there is no equivalent concept in the connectome. In genomics, one can knock out a gene or look for structure-function correlates for different versions of the same gene. In connectomics, there is in general no similarly well-defined portion of the connectome that can be identified across individuals. This might be partially possible when considering connections made onto well-identified sensors and effectors (say, in the retina), but comparing cortical connectomes across individuals is another story.

So connectomics suffers from the same structure-function problems as genomics, only quite a bit worse. Again, I am not saying that it is not a useful set of tools. But it is just that: additional measurement tools in the hands of neuroscientists, not fundamental data that would specify, or in general even suggest, brain function. One example where it might be quite useful is in finding new markers of neurological disorders. It has been hypothesized that some neurological diseases are “connectopathies”, i.e. are due to abnormal connections. Here, as in genomics, one could compare the connectomes of subjects who have suffered from those diseases with those of control subjects, and perhaps identify systematic differences. Whether those differences are mere correlates of the disease or play a more causal role in it, such an identification could certainly help understand the origins and mechanisms of the pathologies. This is a completely different claim from saying that brain function can be understood from the connectome: in this case, brain function is observed at the functional level and simply correlated with some particular element of structure; it is not the structure itself that informs us about function.

In summary: 1) the structure-function relationship is not that strong in genomics and is certainly weaker in connectomics, and more importantly the structure-function relationship is unknown; 2) connectomics is more limited as a set of tools than genomics; 3) what we should expect from connectomics is not so much the possibility of inferring brain function from the connectome as the possibility of correlating some aspects of brain function or dysfunction with connectomes.

Connectomics bashing (II) - Invited post by Bernhard Englitz

My friend and colleague Bernhard Englitz reacted to my previous post on connectomics. What follows is his defense of connectomics:

The connectomics viewpoint is often portrayed as an extreme claim, namely that knowing the connectome (with or without weights) is the sole key to understanding the brain. Certainly, some of its proponents are guilty of generating this viewpoint; however, on a plain funding level, it can be understood why claims are exaggerated (the Human Brain Project being a good example where extreme claims can lead to extreme payoffs). I contend, however, that even the strongest proponents of connectomics are far from believing their claims. Rather, I claim that the value of the connectome would lie not in a final explanation, but in providing a solid basis for further exploration.

Across all domains of life, the availability of data and resources enables new studies, the genome probably being the best example in this respect. The availability of the genome did not solve many fundamental questions that existed before, but it opened the space for new questions, such as systematic inter-organism comparison, genome-wide searches for homology, etc. The value of the availability of one or multiple genomes for these particular questions was that one could suddenly make certain statements with confidence, rather than with a large amount of speculation.

Also, proponents of connectomics do not emphasize the value of the structural connectome because they think that functional data and neural activity are unimportant; rather, such data are simply not as available with current technology. Science has always been opportunistic and has profited from new techniques to address or open certain questions from a new perspective (which does not exclude profiting from ideas equally). Block-face imaging is available and can be applied today, whereas many functional methods are on the verge of scaling to large systems but do not yet achieve a resolution or range similar to the methods of connectomics.

However, new methods always come at a cost, which was again demonstrated well by the human genome project, whose cost must seem like a gargantuan waste of money in retrospect, given the steep decline in sequencing costs. From my perspective, drastic shifts in the funding landscape are maybe the biggest danger to fear, in light of the potential for a much lower price tag just a few years from now. On the other hand, methods development and scientific discovery often go hand in hand, and some of the savings are only realized through public investment, which creates a market for new technologies. Hence, hardly anyone doubts that the availability of connectomic data would be useful for a range of questions; the main question is ‘now’ or ‘tomorrow’ (although this decision is clearly complex).

But what are the domains in which the availability of a connectome could really make a difference in neuroscience? Before addressing this question, I would like to caution against the use of analogies, since the differences are subtle and may not transfer. Taking Romain’s example of the membrane, it is clear that the question posed about the membrane’s basic nature does not profit from knowledge of all the lipids and proteins that form it, but many essential questions relating to the compartmentalization of a cell, trafficking, or the expression of proteins would profit. Undoubtedly we would have a much harder time thinking about the space of possibilities in a cell without precise knowledge of the cell’s structure. I interpret Romain’s example to illustrate that ‘certain questions’ will not profit from overly detailed knowledge, and in some cases too much data can even inhibit conceptual development.

However, I contend that many questions in neuroscience are of a different kind and will profit strongly from the availability of a connectome. These questions range from subcellular structure to network structure. For example, current modeling approaches for cortical neurons are usually based either on detailed reconstructed cells or on simplified estimates of individual cells. Both approaches - the first aiming to mimic reality, the second to make it minimal and thus tractable - suffer from the lack of the RIGHT reductions and the RIGHT level of detail. Connectomics can provide a tool to arrive at this middle ground, by offering the possibility to quantify typical structures, their relative occurrence and their variability. Such a family of model neurons will be free from the particularities of the single precise neuron, and will be less dependent on the creative reductions previously performed on the basis of small data sets (clearly this has been realized before, and pre-connectomics attempts have tried to close this gap, but a publicly available connectome would make it possible to put a wide range of modelling efforts on safer ground, rather than single labs making insular, small, guarded advances). Similar arguments hold for the statistics of connectivity in neural networks, where we are currently witnessing a wide range of ‘randomly connected’ network studies, which may ultimately be of little use, depending on how well their assumptions are matched. Again, to emphasize, the goal has to be finding the right statistics, not replicating the exact structure.
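To make the last point concrete, here is a minimal sketch of what modelling from connectivity statistics could look like: instead of copying one measured wiring diagram, or assuming uniform random connectivity, one samples network instances from cell-type-resolved connection probabilities of the kind a connectome could supply. The two cell classes and all the numbers below are illustrative assumptions, not measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical connectome-derived statistics (illustrative numbers only):
# connection probability from a presynaptic class to a postsynaptic class,
# for excitatory (E) and inhibitory (I) cells.
p_conn = {('E', 'E'): 0.10, ('E', 'I'): 0.30,
          ('I', 'E'): 0.25, ('I', 'I'): 0.20}

def sample_network(n_exc=800, n_inh=200):
    """Sample one network instance from cell-type-resolved connection statistics,
    rather than replicating a single measured wiring diagram."""
    types = np.array(['E'] * n_exc + ['I'] * n_inh)
    n = len(types)
    adj = np.zeros((n, n), dtype=bool)          # adj[i, j]: connection from i to j
    for pre_type in ('E', 'I'):
        for post_type in ('E', 'I'):
            pre = types == pre_type
            post = types == post_type
            p = p_conn[(pre_type, post_type)]
            adj[np.ix_(pre, post)] = rng.random((pre.sum(), post.sum())) < p
    np.fill_diagonal(adj, False)                # no self-connections
    return adj, types

adj, types = sample_network()
# The sampled instance reproduces the target statistics without being a copy
# of any particular measured circuit:
print("E->I connection probability in this sample:", adj[:800, 800:].mean())
```

Each call produces a different network with the same statistics, which is exactly the kind of family of models argued for above.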

In a nutshell, I think that accountability could be increased and the arbitrariness of assumptions widely reduced in this case, thus opening a new era of mid-level modelling, i.e. quite the opposite of what the Human Brain Project is trying to achieve and of what Romain is criticizing.

Connectomics bashing (I) - Can we understand the brain by measuring its structure?

I have been asked to debate on the subject of connectomics next summer, against a supporter of the connectomics enterprise. I am starting this series of posts to develop some arguments.

Connectomics is the systematic, large-scale mapping of the connections in the nervous system (the “connectome”), and their analysis. It involves the development of new technologies and a considerable amount of financial and human resources. Let me briefly clarify my personal position. Recently there was a debate involving Sebastian Seung, an advocate of connectomics, and Tony Movshon, a skeptic of the whole enterprise. I would also qualify myself as a skeptic of connectomics, but it turns out that I ended up leaning towards Seung. Seung considers that we won't be able to understand much about the brain if we don't even know how neurons are wired. Movshon argues that detailed information about the connectome is not necessary, because in his view only statistical information is relevant, as only that information is reproducible across individuals. Simply put, Movshon is a computationalist in the vein of David Marr (“implementation doesn't matter”), while Seung is a classical connectionist. I certainly strongly disagree with Movshon's view that detailed structure is noise or irrelevant. But I am also skeptical about Seung's hopes. Seung considers that studying the connectome will give us the keys to the computations that the brain is doing. I will explain in this post why I don't think this will happen. But before that, I just want to mention that I do not consider the connectomics enterprise a useless one. It will probably provide a useful resource. It might even bring some conceptual changes, for example by exposing the limits of classical connectionism. What I disagree with is the general claim that knowing the detailed structure of the brain is the main limiting factor in our basic understanding of the brain.

To a large extent, my criticism is a criticism of overly reductionist accounts of biological systems. So I will start this series of posts with this general question: can we understand the brain by measuring its structure? (replace “understand” with “simulate” for a criticism of the Human Brain Project).

In his book, Seung sums up the connectionist view in one sentence: “You are your connectome”. Indeed, in connectionism, memories and computations (input-output operations) are stored in the pattern of connections between neurons, and in their strengths. There is indeed quite a bit of evidence for activity-dependent changes in synaptic connections, in relation to learning. There is also a large body of theoretical work that develops that idea. But of course, we also know of other aspects of structure in neural networks that change and subsequently affect their dynamics. For example, the expression of ionic channels. Or their phosphorylation. Or changes in the morphology of dendrites and axons, or in conduction delays. There are also ephaptic interactions, which do not involve synapses at all. There might be long-term changes in glia too. Essentially, every bit of structure in the brain seems to be plastic. Of course this is not news to connectionists. The counter-argument is that connections are what matter most.

In a similar vein, supporters of bottom-up approaches to simulations of the brain would argue: if you simulate the most important aspects of the brain with enough detail, then there is no reason why your simulation should not resemble the activity of the brain. This reductionist logic seems superficially convincing. If one points out that the simulation misses a crucial element, then it could simply be replied that the element can just be incorporated into the simulation. Problem solved.

The general problem is that any description of structure involves ignoring a number of elements. Thus the reductionist logic implicitly assumes that the whole (= the brain) can be approximated by an appropriate subset of structural elements. As you add more elements, the approximation “converges” to the real thing.

Now I will give a simple example that exposes the fallacy. Consider a membrane, made of a structure of lipids. Or just consider a table tennis ball. The membrane is topologically equivalent to a sphere, in that it defines an inside and an outside. In particular, it traps molecules inside it (assuming the membrane is not permeable to them). Make a hole and the set of lipids loses this property. Now the problem is: if you consider a subset of the membrane, one that comprises almost all of the membrane, then that gives you something that is not a membrane. As long as there is a hole, it is not a membrane. A membrane is not defined by the fact that it is composed of lipids; it is not even defined by the fact that it is composed of elements that are bound to their neighbors: it is defined by the global property that it has no hole. In that sense, you cannot approximate a membrane by a subset of a membrane. To understand the membrane functionally, you need to understand its global logic and how the elements participate in that logic. Therefore, if the relationship between brain structure and brain function is of this kind, then measuring the structure will not be sufficient. It will in fact miss the important stuff, which is the organizational logic (say, the plasticity rules or developmental rules).
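The point that a single missing element destroys the global property can be made concrete with a toy computation (purely illustrative, not a biological model): build a closed shell of elements on a voxel grid, test whether it still separates inside from outside, then remove one element and test again.

```python
import numpy as np
from collections import deque

def encloses_center(shell):
    """Flood-fill the empty space from a corner of the grid; the structure behaves
    like a membrane if the fill never reaches the center voxel."""
    n = shell.shape[0]
    visited = np.zeros_like(shell, dtype=bool)
    queue = deque([(0, 0, 0)])
    visited[0, 0, 0] = True
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if (0 <= nx < n and 0 <= ny < n and 0 <= nz < n
                    and not visited[nx, ny, nz] and not shell[nx, ny, nz]):
                visited[nx, ny, nz] = True
                queue.append((nx, ny, nz))
    return not visited[n // 2, n // 2, n // 2]

n = 11
shell = np.zeros((n, n, n), dtype=bool)
shell[1:-1, 1:-1, 1:-1] = True      # solid cube of elements...
shell[2:-2, 2:-2, 2:-2] = False     # ...hollowed out: a closed shell, one element thick
print(encloses_center(shell))        # True: the interior is sealed off from the outside

shell[1, n // 2, n // 2] = False     # remove a single element from the wall
print(encloses_center(shell))        # False: one hole, and the global property is gone
```

With the shell intact, the flood fill from outside never reaches the center; remove a single element and it does, even though almost all of the structure is still there.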

Secondly, to understand what a membrane is, you would probably not find it very useful to measure every element of structure with an electron microscope. Rather, you might perhaps try to see what happens if you make a hole in the structure. Or you might try to build a membrane with some other type of element. Or you might study very small membranes. In this example, everything is important, in the sense that you cannot remove something and leave the rest unchanged without making a hole, but nothing is important in particular, in the sense that all elements are identical and you could make a membrane with any subset of the elements by changing the structure a little bit (i.e. binding the elements around a hole together). Maybe ephaptic interactions are of this kind: they might have no specific function, but certainly if you remove them without adapting the rest of the structure, the neural networks will have different dynamics, perhaps introducing some pathological imbalance.

Thirdly, to relate to Movshon's argument: the fact that a diversity of structures have the property of being a membrane (e.g. structures of different sizes or made of a different basic element) does not mean that structure is random. Actually, quite the opposite is true here: structure is highly organized at a detailed level, otherwise there would be holes. It just means that function results from the organization of the structure, from relations between structural elements, and those relations could be instantiated in different ways.

Now is it the case that the problem of the connectome is of this kind? A classical counter-argument to connectomics is the case of C. elegans. That worm has 302 neurons and its connectome was completely mapped in the 1980s. While it's certainly a useful resource in that field, it is fair to say that we still don't understand much about how that nervous system works. One classical reply is that in C. elegans, it is notoriously difficult to do electrophysiology, and so we don't even know the sign of the connections (excitation or inhibition). Fair enough. But let us say we do know the sign. Would that solve the problem? Let us move to the mammalian cortex. A pyramidal cell has on the order of 10,000 synapses. Is it possible to infer the dynamics of a cortical network from the connections? In fact, it has been shown that in pyramidal cells there is a tight dynamic balance between excitatory and inhibitory input, and that without it neurons would fire regularly, instead of irregularly as they do. It has also been found that there are mechanisms that maintain this balance (synaptic and homeostatic plasticity). But by just looking at the connectome, you would not see that. You would see that a pyramidal cell has about 4 times more excitatory inputs than inhibitory inputs. If you tried to measure synaptic strengths, this would also not be sufficient, because the balance requires a very high precision, which must hold in the in vivo situation, taking into account all elements of structure. This is just an example, but the point is that these principles of organization do not emerge from just looking at the structure (and in this case it would not even help), but from directly investigating those principles as such.
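To illustrate why the 4:1 count of excitatory to inhibitory synapses visible in a connectome does not reveal this balance, here is a minimal sketch of a leaky integrate-and-fire neuron bombarded by excitatory and inhibitory Poisson inputs. All parameters (synapse counts, rates, weights) are illustrative assumptions; the point is only that the same wiring statistics produce qualitatively different dynamics depending on a tuning of synaptic strengths that the connectome does not show.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lif(inh_scale, T=20.0, dt=1e-4):
    """Leaky integrate-and-fire neuron driven by excitatory and inhibitory Poisson
    bombardment; returns (firing rate in Hz, CV of interspike intervals)."""
    tau = 0.02                       # membrane time constant (s)
    v_th, v_reset = 1.0, 0.0         # spike threshold and reset (arbitrary units)
    n_exc, n_inh = 4000, 1000        # 4:1 synapse count ratio, as a connectome would show
    rate = 5.0                       # presynaptic firing rate (Hz), illustrative
    w_exc = 0.025                    # excitatory synaptic weight, illustrative
    w_inh = 4.0 * w_exc * inh_scale  # inhibition tuned (scale 1.0) or weakened (<1.0)

    v = 0.0
    spike_times = []
    for i in range(int(T / dt)):
        ne = rng.poisson(n_exc * rate * dt)   # excitatory input spikes in this time step
        ni = rng.poisson(n_inh * rate * dt)   # inhibitory input spikes in this time step
        v += -v * dt / tau + w_exc * ne - w_inh * ni
        if v >= v_th:
            spike_times.append(i * dt)
            v = v_reset
    isi = np.diff(spike_times)
    cv = isi.std() / isi.mean() if len(isi) > 1 else float('nan')
    return len(spike_times) / T, cv

# Same synapse counts (same "connectome"), very different dynamics:
print("balanced inhibition:", simulate_lif(1.0))  # a few Hz, irregular (CV roughly 1)
print("inhibition halved:  ", simulate_lif(0.5))  # high rate, much more regular (low CV)
```

With inhibition tuned to cancel excitation on average, firing is sparse and irregular; weaken the inhibitory strength and the same neuron, with exactly the same numbers of synapses of each type, fires fast and much more regularly.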

In summary, the limiting factor in our understanding of the brain is not so much the lack of a more detailed description of structure as the lack of a deeper understanding of the organizational principles of the nervous system, and that will not be provided by electron microscopy.