Connectomics bashing (II) - Invited post by Bernhard Englitz

My friend and colleague Bernhard Englitz reacted to my previous post on connectomics. What follows is his defense of connectomics:

The connectomics viewpoint is often portrayed as an extreme claim, namely that knowing the connectome (with or without weights) is the sole key to understanding the brain. Certainly, some of its proponents are guilty of encouraging this portrayal; however, from a plain funding perspective, it is understandable why claims get exaggerated (the Human Brain Project being a good example of how extreme claims can lead to extreme payoffs). I contend, however, that even the strongest proponents of connectomics are far from believing their claims. Rather, I claim that the value of the connectome would lie not in a final explanation, but in a solid basis for further exploration.

Across all domains of life science, the availability of data and resources enables new studies, the genome probably being the best example in this respect. The availability of the genome did not solve many fundamental questions that existed before, but it opened the space for new questions, such as systematic inter-organism comparison, genome-wide searches for homology, etc. The value of having one or multiple genomes for these particular questions was that one could suddenly make certain statements with confidence, rather than with a large amount of speculation.

Likewise, proponents of connectomics do not emphasize the value of the structural connectome because they think functional data or neural activity are unimportant, but because such data are simply not as available with current technology. Science has always been opportunistic, profiting from new techniques to address or reopen certain questions from a new perspective (which does not exclude profiting equally from ideas). Block-face imaging is available and can be applied today, whereas many functional methods are on the verge of scaling to large systems but do not yet achieve a resolution or range comparable to the methods of connectomics.

However, new methods always come at a cost, as was again demonstrated by the Human Genome Project, whose cost must seem like a gargantuan waste of money in retrospect, given the steep decline in sequencing costs since. From my perspective, drastic shifts in the funding landscape are perhaps the biggest danger to fear, in light of the potentially much lower price tag just a few years from now. On the other hand, methods development and scientific discovery often go hand in hand, and some of the savings are only realized through public investment, which creates a market for new technologies. Hence, hardly anyone doubts that the availability of connectomic data would be useful for a range of questions; the main question is 'now' or 'tomorrow' (although this decision is clearly complex).

But what are the domains in which the availability of a connectome could really make a difference in neuroscience? Before addressing this question, I would like to caution against the use of analogies, since the differences are subtle and may not transfer. Taking Romain's example of the membrane, it is clear that the question posed about the membrane's basic nature does not profit from knowing all the lipids and proteins that form the membrane, but many essential questions relating to compartmentalization of a cell, trafficking, and the expression of proteins would profit. Undoubtedly, we would have a much harder time thinking about the space of possibilities in a cell without precise knowledge of the cell's structure. I interpret Romain's example as illustrating that certain questions will not profit from overly detailed knowledge, and that in some cases too much data can even inhibit conceptual development.

However, I contend that many questions in neuroscience are of a different kind and will profit strongly from the availability of a connectome. These questions range from subcellular structure to network structure. For example, current modelling approaches for cortical neurons are usually based either on detailed reconstructions of individual cells or on simplified estimates. Both approaches - the first aiming to mimic reality, the second to make it minimal and thus tractable - suffer from the lack of the RIGHT reductions and the RIGHT level of detail. Connectomics can provide a tool to arrive at this middle ground, by offering the possibility to quantify typical structures, their relative occurrence, and their variability. Such a family of model neurons will be free from the particularities of any single precisely reconstructed neuron, and will be less dependent on creative reductions previously performed on small datasets (clearly this has been realized before, and pre-connectomics attempts have tried to close this gap, but a publicly available connectome would make it possible to put a wide range of modelling efforts on safer ground, rather than leaving single labs to make insular, small, guarded advances). Similar arguments hold for the statistics of connectivity in neural networks, where we are currently witnessing a wide range of 'randomly connected' network studies, which may ultimately be of little use, depending on how well their assumptions are matched. Again, to emphasize: the goal has to be finding the right statistics, not replicating the exact structure.
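To make the contrast concrete, here is a minimal sketch of the difference between a 'randomly connected' network and one constrained by measured connectivity statistics. The cell-type connection probabilities below are purely illustrative placeholders, not data from any actual connectome; the point is only that a connectome would let such numbers be measured rather than assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical connection probabilities between (presynaptic, postsynaptic)
# cell types -- illustrative values, NOT measured connectome data.
p_conn = {("E", "E"): 0.1, ("E", "I"): 0.4,
          ("I", "E"): 0.3, ("I", "I"): 0.2}

def sample_network(types, p_conn, rng):
    """Sample a binary adjacency matrix in which the connection
    probability depends on the cell types of the pre- and postsynaptic
    neurons, i.e. on 'the right statistics' rather than uniform chance."""
    n = len(types)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):            # presynaptic index
        for j in range(n):        # postsynaptic index
            if i != j:            # no self-connections
                adj[i, j] = rng.random() < p_conn[(types[i], types[j])]
    return adj

# 80/20 excitatory/inhibitory split, a common cortical rule of thumb.
types = ["E"] * 80 + ["I"] * 20
adj = sample_network(types, p_conn, rng)

# A uniform 'randomly connected' control with the same overall density,
# which throws away the cell-type structure entirely.
adj_uniform = rng.random(adj.shape) < adj.mean()
```

Both networks have the same number of neurons and roughly the same connection density, but only the first reproduces the type-specific structure; a study whose conclusions differ between the two is making an assumption that connectomic data could check.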

In a nutshell, I think accountability could be increased and the arbitrariness of assumptions widely reduced in this case, thus opening a new era of mid-level modelling, i.e. quite the opposite of what the Human Brain Project is trying to achieve and what Romain is criticizing.
