I just came back from a panel discussion on connectomics between Moritz Helmstaedter and myself, at the Donders summer school on neural metrics. I will share my presentation when I figure out how to upload a 50 MB file to this website! (There are a few videos.) In my presentation, I essentially point out that connectivity, especially anatomical connectivity (as opposed to functional connectivity), generally tells you very little about function. In particular, it does not help you distinguish between general theories of nervous system function (say, liquid state machines or attractor networks), because those theories can easily accommodate very different types of connectivity (as long as, say, network connectivity is recurrent).
What came up in reaction to that remark is classical Popperianism: the notion that a scientific theory should aim for critical experiments, experiments that could immediately disqualify the theory. On this view, theories of brain function ought to make falsifiable predictions about connectivity, and if they don't, they hardly qualify as scientific theories (the term “laziness” was used).
I have two remarks. First of all, the idea that scientific theories are theories that are falsifiable by critical experiments, and that scientific progress essentially consists in performing those experiments, dates back to Popper's 1934 book, and a few things have been written since then. Apparently many scientists are philosophically stuck in the 1930s. Thomas Kuhn's historical analysis shows that it is rarely the case that science progresses in this way. Of course it happens sometimes, but it is not the generic case. There are good reasons for that, which have been analyzed by philosophers such as Imre Lakatos. The basic remark is that a scientific theory is one that is falsifiable (the “demarcation criterion”), yes, but in practice it is also one that is falsified. There are always countless observations that do not fit the theoretical framework; those are either ignored, or the theory is amended with ad hoc assumptions, which might later be explained in a more satisfactory way (e.g. the feather falls more slowly than the hammer because of some other force, let's call it “friction”). So it is very rare that a single experiment can discard a broad theory, because the outcome can often be accommodated by the theory. This can seem like a flaw in the scientific discovery process, but it is not: it is unavoidable if we are dealing with the complexity of nature; an experimental outcome can be negative because the theory is wrong, or because, say, there is a new planet that we didn't know about (“let's call it Neptune”). This is why science progresses through the long-term interaction of competing theories (what Lakatos calls “research programs”), and this is why insisting that scientific theories should produce critical experiments is a fundamental epistemological error. Anyone who has spent a little time in research must have noticed that most hypothesis-driven papers actually test positive predictions of theories, the success of which they interpret as support for those theories.
The second remark is that, nonetheless, there is a bit of truth in the claim that theories of neural network function are difficult to confront with experiments. Certainly they are not very mature. I wouldn't say it is out of laziness, though: it is simply a very difficult task to build meaningful theories of the brain! But it is absolutely not true that they are insufficiently constrained for lack of data. Not only are they constrained, but I do not know of any such theory that is not immediately falsified by countless observations. There is not a single model of brain function that comes close to accounting for the complexity of animal behavior, let alone for physiological properties. How many theories in systems neuroscience are actually about systems, i.e. about how an organism might interact with an ecological environment, as opposed to describing responses of some neurons to some stimuli, interpreted as a “code”? The biggest challenge is not to distinguish between different theories that would all account for current data (none does), but to build at least one that could qualify as a quantitative theory of brain function.
Importantly, if this diagnosis is correct, then our efforts should rather be spent on developing theories (by this I mean broad, ambitious theories) than on producing yet more data when we have no theoretical framework to make use of them. This will be difficult as long as the field lives in the 1930s when it comes to epistemology, because any step towards an ambitious theory will be a theory that is falsified by current data, and all the more so if we produce much more data. Can you make a scientific career by publishing theories that are empirically wrong (but interesting)? As provocative as it might sound, I believe you should be able to, if we ever want to make progress on the theory of brain function – isn't that the goal of neuroscience?
Hi Romain, it's a very nice review, I really enjoyed reading it!
Yet, I would like to add some ideas regarding this passage:
"Importantly, if this diagnosis is correct, then our efforts should rather be spent on developing theories (by this I mean broad, ambitious, theories) than on producing yet more data when we have no theoretical framework to make use of them."
I guess for theoreticians the idea that we have to create more theories to explain how the brain works sounds very attractive, especially when it comes to large-scale ideas about the connection between the brain and the environment. But it seems to me that, historically, experimental techniques have been one of the main drivers of discoveries in neuroscience. Consider the pioneering works of Ramón y Cajal in neuroanatomy, patch-clamp recordings developed in the 1980s, or fMRI developed in the early 1990s: the introduction of new tools has always been followed by a lot of new discoveries. There is already a field that does not deal with brain measurements, namely psychology, which has a lot of different theories about how the mind works. Yet many of them are hardly falsifiable. Do we want theoretical neuroscience to become something like that?
Actually, my question is a bit provocative: how many significant discoveries in neuroscience were made purely thanks to the advancement of a theory?
On the other hand, perhaps we simply do not have good theories so far, and that is why there are not so many.
Hi Anatoly!
My reply comes a little late, but anyway.
You are raising a classic debate in epistemology, I think: the relative contribution of technological vs. theoretical developments. It's a complicated debate, but I can comment on some of your specific examples. I don't know fMRI well enough to comment on it, but as for patch clamp, the point is that the technique was actually developed to test a theory: the theory that membrane currents are produced by ion channels, which open discretely. That theory had existed for quite a while, and the technique was developed because of it. So yes, the technique then enabled lots of new findings, but that is really an example of theory-driven development rather than data-driven development (such as connectomics). In general, there is an interaction between theory and experiment, not a one-way relation. So to me, thinking that good theory is going to magically pop out of a pile of data, when you have no idea what you are going to do with that data, is very naive.
Regarding psychology, I think you might be referring to psychoanalytic theories, or that kind of theory with very weak empirical grounding. But experimental psychology is a huge field, and there are pretty good theories there, with serious empirical grounding. You could have a look at Daniel Kahneman's book, for example, which is a nice read (Thinking, Fast and Slow).
But anyway, my point is not that we should develop theories independently of data. Quite the contrary in fact. I'm pointing out that current theories are already contradicted by current data, so adding more data is not going to help in that respect. What we should rather do at this point is try to make theories that are consistent with the data we already have, and that's theoretical work.