I just came back from a panel discussion on connectomics between Moritz Helmstaedter and myself, at the Donders summer school on neural metrics. I will share my presentation when I figure out how to upload a 50 MB file to this website! (there are a few videos) In my presentation, I essentially point out that connectivity, especially anatomical connectivity (as opposed to functional connectivity), generally tells you very little about function. In particular, it doesn't help you distinguish between general theories of nervous system function (say, liquid state machines or attractor networks), because those theories could easily accommodate very different types of connectivity (as long as, say, network connectivity is recurrent).
What came up in reaction to that remark is classical Popperianism. That is, the notion that a scientific theory should aim for critical experiments, experiments that could immediately disqualify that theory. So theories of brain function ought to make falsifiable predictions about connectivity, and if they don't they hardly qualify as scientific theories (the term “laziness” was used).
I have two remarks. First of all, the idea that scientific theories are theories that are falsifiable by critical experiments, and that scientific progress essentially consists in performing those experiments, dates back to Popper's 1934 book, and a few things have been written since then. Apparently many scientists are philosophically stuck in the 1930s. Thomas Kuhn's historical analysis shows that science rarely progresses in this way. Of course it happens sometimes, but it's not the generic case. There are good reasons for that, which have been analyzed by philosophers such as Imre Lakatos. The basic remark is that a scientific theory is indeed one that is falsifiable (the “demarcation criterion”), but in practice it is also one that is falsified. There are always countless observations that do not fit the theoretical framework; those are either ignored, or the theory is amended with ad hoc assumptions, which might later be explained in a more satisfactory way (e.g. the feather falls more slowly than the hammer because of some other force, let's call it “friction”). So it is very rare that a single experiment can discard a broad theory, because the outcome can often be accommodated by the theory. This can seem like a flaw in the scientific discovery process, but it's not: it's unavoidable if we are dealing with the complexity of nature; an experimental outcome can be negative because the theory is wrong, or because, say, there might be a planet that we didn't know about (“let's call it Neptune”). This is why science progresses through the long-term interaction of competing theories (what Lakatos calls “research programs”), and this is why insisting that scientific theories should produce critical experiments is a fundamental epistemological error.
Anyone who has spent a little time in research must have noticed that most hypothesis-driven papers actually test positive predictions of theories, the success of which they interpret as support for those theories.
The second remark is that, nonetheless, there is a bit of truth in the claim that theories of neural network function are difficult to confront with experiments. Certainly they are not very mature. I wouldn't say it is out of laziness, though. It is simply a very difficult task to build meaningful theories of the brain! But it is absolutely not true that they are insufficiently constrained for lack of data. Not only are they constrained, but I do not know of any such theory that is not immediately falsified by countless observations. There is not a single model of brain function that comes close to accounting for the complexity of animal behavior, let alone for its physiological properties. How many theories in systems neuroscience are actually about systems, i.e. about how an organism might interact with an ecological environment, as opposed to describing responses of some neurons to some stimuli, interpreted as a “code”? The biggest challenge is not to distinguish between different theories that would all account for current data (none does), but to build at least one that could qualify as a quantitative theory of brain function.
Importantly, if this diagnosis is correct, then our efforts should rather be spent on developing theories (by this I mean broad, ambitious theories) than on producing yet more data when we have no theoretical framework to make use of them. This will be difficult as long as the field lives in the 1930s when it comes to epistemology, because any step towards an ambitious theory will be a theory that is falsified by current data, all the more so as we produce more data. Can you make a scientific career by publishing theories that are empirically wrong (but interesting)? As provocative as it might sound, I believe you should be able to, if we ever want to make progress on the theory of brain function – isn't that the goal of neuroscience?