I would like to propose another way to describe the epistemological relationships between computational and experimental neuroscience. In acoustics, there is a methodology known as “analysis-synthesis” of sounds (Risset & Wessel, 1982), used to understand what makes the quality (or “timbre”) of a sound (see in particular Gaver (1993), “How do we hear in the world?”). A first step is to examine the sound by various methods, for example acoustic analysis (looking at the spectrum, the temporal envelope, etc.), and to try to extract the salient features. A second step consists in synthesizing sounds that display these features. One then listens to these sounds to evaluate whether they successfully reproduce the quality of the original sounds. This evaluation step can be made objective with psychoacoustic experiments. The results of the synthesis step then inform the analysis, which can focus on those aspects that were not correctly captured, and the procedure goes through a new iteration. The analysis can also be guided by physical analysis, i.e., by theory. For example, the perceived size of a sounding object should be related to its resonant frequencies, whose wavelengths correspond to the dimensions of the object, and the type of material (wood, metal) should be related to the decay rate of the temporal envelope. With these principles, it is possible to synthesize convincing sounds of impacts on a wooden plate, for example.
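To make the synthesis step concrete, here is a minimal sketch in Python (assuming NumPy) of this kind of modal synthesis: an impact sound is approximated as a sum of exponentially decaying sinusoids, where the modal frequencies relate to the perceived size of the object and the decay rates to the perceived material. The frequencies, decay times and amplitudes below are purely illustrative, not measurements from a real plate.

import numpy as np

def impact_sound(freqs, decays, amps, fs=44100, duration=1.0):
    # Additive ("modal") synthesis: a sum of exponentially decaying sinusoids.
    t = np.arange(int(fs * duration)) / fs
    signal = sum(a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
                 for f, tau, a in zip(freqs, decays, amps))
    return signal / np.max(np.abs(signal))

# Illustrative modal frequencies (Hz) and decay times (s).
# Lower modal frequencies suggest a larger plate; short decay times
# suggest wood, longer ones metal.
wood_hit = impact_sound(freqs=[180, 410, 620, 950],
                        decays=[0.30, 0.20, 0.15, 0.10],
                        amps=[1.0, 0.6, 0.4, 0.3])
metal_hit = impact_sound(freqs=[180, 410, 620, 950],
                         decays=[2.0, 1.8, 1.5, 1.2],
                         amps=[1.0, 0.6, 0.4, 0.3])

Listening to (or psychoacoustically testing) such synthesized signals is the evaluation step: if the “wood” version does not sound like wood, the chosen features were not the right ones, and the analysis is revised.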
There is a direct analogy with the relationship between computational and experimental neuroscience. Experimental neuroscience aims at identifying various aspects of the nervous system that seem significant: this is the analysis step. The object of experiments is a fully functional organism, or a piece of it. The empirical findings are considered significant in relation to the theory of the moment (perhaps the analogue of physical analysis in acoustics) and to the chosen method of analysis (type of measurement and experimental protocol). By themselves, they only indicate what might contribute to the function of the organism, not how it contributes to it. For example, if the attack of a piano sound is removed, it no longer sounds like a piano, so the attack is important to the quality of the piano sound. In the same way, lesion studies inform us of what parts of the brain are critical for a given function, but they do not tell us how exactly those parts contribute to it. Computational neuroscience, then, can be viewed as the synthesis step. Starting from nothing (i.e., not from a fully functional organism), one tries to build a drastically simplified system, informed by the analysis step. But the goal is not to reproduce all the pieces of empirical data that informed the system; the goal is to reproduce the function of the organism. In analogy with sound: the goal is not to reproduce detailed aspects of the spectrum, but to produce a synthesized signal that sounds right. If the function is not correctly reproduced, then perhaps the features identified by the analysis step were not the most relevant ones. In this way the synthesis step informs the analysis step.
This analogy highlights a few important epistemological characteristics of computational neuroscience. Most importantly, computational neuroscience is primarily about explaining the function, and only secondarily the empirical data. Empirical experiments on the auditory system of the barn owl aim at explaining how the barn owl catches a mouse in the dark. Computational studies also aim at explaining how the barn owl catches a mouse in the dark, not at reproducing the results of the empirical experiments. Another way to put it: the data to be explained by the theory are not only what is explicitly stated in the Results section, but also the piece of empirical evidence that is implicitly stated in the Methods or the Introduction, namely that before the experiment, the barn owl was a fully functional living organism, able to catch prey in the dark. Secondly, computational neuroscience, as a synthetic approach, aims at a simple, conceptually meaningful description of the system. Realism is in the function (how the signal sounds), not in the amount of decoration aimed at mimicking pieces of empirical data.
This discussion also lends support to the criticism of epistemic reductionism. Imagine we could measure all the components of the brain and put them together in a realistic simulation of the brain (which already implies some form of methodological reductionism). This would correspond to fully analyzing the spectrum of a sound, recording it in complete detail, and then playing it back. What would be learned about what makes the quality of the sound? A second point is methodological: suppose we collect all the necessary data about the brain, but from different individual brains, and perhaps from a few related species such as mice. Would the result sound like a piano, or would it sound like a cacophony of different pianos and a violin?