In the previous posts, I have strongly insisted on the epistemological notion that theory precedes empirical observation, in the sense that experiments are designed to test theories. I stressed this point because computational neuroscience seems to be understood by many scientists through the prism of naive inductivism: the view that theory derives more or less directly from observation (you make experimental measurements, and then you “make a model” from them). I do not need to repeat why this view is flawed in many ways. But of course it would be absurd to advocate the opposite view, namely that observation cannot provide knowledge unless it is designed to test a theory. This post is meant to nuance my previous arguments.
In fact, historically, science has progressed by two very different means: one is the introduction of radically new perspectives (“paradigm shifts”), the other is the development of new tools. A salient example in neuroscience is the development of the patch-clamp technique, which allows recording the currents flowing through single ionic channels. The technique earned Neher and Sakmann the Nobel Prize in 1991. The discoveries they made with it were not revolutionary in Kuhn’s sense: they did not fundamentally contradict the prevailing views, and there was no conceptual change of paradigm. It had been thought since the time of Hodgkin and Huxley that membrane currents came from the flow of ions through channels in the membrane, even though this could not be observed directly at the time. But still, the ability to make observations that were not possible before led to considerable new knowledge, for example the fact that channel opening is binary and stochastic.
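To make that last point concrete, here is a minimal sketch of what “binary and stochastic” means for channel opening: a toy two-state Markov model with made-up rate constants and current amplitude, not an analysis of real patch-clamp data. At any moment the single-channel current takes one of exactly two values, and the transitions between them occur at random times.

```python
import numpy as np

# Toy two-state Markov channel: Closed <-> Open.
# Rate constants and single-channel current are hypothetical, for illustration only.
rng = np.random.default_rng(0)
dt = 1e-5        # time step (s)
duration = 1.0   # total simulated time (s)
alpha = 100.0    # opening rate (1/s), made up
beta = 300.0     # closing rate (1/s), made up
i_open = 2e-12   # single-channel current when open (A), order of magnitude only

n_steps = int(duration / dt)
is_open = False
current = np.empty(n_steps)
for step in range(n_steps):
    if is_open:
        # probability of closing during this time step
        if rng.random() < beta * dt:
            is_open = False
    else:
        # probability of opening during this time step
        if rng.random() < alpha * dt:
            is_open = True
    current[step] = i_open if is_open else 0.0

# The simulated trace jumps between exactly two levels, as in single-channel
# recordings: openings are all-or-none and their timing is random.
print("fraction of time open: %.2f (stationary expectation %.2f)"
      % ((current > 0).mean(), alpha / (alpha + beta)))
```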
At the present time, many think that the discoverers of optogenetics are on the shortlist for the Nobel Prize in the coming years. Optogenetics is a very recent technique in which channelrhodopsin, a light-activated channel, is expressed in the membrane of target neurons through genetic manipulation. Using lasers, one can then control the firing of neurons in vivo at a millisecond timescale. This makes it possible to probe the causal role of different neurons in behavior, whereas most previous techniques, which relied mostly on recordings, could only measure correlates of behavior. Although it is probably too early to see it clearly, I anticipate that the technique will trigger not only new empirical knowledge, but also conceptually new theories.

Indeed, the development of theories is strongly biased by what can be experimentally tested and observed. For example, many current theories in neuroscience focus on the “neural code”, that is, on how neurons “represent” different types of information. This is an observer-centric view, which in my opinion stems from the fact that our current empirical view of the brain comes from recordings and imaging: we observe responses to stimuli. The neural coding view is a perspective that one has to adopt to explain such experimental data, rather than a hypothesis about what neurons do. But once we switch to different types of experimental data, in which we observe the effect of neural firing rather than what neurons “encode”, not only does it become unnecessary to adopt the stimulus-response perspective, but in fact one has to adopt the opposite perspective to explain the data: neurons act on postsynaptic neurons with their spikes, rather than observe the firing of presynaptic neurons. This is a conceptual change of perspective, but one triggered by a new experimental technique. Note that the new theories still have to be developed: by itself, the change in perspective is not a theory. But the new technique is responsible for this development in a sociological/historical sense.
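As a toy illustration of the two perspectives (entirely made up, with arbitrary numbers, just to make the contrast concrete): on the same kind of simulated data, an “encoding” analysis relates the stimulus to the neuron’s firing, while a perturbation-style analysis of the kind optogenetics makes possible imposes spikes independently of any stimulus and asks what they do downstream.

```python
import numpy as np

rng = np.random.default_rng(1)

# Entirely made-up toy data: a presynaptic neuron driven by a stimulus,
# and a postsynaptic response that grows with the presynaptic spike count.
n_trials = 200
stimulus = rng.uniform(0.0, 1.0, n_trials)      # stimulus intensity per trial
rate = 5 + 20 * stimulus                         # mean presynaptic spikes per trial
spike_count = rng.poisson(rate)                  # observed presynaptic spike counts
psp_per_spike = 0.5                              # postsynaptic effect per spike (mV), made up
post_response = psp_per_spike * spike_count + rng.normal(0.0, 2.0, n_trials)

# "Encoding" perspective: relate the stimulus to the neuron's firing.
encode_slope = np.polyfit(stimulus, spike_count, 1)[0]
print(f"encoding view: about {encode_slope:.1f} extra spikes per unit of stimulus")

# Perturbation perspective (what optogenetics makes possible): impose extra
# spikes independently of the stimulus and measure their downstream effect.
extra_spikes = rng.integers(0, 5, n_trials)      # imposed, stimulus-independent
post_perturbed = post_response + psp_per_spike * extra_spikes
causal_slope = np.polyfit(extra_spikes, post_perturbed, 1)[0]
print(f"causal view: about {causal_slope:.2f} mV postsynaptic effect per imposed spike")
```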
Another area in which I anticipate that new theories will arise from empirical observations is the understanding of dendritic function. Almost all theories in computational neuroscience, at least those that address the functional or network level, rely on a view of synaptic integration in which neurons are isopotential. That is, it is assumed that the location of a synapse on the dendritic tree shapes the postsynaptic potential, and perhaps the total conductance, but is otherwise irrelevant to synaptic integration. This is not exactly a hypothesis, because we know it is not true, but rather a methodological assumption, an approximation. Why do we make this assumption if we know it is not true? Simply because removing it does not give us an alternative theory; it leaves us with nothing: there are so many ways in which dendritic integration might work that we do not know where to start. But this will change (and it has certainly started changing in recent years) once we have a better general idea of how synapses are distributed on the dendritic tree, and perhaps of the mechanisms by which this distribution arises. Indeed, one thing at least is clear from recent experimental work: this distribution is not random at all, and it obeys different rules for excitation and inhibition. In other words: even though theory does not derive from observations, it needs a starting point, and therefore observations are critical.
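To make explicit what the isopotential assumption discussed above amounts to, here is a minimal sketch with made-up parameters: a single membrane equation in which each synaptic input is reduced to an arrival time and a weight, so that the position of the synapse on the dendritic tree plays no role whatsoever.

```python
import numpy as np

# Point-neuron (isopotential) approximation: one membrane potential,
# each synaptic input reduced to an arrival time and a weight.
# All parameters are illustrative, not fitted to any particular neuron.
dt = 0.1e-3           # time step (s)
duration = 0.2        # simulation duration (s)
tau = 20e-3           # membrane time constant (s)
v_rest = -70e-3       # resting potential (V)
v_threshold = -50e-3  # spike threshold (V)
v_reset = -70e-3      # reset potential (V)

# Hypothetical synaptic inputs: (arrival time in s, PSP amplitude in V).
# Note what is absent: any notion of where on the dendrite each synapse sits.
inputs = [(0.02, 5e-3), (0.05, 8e-3), (0.055, 8e-3), (0.12, 15e-3), (0.125, 10e-3)]
input_steps = {int(round(t_syn / dt)): w for t_syn, w in inputs}

v = v_rest
spike_times = []
for step in range(int(round(duration / dt))):
    v += dt * (v_rest - v) / tau    # leaky integration toward rest
    if step in input_steps:         # add the PSP, wherever the synapse may be
        v += input_steps[step]
    if v >= v_threshold:
        spike_times.append(step * dt)
        v = v_reset

print("spike times (s):", spike_times)
```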