What is computational neuroscience? Simply put, it is the field concerned with how the brain computes. The word “compute” is not necessarily an analogy with the computer; it must be understood in a broad sense. It simply refers to the operations that must be carried out to perform cognitive functions (walking, recognizing a face, speaking). Put this way, it might seem to cover pretty much the entire field of neuroscience. What distinguishes computational neuroscience, then, is that it seeks a mechanistic understanding of these operations, to the point that they could potentially be simulated on a computer. Note that this means neither that computational neuroscience is mostly about simulating the brain, nor that the brain is thought of as a computer. It simply reflects the materialistic assumption that, if all the laws that underlie cognition were known in detail, it should be possible to reproduce them artificially (given sufficient equipment).
A related term is “theoretical neuroscience”. This is somewhat broader than computational neuroscience, and is probably an analogy with theoretical physics, the branch of physics that relies heavily on mathematical models. Theoretical neuroscience is not necessarily concerned with computation, at least not directly. One example is the demonstration that action potential velocity is proportional to axon diameter in myelinated axons, and to the square root of the diameter in unmyelinated axons. This demonstration uses cable theory, a biophysical theory describing the propagation of electrical activity in axons and dendrites.
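For the unmyelinated case, the square-root scaling can be sketched with a rough dimensional argument from passive cable theory (a back-of-the-envelope sketch, not the full demonstration, which also involves the active spike-generating dynamics):

```latex
% For a cylindrical axon of diameter d (passive cable theory):
%   axial resistance per unit length:     r_i \propto 1/d^2
%   membrane resistance per unit length:  r_m \propto 1/d
%   membrane capacitance per unit length: c_m \propto d
\lambda = \sqrt{r_m / r_i} \propto \sqrt{d}, \qquad
\tau_m = r_m c_m \propto \frac{1}{d}\cdot d = \text{constant}, \qquad
v \sim \frac{\lambda}{\tau_m} \propto \sqrt{d}.
```

Roughly speaking, in myelinated axons the internode length scales linearly with fiber diameter, so the whole propagation problem is rescaled in space and velocity scales linearly with diameter (Rushton's classic argument).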
“Quantitative neuroscience” also refers to the use of quantitative mathematical models as a tool to understand brain function or dynamics, but the substitution of “quantitative” for “theoretical” suggests that the field is more concerned with data analysis (as opposed to theories of how the brain works).
Finally, “neural modeling” is concerned with the use of quantitative neural models, generally biophysical models. The terminology suggests a data-driven approach, i.e., building models of neural networks from experimental measurements, based on existing theories. This is why I am somewhat uneasy with this terminology, for epistemological reasons. The data-driven approach implicitly assumes that it is possible and meaningful to build a functioning neural network from a set of measurements alone. This raises two critical issues. The first is that it rests on what Francisco Varela called “neurophysiological subjectivism” (see this related post), the idea that perception is the result of neural network dynamics. Neurophysiological subjectivism is problematic because (in particular) it fails to fully recognize the defining property of living beings, which is teleonomy (in other words, function). Living organisms are constrained on one hand by their physical substrate, but on the other hand this substrate is itself tightly constrained by evolution; this is precisely what makes them living beings and not just spin glasses. The data-driven approach only considers the constraints derived from measurements, not the functional constraints, which essentially amounts to denying that the object of study is part of a living being. Alternatively, it assumes that measurements are so constraining that function is entirely implied by them, which seems naive.
The second major issue with the data-driven approach is that it has a strong flavor of inductivism. That is, it implicitly assumes that a functioning model is directly implied by a finite set of measurements. But inductivism is a philosophical error, for there are an infinite number of theories (or “models”) consistent with any finite set of observations (an error pointed out by Hume, for example). In fact, Popper and his followers noted that inductivism commits another philosophical error, which is to think that there is such a thing as a “pure observation”. Experimental results are always interpreted in a specific theoretical context (a.k.a. the “Methods” section). One does not “measure” a model. One performs a specific experiment and observes the outcome with tools, which are themselves based on currently accepted theories. In other words, an experimental result is the answer to a specific question. But the type of question is not “What is the time constant of the model?”, but rather “What exponential function best fits the electrical response of this neuron to a current pulse?”. Measurements may then provide constraints on possible models, but they never imply a model. In addition, as I noted above, physical constraints (implied by measurements) are only one side of the story; functional constraints are the other. Neglecting this other side means studying a “soup of neurons”, not the brain.
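To make the distinction concrete, here is a minimal sketch (in Python, with made-up numbers and hypothetical variable names) of what the experiment-level question looks like in practice: one fits an exponential to the voltage response to a current pulse, and only within the theoretical context of a passive RC membrane does the fitted decay get interpreted as “the membrane time constant”.

```python
# Minimal sketch (hypothetical data): the experiment answers "what exponential
# best fits this trace?", not "what is the time constant of the neuron".
# Interpreting the fitted parameter as a membrane time constant already
# assumes a passive RC membrane model.
import numpy as np
from scipy.optimize import curve_fit

# Simulated stand-in for a recorded response to a small current step (10 kHz sampling)
t = np.arange(0.0, 0.1, 1e-4)                    # seconds
v = -70e-3 + 10e-3 * (1 - np.exp(-t / 0.02))     # charging curve, "true" tau = 20 ms
v += 0.3e-3 * np.random.randn(t.size)            # recording noise

def charging(t, v0, amp, tau):
    """Exponential charging curve predicted by a passive RC membrane."""
    return v0 + amp * (1 - np.exp(-t / tau))

popt, _ = curve_fit(charging, t, v, p0=(-70e-3, 10e-3, 0.01))
print(f"best-fit exponential 'time constant': {popt[2] * 1e3:.1f} ms")
```

The fit answers the narrow experimental question; turning it into a model parameter is an additional interpretive step.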
In summary, it is often stated or implied that “realistic” models are those that are based on measurements: this is 1) an inductivist mistake, and 2) a tragic disregard of what defines living beings, namely functional constraints.
I will end this post by asking a question: what is a better description of the brain? A soup of “realistic” neurons or a more conceptual mechanistic description of how interacting neurons support cognitive functions?
Perhaps we want neuron soup, since we already understand conceptually how neurons support cognitive function?
I think the answer also depends on the level of accuracy one wants, since "cognitive functions" are emergent concepts, like "atoms", which have only a limited regime of validity compared, respectively, to neurons or to electrons and protons. A simple and beautiful theory is like being able to operate near a fixed point of a renormalization flow. At low energies, non-renormalizable theories look like renormalizable theories, and we can use the latter. But at high energies, we need non-renormalizable theories to describe things like massive neutrinos.