What is computational neuroscience? (XVII) What is wrong with computational neuroscience?

Computational neuroscience is the field that aims at explaining the neural mechanisms underlying cognitive abilities, by developing quantitative models of those mechanisms that can display the abilities in question. It can be seen as the “synthetic” approach to neuroscience. On one hand, it is widely believed that a better understanding of “how the brain does it” should allow us to design machines that outperform the best computer programs we currently have at tasks such as recognizing visual objects or understanding speech. On the other hand, there is also a broad recognition in the field that the best algorithms for such tasks always come from computer science (e.g. machine learning), because these algorithms are developed specifically for those tasks, without the “burden” of having to explain biology (for example, support vector machines or hidden Markov models). In fact, part of the work done in computational neuroscience aims at connecting biological mechanisms with preexisting computer algorithms (e.g. seeing synaptic plasticity as a biological implementation of ICA). Given this, the belief that better algorithms will somehow arise from a better understanding of biology seems rather magical.
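To make the last point concrete, here is a toy sketch (my own illustration, not a reference to any particular study) of the kind of correspondence meant by “synaptic plasticity as ICA”: a nonlinear Hebbian-style rule, in which the weight change is presynaptic activity times a nonlinear function of postsynaptic activity, recovers an independent component of whitened mixed signals, as in one-unit ICA (cf. Hyvärinen & Oja). All numbers are arbitrary, and the update is run in batch for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian sources, linearly mixed, then whitened.
n = 20000
sources = rng.laplace(size=(n, 2))
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
x = sources @ mixing.T
x -= x.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
x_white = x @ eigvec @ np.diag(eigval ** -0.5) @ eigvec.T  # ZCA whitening

# Nonlinear Hebbian-style rule: weight change ~ presynaptic activity times
# a nonlinear function of postsynaptic activity, with normalization to keep
# the weights bounded. For whitened inputs this climbs the kurtosis of the
# output and converges to one independent component.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    y = x_white @ w                                 # postsynaptic activity
    w = (x_white * (y ** 3)[:, None]).mean(axis=0)  # Hebbian-like update
    w /= np.linalg.norm(w)

# One correlation should be near +/-1, the other near 0.
y = x_white @ w
for i in range(2):
    print(f"correlation with source {i}: {np.corrcoef(y, sources[:, i])[0, 1]:+.2f}")
```

The point is that the algorithm (ICA) existed first; the biology is mapped onto it, rather than the other way round.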

What is wrong here is that, while it is proposed that new generation computers should take their inspiration from brains, the entire field of computational neuroscience seems to invert this proposition and to take the computer as a model of the brain. I believe there are two main flaws in the computer analogy: 1) the lack of an environment, and 2) the idea that there is a preexisting plan of the brain.


1) The lack of an environment

Neural models that address cognitive abilities (e.g. perception) are generally developed under the input-output paradigm: feed data in (an image), get results out (a label). This paradigm, inspired by the computer, is also the basis of many experiments (present a stimulus, observe behavior/neural activity). It follows that such models do not interact with an environment. In contrast with this typical setting, in a behaving animal, sensory inputs are determined both by the outside world and by the actions of the animal in that world. The relationship between “inputs” and “outputs” is not causal but circular, and the environment is what links the outputs back to the inputs. In addition, the “environment” of neural models is generally only an abstract idealization, often inspired by a specific controlled lab experiment. As a result, such models may be able to reproduce the results of controlled experimental situations, but it is not so clear that they have any explanatory value for ecological situations, or that they can be considered models of a biological organism.
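The contrast can be made concrete with a minimal closed-loop sketch (a toy of my own, in the spirit of Braitenberg vehicles, with arbitrary numbers): the “input” at each step is produced by the world as a function of the agent’s previous “outputs”, so the relation between them is circular rather than causal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy world: an agent on a line senses the intensity of a
# light source located at 0; its only output is a step left or right.
position = 5.0                  # state of the world, not known to the agent

def sense(pos):
    """The input is produced by the world, given where the agent now is."""
    return 1.0 / (1.0 + pos ** 2) + 0.005 * rng.standard_normal()

action = rng.choice([-0.1, 0.1])        # initial exploratory move
prev_intensity = sense(position)
for t in range(300):
    position += action                  # the output changes the world...
    intensity = sense(position)         # ...and thereby the next input
    if intensity < prev_intensity:      # sensorimotor rule: reverse when
        action = -action                # the sensed intensity drops
    prev_intensity = intensity

print(f"final position: {position:+.2f}")   # settles near the source at 0
```

In the input-output paradigm the loop would be cut: sense() would be replaced by a fixed dataset, and the action would never feed back into the next input.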

A corollary of the absence of an environment is the lack of autonomy. Such neural models do not display any cognitive abilities, since they cannot “do” anything. Instead, the assessment of model performance must rely on the intervention of an external observer, as in the coding paradigm: models are designed to “encode” features of the world, meaning that the external observer, not the organism, decodes the activity of the model. This weakness is an inevitable consequence of the strong separation between perception and action, because the “output” of a sensory system is only meaningful in the context of the actions that it drives. This issue again stems from the computer analogy, in which the output of a program is meaningful only because an external observer gives it a meaning.
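Here is the coding paradigm in miniature (an illustrative toy, not any specific published model): the model “encodes” a stimulus in the activity of tuned units, and the claim that it works rests entirely on a decoder supplied by the external observer.

```python
import numpy as np

rng = np.random.default_rng(2)

# A population of tuned units "encodes" a stimulus variable; performance
# is then assessed by an external decoder.
preferred = np.linspace(-1, 1, 16)            # preferred stimuli of 16 units

def encode(stimulus):
    """Model 'responses': Gaussian tuning curves plus noise."""
    rates = np.exp(-(stimulus - preferred) ** 2 / 0.08)
    return rates + 0.05 * rng.standard_normal(preferred.shape)

def external_decode(rates):
    """This step belongs to the observer, not to the model: nothing in
    the 'organism' ever uses this estimate."""
    return preferred[np.argmax(rates)]        # winner-take-all readout

stimulus = 0.33
estimate = external_decode(encode(stimulus))
print(f"stimulus {stimulus:+.2f} decoded as {estimate:+.2f}")
```

Remove external_decode and the model no longer “does” anything at all; the decoding, and hence the meaning, lives outside the model.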

These criticisms are in fact very similar to those expressed against traditional artificial intelligence in the 1980s, which gave rise in particular to the field of behavior-based robotics. But they do not seem to have made their way into computational neuroscience.


2) The plan of the brain

There is another criticism of the computer analogy, which has to do with the idea that the brain has been engineered by evolution in the same way as a computer is engineered: a computer has a program that was written to fulfill a function, and the brain has a structure that evolved to fulfill a function. Thus, in virtually all neuron models there are a number of parameters (for example, time constants) whose values are chosen either “because it works” or from measurements. It is then assumed that these values are somehow “set by evolution”. But genes do not encode parameter values. They specify proteins that interact with other chemical substances. The “parameter values”, or more generally the structure of the brain, result from all these interactions, in the body and with the environment.

The structure of the brain is highly dynamic, most obviously during development but also in adulthood. Synaptic connections are plastic in strength, structure, and conduction delay. But almost everything else is plastic as well: the density, location, and properties of ionic channels, and the morphology of dendrites. Activity can even determine whether a neuron becomes excitatory or inhibitory. Therefore what the genes specify is not the structure, but an organization of processes that collectively determine the structure. Humberto Maturana pointed out that what characterizes a life form is not its structure, which is highly dynamic, but its self-sustaining organization. This is a fundamental distinction between engineered things and living things.
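The distinction can be illustrated with a crude sketch in the spirit of homeostatic regulation models (e.g. O’Leary et al.; the rule and numbers below are my own arbitrary choices): instead of choosing a gain parameter “because it works”, the parameter value is the outcome of a slow feedback process that regulates activity around a set-point.

```python
import numpy as np

rng = np.random.default_rng(3)

# What is specified is the regulation rule and its set-point, not the gain.
target_rate = 5.0      # set-point of the regulation
gain = 0.1             # initial value: deliberately "wrong"

for step in range(5000):
    drive = rng.exponential(2.0)          # fluctuating input from the world
    rate = gain * drive                   # activity depends on the parameter
    gain += 0.001 * (target_rate - rate)  # slow negative feedback on activity

# The gain settles roughly at target_rate / mean(drive) = 2.5, a value that
# was never written anywhere: it emerges from the interaction.
print(f"converged gain: {gain:.2f}")
```

What plays the role of the “genes” here is the regulation rule and its set-point; the final value of the gain is a result, not a specification.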


A different approach to computational neuroscience could take biological organisms, rather than the computer, as models. The first point is that neural models must be embedded in an environment and interact with it, so that the external observer is no longer part of the cognitive process. This implies in particular that perceptual systems cannot be studied as isolated modules. The second point is to focus on organizational mechanisms that guarantee sustainability in an unknown environment, rather than on a structure that specifies a particular input-output function.
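As a final illustration of the second point (again a toy of my own, in the spirit of Ashby’s essential variables, with arbitrary numbers): a model assessed by sustainability succeeds if it keeps an essential variable within viability bounds, whatever input-output function it happens to implement.

```python
import numpy as np

rng = np.random.default_rng(4)

def run(policy, steps=1000):
    """The criterion is viability: the model 'succeeds' if its essential
    variable (energy) never hits zero, not if it computes a given function."""
    energy = 1.0                      # essential variable
    for _ in range(steps):
        found = rng.random() < (0.3 if policy == "forage" else 0.05)
        energy += 0.5 * found - 0.05  # gains from foraging, constant cost
        energy = min(energy, 2.0)     # storage is bounded
        if energy <= 0.0:
            return False              # viability lost
    return True

# Resting runs energy down on average; foraging typically sustains it.
for policy in ("rest", "forage"):
    print(policy, "survives:", run(policy))
```

Performance here is judged by persistence in the world, not by an external observer reading off an output.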