I have devoted some effort to a critical analysis of the methodological and conceptual foundations of theoretical and computational neuroscience (and in fact of neuroscience in general). This has led me to address epistemological questions such as: what is a realistic model? What is a theory of brain function supposed to prove, and how do you prove it? What do we mean by modeling “perception”? These questions naturally connect to the philosophy of mind (see this post about why scientists should care about philosophical questions). My conclusion, so far, is that these foundations are rather fragile, and that to make progress we should look at models of sensorimotor systems for which function can be defined without the need for an external observer to interpret neural activities (see my ongoing research on the swimming neuron).
I am planning to write a book on the epistemology of neuroscience (especially theoretical neuroscience), for which I have put some content on my blog in the series What is computational neuroscience? Meanwhile, I have worked on a few specific topics.
What is a realistic model? I have addressed this question in the specific context of single-compartment models of spike initiation (2), where I show that the most realistic model turns out to be the simplest one (integrate-and-fire). I argue that a model can be assessed along two different dimensions: empirical content, the set of predictions that it can make (e.g. the Hodgkin-Huxley model makes predictions about the sodium current, while the integrate-and-fire model does not); and empirical accuracy, whether those predictions match empirical data. Oddly enough, it is often implicitly assumed that the more detailed a model is (more empirical content), the more realistic it is, forgetting that the detail must also be right. This is related to the common practice in computational neuroscience of building what I have called “Frankenstein models”.
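To make the notion of empirical content concrete, here is a minimal sketch of a leaky integrate-and-fire model (parameter values are illustrative, not taken from the paper): its entire empirical content is the predicted spike times, nothing more, whereas a Hodgkin-Huxley model additionally predicts ionic currents.

```python
# Minimal leaky integrate-and-fire neuron, forward-Euler integration.
# Illustrative parameters only; the model predicts spike times and
# makes no claims about the underlying ionic currents.

def lif_spike_times(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                    v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Return spike times (ms) for a sequence of input currents (nA)."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Membrane equation: tau * dv/dt = (v_rest - v) + R * I
        v += dt * (v_rest - v + resistance * i_ext) / tau
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA drive for 100 ms produces regular firing.
spikes = lif_spike_times([2.0] * 1000)
```

The model's "predictions" here are just the entries of `spikes`; comparing them to recorded spike times is the whole of its empirical accuracy.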
It is customary in neuroscience to describe correlations between stimulus properties and neural activity as “codes”, within the broader computer metaphor of the brain (9). While this might seem at first like an innocuous choice of words to describe experimental results, it is in fact the conceptual basis of dominant theories of brain function, in which coding terms are used metaphorically, as in “the brain reads neural codes” (in reading, you choose to read at your own pace, and what you read is not changed by the act of reading; hence this is a metaphorical description of brain activity). I have written a detailed critique of the neural coding metaphor, with commentaries from many people (6) and my response (7). The essay includes several aspects that I had previously touched upon in a few papers (see also a related series of posts on my blog).
The first aspect is invariance (1). When we say “this neuron encodes sound location”, we mean that there is a way to infer sound location from that neuron’s activity. But the technical basis of this kind of statement rarely supports this interpretation. Very often, one measures a “tuning curve”: how the neuron’s response varies when the location of a sound stimulus is varied experimentally. But this is just a measure of sensitivity; what should be demonstrated instead is that the putative code does not vary with anything other than sound location. Otherwise, we might as well say that the auditory nerve encodes sound location, or even that the sound encodes sound location.
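A toy version of this argument, with a hypothetical response model (the linear form and the numbers are mine, purely for illustration): a unit whose response depends on both sound location and sound level. Its tuning curve measured at a fixed level looks like a "location code", yet the same response decodes to a different location as soon as the level changes.

```python
# Hypothetical unit whose response depends on BOTH location and level.
def response(location_deg, level_db):
    return 0.5 * location_deg + 0.25 * level_db

# "Decode" location from the response, assuming the level at which the
# tuning curve was calibrated (60 dB).
def decode_location(r, assumed_level_db=60.0):
    return (r - 0.25 * assumed_level_db) / 0.5

# In the calibration condition the decoder works perfectly.
decoded_same = decode_location(response(30.0, 60.0))   # recovers 30 deg

# Same location, louder sound: the "location code" now decodes wrongly.
decoded_loud = decode_location(response(30.0, 70.0))   # no longer 30 deg
```

The tuning curve (response vs. location at 60 dB) is a genuine measure of sensitivity, but it says nothing about invariance to level, which is what the coding claim would require.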
The second aspect is complexity (5). Propositions such as “this neuron encodes sound location” hide the fact that they are established in very specific contexts, namely experimental contexts where there is a single sound source, of which just one property is varied. But what happens with this neuron if there are two sound sources, or no sound source, or the sound of wind? It is as if perception consisted in filling in forms with properties, forgetting that the forms and their meaning must also be provided for each situation. This corresponds to object formation and scene analysis, two major psychological processes that are absent from the coding framework.
The third aspect is causality (3). A major question in neuroscience is whether the brain uses a “rate code” or a “spike timing code”. But posing the question in correlational terms is misleading. One might say, for example, that the heart encodes running speed in its rate, i.e. the firing rate of myocytes: rate covaries with running speed, while spike timing is not reproducible between trials. Yet it is the precise temporal coordination of myocyte firing that makes the heart beat. The confusion comes from stating the problem in terms of correlation instead of causation. What has causal powers in the nervous system is the individual spike; firing rates are statistics.
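The point can be caricatured in a few lines with a hypothetical coincidence detector (not a model from the papers): two input trains with the same firing rate drive a downstream unit very differently depending on spike timing, so the rate statistic alone does not capture the causal structure.

```python
# Toy coincidence detection: count spikes of train_a that have a spike
# of train_b within `window` ms. Downstream effect depends on timing.

def coincidences(train_a, train_b, window=1.0):
    return sum(any(abs(a - b) <= window for b in train_b) for a in train_a)

# Two pairs of inputs with IDENTICAL firing rates (5 spikes per 100 ms).
train_a      = [10, 30, 50, 70, 90]
near_sync_b  = [10.5, 30.5, 50.5, 70.5, 90.5]  # nearly coincident spikes
shifted_b    = [20, 40, 60, 80, 99]            # same rate, shifted timing

n_sync    = coincidences(train_a, near_sync_b)  # every spike coincides
n_shifted = coincidences(train_a, shifted_b)    # no coincidences at all
```

Both input pairs have the same rate, yet one drives the coincidence detector on every spike and the other never does: the individual spikes, not the rate, carry the causal effect.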
Related to the misuse of the concept of information in the neural coding literature, I wrote a short criticism of Integrated Information Theory (8), a major theory of consciousness.
Perception and subjective physics
In computational neuroscience, the problem of perception is generally framed in a representational-encoding framework, where the goal of the sensory system (David Marr's “computational level”) is to “compute” objective properties of the world, for example the location of a sound source. But there is no place in the brain where that quantity exists; there is only neural activity, interpreted by an external observer in terms of an objective physical property. Thus, in standard computational frameworks, the critical step is done not by the neural system but by the observer. This relates to classical criticisms of the computer metaphor of the brain (see e.g. Hubert Dreyfus).
I have written a long essay in which I try to redefine the “computational level” in a way that makes more sense from the point of view of the organism (4). I propose that what is to be perceived or represented is not the objective properties of the world but the “subjective physics” of the world, i.e., the set of laws that govern the sensory signals and their relations with actions (in relation to the views of Gibson and O'Regan), and I describe subjective physics in some detail.
There is also some material on my blog about the perception of sound.
Relevant publications (chronological order):
1. Brette R (2010). On the interpretation of sensitivity analyses of neural responses.
2. Brette R (2015). What Is the Most Realistic Single-Compartment Model of Spike Initiation?
3. Brette R (2015). Philosophy of the spike: rate-based vs. spike-based theories of the brain.
4. Brette R (2016). Subjective physics. (Now a chapter in Closed Loop Neuroscience, El Hady (ed), Academic Press.)
5. Brette R (2018). The world is complex, not just noisy.
6. Brette R (2019). Is coding a relevant metaphor for the brain? (See also my blog posts on neural coding.)
7. Brette R (2019). Neural coding: the bureaucratic model of the brain. (Response to commentaries on Is coding a relevant metaphor for the brain?)
8. Brette R (2021). Does the present moment depend on the moments not lived?
9. Brette R (2022). Brains as computers: metaphor, analogy, theory or fact?