I have an interest in epistemology, the branch of philosophy that deals with questions such as: what is knowledge? How do we acquire it? What is a good theory, and how do you prove it? What is a good scientific question? Many scientists fear the word “philosophy”, claiming that scientists know how to do science (birds don't need to be taught how to fly!). Yet anyone who has been reviewed must have realized that there is no consensus on such issues. Maybe scientists know how to do science, but they do not agree with each other about it. So these questions are apparently not that obvious, and yet they are foundational for any kind of scientific work. I am planning to write a book on the epistemology of neuroscience (especially theoretical neuroscience), for which I have put some content on my blog in the series What is computational neuroscience? Meanwhile, I have worked on a few specific topics.
What is a realistic model? I have addressed this question in the specific context of single-compartment models of spike initiation (3), where I show that the most realistic model turns out to be the simplest one (integrate-and-fire). I argue that a model can be assessed along two different dimensions: empirical content, which is the set of predictions that it can make (e.g. the Hodgkin-Huxley model makes predictions about the sodium current, which the integrate-and-fire model does not); and empirical accuracy, which is whether those predictions match empirical data. Oddly enough, it is often implicitly assumed that the more detailed a model is (more empirical content), the more realistic it is, forgetting that the detail must also be right. This is related to the common practice in computational neuroscience of building what I have called “Frankenstein models”.
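To make the comparison concrete, the integrate-and-fire model referred to above can be sketched in a few lines. The sketch below is a minimal leaky integrate-and-fire simulation with illustrative parameter values (membrane time constant, resistance, threshold and reset voltages are assumptions for the example, not values from any specific paper):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-70.0, R=10.0):
    """Leaky integrate-and-fire neuron (Euler integration).

    I: input current at each time step (nA).
    dt, tau in ms; voltages in mV; R in MOhm (illustrative values).
    Returns the membrane potential trace and the spike times (ms).
    """
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(I):
        # Membrane equation: tau * dv/dt = -(v - v_rest) + R * I
        v += dt / tau * (-(v - v_rest) + R * current)
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(i * dt)
            v = v_reset            # instantaneous reset
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold current (3 nA for 100 ms) produces regular firing;
# a weaker current (1 nA) stays below threshold and produces no spikes.
trace, spikes = simulate_lif(np.full(1000, 3.0))
_, no_spikes = simulate_lif(np.full(1000, 1.0))
```

The point of the example is how little the model contains: it predicts spike times from input current and nothing else (no sodium current, no spike shape), which is precisely its restricted empirical content.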
The metaphor of coding is widely used in neuroscience: what is the neural code for X? (orientation, sound location, faces...). This concept is in my opinion highly problematic, as it casts the problem of cognition in the conceptual framework of communication, which in most cases (i.e. beyond sensory transduction) is not appropriate (5). In neuroscience, the coding concept is used in the sense of Shannon: a code word is an element of a set of possible words, and coding is mapping one set (e.g. a stimulus set) to another (e.g. neural activity). This is a rather restricted conceptual framework for cognition, in particular perception (and certainly not the only theoretical concept of information; see e.g. Kolmogorov complexity). Is picking one element of a set a good model of perception?
I have started by addressing a simple methodological issue related to the concept of “tuning curve” (1). One can always measure the tuning curve of a neuron to any kind of parameterized stimulus, and if it has a maximum then we tend to say that the neuron is “tuned” to that stimulus dimension (say orientation); but in reality all we have observed is that the neuron's firing rate is sensitive to the parameter we have varied. What if we measure the tuning curve again along the same dimension, but with a stimulus that has been modified in another dimension (say brightness), and the tuning curve is different? A traditional interpretation is that the tuning is “context-dependent”. But really what we observe is that the neuron's response is sensitive to two dimensions. When we say that the neuron “codes” for orientation / sound location / etc., we implicitly mean that it specifically codes for that aspect, and therefore what is implied is that the response is to some extent invariant to other dimensions. In other words, there is a difference between sensitivity and information, which is obscured by the use of tuning curves.
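The sensitivity/information distinction can be illustrated with a toy neuron (the Gaussian orientation tuning and the multiplicative brightness gain below are illustrative assumptions, not a model of any real cell):

```python
import numpy as np

def firing_rate(orientation, brightness):
    """Toy neuron sensitive to BOTH orientation (deg) and brightness (0-1).

    The response is an orientation tuning curve peaked at 90 degrees,
    multiplicatively scaled by brightness (an illustrative assumption).
    """
    tuning = np.exp(-((orientation - 90.0) ** 2) / (2 * 20.0 ** 2))
    gain = 10.0 + 40.0 * brightness
    return gain * tuning

orientations = np.linspace(0, 180, 19)  # 0, 10, ..., 180 degrees

# The "orientation tuning curve" measured at two different brightnesses:
curve_dim = firing_rate(orientations, brightness=0.2)
curve_bright = firing_rate(orientations, brightness=1.0)

# Both curves peak at the same orientation, yet their values differ.
peak_dim = orientations[np.argmax(curve_dim)]
peak_bright = orientations[np.argmax(curve_bright)]
```

The neuron is clearly sensitive to orientation (each curve is peaked), but a given firing rate does not specify orientation unless brightness is also known: the same rate can arise from a preferred orientation at low brightness or a non-preferred one at high brightness. Sensitivity alone does not give information about one specific dimension.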
A deeper question in neuroscience is “the nature of the neural code”: spikes or firing rates? I wrote an essay where I argue that casting this issue as a problem of coding is misleading (4); the BOLD signal also “encodes” stimuli but we wouldn't say that brain function is based on the BOLD signal. What matters is whether spikes or rates have a causal role in neural activity, and when the question is reformulated in this way, it gets very difficult to defend the rate-based view. There is also a related series of posts on my blog.
Perception and subjective physics
In computational neuroscience, the problem of perception is generally framed in a representational framework, where the goal for the sensory system (David Marr's “computational level”) is to “compute” objective properties of the world, for example the location of a sound source. But there is no place in the brain where that quantity exists; there is only neural activity interpreted by an external observer in terms of an objective physical property. So in standard computational frameworks, the critical step is not done by the neural system but by the observer. This is related to classical criticisms of the computer metaphor of the brain (see e.g. Hubert Dreyfus).
I have written a long essay where I try to redefine the “computational level” in a way that makes more sense from the point of view of the organism (2). I propose that what is to be perceived or represented is not the objective properties of the world but the “subjective physics” of the world, i.e., the set of laws that govern the sensory signals and their relations with actions (in line with the views of Gibson and O'Regan), and I describe subjective physics in some detail. The essay was too long for any journal, so it is currently only on arXiv; however, it will soon be published as a book chapter.
There is also some material on my blog about the perception of sound.
Relevant publications (chronological order):
- (1) Brette R (2010). On the interpretation of sensitivity analyses of neural responses.
- (2) Brette R (2013). Subjective physics.
- (3) Brette R (2015). What is the most realistic single-compartment model of spike initiation?
- (4) Brette R (2015). Philosophy of the spike: rate-based vs. spike-based theories of the brain.
- (5) Brette R (2017). Is coding a relevant metaphor for the brain? (see also my blog posts on neural coding)