I am a research director in computational and theoretical neuroscience at the Vision Institute, where I lead the Computational neuroscience of sensory systems group. My main interests are the neural modeling of perception and spike initiation.
Please don’t hesitate to contact me if you are interested in working in my group. I am also an editor for the Springer Series in Computational Neuroscience: contact me if you would like to write a book.
- Brette R (2015). What Is the Most Realistic Single-Compartment Model of Spike Initiation? PLoS Comput Biol 11(4): e1004114.
- Laudanski J, Zheng Y, Brette R (2014). A structural theory of pitch. eNeuro. doi:10.1523/ENEURO.0033-14.2014.
- Brette R and Destexhe A, eds (2012). Handbook of Neural Activity Measurement. Cambridge University Press.
- Brette R (2012). Computing with neural synchrony. PLoS Comput Biol 8(6): e1002561. doi:10.1371/journal.pcbi.1002561. (code)
- Goodman D and Brette R (2009). The Brian simulator. Front Neurosci. doi:10.3389/neuro.01.026.2009.
I am a computational and theoretical neuroscientist. On the methodological side, this means that I develop and study mathematical models of the nervous system, which often involves computer simulations. On the conceptual side, computational and theoretical neuroscience are two slightly different fields. Theoretical neuroscience is concerned with establishing general principles governing the function of the brain. To give an example, a rather successful theoretical principle is the idea that biological organisms tend to minimize the consumption of resources (in particular energy). Computational neuroscience deals with how the brain “computes” — a questionable term perhaps, but in the broadest sense it simply addresses the question of how behavior and subjective experience may arise mechanistically from the activity of the nervous system. You may want to read a series of blog posts I wrote on the epistemology of theoretical and computational neuroscience.
I am currently interested in three broad topics: 1) the initiation of spikes (theoretical neuroscience); 2) sensory systems (computational neuroscience); 3) simulation technology (neuroinformatics). I have also worked on the design of intracellular recording techniques.
Vertebrate neurons communicate mainly through stereotypical electrical impulses called action potentials, or “spikes” (see this series of posts on firing rate vs. spike timing). A critical question is therefore how neurons transform input signals into spike trains. At a general level, this is of course well known: sodium channels open when the membrane potential exceeds a threshold. But there are many subtleties. For example, the spike threshold depends on previous activity on multiple timescales, with changes mediated by ionic channel properties (on short timescales) and by structural changes (on long timescales). Space also plays a critical role: I have recently shown theoretically that the axonal initiation of spikes makes sodium channels open as a discrete function of somatic voltage, effectively making the integrate-and-fire model much more realistic than previously thought. There are also many unanswered questions, for example: How are the various ionic channels coordinated (in properties and in spatial distribution) so that spike initiation is functional and efficient? How is spike initiation modulated by activity in the long term? What is the function of the various types of channels in the axonal initial segment? For theoretical neuroscience, this is largely unexplored territory.
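To make the integrate-and-fire idea mentioned above concrete, here is a minimal sketch of a leaky integrate-and-fire neuron with a hard threshold and reset (all parameter values are illustrative, not taken from any specific model of mine):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron with Euler integration.

    I: array of input drive values (one per time step),
    dt and tau in ms. Returns the voltage trace and spike times (ms).
    """
    v = v_rest
    voltages = []
    spike_times = []
    for i, drive in enumerate(I):
        # Euler step of the membrane equation dv/dt = (v_rest - v + I) / tau
        v += dt * (v_rest - v + drive) / tau
        if v >= v_threshold:        # hard threshold: emit a spike...
            spike_times.append(i * dt)
            v = v_reset             # ...and reset the membrane potential
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant suprathreshold drive produces regular firing
v_trace, spikes = simulate_lif(np.full(1000, 1.5))  # 100 ms of input
```

The point of the model is precisely its caricatural simplicity: spike initiation is reduced to a discrete threshold condition on the somatic voltage, which is the feature argued above to be more realistic than it looks.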
I am interested in how sensory systems work in ecological environments. Unlike lab environments, ecological environments are never simple (simple organisms do not live in simple environments either). Thus computational neuroscience should explain how sensory systems function in natural environments, and not only explain lab experiments done in restricted configurations. I have worked in particular on sound localization, where acoustical cues are complex in ecological environments (in particular reflections and diffraction by the head). I have developed a computational theory in which the basis of perception is sensory laws, rather than sets of elementary sensations. Computationally, this approach solves the invariance problem (i.e., the problem that the same property can appear in many different ways on the sensory organs) by selecting temporal invariants as the basis of computation. It is directly related to the notion of “invariant structure” developed by psychologist James Gibson. It also connects with a recurring theme in psychology and philosophy of perception, where it is argued that percepts are determined by relationships between sensory signals rather than by absolute signals (e.g. Gestalt psychology, Merleau-Ponty’s phenomenology of behavior and perception, and O’Regan’s sensorimotor theory). Physiologically, I have proposed that these relationships between sensory signals are reflected in the relationships between the timings of spikes, i.e., in neural synchrony that is tuned to specific sensory laws. I have proposed the concept of “synchrony receptive field” to describe the set of sensory signals that elicit synchronous firing in a given set of neurons.
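As a toy illustration of synchrony tuned to a sensory law, consider sound localization: a source spike train reaches the two ears with an interaural time difference (ITD), and a binaural coincidence detector with an internal delay fires most when its delay matches the ITD. The sketch below is purely illustrative (all values are hypothetical), not an implementation of any published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A common "source" spike train reaches the two ears with an
# interaural time difference (ITD); all times in ms (hypothetical values).
source = np.sort(rng.uniform(0, 1000, 200))
itd = 0.6
left = source            # spike arrival times at the left ear
right = source + itd     # same spikes, delayed at the right ear

def coincidences(left, right, internal_delay, window=0.05):
    """Count left spikes that have a right spike within `window` ms
    after shifting the left input by an internal conduction delay
    (a crude binaural coincidence detector)."""
    delayed = left + internal_delay
    # For each delayed left spike, find the nearest right spike
    idx = np.clip(np.searchsorted(right, delayed), 1, len(right) - 1)
    nearest = np.minimum(np.abs(right[idx] - delayed),
                         np.abs(right[idx - 1] - delayed))
    return int(np.sum(nearest <= window))

# Synchrony is maximal when the internal delay matches the ITD:
# the ITD is the "synchrony receptive field" of this detector.
delays = np.arange(0.0, 1.2, 0.1)
counts = [coincidences(left, right, d) for d in delays]
best = delays[int(np.argmax(counts))]
```

Here the sensory law (“the right signal is the left signal delayed by the ITD”) is exactly the condition under which the two inputs arrive in synchrony at the detector, which is the spirit of the synchrony receptive field idea.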
Currently, I am trying to go beyond standard paradigms in computational neuroscience that I find unsatisfying — in fact, the very notion that neurons “compute” or “code”. These analogies carry the view of the nervous system as a factory that transforms given inputs (sensory signals) into an output (neural activity, behavior). But the relationship is circular in general: sensory signals influence motor responses, and motor responses also directly influence sensory signals (e.g. eye movements). I have written a paper named “Subjective physics” on the nature of information in sensorimotor systems, which partly addresses this issue. The analogies also imply a static view in which neural activity is the product of a process (what the nervous-system factory produces from the sensory data). But seeing neural activity as a product inevitably leads to the homunculus fallacy: who reads neural activity? In fact neural activity is not a product, it is the process itself. That is, the nervous system is a dynamic system interacting with its environment. This view is not specifically mine, but it is largely underdeveloped in “computational” neuroscience. For this reason, I am currently interested in complete sensorimotor systems as a way to avoid these conceptual problems.
In 2008, I started the Brian simulator with Dan Goodman (a postdoc at the time, now a lecturer at Imperial College, UK). It is a simulator for spiking neural networks written in Python. The focus is on flexibility and ease of use, which has made it a highly popular tool in neuroscience. All models are directly specified by users with their equations — there are no predefined models, which has many benefits. It is also possible to simulate multicompartmental models. With the new version (2.0) relying on code generation, Brian is now also fast, and we are currently working on running it on multiple types of hardware (collaborations are welcome).