I am a research director with INSERM, at the Institute of Intelligent Systems and Robotics in Paris, France. I am not so sure what my field is exactly. These days I am a biophysicist, a microbiologist (specifically a protozoologist) and a philosopher, but my core expertise is theoretical neuroscience. I was trained in mathematics, computer science and physics (at the Ecole Normale Supérieure in Lyon). The first computer I programmed was an Alice 32 (32 KB of memory and programs on a tape!), so I guess we can say I am a computer geek. I did a Master’s in neural network theory at King’s College back in 1999 (not very different from current machine learning, but no GPUs), then another Master’s in computer vision and learning theory, in Paris. I then did a PhD in mathematical neuroscience (in Yves Burnod’s team at Université Pierre et Marie Curie, Paris; now part of Sorbonne Université), on the dynamics of spiking neuron models (dynamical systems theory). I found that the robust spike timing of neurons could be understood as a sort of attractor, not just as “low noise” (Brette & Guigon, 2003). It was during this thesis that I became convinced of the significance of the event-based nature of neural interaction (see my later “Philosophy of the spike” paper), and also of the relevance of a dynamical framework for cognition.
After my PhD, I went to work for a few months in Alain Destexhe’s team in Gif-sur-Yvette (south of Paris) on the dynamic clamp technique. I worked closely with an experimental neurophysiologist (Zuzanna Piwkowska) and learned a lot about electrophysiology and amplifiers. I developed a technique to improve the quality of electrical measurements (Brette et al., 2008). I then did a short postdoc in 2005 in Wulfram Gerstner’s lab at EPFL, where I created a simple neuron model, the adaptive exponential integrate-and-fire model (Brette & Gerstner, 2005), which can be parameterized to reproduce various neural firing patterns (bursting, adapting, etc.).
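For the curious reader, here is a minimal sketch of that model: the membrane potential follows leaky integration with an exponential spike-initiation term plus an adaptation current, and a spike is just a reset. The parameter values and the forward-Euler integration below are illustrative, not the published fits.

```python
import numpy as np

# Minimal sketch of the adaptive exponential integrate-and-fire (AdEx) model.
# Parameter values are typical illustrative numbers, not fitted ones.
C, gL, EL = 281e-12, 30e-9, -70.6e-3      # capacitance (F), leak (S), rest (V)
VT, DeltaT = -50.4e-3, 2e-3               # threshold and slope factor (V)
a, tau_w, b = 4e-9, 144e-3, 0.08e-9       # adaptation parameters
Vr, Vcut = -70.6e-3, -30e-3               # reset and numerical spike cutoff (V)

dt, T, I = 0.05e-3, 0.5, 0.8e-9           # time step (s), duration (s), input (A)
V, w = EL, 0.0
spikes = []
for i in range(int(T / dt)):
    dV = (gL * (EL - V) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > Vcut:                          # spike: reset V, increment adaptation
        spikes.append(i * dt)
        V, w = Vr, w + b
print(f"{len(spikes)} spikes in {T} s")
```

Depending on how the adaptation parameters are chosen, the same two equations produce regular, adapting or bursting spike trains, which is what makes the model convenient.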
I then took an assistant professor position in the computer science department of Ecole Normale Supérieure in Paris. There I worked on simulation techniques, especially event-based ones. After I recruited Dan Goodman for a postdoc, we decided to make a little toolbox to simulate spiking neural networks. We started from some code I had previously written in Scilab for teaching (a supplement to my 2007 review paper on simulation), which used vectorized computation to make it possible to use a scripting language without sacrificing too much efficiency. I had just discovered Python, and we started writing the neural network simulator Brian (Goodman & Brette, 2008). The software is still widely used (although it has been completely rewritten around code generation (Stimberg et al., 2019)); the lead developer is now Marcel Stimberg (who started a postdoc with me in 2011 and is now a research engineer in my institute). In 2023, we received a prize for this software from the French Ministry of Research (the “Prix science ouverte du logiciel libre de la recherche”).
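To give an idea of what this looks like in practice, here is a small, self-contained Brian 2 script (the current, code-generating version of the simulator). The model and parameters are a toy example, not taken from any of the papers above.

```python
from brian2 import *

# A toy network of 100 leaky integrate-and-fire neurons, written as equations.
# Brian turns these model strings into efficient generated code.
tau = 10*ms
eqs = 'dv/dt = (1.1 - v) / tau : 1'

G = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.v = 'rand()'                          # random initial conditions

S = Synapses(G, G, on_pre='v += 0.05')  # simple excitatory coupling
S.connect(p=0.1)                        # 10% connection probability

M = SpikeMonitor(G)
run(100*ms)
print(f"{M.num_spikes} spikes in 100 ms")
```

The point of the design is that the user writes the model as mathematical equations in a script, and the simulator takes care of making it run fast.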
Around the same time, I got interested in the biophysics of spike initiation. With my first PhD student, Jonathan Platkiewicz, I tried to understand why the spike threshold of neurons varies in vivo, and we showed that it is an adaptive phenomenon linked to the inactivation of sodium channels, which results in neurons acting as coincidence detectors (not, as is usually assumed, as summation devices). We did this with biophysical modeling, but we later also collaborated with a neurophysiologist (Jose Peña) to test it experimentally (Fontaine et al., 2014).
Around 2008, I moved to the cognitive science department of ENS, in the audition team. I was appointed as a junior member of the Institut universitaire de France, and I got an ERC Starting Grant to work on auditory modeling, especially binaural sound localization. The general idea was that neural synchrony could be used by the nervous system as a matching principle to identify laws in the sensory flow (Brette, 2012). The idea connects the coincidence detection properties of neurons with spike timing reliability, and relates to Gibson’s “invariant structure” in the psychology of perception. We showed that the properties of binaural neurons could be explained by their tuning to the complex laws of sound diffraction by the head (Bénichoux et al., 2015). We also worked on pitch perception, and did some auditory psychophysics as well (we showed, for example, that low-frequency tones sound lower when they are played louder). For this auditory work, I received an award in 2017 from the Fondation pour l’Audition (a foundation for hearing).
I moved to the Vision Institute in 2014 to lead a computational neuroscience team. I continued working on sensory modeling, and I also continued working on the biophysics of spike initiation. I came to realize that spike initiation in vertebrate neurons is actually quite different from the textbook picture, which comes from the pioneering work on the squid giant axon in the 1950s. This is because spikes initiate in a tiny region of the axon packed with ion channels next to the cell body, called the axon initial segment. I developed a biophysical theory of this specific situation, which I called resistive coupling theory (Brette, 2013; under a different name then!). One counter-intuitive implication of the theory is that a neuron becomes more excitable if the initial segment moves away from the cell body (Goethals & Brette, 2020), which we confirmed experimentally with my experimental collaborator Dominique Debanne (Fékété et al., 2021).
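To give a flavor of that prediction, here is a toy calculation based on the (approximately) logarithmic relation between spike threshold and initial segment position that comes out of resistive coupling theory; all the numbers below are purely illustrative, not measured values.

```python
import numpy as np

# Toy illustration of one prediction of resistive coupling theory:
# the spike threshold decreases roughly logarithmically with the distance x
# of the axon initial segment from the soma, Vth(x) ~ Vth(x0) - ka*ln(x/x0),
# where ka is the activation slope of the Na channels.
ka = 5.0                  # mV, typical order of magnitude for the Nav activation slope
x0, Vth0 = 10.0, -55.0    # reference AIS distance (um) and reference threshold (mV)

for x in [10.0, 20.0, 40.0]:   # moving the initial segment away from the soma
    Vth = Vth0 - ka * np.log(x / x0)
    print(f"AIS at {x:4.0f} um -> threshold ~ {Vth:.1f} mV")
```

The threshold drops as the initial segment moves distally, i.e. the neuron becomes more excitable, which is the counter-intuitive effect tested experimentally in Fékété et al. (2021).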
I started to think that it would be nice to do some neurophysiology experiments ourselves, so I set up a patch-clamp rig and we became a partly experimental lab. I thought a nice little warm-up exercise would be to build a Hodgkin-Huxley model of Paramecium, a unicellular protist that features in the introduction of Hille’s biophysics book. It took about 5 years to do this (Elices et al., 2023), but in the meantime, my PhD student Sarah Goethals did some very nice (and difficult!) patch-clamp recordings in the tiny granule cells of the cerebellum (in her thesis only) and, in collaboration with colleagues at the Vision Institute, in the retina, combined with biophysical modeling (Goethals et al., 2021). From this work, we concluded that the position of the initial segment is not tuned for excitability (as the conventional wisdom has it) but rather for transmitting spikes to the cell body.
In 2013, I started writing a blog. I had become increasingly uncomfortable with some of the core concepts of computational neuroscience, such as “codes” and “information”. It felt as if the standard systems neuroscience paper was more a rhetorical exercise in casting data into a rather vague conceptual framework than actual modeling. I decided to take one day a week to just think about these epistemological issues, and I would write my thoughts on the blog. This developed into several epistemological papers, in particular one questioning the relevance of the concept of a “neural code” (Brette, 2019; a version without the commentaries is on bioRxiv). I have developed this work into a book, “The Brain, in Theory” (2026), which also critically addresses other concepts, such as computation, information, optimization and prediction.
These reflections convinced me of the critical importance of embodiment when studying the nervous system; that is, it makes little sense to study cognition (how you get to know about the outside world) by just looking at neurons, disconnected from the body and the environment. I took some interest in the motor system, thanks to my PhD student Charlotte Le Mouel, who showed that posture is involved not just in balance but also in anticipation, such as the posture of a runner in the starting blocks (Le Mouel & Brette, 2017).
I looked for a way to study the physiological basis of cognition at the level of the organism, but all model organisms for this kind of work are already very complex (like C. elegans). I stumbled on a gold mine: it turns out that in the 1970s, neuroscientists were not shy about sticking electrodes into all sorts of living organisms, and hundreds of papers were published on the electrophysiology of the unicellular ciliate Paramecium, which controls its motility with action potentials (hence the nickname “swimming neuron”). I read everything (see my review), and started a research line on the neuroscience of Paramecium.
As my work started to diverge quite considerably from the Vision Institute’s main theme, in 2023 I moved to the Institute of Intelligent Systems and Robotics (still in Paris), where I set up an experimental lab. In addition to modeling, we do behavioral analysis, genetics, electrophysiology and various kinds of imaging. We built a model of Paramecium’s action potential coupled to its motility, making it a whole-organism swimming model (Elices et al., 2023). We have recently found that this swimming neuron engages in sophisticated foraging behavior, featuring memory, decision making and anticipation, and we are now trying to understand how, with theory and experiments.
