Sensory systems

I am interested in how sensory systems work in ecological environments, and in particular in the perception of space, which is shared across almost all sensory modalities (including pain). Unlike lab environments, ecological environments are never simple (and simple organisms do not live in simple environments either). My work starts from the view that perception relies on the identification and manipulation of models of the world, understood as relations between observables (i.e., not necessarily generative models), where the observables are sensory signals. I call these perceiver-oriented models “subjective physics”. This view connects with major theories in psychology (Gestalt psychology, Gibson's ecological approach, O'Regan's sensorimotor theory), philosophy of mind (Poincaré, Merleau-Ponty) and linguistics (Lakoff).

You can have a look at a presentation I almost gave at the Champalimaud Institute: “An ecological approach to neural computation”.

Ecological acoustics

Our first goal is to characterize the structure of sounds in natural environments (see also my series of blog posts “What is sound” and my paper on “subjective physics”). Animals and humans use binaural cues to locate sound sources in space, in particular interaural time differences (ITDs). These are often modelled as a fixed difference of propagation delays between the two ears, but this is not accurate: because of diffraction by the head and body, ITDs depend on sound frequency. We have quantified this property with binaural recordings on stuffed animals, humans and physical models (17, 19, 21). We then found that the frequency dependence of ITD, measured acoustically, matches the frequency dependence of the preferred ITD of binaural neurons, measured electrophysiologically (19). This suggests that binaural neurons are tuned to acoustical features of ecological environments rather than to fixed interaural delays, and we proposed that this is achieved by a combination of cochlear disparity (binaural neurons receiving inputs from mismatched places on the two cochleae) and axonal delay mismatches.
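A rough sense of this frequency dependence (a textbook idealization, not our measurements in (17, 19, 21)) can be obtained from a rigid-sphere head model: Kuhn's classical analysis gives an ITD of about 3(a/c)·sin(θ) at low frequencies, where diffraction matters, versus the Woodworth ray-tracing value (a/c)·(θ + sin θ) at high frequencies. A minimal sketch in Python, with an assumed head radius of 8.75 cm:

```python
import numpy as np

def itd_sphere(azimuth_deg, radius=0.0875, c=343.0):
    """Classical rigid-sphere ITD limits (Kuhn / Woodworth).

    Low-frequency limit: diffraction boosts the ITD to ~3(a/c) sin(theta).
    High-frequency limit: ray tracing gives (a/c)(theta + sin(theta)).
    radius: head radius in meters; c: speed of sound in m/s.
    """
    theta = np.radians(azimuth_deg)
    itd_low = 3 * radius / c * np.sin(theta)         # low frequencies
    itd_high = radius / c * (theta + np.sin(theta))  # high frequencies
    return itd_low, itd_high

lo, hi = itd_sphere(90)
print(f"ITD at 90 deg: {lo*1e6:.0f} us (low f) vs {hi*1e6:.0f} us (high f)")
```

Even this crude model predicts ITDs that differ by roughly 15% between the two limits for a lateral source, so no single interaural delay characterizes a location.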

Another source of complication in real environments is reflections. It is often assumed that echoes are somehow suppressed by the auditory system. But this is not possible for early reflections, such as reflections on the ground (a type of reflection that is always present in our environment). Instead, these produce interference that changes the binaural cues relative to the anechoic case, which we have quantified in detail (14).
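To see how a single ground reflection reshapes binaural cues, consider a toy model (my own sketch, not the analysis of (14)): each ear receives the direct sound plus one attenuated, delayed copy, so the ear's transfer function relative to the anechoic case is H(f) = 1 + a·exp(-2πifτ). Since the reflection's delay and attenuation differ between the ears, both interaural phase and level differences acquire a frequency dependence; the attenuations and delays below are hypothetical values chosen only for illustration:

```python
import numpy as np

def ear_transfer(f, attenuation, delay):
    """Direct path + one ground reflection: H(f) = 1 + a*exp(-2j*pi*f*tau)."""
    return 1 + attenuation * np.exp(-2j * np.pi * f * delay)

f = np.linspace(100, 2000, 500)  # frequency (Hz)
# The reflection reaches each ear with a slightly different delay and strength.
h_left = ear_transfer(f, 0.7, 3.2e-3)
h_right = ear_transfer(f, 0.6, 3.5e-3)

# Changes in binaural cues relative to the anechoic case:
ipd = np.angle(h_left / h_right)                       # interaural phase (rad)
ild = 20 * np.log10(np.abs(h_left) / np.abs(h_right))  # interaural level (dB)
print(ipd[:5], ild[:5])
```

The comb-filtering pattern sweeps through the binaural cues as frequency varies, which is why early reflections cannot simply be ignored by a localization system.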

Neural models of sensory systems

My hypothesis is that sensory relations, which constitute subjective models of the world, are identified as temporal invariants in the sensory flow (i.e., relations that are satisfied over a contiguous period of time); this is most directly connected with Gibson's notion of “invariant structure”. Physiologically, I have proposed that relations between sensory signals are reflected in relations between the timings of spikes, i.e., in neural synchrony that is tuned to specific sensory models. I have proposed the concept of “synchrony receptive field” to describe the set of sensory signals that elicit synchronous firing in a given set of neurons, together with neural network models that can identify sensory models based on selective synchrony (13) (see my comments on the difference between correlation and synchrony).
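As a toy illustration of the idea (a minimal sketch of my own, not one of the models in (13)): two integrate-and-fire neurons driven by two sensory signals fire synchronously precisely when the signals satisfy the very simple relation “A = B”, so synchrony signals the validity of that sensory model. All parameter values here are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 0.1e-3, 2.0, 10e-3   # time step, duration (s), membrane time constant
n = int(T / dt)

def ou_signal(tau_s=5e-3, mu=1.2, sigma=1.0):
    """Smooth random 'sensory signal' (Ornstein-Uhlenbeck process)."""
    x = np.empty(n); x[0] = mu
    for i in range(1, n):
        x[i] = x[i-1] + dt/tau_s * (mu - x[i-1]) \
               + sigma * np.sqrt(2*dt/tau_s) * rng.standard_normal()
    return x

def lif_spikes(current):
    """Leaky integrate-and-fire neuron; returns spike times."""
    v, spikes = 0.0, []
    for i in range(n):
        v += dt/tau * (current[i] - v)
        if v > 1.0:
            spikes.append(i*dt); v = 0.0
    return np.array(spikes)

def synchrony(s1, s2, window=1e-3):
    """Fraction of spikes of s1 with a coincident spike in s2 (+- window)."""
    return np.mean([np.any(np.abs(s2 - t) < window) for t in s1])

A, B = ou_signal(), ou_signal()
sA = lif_spikes(A)
print("relation A = B holds:", synchrony(sA, lif_spikes(A)))
print("A, B unrelated      :", synchrony(sA, lif_spikes(B)))
```

When the relation holds, the synchrony index is at ceiling; when it does not, only chance coincidences remain.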

This link with neural synchrony is motivated by the observation that neurons are extremely sensitive to coincidences in their inputs (11) and respond to time-varying inputs with precisely timed spikes, which I showed to be a general property of spiking models (2) (I had previously developed a mathematical theory of one-dimensional integrate-and-fire models driven by time-varying inputs (3, 4), including periodic inputs (1, 3)). More generally, I defend the view that neural computation and dynamics cannot be adequately modelled by firing rates, but rather rely on the coordination of spikes (see this review (20) and my blog posts on this debate). See also my presentation (in French) at the Collège de France on time in neural computation.
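The reliability property of (2) is easy to demonstrate in a minimal sketch: drive the same leaky integrate-and-fire model with the same fluctuating (“frozen”) input from two different initial conditions. After a transient, the spike times coincide, showing that timing is dictated by the input rather than by the neuron's initial state (the input statistics below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau = 0.1e-3, 1.0, 10e-3
n = int(T / dt)
# Frozen fluctuating input: piecewise-constant over 5 ms segments.
I = 1.3 + 1.0 * np.repeat(rng.standard_normal(n // 50), 50)

def spike_times(v0):
    """LIF driven by the frozen input, starting from initial potential v0."""
    v, spikes = v0, []
    for i in range(n):
        v += dt / tau * (I[i] - v)
        if v > 1.0:
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

s1, s2 = spike_times(0.0), spike_times(0.9)  # very different initial states
# After a transient, the two spike trains lock onto the same times:
print(s1[-5:])
print(s2[-5:])
```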

Sound localization

The synchrony receptive field of two monaural neurons on opposite sides (left/right) is a spatial field: the pair fires synchronously only when the sound source is at particular locations. Detecting these location-specific synchrony patterns forms the basis of a simple and accurate model of sound localization (6). In this model, sound location is indicated by the activation of a specific assembly of binaural neurons, and the mapping from assembly to location can be determined by Hebbian learning (7). This hypothesis implies that the filtering properties of the monaural inputs to a binaural neuron are precisely matched: this has been confirmed in the barn owl, and can emerge through spike-timing-dependent plasticity (10).
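A cross-correlation caricature of this scheme (the spiking model of (6) additionally includes realistic acoustic filtering and learning): a bank of delay-tuned coincidence detectors receives the two ear signals, and the most activated detector indicates the source's ITD. The sampling rate and true ITD below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 44100
sound = rng.standard_normal(fs // 10)   # 100 ms broadband sound

def binaural(sound, itd_samples):
    """Left/right ear signals for a source with the given ITD (in samples)."""
    left = np.pad(sound, (0, abs(itd_samples)))
    right = np.roll(left, itd_samples)
    return left, right

true_itd = 12                           # ~272 us at 44.1 kHz
left, right = binaural(sound, true_itd)

# Bank of "binaural neurons", one per candidate internal delay: each responds
# in proportion to the coincidence of its two delayed inputs.
delays = np.arange(-20, 21)
activation = np.array([np.dot(left, np.roll(right, -d)) for d in delays])
print("estimated ITD (samples):", delays[np.argmax(activation)])
```

The location is read out from which detector in the assembly is most active, not from the overall level of activity.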

A prediction of the theory is that binaural neurons are tuned to sound location but not necessarily to a fixed interaural time difference (ITD), since ITD depends on sound frequency (17). Specifically, it predicts that, for a given neuron, the preferred ITD should vary with frequency in the same way as the ITD of the preferred location varies with frequency. This is what we found by comparing acoustical and electrophysiological recordings (19). We proposed that this is achieved by coincidence detection between fibers originating from mismatched places on the two cochleae, in combination with axonal delay mismatches. We had also shown previously that the dependence of preferred ITD on preferred frequency across neurons may arise during the development of the circuit, because of temporal correlations in the inputs (8).
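A back-of-the-envelope version of the cochlear-disparity idea (my own toy calculation, not the model fitted in (19)): approximate each cochlear filter near its characteristic frequency by a gammatone, whose phase response is roughly φ(f) ≈ -n·arctan((f - cf)/b). Two inputs from mismatched places cf_L ≠ cf_R then differ in phase by an amount that depends on stimulus frequency, so the effective internal delay, δ_axonal + Δφ/(2πf), varies with frequency. The characteristic frequencies and the 100 µs axonal mismatch below are illustrative assumptions:

```python
import numpy as np

def gammatone_phase(f, cf, order=4):
    """Approximate phase response (radians) of a gammatone filter near cf."""
    erb = 24.7 * (4.37 * cf / 1000 + 1)   # Glasberg & Moore ERB (Hz)
    b = 1.019 * erb                        # gammatone bandwidth parameter
    return -order * np.arctan((f - cf) / b)

f = np.linspace(400, 800, 5)               # stimulus frequencies (Hz)
cf_left, cf_right = 580.0, 620.0           # mismatched cochlear places
delta_axonal = 100e-6                      # hypothetical axonal mismatch (s)

phase_diff = gammatone_phase(f, cf_left) - gammatone_phase(f, cf_right)
best_delay = delta_axonal + phase_diff / (2 * np.pi * f)
for fi, d in zip(f, best_delay):
    print(f"{fi:5.0f} Hz -> best delay {d*1e6:7.1f} us")
```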

A popular alternative theory postulates that in mammals, sound location is encoded only by the average activity of the binaural neurons in each hemisphere. The arguments in favor of this theory are based on analyses of neural sensitivity, that is, of how neural responses change when the ITD changes. But I have argued that, in realistic situations, information cannot be equated with sensitivity: to encode auditory information in complex environments, neurons should in fact have diverse properties, and this diversity should be exploited (5). Specifically, we have shown that the poor performance of the hemispheric model in complex situations (sound diffraction, variable sound spectrum) is incompatible with behavioral performance (16).
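A toy illustration of the gap between sensitivity and information (my own sketch, not the analyses of (5) or (16)): a scalar hemispheric readout can be steeply sensitive to ITD yet ambiguous when the sound spectrum varies, whereas the full pattern of activity across a diverse population still separates the stimuli. The tuning curves and delay distribution below are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)
best_delays = rng.uniform(-600e-6, 600e-6, 200)  # diverse preferred delays (s)

def rates(itd, f):
    """Toy binaural population: rate of each coincidence detector."""
    return 0.5 * (1 + np.cos(2 * np.pi * f * (itd - best_delays)))

def hemispheric(r):
    """Scalar readout: mean right-preferring minus mean left-preferring."""
    return r[best_delays > 0].mean() - r[best_delays < 0].mean()

r1 = rates(300e-6, 500.0)   # source A: ITD 300 us, 500 Hz tone
# Find a different stimulus (other ITD, other frequency) with the same
# hemispheric readout: the scalar code is ambiguous...
itds = np.linspace(-600e-6, 600e-6, 1201)
readouts = np.array([hemispheric(rates(i, 800.0)) for i in itds])
i2 = itds[np.argmin(np.abs(readouts - hemispheric(r1)))]
r2 = rates(i2, 800.0)
print(f"readouts: {hemispheric(r1):.3f} vs {hemispheric(r2):.3f} "
      f"(ITDs 300 vs {i2*1e6:.0f} us)")
# ...but the full activity patterns remain clearly distinguishable:
print("pattern distance:", np.linalg.norm(r1 - r2))
```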

Pitch

We have also applied the synchrony receptive field theory (13) to pitch (the perception of how low or high a musical tone is): we propose that pitch is the perceptual correlate of the regularity structure of the basilar membrane vibration (18). The theory explains in particular why there are perceptual differences between resolved and unresolved harmonic sounds. In agreement with the theory, we have shown that the pitch of low-frequency tones decreases when level increases (22).
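A caricature of the underlying idea, applied to the raw waveform rather than the basilar membrane vibration that the theory (18) actually operates on: a harmonic complex satisfies the relation x(t) ≈ x(t - T) at its period T, even when the fundamental itself is physically absent, and the best-satisfied delay predicts the pitch:

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
# Harmonic complex with a missing fundamental at 200 Hz (harmonics 3, 4, 5).
x = sum(np.sin(2 * np.pi * k * 200 * t) for k in (3, 4, 5))

def similarity(lag):
    """How well the relation x(t) = x(t - lag) is satisfied."""
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

# Search delays above ~1 ms (pitch below ~880 Hz).
lags = np.arange(50, 400)
best = max(lags, key=similarity)
print(f"estimated pitch: {fs / best:.1f} Hz")   # ~200 Hz, the missing fundamental
```

The sound contains no energy at 200 Hz, yet the regularity structure singles out the 5 ms delay, matching the perceived pitch.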

Tools

On the practical side, to design auditory models, we developed Brian Hears, an auditory toolbox for the Brian simulator (9) (see also the page on simulation).
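A minimal usage sketch, assuming the brian2hears incarnation of the toolbox is installed (the calls follow its documentation):

```python
from brian2 import *        # units (ms, Hz, kHz, ...)
from brian2hears import *   # the auditory toolbox

sound = whitenoise(100 * ms)            # 100 ms of white noise
cf = erbspace(100 * Hz, 5 * kHz, 32)    # 32 channels on an ERB scale
fb = Gammatone(sound, cf)               # gammatone filterbank (cochlea model)
bm = fb.process()                       # basilar-membrane-like responses
print(bm.shape)                         # (samples, channels)
```

The filterbank output can then be fed into spiking neuron models defined with Brian itself.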

One problematic issue when implementing spiking models of auditory function is that the timing of spikes depends on sound level. I showed how to solve this issue with an adaptive threshold (12), and we used this model to predict the in vivo responses of bushy cells at various sound levels (15).
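The gist, as a rough sketch in the spirit of (12) rather than the published model: make every term in the neuron's dynamics scale with the input, including the threshold, so that changing the level rescales the trajectory without changing when it crosses threshold. All parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.1e-3, 0.5
n = int(T / dt)
tau, tau_th = 5e-3, 20e-3
# Fluctuating sound envelope, piecewise-constant over 5 ms segments.
env = np.abs(1 + 0.5 * np.repeat(rng.standard_normal(n // 50), 50))

def spike_times(gain):
    """LIF whose threshold tracks the membrane potential and doubles at
    each spike. Every update rule is homogeneous of degree 1 in
    (v, theta, gain), so scaling the input (the sound level) scales v and
    theta together: after a transient, spike times are unchanged.
    """
    v, th, out = 0.0, 0.3, []
    for i in range(n):
        v += dt / tau * (gain * env[i] - v)
        th += dt / tau_th * (0.5 * v - th)
        if v > th:
            out.append(i * dt)
            v = 0.0
            th *= 2.0
    return np.array(out)

quiet, loud = spike_times(1.0), spike_times(10.0)  # levels 20 dB apart
print(quiet[-5:])
print(loud[-5:])   # late spike times match despite the level change
```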

Relevant publications (chronological order):

  1. Brette R (2003). Rotation numbers of discontinuous orientation-preserving circle maps.
  2. Brette R and Guigon E (2003). Reliability of spike timing is a general property of spiking model neurons.
  3. Brette R (2004). Dynamics of one-dimensional spiking neuron models.
  4. Brette R (2008). The Cauchy problem for one-dimensional spiking neuron models.
  5. Brette R (2010). On the interpretation of sensitivity analyses of neural responses.
  6. Goodman DF and Brette R (2010). Spike-timing-based computation in sound localization.
  7. Goodman DF and Brette R (2010). Learning to localise sounds with spiking neural networks.
  8. Fontaine B and Brette R (2011). Neural development of binaural tuning through Hebbian learning predicts frequency-dependent best delays. (Supplementary material)
  9. Fontaine B, Goodman DFM, Benichoux V, Brette R (2011). Brian Hears: online auditory processing using vectorisation over channels.
  10. Fischer BJ, Steinberg LJ, Fontaine B, Brette R, Peña JL (2011). Effect of instantaneous frequency glides on ITD processing by auditory coincidence detectors.
  11. Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs.
  12. Brette R (2012). Spiking models for level-invariant encoding.
  13. Brette R (2012). Computing with neural synchrony. (code)
  14. Gourévitch B and Brette R (2012). The impact of early reflections on binaural cues.
  15. Fontaine B, Benichoux V, Joris PX and Brette R (2013). Predicting spike timing in highly synchronous auditory neurons at different sound levels.
  16. Goodman DFM, Benichoux V, Brette R (2013). Decoding neural responses to temporal cues for sound localization.
  17. Rébillat M*, Benichoux V*, Otani M, Keriven R, Brette R (2014). Estimation of the low-frequency components of the head-related transfer functions of animals from photographs.
  18. Laudanski J, Zheng Y, Brette R (2014). A structural theory of pitch.
  19. Bénichoux V, Fontaine B, Karino S, Franken TP, Joris PX*, Brette R* (2015). Neural tuning matches frequency-dependent time differences between the ears.
  20. Brette R (2015). Philosophy of the spike: rate-based vs. spike-based theories of the brain.
  21. Bénichoux V, Rébillat M, Brette R (2016). On the variation of interaural time differences with frequency.
  22. Zheng Y and Brette R (2017). On the relation between pitch and level.