I finish this overview of synchrony-based theories with my recent proposal (PLoS Comp Biol 2012). In the next posts, I will discuss theories based on asynchrony. In the theories I have described so far, the starting point is a code based on spike timing, in general a spatiotemporal pattern of spikes assumed to represent some sensory input. But the connection between the sensory input and the spike pattern is not addressed, or at least not considered a central issue. My proposal connects spike-based computation with the psychological theory of James Gibson, specifically the notion of structural invariant. Gibson starts his book “The Ecological Approach to Visual Perception” by criticizing the idea that perception is the process of inferring the objective properties of the world from ambiguous patterns of sensory data, as is often postulated. Indeed, since perception is the source of all knowledge, it is inconsistent to view the objective properties of the world as preexisting perception. But how then can one know anything about the world?
I will rephrase Gibson’s thinking using a dictionary analogy. Inferring the objective world from an image or some sensory data is like looking up, in a dictionary, the translation of a word into one’s native language. In fact, this is precisely what is generally meant by the “neural coding” metaphor. But this cannot be used to understand a new word in one’s own native language. Instead, one uses a different kind of dictionary, in which the word is defined in relation to other words. Thus the definition of objects in the world is relational, not inferential. Inference can only be secondary, since one must first know what is to be inferred.
How does this relate to perception? Gibson argues that information about the world is present in the invariant structure of sensory inputs, that is, in properties of sensory inputs (relationships) that persist through time; in other words, the laws that sensory inputs follow. More precisely, invariant structure is a relationship that is invariant with respect to some change. The notion has been extended to sensorimotor relationships by Kevin O’Regan. Two simple examples in hearing are pitch perception and sound localization. Sounds that evoke a pitch are generally periodic. Periodicity is a relationship on the sensory input, i.e., S(t+T)=S(t) for all times (T is the period and S(t) is the acoustical pressure), and it is precisely this relationship, rather than the spectrum of the sound, that defines the pitch (for example, pitch is unchanged if the fundamental frequency is missing). This relationship is not spatial because it is unaffected by one’s movements. In the same way, a sound source produces two acoustical waves at the two ears that stand in specific relationships; for example, if sound diffraction is neglected, the wave at the contralateral ear is a delayed version of the wave at the ipsilateral ear. This relationship is spatial because it is affected by one’s movements. Moreover, there is a systematic relationship between interaural delay and head position that is isomorphic to the source direction. Therefore this relationship can be identified with the source direction, without the need for an externally defined notion of physical angle. When a sound is presented in a noisy environment, the direction has to be inferred since it is ambiguous, but what is inferred is the relationship that defines the direction.
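To make the binaural example concrete, here is a small numpy sketch (the ear separation, sample rate and source signal are made-up numbers, chosen purely for illustration): for a distant source with diffraction neglected, the contralateral signal is a delayed copy of the ipsilateral one, and the delay varies systematically with the source direction relative to the head.

```python
# Toy illustration (hypothetical numbers) of the binaural relationship:
# with diffraction neglected, the signal at one ear is a delayed copy of
# the signal at the other ear, and the delay depends systematically on
# the direction of the source relative to the head.
import numpy as np

rng = np.random.default_rng(1)
fs = 44100                       # sample rate (Hz)
s = rng.standard_normal(fs)      # 1 s of broadband source signal
d_ears, c = 0.2, 343.0           # ear separation (m), speed of sound (m/s)

def itd(angle_deg):
    """Interaural time difference for a distant source, no diffraction."""
    return d_ears / c * np.sin(np.radians(angle_deg))

def ear_signals(angle_deg):
    """Ipsilateral signal and its delayed copy at the contralateral ear."""
    n = int(round(itd(angle_deg) * fs))
    contra = np.concatenate([np.zeros(n), s[:s.size - n]])
    return s, contra

for angle in (0, 30, 90):
    ipsi, contra = ear_signals(angle)
    n = int(round(itd(angle) * fs))
    # The relationship that defines the direction: contra(t) = ipsi(t - ITD)
    holds = np.allclose(contra[n:], ipsi[:ipsi.size - n])
    print(f"{angle:3d} deg  ITD = {itd(angle)*1e6:6.1f} us  relationship holds: {holds}")
```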
How does this relate to synchrony? Simply put, synchrony is a relationship defined through time, so it qualifies as invariant structure. In my paper, I show how this relationship between spike timings can correspond to a relationship between sensory inputs by introducing the concept of “synchrony receptive field” (SRF). The SRF of a given pair of neurons is the set of sensory signals that elicit synchronous spiking in the two neurons (it can be extended to a group of neurons). Suppose the two neurons receive different versions of the sensory signal S: F(S) and G(S). Then assuming a deterministic mapping from signals to spikes, synchrony reflects the relationship F(S) = G(S), a relationship defined on the sensory inputs. Therefore, across the neural population, synchrony patterns reflect the set of relationships on the sensory inputs, and neurons that respond to coincidences signal these relationships.
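Here is a toy sketch of the SRF idea (this is not the model from the paper; the leaky integrate-and-fire neurons, the signal and all parameters are invented for illustration): two neurons receive F(S) and G(S), here simple delays of a shared signal, and a coincidence count distinguishes whether the relationship F(S) = G(S) holds.

```python
# Toy sketch of a synchrony receptive field (not the model from the paper):
# two leaky integrate-and-fire neurons receive transformed versions F(S)
# and G(S) of a shared signal S; they fire synchronously only when the
# relationship F(S) = G(S) holds (here, when the two delays match).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1e-3                                          # time step (s)
t = np.arange(0, 1.0, dt)
S = rng.standard_normal(t.size)
S = np.convolve(S, np.ones(50) / 50, mode="same")    # slowly fluctuating signal
S /= S.std()

def delayed(x, d):
    """Return x delayed by d seconds (zero-padded at the start)."""
    n = int(round(d / dt))
    return np.concatenate([np.zeros(n), x[:x.size - n]])

def lif_spike_times(drive, tau=10e-3, threshold=1.0, gain=2.0):
    """Spike times of a leaky integrate-and-fire neuron driven by `drive`."""
    v, spikes = 0.0, []
    for i, x in enumerate(drive):
        v += dt * (gain * x - v) / tau
        if v > threshold:
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

def coincidences(a, b, window=1e-3):
    """Number of spikes in `a` with a spike in `b` closer than `window`."""
    return sum(np.any(np.abs(b - s) < window) for s in a)

F = lambda s: delayed(s, 2e-3)           # neuron 1 always sees S delayed by 2 ms
G_inside  = lambda s: delayed(s, 2e-3)   # F(S) = G(S): stimulus inside the SRF
G_outside = lambda s: delayed(s, 6e-3)   # F(S) != G(S): stimulus outside the SRF

sp1 = lif_spike_times(F(S))
print("inside SRF :", coincidences(sp1, lif_spike_times(G_inside(S))))
print("outside SRF:", coincidences(sp1, lif_spike_times(G_outside(S))))
```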
This mechanism can be used in practice to recognize relationships, for example: detecting an odor defined by ratios of receptor affinities, estimating the pitch of a sound, estimating the location of a sound source (Goodman and Brette, PLoS CB 2010), or estimating binocular disparities. The key computational benefit is that it solves the difficult problem of invariance, that is, the fact that objects of perception must be recognized under many different perspectives. For example, a face can be seen under different angles, or the same source at the same location can produce different sounds. To be more precise, the problem is dissolved by this approach rather than solved. Indeed, the key insight is that invariance is only a problem for an inferential process. When the objects to be perceived are defined instead by relationships, there is no invariance problem, since a relationship is itself an invariant. For example, periodicity is a relationship that is invariant to the spectrum of a sound.
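As a concrete illustration of the pitch case (a toy sketch, not the method of the paper; the tone, sample rate and search range are invented, and the search range is kept narrow to sidestep octave ambiguities), one can estimate the pitch by finding the delay T for which the relationship S(t+T) = S(t) holds best; the estimate is unaffected by removing the fundamental.

```python
# Toy pitch estimation from the periodicity relationship S(t+T) = S(t)
# (not the model from the paper; numbers invented for illustration).
# The harmonic complex below has no energy at its 200 Hz fundamental,
# yet the estimated period still corresponds to 200 Hz.
import numpy as np

fs = 40000                                   # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
f0 = 200.0                                   # fundamental (absent from the waveform)
s = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(3, 7))   # harmonics 3..6 only

def estimate_period(sig, fs, fmin=120.0, fmax=400.0):
    """Delay T (in seconds) for which sig(t+T) best matches sig(t).
    The search range [1/fmax, 1/fmin] is kept narrow to avoid octave errors."""
    lags = np.arange(int(fs / fmax), int(fs / fmin) + 1)
    mismatch = [np.mean((sig[l:] - sig[:-l]) ** 2) for l in lags]
    return lags[int(np.argmin(mismatch))] / fs

T = estimate_period(s, fs)
print(f"estimated pitch: {1 / T:.1f} Hz")    # ~200 Hz despite the missing fundamental
```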
The theory connects to the other spike-based theories I mentioned previously. Indeed, sensory relationships are reflected by synchrony between neurons (or relative spike timing, if conduction delays are taken into account), and the theory addresses the problem of binding in the same way as synfire chains: sensory signals that are not temporally coherent, and therefore not originating from the same object, cannot produce synchronous firing. It also connects with the polychronous theory of working memory: the spike pattern that is stored in that theory corresponds here to a sensory relationship. This makes it possible to store sensory relationships in the form of spike timing relationships, without the need for an explicit conversion to a “rate code”.
On the empirical side, the theory relies on the fact that neurons operate in a fluctuation-driven regime, in which excitation and inhibition are approximately balanced (or inhibition is dominant), as empirically observed. But shouldn’t this theory predict widespread synchrony in neural populations, unlike what is observed in the brain? In fact, it should not. First of all, synchrony is only informative if it is a rare event. This is precisely what is captured by the concept of the synchrony receptive field: synchrony occurs only for specific sensory signals (or more precisely, sensory relationships). Even though I did not include it in the paper, it would actually make sense for correlations that are not stimulus-specific (i.e., those that can be predicted) to be minimized as much as possible. This would support the idea that recurrent inhibition is tuned to cancel excitatory correlations (see my previous post), which would produce weak correlations on average.
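To illustrate the last point, here is a toy numpy sketch (made-up signals, at the level of input drives rather than spikes, and not an analysis from the paper): two neurons share an excitatory fluctuation on top of private noise, and when inhibition tracks the shared excitation, the average correlation between their drives collapses.

```python
# Toy illustration (made-up signals, not from the paper) of the idea that
# inhibition tracking shared excitatory fluctuations yields weak average
# correlations: what remains of each neuron's drive is mostly private noise.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
shared_exc = rng.standard_normal(n)       # excitatory fluctuations common to both neurons
private = rng.standard_normal((2, n))     # independent noise for each neuron

drive_exc_only = shared_exc + private                  # no tracking inhibition
tracking_inh = -shared_exc                             # inhibition cancels the shared part
drive_balanced = shared_exc + tracking_inh + private   # balanced, fluctuation-driven drive

for name, d in [("excitation only", drive_exc_only), ("balanced", drive_balanced)]:
    print(f"{name:16s} correlation: {np.corrcoef(d[0], d[1])[0, 1]:.2f}")
# expected output: ~0.50 for excitation only, ~0.00 for balanced
```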