Rate vs. timing (XII) Flavors of spike-based theories (2) Binding by synchrony

Before I discuss polychronization, I would like to complement the previous post with a brief discussion of the binding problem, and the idea of binding by synchrony. Synfire chains address this issue (see in particular Bienenstock (1994), “A model of neocortex”), but they are not the only such proposition based on spike timing. In the 1980s, Christoph von der Malsburg developed a theory of neural correlations as a binding mechanism (1981, “The correlation theory of brain function”). He wrote a very clear review in a special issue of Neuron on the binding problem (1999).

In classical neural network theory, objects are represented by specific assemblies of neurons firing together. A problem arises when two objects are presented simultaneously, the “superposition catastrophe”: two assemblies are now active at the same time, and it is ambiguous whether the whole set of active neurons forms one large assembly or two smaller ones, and, if there are two, which neurons belong together. A specific example is due to Rosenblatt (1961, “Principles of neurodynamics”): suppose there are two neurons coding for shape, one for squares and one for triangles, and two neurons coding for location, one for top and one for bottom. This type of coding scheme, which von der Malsburg calls “coarse coding”, is efficient when there are many possible dimensions, because the number of possible combinations increases exponentially with the number of dimensions. But in the classical neural network framework, it fails when a square and a triangle are presented simultaneously: all four neurons are then active, and it is impossible to tell whether the square is above the triangle or below it.
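
To make the ambiguity concrete, here is a minimal sketch of Rosenblatt’s four-neuron example under a purely rate-based reading (Python, with invented names; not a model from any of the cited papers). Each scene is encoded only as the set of active neurons, and the two different scenes collapse onto the same representation:

```python
# Rosenblatt's four-neuron example: two shape neurons and two location neurons.
# Under a rate-based (set-of-active-neurons) code, two different scenes
# produce exactly the same pattern of activity.

def encode(scene):
    """Encode a scene (a set of (shape, location) objects) as the set of active neurons."""
    active = set()
    for shape, location in scene:
        active.add(shape)      # 'square' or 'triangle' neuron fires
        active.add(location)   # 'top' or 'bottom' neuron fires
    return active

scene_a = {("square", "top"), ("triangle", "bottom")}   # square above triangle
scene_b = {("square", "bottom"), ("triangle", "top")}   # triangle above square

print(encode(scene_a))                     # the four neurons: square, triangle, top, bottom
print(encode(scene_b))                     # the same four neurons
print(encode(scene_a) == encode(scene_b))  # True: the binding information is lost
```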

The proposed solution to the superposition catastrophe is to use spike timing as a signature of objects. In this example, neurons coding for the same object would fire at the same time, so that the two neural assemblies can be distinguished unambiguously. In his review, von der Malsburg notes that the source of synchrony can be external (as in my paper on computing with synchrony) or internal, one proposition being that binding is mediated by transient gamma oscillations (see Wolf Singer’s work). But he also warns that it may be difficult to find experimental evidence for such a mechanism: in his view, synchrony should be transient, because it has to be dynamic, and it should involve many neurons, so as to have an immediate impact on postsynaptic neurons (this is related to my post on the difference between synchrony and correlation). Observing such transient, distributed synchrony therefore requires recording a large number of neurons simultaneously, and perhaps knowing which of these neurons are relevant.
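
Continuing the same toy example, binding by synchrony can be sketched as follows (a deliberately schematic Python illustration under my own assumptions, not a mechanism taken from the review): each object is assigned its own firing time, neurons belonging to the same object spike together, and a downstream reader recovers the pairings by grouping neurons that fire in the same time bin.

```python
# Binding by synchrony in the toy example: each object gets its own
# firing time, so neurons coding for the same object spike together.

from collections import defaultdict

def encode_with_synchrony(scene):
    """One spike per neuron; neurons coding for the same object share a spike time."""
    spikes = []
    for t, (shape, location) in enumerate(sorted(scene)):
        spikes.append((shape, t))     # shape neuron fires at time t
        spikes.append((location, t))  # location neuron fires at the same time
    return spikes

def group_by_spike_time(spikes):
    """A downstream reader: group neurons that fired in the same time bin."""
    groups = defaultdict(set)
    for neuron, t in spikes:
        groups[t].add(neuron)
    return list(groups.values())

scene_a = {("square", "top"), ("triangle", "bottom")}
scene_b = {("square", "bottom"), ("triangle", "top")}

# Each group now pairs a shape with its location, so the two scenes
# are no longer confused with each other.
print(group_by_spike_time(encode_with_synchrony(scene_a)))
print(group_by_spike_time(encode_with_synchrony(scene_b)))
```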

He postulates that binding requires a fast synaptic plasticity mechanism, “dynamic link matching”: the ability to quickly form connections between neurons coding for different features. The idea applies to the binding problem, i.e., representing several objects at the same time, but also to the compositionality problem, which is slightly different and perhaps more general. A good example, perhaps, is working memory for melodies. A melody is a sequence of musical notes, each with a specific pitch and duration. One can hear a melody for the first time and repeat it (e.g., sing it). If the melody is truly novel, this ability cannot rely on “combination cells”; it is really the particular sequence of notes that must be kept in memory. This example instantiates both the binding problem and the compositionality problem. There is a binding problem because each note is defined by two properties, pitch and duration, which have to be bound together. There is a compositionality problem because the notes must be organized in the correct order, not merely stored as a set of notes. So what needs to be represented is not just a set of features, but the organization (the links) between these features: the knowledge of which note comes after which. In mathematical terms, one needs to represent a directed graph over the individual notes. For this problem, classical connectionism (for example, neural models of working memory based on attractors) seems insufficient.
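
As a purely illustrative rendering of that last point (the note names and durations below are made up, and this says nothing about how neurons would implement it), a melody can be written down as a directed graph whose nodes bind a pitch to a duration and whose edges give the ordering:

```python
# A melody as a directed graph: each node binds a pitch to a duration (in beats),
# and each edge records which note follows which. Illustrative only.

notes = [("C4", 0.5), ("E4", 0.5), ("G4", 1.0), ("E4", 0.5), ("C4", 1.0)]  # nodes: the binding problem
edges = [(i, i + 1) for i in range(len(notes) - 1)]                        # edges: the compositionality problem

def replay(notes, edges, start=0):
    """Walk the graph from the first note to reproduce the melody in order."""
    successor = dict(edges)
    i = start
    melody = [notes[i]]
    while i in successor:
        i = successor[i]
        melody.append(notes[i])
    return melody

print(replay(notes, edges))  # the original sequence of (pitch, duration) pairs, in order
```

The point of the sketch is that a set of (pitch, duration) nodes alone is not enough; the edges, i.e., the links between features, carry the part of the representation that attractor-based memories do not naturally provide.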

To summarize this post and the previous one: in these approaches, spike-based theories were introduced to address two shortcomings of standard rate-based neural network theories, the binding problem and the compositionality problem. So if proponents of spike-based theories still need to find empirical evidence, proponents of rate-based theories still need to respond to these criticisms, compositionality probably being the most difficult problem.
