Correlation vs. synchrony

What is the difference between neural correlation and neural synchrony? As I am interested in the role of synchrony in neural computation, I often hear the question. I will try to give a few answers here.

A simple answer is: it’s a question of timescale. That is, synchrony is correlation at a fine timescale, or more precisely, at a timescale shorter than the integration time constant of the neuron. In this sense, the term synchrony implicitly acknowledges that there is an observer of these correlations. This usage is consistent with the fact that neurons are very sensitive to the relative timing of their inputs within their integration time constant (see our recent paper in J Neurosci on the subject).

However, although I have been satisfied with this simple answer in the past, I now feel that it misses the point. I think the difference rather has to do with the distinction between the two main theories of neural computation, rate-based theories vs. spike-based theories. The term “correlation” is often used in the context of rate-based theories, whereas the term “synchrony” is generally used in the context of spike-based theories (as in my recent paper on computing with neural synchrony). The difference is substantial, and it does not really have to do with the timescale. A correlation is an average, just as a firing rate is an average. Therefore, by using the term correlation, one implicitly assumes that the quantities of interest are averages. In this view, correlations are generally seen as modulating the input-output properties of neurons, in a rate-based framework, rather than being the substance of computation. But when using the term synchrony, one does not necessarily refer to an average, but simply to the fact that two spikes occur at a similar time. For example, in my recent paper on computing with neural synchrony, I view coincidence detection as the detection of a rare event, that is, a synchrony event that is unlikely to occur by chance. If one takes this view further, then meaningful synchrony is in fact transient, and therefore the concept cannot be well captured by an average, i.e., by correlation.

The distinction might not be entirely obvious, so I will give a simple example here. Consider two Poisson inputs with rate F. Consider one spike from neuron A. The probability that neuron B spikes within time T after this spike can be calculated (integral of an exponential distribution), and for small T it is essentially proportional to FT; counting over all spikes of A, the rate of such coincidences is then proportional to F squared times T. If T is very small and the two inputs are independent, this event will almost never happen. So if it does happen, even just once, then it is unexpected and therefore meaningful, since it means that the assumption of independence was probably wrong. In a way, a coincidence detector can be seen as a statistical test: it tests the coincidence of input spikes against the null hypothesis, which is that inputs are independent. A single synchrony event can make this test fail, and so the concept cannot be fully captured by correlation, which is an average.
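To make this concrete, here is a minimal simulation sketch of this example (the rate, window and duration below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
F = 5.0           # rate of each Poisson input (Hz)
T = 1e-3          # coincidence window (s)
duration = 10.0   # observation time (s)

def poisson_train(rate, duration):
    isi = rng.exponential(1 / rate, size=int(3 * rate * duration))
    spikes = np.cumsum(isi)
    return spikes[spikes < duration]

spikes_A = poisson_train(F, duration)
spikes_B = poisson_train(F, duration)

# For each spike of A, count spikes of B falling within T after it
lo, hi = np.searchsorted(spikes_B, [spikes_A, spikes_A + T])
coincidences = int(np.sum(hi - lo))

print("observed coincidences:", coincidences)
print("expected under independence (F^2 * T * duration):", F**2 * T * duration)
```

With these numbers a coincidence is expected to occur only about once every 40 seconds, so repeated coincidences quickly become implausible under the independence hypothesis.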

To summarize, synchrony is not about determinism vs. stochasticity; it is not about correlation on a very fine timescale, or about very strong correlation; it is about relative timing in individual spiking events, and about how likely such an event is to occur by chance under an independence hypothesis.

What is sound? (III) Spatial hearing

In my previous post, I argued that the spatial notion of shape is a secondary property of sounds that can only be acquired through other sensory modalities. This happens even though sounds contain highly structured information about shape, because this structure does not relate to self-generated movements. One may then wonder whether the notion of auditory space in general, for example the spatial location of a sound source, is also secondary. One may postulate that the spatial content of auditory spatial cues is only acquired by their contingency with visual spatial cues. In fact, this idea is supported by an intriguing study showing a very strong correlation across species between visual acuity in the fovea and auditory spatial acuity (Heffner & Heffner, 1992, Fig. 6). More precisely, the authors show that sound localization acuity is better predicted by visual acuity than by acoustical factors (essentially, interaural distance). In our interpretation, animals have poor sound localization acuity not so much because they lack the physiological mechanisms to correctly analyze spatial information, but because in the absence of precise vision, auditory spatial cues cannot acquire precise spatial content. This does not imply that the auditory system of these animals cannot decode these spatial cues, but only that they cannot make sense of them. [Update: the results in Heffner & Heffner are in fact more subtle, see a more recent post]

This being said, there is in fact some intrinsic spatial content in sounds, which I will describe now. When a sound is produced, it arrives first at the ear closer to the source, then at the other ear. The intensity will also be higher at the first ear. This is the binaural structure of sounds produced by a single source, and captured by two ears that are spatially separated. This is similar to stereoscopic vision. But observe one difference: in vision, as Gibson noted, having two eyes is essentially the same thing as having one eye, combined with lateral head movements; in hearing, this is not the same because of the non-persistent nature of sounds. If one turns the head to sample another part of the “acoustic array” (in analogy with Gibson’s optic array), the sound field will have changed already (and possibly faded out), so the spatial structure will not be directly captured in the same way. Thus, to capture spatial structure in sound, it is crucial that acoustic signals are simultaneously captured at different locations.

This binaural structure in sounds is often described as “spatial cues” (binaural cues). Quantitatively, there is a relationship between the spatial location of the source and binaural structure, e.g. the interaural time difference (ITD). However, these “cues” are not intrinsically spatial in the sense that they are not defined in relation to self-generated movements. For example, what is the spatial meaning of an ITD of 100 µs? Intrinsically, there is none. As discussed above, one way for spatial cues to acquire spatial content is by association with the spatial content of another modality (vision). But now I will consider the effects of self-generated movements, that is, what is intrinsically spatial in sounds.
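As a brief aside on the quantitative relationship mentioned above: a standard back-of-the-envelope model is Woodworth's spherical-head approximation, sketched below (the head radius is an assumed value, and real ITDs also depend on frequency and on the individual head):

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the ITD (in seconds)
    for a distant source at a given azimuth (0 = front, 90 = side)."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

for az in [0, 15, 45, 90]:
    print(f"azimuth {az:2d} deg -> ITD = {itd_woodworth(az) * 1e6:.0f} us")
```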

When the head turns, the binaural structure changes in specific ways. That is, there is a sensorimotor structure that gives spatial content to binaural structure. More precisely, two different binaural structures can be related to each other by a specific movement. But an important distinction with vision must be made. Because of the non-persistent nature of sounds, the relationship is not between movements and sensory signals, it is between movements and the structure of sensory signals. It is not possible to predict the auditory signals from auditory signals captured before a specific movement. For one thing, there might be no sound produced after the movement. What is predictable is the binaural structure of the sound, if indeed a sound is produced by a source that has a persistent location. If the location of the source is persistent, then the binaural structure is persistent, but not the auditory signals themselves.

Another point we notice is that this provides only a relative sense of space. That is, one can say whether a sound source is 20° left of another sound source, but it does not produce an absolute egocentric notion of space. What is lacking is a reference point. I will propose two ways to solve this problem.

What is special, for example, about a source that is in front of the observer? Gibson noted that, in vision, the direction in which the optic flow is constant indicates the direction of movement. Similarly, when one moves in the direction of a sound source, the direction of that sound source is unchanged, and therefore the binaural structure of sound is unchanged. In other words, the direction of a sound source in front is the direction of a self-generated movement that would leave the binaural structure unchanged (we could also extend this definition to the monaural spectral information). In fact the binaural structure can depend on distance, when the source is near, but this is a minor point because we can simply state that we are considering the direction of movement that changes the binaural structure the least (see also the second way below). One problem with this definition, however, is that moving toward a source and moving away from it both satisfy it. Although these two cases can be distinguished by head movements, the definition itself does not tell us which direction is toward the source and which is away from it. One obvious remark is that moving toward a source increases the intensity of the sound. The notion of intensity here should be understood as a change in information content. In the same way as in vision, where moving toward an object increases the level of visual detail, moving toward a sound source increases the signal-to-noise ratio, and therefore the level of auditory detail available. This makes sense independently of the perceptual notion of loudness – in fact it is rather related to the notion of intelligibility (a side note: this is consistent with the fact that an auditory cue to distance is the ratio between direct sound energy and reverberated energy). Of course again, because sounds are not persistent, the notion of a change in level is weak. One needs to assume that the intensity of the sound persists. However, I do not think this is a critical problem, for even if intensity is variable, what is needed is only to observe how intensity at the ear correlates with self-generated movements. This is possible because self-generated movements are (or at least can be) independent of the intensity variations of the sound.

This indeed seems to provide some intrinsic spatial content to sounds. But we note that it is quite indirect (compared to vision), and made more evident by the fact that sounds are not persistent. There is another, more direct, way in which sounds can acquire spatial content: by the active production of sounds. For example, one can produce sounds by hitting objects. This provides a direct link between the spatial location of the object, relative to the body, and the auditory structure of the sound. Even though sounds are not persistent, they can be repeated. But we note that this can only apply to objects that are within reach.

This discussion shows that while there is no intrinsic spatial content about shape in sounds, there is intrinsic spatial content about source location. This seems to stand in contradiction with the discussion at the beginning of this post, in which I pointed out that spatial auditory acuity seems to be well predicted across species by visual acuity, suggesting that spatial content is acquired. Here is a possible way to reconcile these two viewpoints. In vision, an object at a specific direction relative to the observer will project light rays in that direction to the retina, which will be captured by specific photoreceptors. Therefore, there is little ambiguity in vision about spatial location. However, in hearing, this is completely different. Sounds coming from a particular direction are not captured by a specific receptor. Information about direction is in the structure of the signals captured at the two ears. The difficulty is that this structure depends on the direction of the sound source but also on other uncontrolled factors. For example, reflections, in particular early reflections, modify the binaural cues (Gourévitch & Brette 2012). These effects are deterministic but situation-dependent. This implies that there is no fixed mapping from binaural structure to spatial location. This makes the auditory spatial content weaker, even though auditory spatial structure is rich. Because visual location is more invariant, it is perhaps not surprising that it dominates hearing in localization tasks.

What is sound? (II) Sensorimotor contingencies

In a previous post, I emphasized the differences between vision and hearing, from an ecological point of view. Here I want to comment on the sensorimotor theory of perception (O’Regan & Noë 2001), or the enactive approach, applied to sounds. According to this theory, perception is the implicit knowledge of the effects of self-generated movements on sensory signals. Henri Poincaré made this point a long time ago: "To localize an object simply means to represent to oneself the movements that would be necessary to reach it". For example, perceiving the spatial location of an object is knowing the movements one would need to make to move to that object, or to grasp it, or to direct one’s fovea to it.

There are two implicit assumptions here: 1) that there is some persistence in the sensory signals, 2) that the relevant information is spatial in nature. I will start with the issue of persistence. As I previously argued, a defining characteristic of sounds is that they are not persistent: they happen. For example, the sound of someone else hitting an object is transient. One cannot interact with it. So there cannot be any sensorimotor contingency in this experience. It could be argued that one relies on the memory of previous sensorimotor contingencies, that is, the memory of oneself producing an impact sound. This is a fair remark, I think, but it overestimates the amount of information there is in this contingency. When an impact sound is produced, the only relationships between motor commands and the acoustic signals are the impact timing and the sound level (related to the strength of the impact). But there is much more information in the acoustic signal of an impact sound, because the structure of this signal is related to properties of the sounding object, in particular material and shape (Gaver, 1993). For example, the resonant modes are informative of the shape, and the decay rate of these modes indicates the nature of the material (wood, metal, etc.), properties that we can very easily identify. So there is informative sensory structure independent of sensorimotor contingencies.
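To make this structure concrete, here is a minimal sketch of an impact sound synthesized as a sum of decaying resonant modes, in the spirit of Gaver's analysis; the mode frequencies stand in for the object's size and shape, the decay rates for its material (all numbers are arbitrary illustrative choices):

```python
import numpy as np

fs = 44100                              # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Resonant modes: frequencies relate to the object's size/shape,
# decay rates to its material (fast decay ~ wood, slow decay ~ metal).
modes = [(440.0, 8.0), (1230.0, 10.0), (2620.0, 14.0)]   # (freq in Hz, decay in 1/s)
amplitudes = [1.0, 0.6, 0.3]

impact = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
             for (f, d), a in zip(modes, amplitudes))
impact /= np.abs(impact).max()
# e.g. scipy.io.wavfile.write("impact.wav", fs, impact.astype(np.float32))
```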

Now I think we are hitting an interesting point. Even though the resonant modes are informative of the shape (the size, essentially) of an object, they cannot provide any perceptual spatial content by themselves. That is, the frequency of a resonant mode is just a number, and a number has no meaning without context. Compare with the notion of object size for the tactile system: the size of a (small) object is the extent to which one must stretch the hand to grasp it. There is no such thing in hearing. There is nothing intrinsically spatial in auditory size, it seems. If one moves and the sound is repeated, the same resonant modes will be excited. Therefore, it seems that auditory shape can only be a derived property. That is, the specific sensory structure of sounds that corresponds to shape acquires perceptual content by association with another sense that has intrinsic spatial content, i.e., vision or touch. Now we get to Gibson’s notion of invariant structure: auditory size is the structure in the auditory signals that remains the same when aspects other than size change (where the notion of size is not auditory). Here I am imagining that one hears sounds produced by various sources whose size is known, and one can identify that some auditory structure is the same for all sources that have the same size. Note the important point here: what persists is not the sensory signals, it is not the relationship between movements and sensory signals, it is not even the relationship between size and sensory signals, it is the relationship between size and the structure of auditory signals, and that structure is itself a kind of relationship (among the signals). That is, one cannot predict the auditory signals from the size: one can predict some aspect of the structure of these signals from the size.

Here I have highlighted the fact that the auditory shape of an object is a structure of auditory signals, not a kind of sensorimotor structure. The spatial notion of shape is a secondary property of sounds that can only be acquired through other sensory modalities. But there can also be intrinsic spatial content in sounds, and in my next post, I will discuss spatial hearing.

What is sound? (I) Hearing vs. seeing

What is sound? Physically, sounds are mediated by acoustical waves. But vision is mediated by light waves and yet hearing does not feel like vision. Why is that?

There are two wrong answers to this question. The first one is that the neural structures are different: sounds are processed by the cochlea and the auditory cortex, images by the retina and the visual cortex. But then why doesn’t a sound evoke some sort of image, as if hearing were just a second visual system? This point of view does not explain much about perception, only what brain areas “light up” when a specific type of stimulus is presented. The second one is that the physical substrate is different: light waves vs. acoustic waves. This is also a weak answer, for what is fundamentally different between light and acoustic waves that would make them “feel” different?

I believe the ecological approach provides a more satisfying answer. By this, I am referring to the ecological theory of visual perception developed by James Gibson. It emphasizes the structure of the sensory signals collected by an observer in an ecological environment. It is also related to the sensorimotor account of perception (O’Regan & Noë 2001), which puts the emphasis on the relationship between movements and sensory signals, but I will show below that this emphasis is less relevant in hearing (except in spatial hearing).

I will quickly summarize what vision is in Gibson’s ecological view. Illumination sources (the sun) produce light rays that are reflected by objects. More precisely, light is reflected at the interface between the surface of objects and the medium (air, or possibly water). What is available for visual perception are surfaces and their properties (color, texture, shape...). Both the illumination sources and the surfaces in the environment are generally persistent. The observer can move, and this changes the light rays received by the retina. But these changes are highly structured because the surfaces persist, and this structure is informative of the surfaces in the environment. Thus what the visual system perceives is the arrangement and properties of persistent surfaces. Persistence is crucial here, because it allows observers to use their own movements to learn about the world – in the sensorimotor account of perception, perception is precisely the implicit knowledge of the effect of one’s actions on sensory signals.

On the other hand, sounds are produced by the mechanical vibration of objects. This means that sounds convey information about volumes rather than surfaces. They depend on the shape but also on the material and internal structure of objects. It also means that what is perceived in sounds is the source of the waves rather than their interaction with the environment. Crucially, contrary to vision, the observer cannot directly interact with sound waves, because a sound happens: it is not persistent. An observer can produce a sound wave, for example by hitting an object, but once the sound is produced there is no possible further interaction with it. The observer cannot move to analyze the structure of acoustic signals. The only available information is in the sound signal itself. In this sense, sounds are events.

These ecological observations highlight major differences between vision and hearing, which go beyond the physical basis of these two senses (light waves and acoustic waves). Vision is the perception of persistent surfaces. Hearing is essentially the perception of mechanical events on volumes. These remarks are independent of the fact that vision is mediated by a retina and hearing by a cochlea.

The impact of early reflections on binaural cues

Boris Gourévitch and I have just published a paper in ecological acoustics:

Gourévitch B and Brette R (2012). The impact of early reflections on binaural cues. JASA 132(1):9-27.

This is a rather technical paper in which we investigate how binaural cues (ITDs, ILDs) are modified in an ecological environment in which there are reflections. Indeed most sound localization studies use HRTFs recorded in anechoic conditions, but apart perhaps from flying animals, anechoic conditions are highly unecological. That is, even in free field, there is always at least a ground on which sound waves reflect. In this paper, we focus on early reflections. In the introduction, we motivate this choice by the fact that the precedence effect (perceptual suppression of echoes) only acts when echoes arrive after a few ms, and therefore early reflections should not be suppressed. Another, perhaps simpler, argument is that in a narrow frequency band, a sound will always interfere with its echo when the echo arrives less than a couple of periods after the direct sound. Therefore, early reflections produce interferences, seen in the binaural cues. An important point is that these are deterministic effects, not variability. In the paper, we analyze these effects quantitatively with models (rigid spheres and sophisticated models of sound absorption by the ground). One implication is that in ecological environments and even with a single sound source and in the absence of noise, there may be very large interaural time differences, which carry spatial information.
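To give the flavor of the effect, here is a minimal phasor sketch for a pure tone, not the models used in the paper: each ear receives the direct sound plus one attenuated early reflection, and the apparent ITD read from the interaural phase deviates from the anechoic value in a frequency-dependent way, sometimes by a lot (all delays and gains below are arbitrary illustrative numbers).

```python
import numpy as np

# Arbitrary illustrative numbers: direct-path delays to each ear, plus one
# early (e.g. ground) reflection per ear, delayed and attenuated.
delay_L, delay_R = 0.0, 300e-6                 # anechoic ITD = 300 us
echo_delay_L, echo_delay_R = 2.3e-3, 2.45e-3   # early reflection delays
echo_gain_L, echo_gain_R = 0.7, 0.5

def ear_phasor(w, direct_delay, echo_delay, echo_gain):
    # Direct sound plus its early reflection, summed as complex phasors
    return np.exp(-1j * w * direct_delay) + echo_gain * np.exp(-1j * w * echo_delay)

print("anechoic ITD: 300 us at all frequencies")
for f in [250.0, 500.0, 1000.0]:
    w = 2 * np.pi * f
    ipd = np.angle(ear_phasor(w, delay_L, echo_delay_L, echo_gain_L) /
                   ear_phasor(w, delay_R, echo_delay_R, echo_gain_R))
    print(f"{f:6.0f} Hz: ITD read from the interaural phase = {ipd / w * 1e6:.0f} us")
```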

 

Computing with neural synchrony (new paper)

It seems that I haven't written a blog entry for three years now. Oops! I figured I could use this blog to announce new papers. Since I have not written here for a while, it will be a long post!

Here is my most recently published paper:

Brette R (2012). Computing with neural synchrony. PLoS Comp Biol. 8(6): e1002561. doi:10.1371/journal.pcbi.1002561.

It is a theoretical paper (no experimental data). I try to address two questions: 1) does synchrony matter? and 2) what is synchrony good for, from a functional point of view?

 

Does synchrony matter?

In fact, we addressed the first question in a paper published last year, combining theory and slice experiments:

Rossant C, Leijon S, Magnusson AK, Brette R (2011). Sensitivity of noisy neurons to coincident inputs. J Neurosci 31(47):17193-17206.

In the PLoS paper above, I complement this analysis with another point of view, so I'll try to summarize the results of both papers here. Synchrony (that is, correlations at a short timescale, comparable to the integration time constant of neurons) has been observed in a number of experimental studies. But some authors would argue that it is a meaningless correlate of neural network dynamics. Our approach is very straightforward: what is, quantitatively, the impact of coincident input spikes on the output of a postsynaptic neuron? Is it a mild modulation, or is it very large? There are different ways to approach the issue, but essentially the answer is: extremely large.

One approach is based on the analysis of the impact of correlations on the membrane potential distribution. Neurons generally live (in vivo) in a fluctuation-driven regime: they fire irregularly, in response to fluctuations of their total synaptic input above the spike threshold. In this regime, the output rate depends not only on the mean input, but also on its variance. The mean does not depend on input correlations. But the variance does. When the inputs are independent, the variance scales as N, the number of synapses. When they are not, the variance is the sum of covariances between all pairs of inputs, and there are about N² of them. This means that even if pairwise correlations are of order 1/N, they still have a major impact on the neuron's output. Therefore, for correlations not to matter would require specific cellular or network mechanisms that cancel these correlations at all times.
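Here is a minimal numerical sketch of this scaling argument, using Gaussian surrogate inputs with a shared component that imposes a pairwise correlation c; with c = 1/N, the variance of the summed input roughly doubles compared to the independent case:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                 # number of synaptic inputs
c = 1.0 / N             # weak pairwise correlation, of order 1/N
samples = 10000

# Shared-noise construction: x_i = sqrt(1-c)*private_i + sqrt(c)*common,
# which gives unit variance per input and pairwise correlation c.
private = rng.standard_normal((samples, N))
common = rng.standard_normal((samples, 1))
x_corr = np.sqrt(1 - c) * private + np.sqrt(c) * common

print("independent inputs: Var(sum) =", round(private.sum(axis=1).var(), 1),
      "(theory: N =", N, ")")
print("c = 1/N inputs:     Var(sum) =", round(x_corr.sum(axis=1).var(), 1),
      "(theory: N + N(N-1)c =", N + N * (N - 1) * c, ")")
```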

The second approach shows that even if these correlations are somehow cancelled, the neuron still remains highly sensitive to coincident inputs. Consider background synaptic activity, resulting in a membrane potential distribution peaking below threshold. If this is the sum of N independent inputs (or inputs with cancelled correlations), the standard deviation is of order N^(1/2). Now add p coincident input spikes. You can easily show that these input spikes cause a number of extra output spikes that increases supralinearly with p. In the J Neurosci paper, we calculate it and the formula agrees very well with slice experiments (in a cortical neuron). The calculations show that a very small number of input spikes is enough to produce an output. This is related to the "coincidence advantage" discussed by Moshe Abeles in the 1980s, and to a number of experimental papers by Usrey, Alonso and Reid on the LGN->V1 projection. We also show that very tiny modifications in spike timing completely change the output rate, even though pairwise correlations remain very close to zero.

In the PLoS paper I show another point of view, based on signal detection theory. You consider the problem of detecting coincident inputs in a background noise, from just observing the membrane potential. Again, because the signal scales linearly with the number p of coincident spikes but the noise (independent inputs) scales with the square root of the total number N of inputs, even a very small number of coincident input spikes is highly detectable.
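A back-of-the-envelope version of this argument, with assumed numbers rather than the values used in the papers: the background fluctuations grow only as the square root of the number of overlapping PSPs, while p coincident spikes add p full PSPs on top, so a handful of coincident spikes out of thousands of synapses already changes the probability of crossing threshold by orders of magnitude.

```python
import numpy as np
from scipy.stats import norm

# Assumed illustrative numbers, not the values used in the papers
N = 4000            # synapses
rate = 5.0          # background firing rate per synapse (Hz)
tau = 10e-3         # integration time window (s)
w = 1.0             # PSP size (arbitrary units)

# Background: about N*rate*tau PSPs overlap in one integration window, so the
# membrane potential fluctuations scale as the square root of that number,
# whereas p coincident spikes add p full PSPs on top.
sigma = w * np.sqrt(N * rate * tau)
theta = 2.5 * sigma              # threshold set 2.5 sigma above the mean Vm

for p in [0, 10, 20, 30]:
    d_prime = p * w / sigma                       # detectability of the event
    p_cross = norm.sf((theta - p * w) / sigma)    # chance of crossing threshold
    print(f"p = {p:2d} coincident spikes: d' = {d_prime:.2f}, "
          f"P(V > threshold) = {p_cross:.1e}")
```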

Coincidence detection is also enhanced by the fact that the spike threshold adapts to the membrane potential. In this paper we explain what it means for synaptic integration and coincidence detection:

Platkiewicz J and Brette R (2011). Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration. PLoS Comp Biol 7(5): e1001129. doi:10.1371/journal.pcbi.1001129.

Finally, in the synchrony paper, I also show that synchronously activated synapses tend to be potentiated by STDP, so synchrony has a huge effect both on input-output properties and on learning. This finding about correlated inputs being selected is not new; I just put it in a functional context.

 

What is synchrony good for?

But the more important point in the PLoS paper is about the possible function of synchrony. To be useful, synchrony has to be stimulus-dependent. So I define the "synchrony receptive field" of a given pair or group of neurons as the set of stimuli that produce synchronous responses in these neurons. I give a toy example with duration-selective neurons, but the more interesting stuff is the application to sensory modalities.

The idea is the following. One of the main functions of a sensory system is to extract invariant structure in sensory signals. "Structure" simply means something in the sensory signals that distinguishes them from random inputs. For example, sounds produced by an actual source in the environment produce two binaural signals that have very precise relationships between them, which depend on the source location. The task of the sensory system is to identify these relationships. "Invariant" refers to the idea that, in a fixed environment, relationships that remain fixed when something else changes (for example the observer's position) tell you something about the environment. For example, in binaural hearing, the source of change is simply the source signal itself, and what persists is the relationship between the two binaural signals, which only depends on the location of the sound source. This idea has been articulated in particular by James Gibson, a visual psychologist who developed an "ecological theory" of perception. From a computational point of view, an interesting thing in this theory is that the problem of invariance is solved from the start: instead of trying to design operations that are invariant to a number of transformations, you trigger these transformations and observe the relationships that persist (e.g. for vision, by moving and looking at the changes in the visual input). Note that this is very different from the standard point of view in most neural network theory, in which the main paradigm is pattern recognition, inspired largely by David Marr's work in vision. Here we do not compare the sensory inputs with previously stored patterns, we look for structure (relationships) in these sensory signals.

What's the relationship with synchrony? The idea is very simple. If there is some structure, that is, some regularity in sensory signals, then neurons with different receptive fields will encode these sensory signals into spike trains, in which the sensory structure is seen as synchrony patterns. In the paper, I show that these synchrony patterns are intrinsically invariant to a number of transformations, just because the structure itself is invariant. I then mostly develop a (simplified) example of odor recognition, and I show how a simple network based on these ideas can perform concentration-invariant odor recognition, and how the network can learn to recognize an odor and generalize across a large range of concentrations.

Two years ago, we applied this principle to binaural hearing:

Goodman DF and R Brette (2010). Spike-timing-based computation in sound localization. PLoS Comp Biol 6(11): e1000993. doi:10.1371/journal.pcbi.1000993.

This was really a "proof-of-principle" study, where we showed how we can use these ideas to identify the location of a sound source using realistic acoustic signals, coming from a virtual auditory environment (with real diffraction properties and natural sounds). We have refined the model and are currently trying to test it with in vivo recordings.
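To give the flavor of the idea with a self-contained toy (this is not the model from the paper): two neurons listen to the two ears through different internal delays, and they fire in synchrony precisely when the internal delay compensates the acoustic ITD, so the synchrony receptive field of the pair is a particular source direction. All numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1e-3                        # 0.1 ms time step
n = int(2.0 / dt)                  # 2 s of signal
tau = 5e-3                         # membrane time constant (s)

# Source signal: smoothed noise; it reaches the right ear with a 300 us delay
kernel = np.exp(-np.arange(0, 50e-3, dt) / 10e-3)
s = np.convolve(rng.standard_normal(n), kernel, mode="same")
drive = 1.0 + s / s.std()          # fluctuates around the spike threshold (=1)
itd = 3                            # acoustic ITD in time steps (300 us)
left, right = drive, np.roll(drive, itd)

def spike_bins(inp):
    """Leaky integrate-and-fire encoder; returns the indices of spike bins."""
    v, spikes = 0.0, []
    for i, x in enumerate(inp):
        v += dt / tau * (x - v)
        if v > 1.0:
            spikes.append(i)
            v = 0.0
    return np.array(spikes)

spikes_right = spike_bins(right)
for d in range(6):                 # internal delay applied to the left channel
    spikes_left = spike_bins(np.roll(left, d))
    shared = np.intersect1d(spikes_left, spikes_right).size
    print(f"internal delay {d * 100:3d} us: "
          f"{shared}/{spikes_right.size} coincident spikes")
```

Because the two inputs become literally identical when the delays match, the spike trains coincide bin for bin at 300 µs and almost nowhere else.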

At the end of the synchrony paper, I also mention an application in vision: the synchrony receptive field of two LGN neurons is an edge with a particular orientation. I was very excited to read this week that a group has actually found exactly that, using multiple recordings in the LGN with gratings:

Stanley et al. (2012). Visual Orientation and Directional Selectivity through Thalamic Synchrony. J Neurosci 32(26):9073-9088.

And they show that this could provide the contrast-invariance property that is seen in V1 neurons. In fact, during my PhD I worked exactly on this problem, using simple models, and I predicted these findings: unfortunately, it's only in my thesis, which is in French.

 

Code

I have to clean up my code before it is usable, but I will upload it soon to modeldb and Brian examples!

 

Hydraulic neuron model

I recently came across a web page that described a computer working with water instead of electricity. There is a very smart idea of a logic gate with two input jets of water and two outputs: when there is no input water, there is no output; when there is one input, the water flows through output #1; when there are two input jets, they collide and the water gets diverted to output #2. Therefore output #1 is an XOR gate and output #2 is an AND gate.

I was wondering how to make a hydraulic model of a neuron (which would be analog rather than digital). It could be an interesting educational tool. You could imagine a container where water would flow in from the top, analogous to the input current, and the water level would be the membrane potential v. In this simple configuration, it corresponds to the perfect integrator model: dv/dt = I/C, where I is the input flow (in units of volume/time) and C is the area of the container section. I chose C for this parameter because it clearly plays the role of the membrane capacitance.

Now a simple way to implement the leak current is to cut a hole at the bottom of the container. Then the water flow through that hole is proportional to a \sqrt{v} (Torricelli's law), where a is the area of the hole. So we get the following equation:

C dv/dt = I - k \sqrt{v}

where k is a proportionality factor. If the hole is cut at level v_0 (rather than at the bottom), we obtain:

C dv/dt = I - k \sqrt{[v-v_0]^+}

which is a nonlinear leaky neuron model ([x]^+=max(0,x)).
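For intuition, here is a minimal Euler simulation of this nonlinear leaky model (all parameter values below are arbitrary illustrative choices):

```python
import numpy as np

C = 0.005     # cross-section of the container (m^2), playing the role of capacitance
k = 5e-4      # outflow factor, lumping a*sqrt(2g)
v0 = 0.02     # height of the hole above the bottom (m)
I = 1e-4      # input water flow (m^3/s), playing the role of the input current

dt, t_max = 0.01, 30.0
v = 0.0
for _ in range(int(t_max / dt)):
    leak = k * np.sqrt(max(v - v0, 0.0))
    v += dt * (I - leak) / C

print("simulated steady-state level:", round(v, 4), "m")
print("predicted level v0 + (I/k)^2: ", v0 + (I / k) ** 2, "m")
```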

The hard problem now is to implement spiking. Here I think we need mechanical elements: when the level reaches some mechanical element at the top, it would trigger the opening of a sink at the bottom, which would remain open as long as water flows through it (or as long as the weight of the water is above some critical level). Alternatively, when the weight of the water reaches a critical threshold, the sink at the bottom opens, and it remains open as long as water is flowing (but I am not sure how to implement that property).

Any idea?

Romain

p.s.: to have a linear instead of nonlinear leaky neuron, one idea is to have the area a of the hole vary with v as \sqrt{v}. To achieve that, one can imagine that the floor is mounted on a spring, so that the area of the hole increases with the weight of the water. If the width of the hole goes as 1/\sqrt{x} (where x is the vertical position along the hole), then the flow through the hole is proportional to v. If we want to avoid the rectification ([v-v_0]^+), i.e., if we want water to flow in when the level is below v_0, then we need to immerse the container in a very large (ideally infinite) container with water level v_0.
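For completeness, here is the small calculation behind that last claim, assuming Torricelli's law (outflow speed \sqrt{2g(v-x)} at height x below the water level v) for each horizontal slice of a slit of width w(x) = w_0/\sqrt{x}:

flow = \int_0^v w(x) \sqrt{2g(v-x)} dx = w_0 \sqrt{2g} \int_0^v x^{-1/2} (v-x)^{1/2} dx = (\pi/2) w_0 \sqrt{2g} \, v

(substituting x = vu, the integral becomes v \int_0^1 u^{-1/2}(1-u)^{1/2} du = v B(1/2,3/2) = \pi v/2), so the outflow is indeed proportional to the water level v, i.e., a linear leak.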

The importance of stupidity in scientific research

I recently read a very interesting essay entitled "The importance of stupidity in scientific research" and I wanted to share it with you. The author observes that feeling stupid is an essential aspect of research, because you are supposed to understand something that nobody currently does (as opposed to trying to solve a very difficult exercise). That might be why many very good PhD students feel discouraged: they don't realize that feeling stupid means they are probably doing something valuable! Here is a quote that inspires me as a PhD supervisor:

We don't do a good enough job of teaching our students how to be productively stupid

Have fun!

Romain