Spontaneous firing in auditory nerve fibers might reflect mechanical noise

One thing has always puzzled me in auditory nerve physiology. Most auditory nerve fibers saturate quickly as sound level increases, which implies that, at least for tones, firing rate carries no information at high levels. It is also known that above a few kHz (depending on the species), neural firing in response to tones becomes temporally imprecise (loss of phase locking), so that spike timing does not carry information either. In other words, at high levels, high-frequency auditory nerve fibers spend a lot of energy firing spikes that carry close to zero information. This is odd.

Now, in a recent study in J Neurosci, Temchin and Ruggero show data suggesting (although it remains a bit speculative) that the spontaneous firing of fibers may reflect mechanical vibrations rather than neural noise. To my mind, this implies that there is shared variability in the firing of neighboring fibers, and therefore that the loss of spike timing precision at high frequencies may hide precise relative timing between neighboring fibers, which would carry information about the temporal fine structure of sounds.
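To see why this matters, here is a toy numerical sketch (my own illustration with made-up numbers, not data or analysis from the paper): if the timing jitter of two neighboring fibers is shared, as it would be if it were inherited from a common mechanical vibration, then the relative timing of their spikes remains precise even though the absolute timing of each fiber is imprecise.

```python
# Toy sketch: shared vs. independent timing jitter in two neighboring fibers.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_spikes = 10000
jitter_sd = 200e-6   # 200 microseconds of timing jitter per spike (assumed)
true_delay = 50e-6   # fixed timing offset between the two fibers (assumed)

shared = rng.normal(0, jitter_sd, n_spikes)      # common noise (e.g. mechanical)
private1 = rng.normal(0, jitter_sd, n_spikes)    # independent noise, fiber 1
private2 = rng.normal(0, jitter_sd, n_spikes)    # independent noise, fiber 2

# Case 1: the jitter is private to each fiber (neural noise)
t1, t2 = private1, true_delay + private2
print("independent noise, sd of relative timing (s):", np.std(t2 - t1))

# Case 2: the jitter is shared (inherited from mechanical vibration)
t1, t2 = shared, true_delay + shared
print("shared noise, sd of relative timing (s):", np.std(t2 - t1))
```

In the first case the relative timing is as noisy as the absolute timing; in the second it is exactly preserved, which is the sense in which relative timing between neighboring fibers could still carry fine-structure information.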

A neurological case supporting the sensorimotor theory of perception

I just read chapter 5, “Hands”, of Oliver Sacks' “The man who mistook his wife for a hat”. He describes the case of a 60-year-old lady with cerebral palsy who could not do anything with her hands and, more interestingly, had a profound impairment of manual perception: an inability to recognize any object by touch. Yet her tactile senses were intact, and she actually could move her hands, although she did not use them. Sacks reasoned that perhaps she had never developed her manual abilities by exploration, as a child would, because she had been looked after so much due to her congenital disease. As a therapeutic experiment, he then tricked her into grasping things with her hands (e.g., by putting food slightly out of reach). In a matter of weeks, she developed complete functional use of her hands (after 60 years!) and even started making remarkable sculptures. Simultaneously, she developed normal tactile perception (instant object recognition).

This is quite an interesting case for psychological theories of perception. It shows that, as the sensorimotor theory of perception predicts, normal perception requires action: normal sensory neural structures do not by themselves give rise to perception.

On the expanding volume of experimental data – (I) So... where is the data?

One of the major motivations of the Human Brain Project (HBP), as I have heard from Henry Markram, is that we already produce tons of experimental data and yet we still don't understand much about the brain. So what we need is not more data, but rather to do something with all that data. I don't think that the specific way in which the HBP proposes to deal with the data necessarily follows from this remark, but I do think that the remark is generally correct. There are many more papers than one can read, and much more data than one can analyze or comprehend. I would like to give my view on this problem as a theoretical neuroscientist.

First of all, as a theoretical neuroscientist, I have to ask the question that many of my colleagues probably have in mind: yes, there is an enormous amount of data, but where is it? Most theoreticians crave data. They want data. Where is it? On the hard drives of the scientists who produced it, and that is usually where it stays. So there is a very simple reason why this enormous amount of data is not exploited: it is not shared in the first place. In my view, this is the first problem to solve. How is it that the data are not shared? If you carefully read the instructions to authors of journals, you generally find that they explicitly ask authors to share all the data analyzed in the paper with anyone who asks. And authors have to sign that statement. I can hear some theoreticians laughing in the back! Let's face it: it almost never happens, unless you personally know the authors, and even then it is complicated. Why? I have heard mainly two reasons. The first is the “I have to dig out the data” type of reason: the data lie in a disorganized constellation of files and folders, with custom formats, etc. Basically, it's a mess. This is probably the same reason why many modelling papers don't share the code for their models: the authors are not proud of their code! There are efforts underway to address this issue by developing standardized formats (for example, NeuroML for neuron models, and a similar effort for experimental data led by the Allen Institute). I doubt that this will entirely solve the problem, but it is something.

In any case, I think this first issue is solvable (see below). The second reason why people don't share their data is much more profound: they want to own the data they have produced. They want to own it because it has value. In the academic world of neuroscience, data is wealth. It takes effort and resources to produce data, and you don't want someone taking your data, analyzing it in a different way and publishing it. Many scientists have in mind a few ideas for using their data in other ways, and so they want to protect it. Yet this attitude directly contradicts the statement they signed when submitting their papers.

There is an additional factor that has more to do with the epistemological biases of the community. Compare these two abstracts. (1) “We wanted to test hypothesis A, and so we did the following experiment: it works”. Here the authors used data that they previously acquired and confronted them with some hypothesis. (2) “We wanted to test hypothesis A, and so we checked previously published data: it works”. Here the authors used data that other authors previously acquired and confronted them with some hypothesis. Exactly the same methodology. But you certainly know that abstract (2) will probably get rejected in the highest-impact journals. It is not “new data”. It has happened to me a number of times that reviewers complained that we used “old data”. Apparently the timestamp of the data was a problem. This might mean two things. One is that the value of a paper lies essentially in the experimental data, so that if the raw data are the same, the paper has no additional value. I find this statement philosophically inept, but in any case it is rather hypocritical, as it does not actually distinguish between abstracts (1) and (2). A second interpretation is that data are only meaningful if they were acquired after, rather than before, a hypothesis was expressed. From a logical standpoint, this is absurd. But I think it stems from the widespread Popperian fantasy that science is about testing theories with critical experiments: at a given moment in time, there are a number of theories consistent with the current facts, and you distinguish between them with a carefully chosen experiment. This is certainly not true from a historical point of view (see Thomas Kuhn), and it is naïve from a philosophical point of view (see Imre Lakatos). I have discussed this elsewhere in detail. More trivially, there is the notion in the minds of many scientists (reviewers) that having the data beforehand is sort of “cheating”, because then it is trivial to come up with a hypothesis or theory that is consistent with it. I would argue that it is trivial only if a small amount of data is considered and the explanation is allowed to be as complex as the data; otherwise it is not trivial at all, and it is actually what science is all about. But in any case, it is again hypocritical, because when you read abstract (1) you don't know whether the data were actually produced after or before the hypothesis was expressed.

So what can we do about it? First, there is a trivial way to make a lot of progress on data sharing. Instructions to authors already specify that the data should be public. So let's just stop being hypocritical and make the submission of data mandatory at the time of paper submission, or acceptance. This could be handled by journals, or by archiving initiatives (such as arXiv for preprints or ModelDB for models). I believe this would partly solve the “I have to dig out the data” issue, because authors would know in advance that they will have to submit the data. So they will do what is necessary (let's be honest for a minute: this is probably not the most complicated thing scientists are supposed to deal with). Quite possibly, standardized formats might help, but the first significant step is really to make the data accessible.

For the epistemological issue, the problem is more profound. Scientists have to accept that producing data is not the only valuable thing in science, and that making sense of the data is also valuable. I will address this specific point in the next post of this short series.

Update (20.08.2014). I just realized that Neuron actually makes it mandatory for some types of datasets (but not all) to be submitted along with the paper.

The challenge of retrograde amnesia to theories of memory

I am reading Oliver Sacks' “The man who mistook his wife for a hat”. In chapter 2, he describes a case of retrograde amnesia. Around 1970, the patient went through an episode of alcoholism, which resulted in the loss of 25 years of his most recent memories (declarative memory). As a result, the patient thought he was 19 and lived in 1945, as if time had stopped. So not only could he not transfer short-term memory to long-term memory, but a large part of his previously stored memories had been erased. Moreover, it was not a random fraction of his memories that was erased: it was precisely the most recent ones. He seemed to remember old events perfectly and to have absolutely no memory of the more recent ones.

This is quite a challenge for current neural theories of memory. The main theoretical concept about memory is the notion of a neural assembly supporting associative memory. Imagine a memory is made of a number of elements that are associated together; the substrate of this memory is then a connected network of neurons that “code” for those elements, in some structure in the brain, with connections to the relevant parts of the brain (say, the visual cortex for visual features, etc.). This conceptual framework can be extended with sequential activation of neurons. Now, in this framework, how do you erase the most recent memories? Note that by “most recent” I mean the last 25 years, not short-term memory.
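To make the assembly idea concrete, here is a minimal sketch in Python, assuming a Hopfield-style toy network (a textbook caricature, used here only to illustrate Hebbian associative memory): a pattern of co-active neurons is imprinted by a Hebbian rule and can then be recalled from a partial cue.

```python
# Minimal Hebbian associative memory (Hopfield-style toy model).
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100

# A "memory" is a pattern of +1/-1 activations over the assembly.
memory = rng.choice([-1, 1], size=n_neurons)

# Hebbian imprinting: co-activated neurons strengthen their mutual connections.
weights = np.outer(memory, memory) / n_neurons
np.fill_diagonal(weights, 0)

# Recall from a partial cue: corrupt 30% of the elements, then let the
# network settle by repeatedly aligning each neuron with its total input.
cue = memory.copy()
flipped = rng.choice(n_neurons, size=30, replace=False)
cue[flipped] *= -1

state = cue
for _ in range(5):
    state = np.where(weights @ state >= 0, 1, -1)

print("overlap with stored memory:", float(state @ memory) / n_neurons)  # ~1.0
```

The only point of the sketch is that the memory lives in the pattern of connection weights, which is what the question of erasure is about.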

One trivial possibility would be that each memory has a timestamp, encoded as part of the neural assembly supporting that memory. Then some mechanism erases all memories whose timestamp is more recent than a particular date. Why and how this would happen is mysterious. In addition, “reading the timestamp” would entail activating all of those memories (which would therefore still need to exist at that point), and then erasing them. It simply sounds absurd. A more plausible explanation is that, for some reason, recent memories are more fragile than old ones. But why would that be?

This is a very interesting point, because in current neural theories of memory it is the old memories that are more fragile than the recent ones. The reason is that memories are imprinted by modifications of synaptic connections through a Hebbian mechanism (neurons that are co-activated strengthen their connections), and these connections then get degraded over time because the same neurons are activated in other contexts by ongoing activity. So in current theories of memory, memory traces decay over time. But what retrograde amnesia implies is exactly the opposite: memory traces should strengthen over time. How could memories strengthen over time?

One possibility is that memories are replayed. If you recall a memory, the neurons supporting that memory are activated and the corresponding connections strengthen. But conscious recollection will probably not do the trick, because then there would not be a strict temporal cut-off: some recent memories might be recalled more often than some older ones. So what seems to be necessary is a continuous, subconscious replay of memories, independent of emotional or attentional states. Clearly, this is quite a departure from current neural theories of memory.
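To illustrate the contrast numerically, here is a toy sketch (a caricature with made-up parameters, assuming a single scalar “trace strength” per memory): with only degradation by ongoing activity, old traces end up weaker than recent ones, whereas continuous replay makes old traces stronger, which is the kind of behavior a strict temporal cut-off in retrograde amnesia seems to require.

```python
# Toy model of a memory trace: ongoing activity degrades it a little at every
# step; subconscious replay re-applies Hebbian strengthening. Parameters are
# arbitrary and only illustrate the qualitative contrast.
decay = 0.97        # fraction of the trace surviving each step (assumed)
replay_gain = 0.05  # strengthening added by replay at each step (assumed)

def trace_strength(age, replay=False):
    """Strength of a memory trace 'age' steps after imprinting."""
    w = 1.0  # initial Hebbian imprint
    for _ in range(age):
        w *= decay              # interference from ongoing activity
        if replay:
            w += replay_gain    # continuous subconscious replay
    return w

for age in (1, 10, 50):
    print(f"age {age:2d}: no replay {trace_strength(age):.2f}, "
          f"with replay {trace_strength(age, replay=True):.2f}")
```

Without replay the trace only decays with age; with continuous replay it converges towards a higher asymptote, so older memories end up with stronger traces than recent ones.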