Project: Binaural cues and spatial hearing in ecological environments

I previously laid out a few ideas for future research on spatial hearing:

  1. The ecological situation and the computational problem.
  2. Tuning curves.
  3. The coding problem.

This year I wrote a grant that addresses the first point. My project was rather straightforward:

  1. To make binaural recordings with ear mics in real environments, with real sound sources (actual sounding objects) placed at predetermined positions. This way we obtain distributions of binaural cues conditioned on source direction, capturing the variability due to context.
  2. To measure human localization performance in those situations.
  3. To test whether a Bayesian model can account for these results, and possibly for previous psychophysical results.

The project was preselected but unfortunately not funded. I probably won't resubmit it next year, except perhaps as a collaboration. So here it is for everyone to read: my grant application, "Binaural cues and spatial hearing in ecological environments". If you like it, if you want to do these recordings and experiments, please do so. I am interested in the results, but I'm happy if someone else does it. Please contact me if you would like to set up a collaboration or discuss the project. I am especially interested in the theoretical analysis (ie the third part of the project). Our expertise in the lab is primarily theoretical, but we also work on signal analysis, and we have done a number of binaural recordings and some psychophysics.

Are journals necessary filters?

In my previous post, I argued that one reason why many people cling to the idea that papers should be formally peer-reviewed before they are published, cited and discussed, despite the fact that this system is a recent historical addition to the scientific enterprise, is a philosophical misunderstanding about the nature of scientific truth. That is, the characteristic of science, as opposed to religion, is that it is never validated; it can and must be criticized. Therefore, no amount of peer reviewing can ever be a stamp of approval for “proven facts”. What we need instead is public discussion of the science, not stamps of approval.

In response to that post, another common reason came up for why many people think it’s important to have journals that select papers after peer review. The reason is that we are crowded with millions of papers and cannot read everything, so we need some way to know which papers are important, based on peer review. So this is not just about peer-reviewing before publishing, but also about the hierarchy of journals. Journals must make an editorial selection so that you don’t have to waste your time reading low-quality or uninteresting papers. What this means, quite literally, is that you only read papers from “top journals”.

Here I want to show that this argument is untenable, because selecting what to read based on journal names is neither what scientists should do nor what they actually do, and because the argument is logically inconsistent.

Why is it logically inconsistent? If the argument is correct, then papers accepted in lower-ranked journals should not be read, because they are not worth reading. But in that case, why publish them at all? There seems to be no reason for the existence of journals that people do not read because they do not have time to read bad papers. If we argue that those journals should exist because in some cases there are papers worth reading there, for whatever reason, then we must admit that we don’t actually use journals as filters, or that we should not use them as filters (see below).

Is it good scientific practice to use journal names as filters? What this implies is that you ignore any paper, including papers in your field that are potentially relevant to your own studies, that is not published in a “top journal”. So for example, you would not cite a relevant study if it’s not from a top journal. It also means that you don’t check whether your own work overlaps with other studies, so you potentially take credit for ideas that you were not the first to have. Is this a professional attitude?

If in fact you don’t totally ignore those lower-ranked journals, then you don’t actually use journal names as a filter: you do look at the content of papers independently of the journal they are published in. Which is my final point: using journal names as filters is not the normal practice of scientists (or maybe I’m optimistic?). When you look for relevant papers on your topic of interest, you typically do a search (eg on pubmed). Do you only consider papers from “top journals”, blindly discarding all others? Of course not. You first look at the titles to see whether they might be relevant; you read the abstract if they are; if the abstract is promising you might open the paper and skim through it, and possibly read it carefully if you think it is worth it. Then you will look at cited papers; or at papers that cite the interesting paper you just read; or you will read a review; maybe a colleague or your advisor will suggest a few readings. In brief: you do a proper bibliographical search. I cannot believe that any good scientist considers that doing a bibliographical search consists in browsing the tables of contents of top journals.

The only case when you do use journal names to select papers to read is indeed when you read tables of contents every month for a few selected journals. How much of this accounts for the papers that you cite? You can get a rough idea of this by looking at the cited half-life of papers or journals. For Cell, it’s about 9 years. I personally also follow new papers on biorxiv using keywords, while most new papers in journals are irrelevant to me because they cover too many topics.

In summary: using journals as filters is not professional because it means poor scholarship and misattribution of credit. Fortunately it’s not what scientists normally do anyway.

One related argument that came up in the discussion of my previous post is that having papers reviewed post-publication could not work, because it would be too much work and consequently most papers would not be reviewed, whereas in the current system at least every paper is peer reviewed. This is wrong in several ways. First, papers can be published and then peer-reviewed formally and publicly (as in F1000 Research), without this being coupled to editorial selection. Second, if anything, having papers submitted a single time instead of multiple times to different journals means less work for reviewers, not more. Third, what exactly is the advantage of having each paper peer-reviewed if it is argued that those papers should not be read or cited? In the logic where peer review in “good journals” serves as a filter for important papers, it makes no difference whether the unimportant papers are peer reviewed or not, so this cannot count as a valid argument against post-publication review.

All this being said, there is still a case for editorial selection after publication, as one of the many ways to discover papers of interest, see for example my free journal of theoretical neuroscience.

The great misunderstanding about peer review and the nature of scientific facts

Last week I organized a workshop on the future of academic publication. My point was that our current system, based on private pre-publication peer review, is archaic. I noted that the way the peer review system is currently organized (where external reviewers judge both the quality of the science and the interest for the journal) represents just a few decades in the history of science. It can hardly qualify as the way science is or should be done. It is a historical feature. For example, only one of Einstein’s papers was formally peer-reviewed; Crick & Watson’s DNA paper was not formally peer-reviewed. Many journals introduced external peer review in the 1960s or 1970s to deal with the growth in the number and variety of submissions (see e.g. Baldwin, 2015); before that, editors would decide whether to publish the papers they received, depending on the number of pages they could print.

Given the possibilities that the internet offers, there seems to be no reason anymore to couple the two current roles of peer review: editorial selection and scientific discussion. One could simply share one’s work online, get feedback from the community to discuss the work, and then let people recommend papers to their colleagues and compile all sorts of reader’s digests. No time wasted in multiple submissions, no prestige misattributed to publications in glamour journals, which do no better a job than any other journal at pointing out errors and fraud. Just the science, and the public discussion of science.

But there is a lot of resistance to this proposal, rooted in the idea that papers should be formally approved by peer reviewers before they are published: otherwise, so many people claim, the scientific world would be polluted by all sorts of unverified claims; it would not be science anymore, just gossip. I have attributed this attitude to conservatism: first because, as noted above, this system is a rather recent addition to the scientific enterprise, and second because papers already are published before peer review. We call them “preprints”, but really these are scientific papers made public, so by definition they are published. I follow the preprints in my field and I don’t see any particular loss in quality.

However, I think I was missing a key element. The deeper reason why many people, in particular experimental biologists, are so attached to peer review is, in my view, that they hold naive philosophical views about the notion of truth in science. A paper should be peer-reviewed because otherwise you can’t cite it as a true fact: peer review validates science, thanks to experts who make sure that the claims of the authors are actually true. Of course it can go wrong and reviewers might miss something, but that is the purpose of peer review. This view is reflected in the tendency, especially in biology journals, to choose titles that read like established truths: “Hunger is controlled by HGRase”, instead of “The molecular control of hunger”. Scientists and journalists can then write revealed truths with a chapter-and-verse reference, such as “Hunger is controlled by HGRase (McDonald et al., 2017)”.

The great misunderstanding is that truth is a notion that applies to logical propositions (for example, mathematical theorems), not to empirical claims. This has been well argued by Popper, for example. Truth is by nature a theoretical concept. Everything said is said with words, and in this sense it always refers to theoretical concepts. One can only judge whether observations are congruent with the meaning attributed to the words, and that meaning necessarily has a theoretical nature. There is no such thing as an “established fact”. This is so even for what we might consider direct observations. Take for example the claim “The resting potential of neurons is -70 mV”. This is a theoretical statement. Why? First, because to establish it, I have recorded a number of neurons. If you test it, it will be on a different neuron, which I have not measured; so I am making a theoretical claim. Probably, I have also tested my neurons with a particular method (not to mention in a particular region and species). But my claim makes no reference to the method by which I have made the inference. That would be the “methods” part of my paper, not the conclusion, and when you cite my paper, you will cite it for the conclusion, the “established fact”; you will not be referring to the methods, which you consider to be the means to establish the fact. It is the role of the reviewers to check the methods, to check that they do establish the fact.

But these are trivial remarks. It is not just that the method matters. The very notion of an observation always implicitly relies on a theoretical background. When I say that the resting potential is -70 mV, I mean that there is a potential difference of -70 mV across the membrane. But that’s not what I measure. I measure the difference in potential between some point outside the cell and the inside of a patch pipette whose solution is in contact with the cell’s inside. So I am assuming the potential is the same in all points of the cytosol, even though I have not tested it. I am also implicitly modeling the cytosol as a solution, even though the reality is more complex than that, given the mass of charged proteins in it. I am assuming that the extracellular potential is constant. I am assuming that my pipette solution reasonably matches the actual cytosol solution, given that “solution” is only a convenient model. I am implicitly making all sorts of theoretical assumptions, which have a lot of empirical support but are still of a theoretical nature.

I have tried with this example to show that even a very simple “fact” is actually a theoretical proposition, with many layers of assumptions. But of course in general, papers typically make claims that rely less firmly on accepted theoretical grounds, since they must be “novel”. So it is never the case that a paper definitely proves its conclusions. Because conclusions have a theoretical nature, all that can be checked is whether observations are consistent with the authors’ interpretation.

So the goal of peer review can’t be to establish the truth. If it were the case, then why would reviewers ever disagree? They disagree because they cannot actually judge whether a claim is true; they can only say whether they are personally convinced. This makes the current peer review system extremely poor, because all the information we get is: two anonymous people were convinced (and maybe others were not, but we’ll never find out). What would be more useful would be to have an open public discussion, with criticisms, qualifications and alternative interpretations fully disclosed for anyone to read and make their own opinion. In such a system, the notion of a stamp of approval on a paper would simply be absurd; why hide the disapprovals? There is the paper, and there is the scientific discussion of the paper, and that is all there needs to be.

There is some concern these days that peer reviewed research is unreliable. Well, science is unreliable. That is almost what defines it: it can be criticized and revised. Seeing peer review as the system that establishes the scientific truth is not only a historical error, it is a great philosophical error, and a dangerous bureaucratic view of science. We don’t need editorial decisions based on peer review. We need free publication (we have it) and we need open scientific discussion (it’s coming). That’s all we need.

What is computational neuroscience? (XXVII) The paradox of the efficient code and the neural Tower of Babel

A pervasive metaphor in neuroscience is the idea that neurons “encode” stuff: some neurons encode pain; others encode the location of a sound; maybe a population of neurons encodes some other property of objects. What does this mean? In essence, that there is a correspondence between some objective property and neural activity: when I feel pain, this neuron spikes; or, the image I see is “represented” in the firing of visual cortical neurons. The mapping between objective properties and neural activity is the “code”. How insightful is this metaphor?

An encoded message is understandable to the extent that the reader knows the code. But the problem with applying this metaphor to the brain is that only the encoded message is communicated: not the code, and not the original message. Mathematically, original message = encoded message + code, but only one term is communicated. This could still work if there were a universal code that we could assume all neurons can read, the “language of neurons”, or if some information about the code could somehow be gathered from the encoded messages themselves. Unfortunately, this is in contradiction with the main paradigm in neural coding theory, “efficient coding”.

The efficient coding hypothesis stipulates that neurons encode signals into spike trains in an efficient way, that is, using a code such that all redundancy is removed from the original message while information is preserved, in the sense that the encoded message can be mapped back to the original message (Barlow, 1961; Simoncelli, 2003). This implies that with a perfectly efficient code, encoded messages are indistinguishable from random. Since the code is determined by the statistics of the inputs and only the encoded messages are communicated, a code is efficient to the extent that it is not understandable by the receiver. This is the paradox of the efficient code.
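
The paradox can be illustrated with a toy example, using generic lossless compression in place of a neural code (this is only an analogy, not a model of neurons): once the redundancy has been squeezed out of a message, the encoded bytes look nearly random to any receiver who does not hold the code.

```python
import zlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A highly redundant "original message".
message = b"the quick brown fox jumps over the lazy dog " * 200

# An "efficient code": generic lossless compression (zlib), standing in
# for the neural code in this analogy.
encoded = zlib.compress(message, level=9)

print(f"original:   {len(message)} bytes, {byte_entropy(message):.2f} bits/byte")
print(f"compressed: {len(encoded)} bytes, {byte_entropy(encoded):.2f} bits/byte")
```

The compressed stream is much shorter and much closer to maximum entropy than the original: the structure that a receiver could have exploited to infer the code is exactly what an efficient code removes.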

In the neural coding metaphor, the code is private and specific to each neuron. If we follow this metaphor, this means that all neurons speak a different language, a language that allows expressing concepts very concisely but that no one else can understand. Thus, according to the coding metaphor, the brain is a Tower of Babel.

Can this work?

10 simple rules to format a preprint

Submitting papers to preprint servers (eg bioRxiv) is finally getting popular in biology. Unfortunately, many of these papers are formatted in a way that is very inconvenient to read, possibly because authors stick to the format required by journals. Here are 10 simple rules to format your preprints:

  1. Format your preprint in the way you would like to read it. The next rules simply implement this first rule.
  2. Use single spacing. No one is going to write between the lines.
  3. Insert figures and their captions in the text, at the relevant places. It is really annoying to have to continuously go back and forth between the text and the last pages. Putting figures at the end of the paper and captions at yet another place should be punished.
  4. Don’t forget the supplementary material.
  5. We don’t really need 10 rules. In fact the first rule is just fine.

Technical draft for chapter 5, Propagation of action potentials

I have just uploaded a technical draft of chapter 5 of my book on action potentials: Propagation of action potentials. This draft introduces the cable equation and explains how conduction velocity depends on axon diameter in unmyelinated and myelinated axons. There is also a short section on the extracellular potential. There are a few topics I want to add, including branching and determinants of conduction velocity beyond diameter. There are also (almost) no figures at the moment. Finally, it is likely that the chapter will be reorganized for clarity. I wanted to upload this chapter nonetheless so as to move on to the next chapter, on spike initiation with an initial segment.
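
The diameter dependence discussed in the draft can be sketched numerically. The scaling laws below are the classic textbook ones (velocity proportional to the square root of diameter for unmyelinated axons, and roughly linear in diameter for myelinated ones); the prefactors are illustrative placeholders, not values from the chapter.

```python
import math

def velocity_unmyelinated(d_um: float, k: float = 1.0) -> float:
    """Cable theory predicts v proportional to sqrt(diameter) for unmyelinated
    axons. k (in m/s per sqrt(um)) is an illustrative constant, not a fitted value."""
    return k * math.sqrt(d_um)

def velocity_myelinated(d_um: float, k: float = 6.0) -> float:
    """For myelinated axons, v scales roughly linearly with diameter;
    k of about 6 m/s per um is the often-quoted empirical factor."""
    return k * d_um

for d in (0.5, 1.0, 4.0, 16.0):
    print(f"d = {d:4.1f} um: unmyelinated ~ {velocity_unmyelinated(d):4.2f} m/s, "
          f"myelinated ~ {velocity_myelinated(d):5.1f} m/s")
```

Note how the square-root law means that quadrupling the diameter of an unmyelinated axon only doubles its conduction velocity, which is one reason myelination pays off for large, fast fibers.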

What is computational neuroscience? (XXVI) Is optimization a good metaphor of evolution?

Is the brain the result of optimization, and if so, what is the optimization criterion? The popular argument in favor of the optimization view goes as follows. The brain is the result of Darwinian evolution, and therefore is optimally adapted to its environment, ensuring maximum survival and reproduction rates. In this view, to understand the brain is primarily to understand what “adapted” means for a brain, that is, what is the criterion to be optimized.

Previously, I have pointed out a few difficulties in optimality arguments used in neuroscience, in particular the problem of specification (what is being optimized) and the fact that evolution is a history-dependent process, unlike a global optimization procedure. An example of this history dependence is the fascinating case of mitochondria. Mitochondria are organelles in all eukaryotic cells that produce most of the cellular energy in the form of ATP. To date, the main view is that these organelles are a case of symbiosis: mitochondria were once prokaryotic cells that were captured and farmed. This symbiosis has been selected and conserved through evolution, but optimization does not seem to be the most appropriate metaphor in this case.

Nonetheless, the optimization metaphor can be useful when applied to circumscribed problems that a biological organism might face, for example the energy consumption of action potential propagation. We can claim for example that, everything else being equal, an efficient axon is better than an inefficient one (with the caveat that in practice, not everything else can be made equal). But when applied at the scale of an entire organism, the optimization metaphor starts facing more serious difficulties, which I will discuss now.

When considering an entire organism, or perhaps an organ like the brain, what criterion can we possibly choose? Recently, I started reading “Guitar Zero” by Gary Marcus. The author points out that learning music is difficult, and argues that the brain has evolved for language, not music. This statement is deeply problematic. What does it mean that the brain has evolved for language? Language does not preexist speakers, so it cannot be that language was an evolutionary (“optimization”) criterion for the brain, unless we take a more religious view of evolution. Rather, evolutionary change can create opportunities, which might be beneficial for the survival of the species, but there is no predetermined optimization criterion.

Another example is the color visual system of bees (see for example Ways of coloring by Thompson et al.). A case can be made that the visual system of bees is adapted to the color of flowers they are interested in. But conversely, the color of flowers is adapted to the visual system of bees. This is a case of co-evolution, where the “optimization criterion” changes during the evolutionary process.

Thus, the optimization criterion does not preexist the optimization process, and this makes the optimization metaphor weak.

A possible objection is that there is a preexisting optimization criterion, which is survival or reproduction rate. While this might be correct, it makes the optimization metaphor not very useful, because it applies equally to all living species. The point is that there are many species, and they are different even though the optimization criterion is the same; not all of them have a brain. Thus, optimization does not explain why we have a brain. And species that do have a brain have different brains: the nervous system of a nematode is not the same as that of a human, even though both are equally well adapted and have evolved for exactly the same amount of time. Therefore, the optimization view does not explain why we speak and nematodes don’t, for example.

The problem is that “fitness” is a completely contextual notion, which depends both on the environment and on the species itself. In a previous post where I discussed an “existentialist” view of evolution, I proposed the following thought experiment. Imagine a very ancient Earth with a bunch of living organisms that do not reproduce but can survive for an indefinite amount of time. By definition, they are adapted since they exist. Then at some point, an accident occurs such that one organism starts multiplying. It multiplies until it occupies the entire Earth and resources become scarce. At this point of saturation, organisms start dying. The probability of dying being the same for both non-reproducing organisms and reproducing ones, at some point there will be only reproducing organisms. Thus in this new environment, reproducing organisms are adapted, whereas non-reproducing ones are not. If we look at the history of evolution, we note that the world of species constantly changes. Species do not appear to converge to some optimal state, because as they evolve, the environment changes and so does the notion of fitness.

In summary, the optimization criterion does not preexist the optimization process, unless we consider a broad existentialist criterion such as survival, but then the optimization metaphor loses its usefulness.

What is computational neuroscience (XXV) - Are there biological models in computational neuroscience?

Computational neuroscience is the science of how the brain “computes”, that is, how the brain performs cognitive functions such as recognizing a face or walking. Here I will argue that most models of cognition developed in the field, especially as regards sensory systems, are actually not biological models but hybrid models consisting of a neural model together with an abstract model.

First of all, many neural models are not meant to be models of cognition. For example, some models are developed to explain the irregular spiking of cortical neurons, or oscillations; I will not consider them here. According to the definition above, I categorize them as theoretical neuroscience rather than computational neuroscience. Here I consider models of, for example, perception, memory, and motor control.

An example that I know well is the problem of localizing a sound source from timing cues. There are a number of models, including a spiking neuron model that we have developed (Goodman and Brette, 2010). This model takes as input two sound waves, corresponding to the two monaural sounds produced by the sound source, and outputs the estimated direction of the source. But the neural model, of course, does not output a direction. Rather, the output of the neural model is the activity of a layer of neurons. In the model, we consider that direction is encoded by the identity of the maximally active neuron. In another popular model in the field, direction is encoded by the relative total activity of two groups of neurons (see our comparison of models in Goodman et al. 2013). In all models, there is a final step which maps the activity of neurons to estimated sound location, and this step is not a neural model but an abstract model. This causes big epistemological problems when it comes to assessing and comparing the empirical value of models because a crucial part of the models is not physiological. Some argue that neurons are tuned to sound location; others that population activity varies systematically with sound location. Both are right, and thus none of these observations is a decisive argument to discriminate between the models.
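
The two readouts can be sketched side by side. Everything below is an illustrative assumption (the Gaussian tuning curves, the preferred directions, the rescaling in the two-channel readout), not the actual models compared in Goodman et al. (2013); the point is only that the final step from firing rates to direction is an abstract model, not a neural one.

```python
import math

# A hypothetical layer of direction-tuned neurons, with preferred
# directions spanning -90 to +90 degrees in steps of 15 degrees.
preferred = [-90 + 15 * k for k in range(13)]

def responses(direction, width=30.0):
    """Gaussian tuning curves (an illustrative assumption)."""
    return [math.exp(-0.5 * ((p - direction) / width) ** 2) for p in preferred]

def decode_peak(rates):
    """Readout 1: direction = preferred direction of the most active neuron."""
    return preferred[max(range(len(rates)), key=rates.__getitem__)]

def decode_two_channel(rates):
    """Readout 2: direction from the relative summed activity of the
    left- and right-preferring groups, rescaled to degrees."""
    left = sum(r for p, r in zip(preferred, rates) if p < 0)
    right = sum(r for p, r in zip(preferred, rates) if p > 0)
    return 90.0 * (right - left) / (right + left)

rates = responses(30.0)
print(decode_peak(rates), round(decode_two_channel(rates), 1))
```

Both readouts recover the source direction qualitatively from the same neural activity, yet neither is part of the neural model: each is a mapping imposed by the modeler, which is precisely why neural data alone cannot discriminate between them.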

The same is seen in other sensory modalities. The output is the identity of a face; or of an odor; etc. The symmetrical situation occurs in motor control models: this time the abstract model is on the side of the input (mapping from spatial position to neural activity or neural input). Memory models face this situation twice, with abstract models both on the input (the thing to be memorized) and the output (the recall).

Fundamentally, this situation has to do with the fact that most models in computational neuroscience take a representational approach: they describe how neural networks represent in their firing some aspect of the external world. The representational approach requires defining a mapping (called the “decoder”) from neural activity to objective properties of objects, and this mapping cannot be part of the neural model. Indeed, sound location is a property of objects and thus does not belong to the domain of neural activity. So no sound localization model can ever be purely neuronal.

Thus to develop biological models, it is necessary to discard the representational approach. Instead of “encoding” things, neurons control the body; neurons are agents (rather than painters in the representational approach). For example, a model of sound localization should be a model of an orientational response, including the motor command. The model explains not how space is “represented”, but how an animal orients its head (for example) to a sound source. When we try to model an actual behavior, we find that the nature of the problem changes quite significantly. For example, because a particular behavior is an event, neural firing must also be seen as events. In this context, counting spikes and looking at the mutual information between the count and some stimulus property is not very meaningful. What matters is the events that the spikes trigger in the targets (muscles or other neurons). The goal is not to represent the sensory signals but to produce an appropriate behavior. One also realizes that the relation between sensory signals and actions is circular, and therefore cannot be adequately described as “processing”: sensory signals make you turn the head, but if you turn the head, the sensory signals change.

Currently, most models of cognition in computational neuroscience are not biological models. They include neuron models together with abstract models, a necessity stemming from the representational approach. To make a biological model requires including a model of the sensorimotor loop. I believe this is the path that the community should take.

New chapter : Excitability of an isopotential membrane

I have just uploaded a new chapter of my book on the theory of action potentials: Excitability of an isopotential membrane. In this chapter, I look mostly at the concept of spike threshold: the different ways to define it, its quantitative relation to different biophysical parameters (eg properties of sodium channels), and the conditions for its existence (eg a sufficient number of channels). This is closely related to my previous work on the threshold equation (Platkiewicz and Brette, 2010). It also contains some unpublished work (in particular updates of the threshold equation).

I am planning to extend this chapter with:

  • A few Brian notebooks.
  • A section on excitability types (Hodgkin classification).
  • Some experimental confirmations of the threshold equation that are under way (you will see in section 4.4.2 that current published experimental data do not allow precise testing of the theory).

I am now planning to work on the chapter on action potential propagation.

All comments are welcome.

Free the journals

These days, academic journals serve two purposes: to organize peer review, and to make an editorial selection. With the internet and in particular “preprint” servers (eg biorxiv), journals are no longer necessary for distributing academic papers. It is also worth recalling that the peer review system has not always been organized the way it currently is, ie with several external reviewers, multiple revision rounds, etc. For example, Nature only introduced formal peer review in 1967. Before that, the selection was done internally by the editor.

These two missions, organizing peer review and making an editorial selection, are currently coupled. But this is neither necessary, nor a good thing. It is obvious that this coupling is not necessary. One could easily imagine a system where papers are submitted to a mega-journal (e.g. PLoS ONE, PeerJ, F1000 Research), which organizes the peer review, and then journals (e.g. Nature or Cell) make their editorial selection based on perceived “impact”, possibly using the reviews. Instead, authors must submit to each journal separately until their paper is accepted, and reviewers are asked both to check the scientific standards, which is the alleged purpose of peer review, and to judge the perceived importance of papers, which is an editorial task. This results in frustration for the authors, unnecessary delays and tremendous waste of resources. Thus, it is not a good thing.

Once a peer review system is organized (eg by mega-journals), the remaining role of journals is then editorial selection, and this could be done separately. Once we realize that, it becomes clear that very little infrastructure should be needed to run a journal. A journal issue is then just a list of papers selected by an editor, put online with the appropriate links and editorial comments. I propose that every scientist, or lab, or possibly group of interest, starts their own journal, which I will call “free journal” (see my related posts here and there). Free journals are not tied to any commercial interest; they are self-managed academic journals. Crucially, the editor makes a personal selection based on her readings, papers that she personally thinks are interesting. This means in particular that, in contrast with most journals, the editor is highly qualified. The selection is meaningful, not a collection of thumbs up/down made by disparate reviewers based on vague criteria (“impact”). It also implies that it is totally acceptable for the editorial selection to include “preprints”: the editor is a peer and therefore any featured paper is by definition peer reviewed (ie, as in Nature before 1967). I have made my own free journal of theoretical neuroscience in this spirit. I have collected some technical advice on how to make one, and I would be happy to receive any suggestion.

The bottom line is: it is very easy to make a free journal. On the technical side, one essentially needs a blogging system, eg WordPress. This automatically generates an RSS feed (eg one post per issue). I also put all the comments associated with the selected papers on PubPeer, with a link to the journal; this way, anyone using their plugin automatically sees that the papers have been selected in the journal when looking them up on pubmed, for example. On the editorial side, it actually represents little work, and I believe that this little amount is actually quite interesting work to do. Every scientist reads and annotates papers. All that is required to run a free journal, then, is to set aside half a day each month to put your notes together into a post. This helps organize your bibliographical notes, and it is helpful for your colleagues and your students. One could also imagine coupling this writing with the lab’s journal club.

Please leave me a note if you are making your own free journal and I would be happy to link it from my journal.

My free journal: Brette’s free journal of theoretical neuroscience