Technical draft for chapter 5, Propagation of action potentials

I have just uploaded a technical draft of chapter 5 of my book on action potentials: Propagation of action potentials. This draft introduces the cable equation and explains how conduction velocity depends on axon diameter in unmyelinated and myelinated axons. There is also a short section on the extracellular potential. There are a few topics I want to add, including branching and determinants of conduction velocity beyond diameter. There are also (almost) no figures at the moment. Finally, it is likely that the chapter will be reorganized for clarity. I wanted to upload this chapter nonetheless so as to move on to the next chapter, on spike initiation with an initial segment.

What is computational neuroscience? (XXVI) Is optimization a good metaphor of evolution?

Is the brain the result of optimization, and if so, what is the optimization criterion? The popular argument in favor of the optimization view goes as follows. The brain is the result of Darwinian evolution, and therefore is optimally adapted to its environment, ensuring maximum survival and reproduction rates. In this view, to understand the brain is primarily to understand what “adapted” means for a brain, that is, what is the criterion to be optimized.

Previously, I have pointed out a few difficulties in optimality arguments used in neuroscience, in particular the problem of specification (what is being optimized) and the fact that evolution is a history-dependent process, unlike a global optimization procedure. An example of this history dependence is the fascinating case of mitochondria. Mitochondria are organelles in all eukaryotic cells that produce most of the cellular energy in the form of ATP. To date, the prevailing view is that these organelles are a case of symbiosis: mitochondria were once prokaryotic cells that were captured and farmed. This symbiosis has been selected and conserved through evolution, but optimization does not seem to be the most appropriate metaphor in this case.

Nonetheless, the optimization metaphor can be useful when applied to circumscribed problems that a biological organism might face, for example the energy consumption of action potential propagation. We can claim for example that, everything else being equal, an efficient axon is better than an inefficient one (with the caveat that in practice, not everything else can be made equal). But when applied at the scale of an entire organism, the optimization metaphor starts facing more serious difficulties, which I will discuss now.

When considering an entire organism, or perhaps an organ like the brain, then what criterion can we possibly choose? Recently, I started reading “Guitar Zero” by Gary Marcus. The author points out that learning music is difficult, and argues that the brain has evolved for language, not music. This statement is deeply problematic. What does it mean that the brain has evolved for language? Language does not preexist speakers, so it cannot be that language was an evolutionary (“optimization”) criterion for the brain, unless we have a more religious view of evolution. Rather, evolutionary change can create opportunities, which might be beneficial for the survival of the species, but there is no predetermined optimization criterion.

Another example is the color vision system of bees (see for example Ways of coloring by Thompson et al.). A case can be made that the visual system of bees is adapted to the color of the flowers they feed on. But conversely, the color of flowers is adapted to the visual system of bees. This is a case of co-evolution, where the “optimization criterion” changes during the evolutionary process.

Thus, the optimization criterion does not preexist the optimization process, and this makes the optimization metaphor weak.

A possible objection is that there is a preexisting optimization criterion, namely survival or reproduction rate. While this might be correct, it makes the optimization metaphor not very useful, because it applies equally to all living species. The point is that there are many species, and they are all different, even though the optimization criterion is the same. Not all of them have a brain, so optimization does not explain why we have a brain. Among species that do have a brain, the brains are different: the nervous system of a nematode is not the same as that of a human, even though both species are equally well adapted and have evolved for exactly the same amount of time. Therefore, the optimization view does not explain why we speak and nematodes don’t, for example.

The problem is that “fitness” is a completely contextual notion, which depends both on the environment and on the species itself. In a previous post where I discussed an “existentialist” view of evolution, I proposed the following thought experiment. Imagine a very ancient Earth with a bunch of living organisms that do not reproduce but can survive for an indefinite amount of time. By definition, they are adapted since they exist. Then at some point, an accident occurs such that one organism starts multiplying. It multiplies until it occupies the entire Earth and resources become scarce. At this point of saturation, organisms start dying. The probability of dying being the same for both non-reproducing organisms and reproducing ones, at some point there will be only reproducing organisms. Thus in this new environment, reproducing organisms are adapted, whereas non-reproducing ones are not. If we look at the history of evolution, we note that the world of species constantly changes. Species do not appear to converge to some optimal state, because as they evolve, the environment changes and so does the notion of fitness.
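
This thought experiment is easy to make concrete with a toy simulation. The sketch below is my own illustration, not part of the original post; all numbers (population size, death probability, number of generations) are arbitrary assumptions.

```python
# Toy version of the thought experiment: organisms die with the same
# probability whether or not they reproduce, but only reproducers fill
# the space freed by deaths. All numbers are arbitrary assumptions.
import random

random.seed(1)
capacity = 1000
pop = ["static"] * 950 + ["reproducer"] * 50

for generation in range(500):
    # everyone, reproducing or not, dies with the same probability
    pop = [x for x in pop if random.random() > 0.1]
    # reproducers multiply into the freed space, up to carrying capacity
    copies = [x for x in pop if x == "reproducer"]
    pop += copies[: capacity - len(pop)]

# after enough generations, the non-reproducing organisms, although
# equally "adapted" at the start, have vanished; only reproducers remain
```

Note that nothing in the death rule favors reproducers; "fitness" emerges only from the changed context of a saturated world.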

In summary, the optimization criterion does not preexist the optimization process, unless we consider a broad existentialist criterion such as survival; but then the optimization metaphor loses its usefulness.

What is computational neuroscience? (XXV) - Are there biological models in computational neuroscience?

Computational neuroscience is the science of how the brain “computes”, that is, how the brain performs cognitive functions such as recognizing a face or walking. Here I will argue that most models of cognition developed in the field, especially as regards sensory systems, are actually not biological models but hybrid models consisting of a neural model together with an abstract model.

First of all, many neural models are not meant to be models of cognition. For example, some models are developed to explain the irregular spiking of cortical neurons, or oscillations. I will not consider them: according to the definition above, they belong to theoretical neuroscience rather than computational neuroscience. Here I consider models of, for example, perception, memory, and motor control.

An example that I know well is the problem of localizing a sound source from timing cues. There are a number of models, including a spiking neuron model that we have developed (Goodman and Brette, 2010). This model takes as input two sound waves, corresponding to the two monaural sounds produced by the sound source, and outputs the estimated direction of the source. But the neural model, of course, does not output a direction. Rather, the output of the neural model is the activity of a layer of neurons. In the model, we consider that direction is encoded by the identity of the maximally active neuron. In another popular model in the field, direction is encoded by the relative total activity of two groups of neurons (see our comparison of models in Goodman et al., 2013). In all models, there is a final step that maps the activity of neurons to the estimated sound location, and this step is not a neural model but an abstract model. This causes serious epistemological problems when it comes to assessing and comparing the empirical value of models, because a crucial part of each model is not physiological. Some argue that neurons are tuned to sound location; others that population activity varies systematically with sound location. Both are right, and thus neither observation is a decisive argument to discriminate between the models.

The same situation is seen in other sensory modalities: the output is the identity of a face, or of an odor, and so on. The symmetrical situation occurs in motor control models, where the abstract model is on the input side (mapping from spatial position to neural activity or neural input). Memory models face this situation twice, with abstract models on both the input (the thing to be memorized) and the output (the recall).

Fundamentally, this situation has to do with the fact that most models in computational neuroscience take a representational approach: they describe how neural networks represent in their firing some aspect of the external world. The representational approach requires defining a mapping (called the “decoder”) from neural activity to objective properties of objects, and this mapping cannot be part of the neural model. Indeed, sound location is a property of objects and thus does not belong to the domain of neural activity. So no sound localization model can ever be purely neuronal.

Thus to develop biological models, it is necessary to discard the representational approach. Instead of “encoding” things, neurons control the body; neurons are agents (rather than painters in the representational approach). For example, a model of sound localization should be a model of an orienting response, including the motor command. The model explains not how space is “represented”, but how an animal orients its head (for example) to a sound source. When we try to model an actual behavior, we find that the nature of the problem changes quite significantly. For example, because a particular behavior is an event, neural firing must also be seen as events. In this context, counting spikes and looking at the mutual information between the count and some stimulus property is not very meaningful. What matters is the events that the spikes trigger in the targets (muscles or other neurons). The goal is not to represent the sensory signals but to produce an appropriate behavior. One also realizes that the relation between sensory signals and actions is circular, and therefore cannot be adequately described as “processing”: sensory signals make you turn the head, but if you turn the head, the sensory signals change.

Currently, most models of cognition in computational neuroscience are not biological models. They include neuron models together with abstract models, a necessity stemming from the representational approach. Making a biological model requires including a model of the sensorimotor loop. I believe this is the path that the community should take.

New chapter : Excitability of an isopotential membrane

I have just uploaded a new chapter of my book on the theory of action potentials: Excitability of an isopotential membrane. In this chapter, I look mostly at the concept of spike threshold: the different ways to define it, its quantitative relation to different biophysical parameters (eg properties of sodium channels), and the conditions for its existence (eg a sufficient number of channels). This is closely related to my previous work on the threshold equation (Platkiewicz and Brette, 2010). It also contains some unpublished work (in particular updates of the threshold equation).

I am planning to extend this chapter with:

  • A few Brian notebooks.
  • A section on excitability types (Hodgkin classification).
  • Some experimental confirmations of the threshold equation that are under way (you will see in section 4.4.2 that current published experimental data do not allow precise testing of the theory).

I am now planning to work on the chapter on action potential propagation.

All comments are welcome.

Free the journals

These days, academic journals serve two purposes: to organize peer review, and to make an editorial selection. With the internet and in particular “preprint” servers (eg biorxiv), journals are no longer necessary for distributing academic papers. It is also worth remembering that the peer review system has not always been organized the way it is now, ie with several external reviewers, multiple revision rounds, etc. For example, Nature only introduced formal peer review in 1967. Before that, the selection was done internally by the editor.

These two missions, organizing peer review and making an editorial selection, are currently coupled. But this is neither necessary, nor a good thing. It is obvious that this coupling is not necessary. One could easily imagine a system where papers are submitted to a mega-journal (e.g. PLoS ONE, PeerJ, F1000 Research), which organizes the peer review, and then journals (e.g. Nature or Cell) make their editorial selection based on perceived “impact”, possibly using the reviews. Instead, authors must submit to each journal separately until their paper is accepted, and reviewers are asked both to check the scientific standards, which is the alleged purpose of peer review, and to judge the perceived importance of papers, which is an editorial task. This results in frustration for the authors, unnecessary delays and tremendous waste of resources. Thus, it is not a good thing.

Once a peer review system is organized (eg by mega-journals), the remaining role of journals is editorial selection, and this could be done separately. Once we realize that, it becomes clear that very little infrastructure should be needed to run a journal. A journal issue is then just a list of papers selected by an editor, put online with the appropriate links and editorial comments. I propose that every scientist, or lab, or possibly group of interest, start their own journal, which I will call a “free journal” (see my related posts here and there). Free journals are not tied to any commercial interest; they are self-managed academic journals. Crucially, the editor makes a personal selection based on her readings: papers that she personally finds interesting. This means in particular that, in contrast with most journals, the editor is highly qualified. The selection is meaningful, not a collection of thumbs up/down made by disparate reviewers based on vague criteria (“impact”). It also implies that it is totally acceptable for the editorial selection to include “preprints”: the editor is a peer and therefore any featured paper is by definition peer reviewed (ie, as in Nature before 1967). I have made my own free journal of theoretical neuroscience in this spirit. I have collected some technical advice on how to make one, and I would be happy to receive any suggestion.

The bottom line is: it is very easy to make a free journal. On the technical side, one essentially needs a blogging system, eg WordPress. This automatically generates an RSS feed (eg one post per issue). I also put all the comments associated with the selected papers on PubPeer, with a link to the journal; this way, anyone using their plugin automatically sees that the papers have been selected in the journal when looking at them on PubMed, for example. On the editorial side, it actually represents little work, and I believe that this little amount is quite interesting work to do. Every scientist reads and annotates papers. All that is required to run a free journal, then, is to set aside half a day each month to put your notes together into a post. This helps organize your bibliographical notes, and it is helpful for your colleagues and your students. One could also imagine coupling this writing with the lab’s journal club.
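
To give an idea of how little machinery is involved: an issue of a free journal is essentially one item in an RSS feed, which a blogging platform generates automatically. The sketch below only illustrates this point; the journal name and URL are made up, and in practice you would never write this by hand.

```python
# Sketch of the minimal machinery behind a free journal's feed: one RSS
# item per issue. A blogging platform generates this automatically; the
# journal name and URL here are invented for illustration.
import xml.etree.ElementTree as ET

def make_feed(journal_title, issues):
    """Build a minimal RSS 2.0 feed; issues is a list of (title, link, summary)."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = journal_title
    for title, link, summary in issues:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = summary
    return ET.tostring(rss, encoding="unicode")

feed = make_feed(
    "A free journal of theoretical neuroscience",
    [("February 2017 issue", "https://example.org/2017-02",
      "Six selected papers on spike initiation and brain energetics.")],
)
```

The point is only that the technical barrier is this low; the real work, and the real value, is the editorial reading.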

Please leave me a note if you make your own free journal; I would be happy to link to it from mine.

My free journal: Brette’s free journal of theoretical neuroscience

Update on the book

I am writing a book on the theory of action potentials. As I haven't published any new chapter for about 6 months, I will give a brief update on the progress of the book.

It turns out that writing a book is difficult (!). As I write, I get to question some aspects I had taken for granted, and I realize they are actually not that simple; I have learned a lot about biophysics in the process. In turn, this tends to make the book more complicated, so it requires some additional digestion work. I also realize that some important questions are simply unresolved and require research work. Finally, it is quite difficult to write a book while continuing the normal course of research. I started with a one-hour-per-day habit, but this may not be optimal; I tend to spend the first half-hour just getting back into the matter of the book. I am now starting a new routine of two mornings per week. We will see how it goes!

These last months I have been working on the 4th chapter, on the excitability of an isopotential membrane. It will contain in particular some of the material in Platkiewicz & Brette (2010) on the relation between biophysical parameters and spike threshold. I wanted to use the same theoretical approach for other aspects of the action potential (speed, peak, etc.), so that needed some more work. I realized that some approximations we had made could be improved, but the math becomes slightly more complicated; it is a challenge to keep this chapter simple. I also wanted to apply the theory to the original Hodgkin-Huxley model, but unfortunately it works only moderately well. One reason is that the model was fitted to the full action potential and not to initiation (as, in fact, are essentially all empirically fitted neuron models). In particular, Hodgkin and Huxley's experiments show that the Na+ conductance depends exponentially on voltage at low voltage, but their model doesn't (or does only approximately, with a different factor). Another reason is that the K+ channel in their model has less delay than in the actual squid axon (which they acknowledge), so there is some interaction with the initial Na+ current. So I will go with a simpler approach, using a simplified excitability model. Neurons are not isopotential anyway.

I am also planning to reorganize and partly rewrite chapters 2 and 3. I find the current chapter 3 (action potential of an isopotential membrane) a bit too technical. I also want to change chapter 2 (membrane polarization) to talk about alternative theories of the membrane potential (Tasaki, Ling, etc). Then I want to insert a chapter on the ionic channel theory of action potentials, which explains the theory and discusses empirical evidence, before the chapter on the Hodgkin-Huxley model. Generally, I want to simplify the exposition. But given my rather slow progress on the book so far, I will probably postpone this and first write drafts of the following chapters.

Finally, I have worked a bit on energy consumption and pumps, and I found out that the current treatment in the literature is not entirely satisfactory (see for example my comments on a classical paper on the subject). It turns out that it is a pretty complicated problem (especially systems of pumps).

In brief, I am trying to finish a first version of the chapter on excitability of an isopotential membrane, hopefully within one month.

Brette’s micro-journal of theoretical neuroscience - February 2017

This is the first issue of my journal of theoretical neuroscience.

 

Editorial

This month, I have selected 4 papers on spike initiation (1-4), 1 classical paper on the theory of brain energetics (5), and 1 paper on bibliometrics (6). Three of the papers on spike initiation (1-3) have in common that they are about the relation between geometry (morphology of the neuron and spatial distribution of channels) and excitability. Spikes are initiated in a small region called the axon initial segment (AIS), and this region is very close to the soma. Thus there is a discontinuity in both the geometry (big soma, thin axon) and the spatial distribution of channels (lots in the AIS). This discontinuity has a great impact on excitability, but it has not been explored very deeply in theory. In fact, as I have discussed in a recent review (Brette, 2015), most theory on excitability (dynamical systems theory) has been developed on isopotential models, and so is largely obsolete. So there is much to do on spike initiation theory that takes into account the soma-AIS system.

 

Articles

1. Evans MD, Tufo C, Dumitrescu AS and MS Grubb. (2017). Myosin II activity is required for structural plasticity at the axon initial segment. (Comment on PubPeer)

A number of studies have shown that the AIS can move over hours or days, with various manipulations such as depolarizing the neuron (as in this study) or stimulating it optogenetically. Two questions remain open: what are the molecular mechanisms involved in this displacement? And is it actually a displacement, or is it just that material is removed at one end and inserted at the other? The same lab previously addressed the first question, showing the involvement of somatic L-type calcium channels and calmodulin. This study shows that myosin (the stuff of muscle, although not the type expressed in muscles) is involved, which strongly suggests that it is an actual displacement; this is in line with previous studies showing that dendrites and axons are contractile structures (e.g. Roland et al. (2014)). This and previous studies start to provide building blocks for a model of activity-dependent structural plasticity of the AIS (working on it!).

 

2. Hamada M, Goethals S, de Vries S, Brette R, Kole M (2016). Covariation of axon initial segment location and dendritic tree normalizes the somatic action potential. (Comment on PubPeer)

Full disclosure: I am an author of this paper. In the lab, we are currently interested in the relation between neural geometry and excitability. In particular, what is the electrical impact of the location of the axon initial segment (AIS)? Experimentally, this is a difficult question because manipulations of AIS geometry (distance, length) also induce changes in Nav channel and other channel properties, in particular phosphorylation (Evans et al., 2015). So this is typically a good question for theorists. I have previously shown that moving the AIS away from the soma should make the neuron more excitable (lower spike threshold), everything else being equal (Brette, 2013). Here we look at what happens after axonal spike initiation, when the current enters the soma (I try to avoid the term “backpropagate”, see Telenczuk et al., 2016). The basic insight is simple: when the axonal spike is fully developed, the voltage gradient between the soma and the start of the AIS should be roughly 100 mV, so the axonal current into the soma should be roughly 100 mV divided by the resistance between the soma and the AIS, which is proportional to AIS distance. Next, to charge a big somatodendritic compartment, you need a bigger current. So we predict that big neurons should have a more proximal AIS. This is what the data obtained by Kole’s lab show in this paper (along with many other things, as our theoretical work is a small part of the paper; as is often the case, most of the theory ended up in the supplementary material).
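
The order of magnitude is easy to check with the resistance of a cylinder. The sketch below is only an illustration of the argument: the parameter values (axial resistivity, axon diameter) are typical textbook numbers, not values taken from the paper.

```python
# Back-of-the-envelope check of the argument above: the axial resistance
# of the axon between soma and AIS grows linearly with AIS distance, so
# the current entering the soma scales as 1/distance. Parameter values
# are typical textbook numbers, assumed for illustration.
import math

Ri = 1.5    # axial resistivity, ohm*m (ie 150 ohm*cm)
d = 1e-6    # axon diameter, m (assumed 1 um)
dV = 0.1    # soma-AIS voltage gradient at the spike peak, V (~100 mV)

def axial_resistance(x):
    """Resistance of an axonal cylinder of length x (m), in ohms."""
    return 4 * Ri * x / (math.pi * d ** 2)

def somatic_current(x):
    """Current into the soma (A): inversely proportional to AIS distance x."""
    return dV / axial_resistance(x)

# For an AIS at 30 um this gives about 1.7 nA; doubling the distance
# halves the current available to charge the soma, hence the prediction
# that bigger neurons need a more proximal AIS.
I_proximal = somatic_current(30e-6)
I_distal = somatic_current(60e-6)
```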

 

3. Michalikova M, Remme MWH and R Kempter. (2017). Spikelets in Pyramidal Neurons: Action Potentials Initiated in the Axon Initial Segment That Do Not Activate the Soma. (Comment on PubPeer)

Using simulations of detailed models, the authors propose to explain the observation of spikelets in vivo (small all-or-none events) by the failed propagation of axonal spikes to the soma. Under certain circumstances, they show that a spike generated at the distal axonal initiation site may fail to reach the somatic threshold for AP generation, so that only the smaller axonal spike is observed at the soma. This paper provides a nice overview of the topic and I found the study convincing. There is in fact a direct relation to our paper discussed above (Hamada et al., 2016): this study shows how the axonal spike can fail to trigger the somatic spike, which explains why the AIS needs to be placed at the right position to prevent this. One can argue (speculatively) that if AIS position is indeed tuned to produce the right amount of somatic depolarization, then sometimes this should fail and result in a spikelet (algorithm: if no spikelet, move AIS distally; if spikelet, move AIS proximally).
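
The speculative tuning rule at the end of the paragraph can be written down directly. In the sketch below, the somatic drive of the axonal spike is simply assumed to scale inversely with AIS distance, in line with the reasoning of Hamada et al. (2016) above; all numbers are arbitrary assumptions.

```python
# Toy sketch of the speculative AIS tuning rule: the somatic drive of the
# axonal spike is assumed to scale as 1/distance, and a "spikelet" occurs
# when that drive falls below the somatic threshold. Numbers are arbitrary.

drive_at_1um = 100.0  # somatic drive for an AIS 1 um from the soma (a.u.)
threshold = 2.0       # somatic spike threshold (same arbitrary units)
step = 1.0            # AIS displacement per event, um

def spikelet(distance):
    """True if the axonal spike fails to trigger a somatic spike."""
    return drive_at_1um / distance < threshold

def update_ais(distance):
    """Spikelet: move the AIS proximally; full spike: move it distally."""
    return max(1.0, distance - step) if spikelet(distance) else distance + step

distance = 10.0
for _ in range(100):
    distance = update_ais(distance)
# the rule settles around the critical distance drive_at_1um / threshold
# (50 um here), where somatic activation is just barely achieved, so
# occasional spikelets are an expected by-product of the tuning
```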

 

4. Mensi S, Hagens P, Gerstner W and C Pozzorini (2016). Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons. (Comment on PubPeer)

I had to love this paper, because the authors basically experimentally confirm every theoretical prediction we had made in our paper on spike threshold adaptation (Platkiewicz and Brette, 2011). Essentially, what we had done is derive the dynamics of spike threshold from the dynamics of Nav channel inactivation. There were a number of non-trivial predictions, such as the shortening of the effective integration time constant, sensitivity to input variance, the specific way in which spike threshold depends on membrane potential, and the interaction between spike-triggered and subthreshold adaptation (that we touched upon in the discussion). This study uses a non-parametric model-fitting approach in cortical slices to empirically derive the dynamics of spike threshold (indirectly, based on responses to fluctuating currents), and the results are completely in line with our theoretical predictions.

 

5. Attwell D and SB Laughlin (2001). An energy budget for signaling in the grey matter of the brain (Comment on PubPeer and Pubmed Commons).

This is an old but important paper on the energetics of the brain, in particular: how much does it cost to maintain the resting potential? How much does it cost to propagate a spike? The paper explains some theoretical ideas for doing these estimations, and is also a good source of relevant empirical numbers. It is important, though, to look at follow-up studies, which have addressed some of its issues; for example, action potential efficiency is underestimated in this study. One problem in this study is the estimation of the cost of the resting potential, which I think is just wrong (see my detailed comment on Pubmed Commons; I believe the authors will soon respond). Unfortunately, I think it is really hard to estimate this cost by theoretical means; it would require knowing the permeability at rest to various ions, most importantly in the axon.

 

6. Brembs B, Button K and M Munafò (2013). Deep Impact: Unintended consequences of journal rank. (Comment on PubPeer)

The authors look at the relation between journal rank (derived from impact factor) and various indicators, for example effect sizes reported, statistical power, etc. In summary, they found that the only thing journal rank strongly correlates with is the proportion of retractions and frauds. Another interesting finding is about the predictive power of journal rank on future citations. There is obviously a positive correlation since impact factor measures the number of citations. But it is really quite small (see my post on this). What is most interesting is that the predictive power started increasing in the 1960s, when the impact factor was introduced. This strongly suggests that, rather than being a quality indicator, the impact factor biases the citations of papers (increases the visibility of otherwise equally good papers). This paper also shows evidence of manipulation of impact factors by journals (including Current Biology, whose impact factor went from 7 to 12 after its acquisition by Elsevier), and is generally a good source of references on the subject.

My appeal to PLoS Computational Biology

I recently reported that one of my papers has been rejected by PLoS Computational Biology after 10 months and 4 revisions, on the ground of general interest. This has generated a little buzz. A colleague mentioned it on his blog. As a result, the editor of my paper contacted him directly to tell his version of the story, which my colleague has now reported on his blog.

Unfortunately, the editor’s story is “misleading”, to be polite. It is a shame that the review process is confidential, as it allows the journal to hide what actually happens behind its closed doors. Nevertheless, I have asked the journal for authorization to publish the content of my appeal and their response, in which I explain what happened in more detail (and more accurately). They have accepted. I have removed the names of the persons involved. This illustrates one of the flaws of the current peer-review system (see this post for how it could work better).

(Just one note: the editor has apparently told my colleague that the third reviewer was a collaborator, so they could not take into account his review. Well, that’s a lie. I know because he chose to sign his review. The "collaboration" was the scientist sending me published data.)

So here it is.

Appeal

Re: Manuscript PCOMPBIOL-D-16-00007R4

So after 10 months and 4 revisions, our paper has been rejected, following the recommendation of one reviewer, because it is not considered of broad enough interest. I quote from the final editorial message: “We regret that the specific hypothesis that your manuscript is geared to dispute does not reach that level of general interest.”.

These facts being recalled, it should be obvious enough that the editorial process has gone very wrong. There were no remaining technical criticisms after revision 2, on July 8th, 4 months ago, and the paper should have been accepted then. I have repeatedly asked the editors to explain why we were required to justify novelty and significance after having been required to do so much work on technical aspects. But the editors have refused to answer this simple query. Frankly, I was expecting a bit more respect for the authors who make this journal, and I do not think that explaining the journal’s policy and decisions is too much to ask. All I know is the view of Michael Eisen, a founding editor of this journal, who commented: “I agree - a paper going out for review should mean it is of interest”.

This editorial process has gone beyond anything I have ever witnessed in my career in terms of absurdity and waste. Why scientists (“peers”) would voluntarily make each other’s life so unnecessarily hard instead of cooperating and debating is beyond my understanding. In the end it appears that the ego of one (important?) reviewer matters more than science, and that is very sad. This being said, I have been notified that appeals are only considered when “a) a reviewer or editor is thought to have made a significant factual error” or “b) his/her objectivity is compromised by a documented competing interest”, and since bureaucracy apparently beats reason and ethics, I will now explain how this applies.

I have already explained at length the factual errors of the first reviewer, who is apparently the only one that is trusted by the editors. This editorial message repeats some of them (no, we are not criticizing simulation results of a particular model, but the biophysical interpretation (what goes on physically), and we did so in several state-of-the-art biophysical models, not one). I will therefore focus on case (b), and attach my previous letter to the editors for reference; please also read the responses to reviewers as regards case (a), in particular to reviewer-editor Dr. YYY who has unfortunately not cared to reply to our point-by-point response that he had required from us. The editorial decisions that have led to rejecting the paper on the basis of general interest after 10 months are so bizarre that I am compelled to question senior editor Dr. YYY’s objectivity – I presume that Dr. XXX, who sent the paper for review in the first place, does consider the paper of interest. The sequence of facts speaks for itself:

- On June 6th (revision #2), the editorial message reads “We understand that Reviewer 2 was very enthusiastic, and Reviewer 3 had relatively minor comments, but we both stress that addressing Reviewer 1's reservations are essential. Indeed, it is only fair to say that it seems to us that it will be challenging to address these comments in the context of the presented results.”. The exclusive reliance on one reviewer and the presumption that we could not address the comments is rather surprising. Nonetheless, the editorial message that followed was exclusively about the match with experimental data, not about interest (“the reviewer's point about (apparently) unrealistic voltage dependencies of the currents […]”). We did successfully address these comments, pointing out that the reviewer had made factual errors (such as misreading the figure he was commenting, and discussing the results of an experimental paper he had not opened).

- On July 8th (revision #3), the editorial message was now asking us to explain the novelty compared to what we had done in the past (and published in the same journal), blindly following the 3-sentence report of reviewer #1, and making no mention whatsoever of the fact that we had just answered the major (and flawed) criticisms on experimental observations, which constituted the previous editorial message. At this point we complained that we were being asked to justify the novelty of our study 7 months after submission, especially when it was explicit in the introduction; nonetheless, we complied and explained again.

- On August 25th (revision #4), we were appalled to read that, instead of finally accepting the paper, senior editor Dr. YYY had decided to nominate himself as a reviewer, admitting that “the latest revision is first one he has had the chance to read”. The report was not an assessment of the novelty of the paper, as would have been logical since this was what the previous editorial message was about. Instead, it was a six-page report full of technical queries, raising negative criticisms, most of which had already been addressed in previous rounds, and asking for substantial modifications of the paper.

- At this point, I replied to the editorial message and obtained no response; as the message stated “If you would like to discuss anything, please don't hesitate to contact either of us directly”, I emailed Dr. YYY, and he started his response as follows: “To answer your email, allow me to be brief, because this sort of exchange should really be going through the journal, and indeed that will be the case from now on.”. Nonetheless, we exchanged a few emails, in which he offered no explanation; in the end we agreed that I would write a point-by-point response to his six-page review, but not modify the paper. I submitted it, together with a response to the first reviewer, and a letter to the editors, on September 22nd.

- Three weeks later, on October 10th, I received a message asking me to edit the letter so that it could be passed on to the reviewers. Apparently the editors had not noticed the response to reviewers. It still took them three weeks to read a letter, which, considering the history of this paper, does not strike me as very respectful. I complained to Dr. YYY, who replied “We believe that you have been adequately notified by the PLoS administrative team concerning the status of your revision.”. I had to exchange several emails with Dr. XXX, who realized the error. I received no apology from Dr. YYY.

- On November 11th, I received the reject decision, together with the responses of reviewer #1 and, oddly enough, of reviewer #3, to whom I had not replied (since there were no remaining comments). He too was surprised, since he wrote “I don’t have the expertise, authority or, honestly, the time to judge whether the new comments from Reviewers 1 & 4 are fair, or whether the authors’ responses have fully addressed them – this is clearly a job for the Editors (although hopefully not for the Editor who just became a Reviewer)”. But editor-reviewer #4, Dr. YYY, did not bother replying to my point-by-point response, which he had explicitly required.

- The final decision comes with excuses that are frankly hard to swallow. One is that the editors had failed to see the word “models” in the title. In 10 months and 4 revisions! Who can seriously believe that? And yes, the paper is about models – it is a computational biology journal (note that we have also successfully related the models to experimental observations, at the request of the editors). The other excuse is that an anonymous reviewer (reviewer #2) had a conflict of interest and his reviews had to be dismissed. I am of course fine with that decision (let me simply state for the record that none of the reviewers I suggested is in such a position). But this happened in April, more than 6 months ago. Quite appropriately, the editor Dr. XXX asked for another reviewer, who identified himself (Dr. ZZZ). Dr. ZZZ wrote a positive review, and in addition he read our responses to the other reviewers and wrote “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.”. At this point, given that no objection had been raised by any reviewer or editor on methods, results or clarity, the paper should have been accepted. Instead, the editors decided to follow a nonsensical comment from reviewer #1 alone: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists”, which was not even consistent with his/her own first positive assessment (“this is an interesting paper”). Given that Dr. XXX sent the paper for review in the first place, this decision must originate from Dr. YYY (who at this point had not read the paper, by his own admission). I am compelled to conclude that Dr. YYY has not been objective, and in fact has been actively blocking our paper. Unfortunately, this is not the first time I have witnessed the questionable attitude of Dr. YYY, as he has recently been a reviewer for an essay I wrote.
The review process was extremely long, going over multiple rounds with massive lists of requests, in which Dr. YYY essentially wanted to rewrite the text to follow his own views and style. During the review process, Dr. YYY contacted me directly by email to discuss the paper, going so far as to ask for co-authorship (“Indeed, the level of suggestions are approaching collaboration on this paper- something I would be happy with but I assume is not what you have in mind.”). In the same email, and while the review process was not over, he asked me for an experimental collaboration – which of course I have not followed up on. I had to ask the editor to intervene to stop the madness – which he did: “Indeed your paper has been unduly delayed and I have asked the reviewer to answer me within 24 hours.”. I apologize for disclosing these email excerpts, but I have no other choice since I am asked to provide documentation. It is clear that, had I imagined that Dr. YYY could be chosen as a reviewer (which seemed unlikely given his recent track record), I would have opposed him. But I did not anticipate that he would nominate himself, or overturn the editor’s decision without even reading my paper (by his own admission).

Therefore, I am asking that Dr. YYY be replaced by a new senior editor with a more reasonable attitude.

As far as I can see: 1) the three reviewers were initially positive about the interest of the paper; 2) the editor Dr. XXX, who as far as I can tell is the only scientist involved in this process who is a member of the computational biology community, supported our paper, since he sent it for review; 3) one reviewer, who seems to be an experimental electrophysiologist (unfortunately he or she has decided to remain anonymous), reversed his/her subjective opinion on the paper’s interest after we pointed out the errors in his/her report, and even then still judged the paper interesting for the computational neuroscience community. I fail to see how the decision was “not trivial to reach”.

Best regards,

Romain Brette

Attached: Letter to the editors from September 16th

 

Letter to the editors, September 16th

Dear Editors,

In the previous revision, I raised serious objections regarding the abusive attitude of reviewer #1. These objections have apparently been completely dismissed, but what I have been most disappointed about is the total lack of any response to them. I am writing this letter in the hope that this time it will be given some consideration.

This manuscript was submitted 8 months ago. This is the fourth major revision we have been asked to make. The responses now total more than 25 pages, much longer than the article itself. We have entered a phase where a large part of the responses consists of citing previous revisions in which the issues have already been addressed. This revision reaches a new level, in which a fourth reviewer is added and mostly repeats questions that we have already answered in previous revisions. Why a fourth reviewer is considered necessary after 8 months of revision is not clear, when none of the three reviewers has raised any serious concern.

I have officially asked for a detailed explanation of this peculiar decision. The only response I have obtained so far is that there was “a tie” between “conflicting reviews”. So apparently the editorial decision has been based on a vote between reviewers. Yet this is what I read on the journal’s website:

If reviewers appear to disagree fundamentally, the editors may choose to share all the reviews with each of the reviewers and by this means elicit additional comment that may help the editors to make a decision. That said, decisions are not necessarily made according to majority rule. Instead, the editors evaluate the recommendations and comments of the reviewers alongside comments by the authors and material that may not have been made available to those reviewers.

If one followed this process, then one would realize that:

- None of the three reviewers has any remaining objection about results, methods, or clarity of the text.

- Reviewer #2 and #3 have an overall very positive assessment of the paper and in particular of its interest. Rev #2: “This is a great revision. The authors have clarified and addressed all my previous concerns. […] I strongly believe the study is publishable as it stands”; Rev# 3: “This is a very clear and logically presented manuscript dealing with a key question in fundamental cellular neuroscience”.

- On his/her first report, reviewer #1 also made a positive assessment of the paper and of its interest: “This is a clearly written manuscript that addresses an interesting question regarding the nature of spike initiation. Specifically, the authors propose a plausible explanation […] This is an interesting paper.”.

- After two rounds of technical revisions, in which we pointed out the reviewer’s errors and to which no objection has been made, reviewer #1 changed his mood and now concludes, without any argument: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (sic).

- Reviewer #3 has read our responses to the two other reviewers and concluded: “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.”

From these facts, it appears clearly that there are in fact 3 convergent reviews. All 3 reviewers have concluded that the results and methods are rigorous and the text is well written. All 3 reviewers have found the paper interesting. It might be that reviewer #1 has “voted” negatively; however, I would expect the editorial decision to be based on the content of the reviews and responses, which in this case is convergent, and not on the mood of one reviewer, which in this case is inconsistent between reports. It is my understanding that an editorial decision should be based on arguments and facts, not on a reviewer’s emotions.

Nonetheless, we have replied in detail, again, to all criticisms. We have pointed out in particular the factual errors of reviewer #1. To help the editors, we have underlined the important points. We would appreciate it if the editors checked for themselves whether reviewer #1 is right or not. We have also replied to reviewer/senior editor Dr. YYY, although I deeply regret that this fourth version is “the first one he has had the chance to read”.

Finally, I would like to call your attention to the conclusion of reviewer #1, on which his/her recommendation is based, and which requires in my opinion a clarification from the journal: “Finally, now in their third revision, the authors acknowledge that this work strongly builds on the previous resistive-coupling hypothesis, and tests whether this hypothesis is compatible with sharp spike onset (a view they have already proposed), vs the alternative proposed by Yu, of back propagation. This very specific theoretical result I feel is unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (Please see also our response, pointing out that the said acknowledgement was already clear in the very first version.)

This recommendation makes some important presumptions about this journal’s editorial views. Therefore I would very much like to know if this journal:

- also considers that proposing a hypothesis is more important than testing one, and that only the former should be published;

- considers that interesting computational neuroscience studies do not belong in this journal. I would also very much like to know if this journal considers it acceptable for a reviewer to ask for substantial technical revisions when he/she has already decided that the paper should not be published anyway. This has indeed been a lot of work for a decision ultimately based on the mood of one reviewer.

As I have argued in this letter, it is very clear that, given the content of the reports of the 3 reviewers and of our responses, this manuscript should have been accepted already. After 8 months and 4 revisions, and no serious objection on the manuscript, I can only hope very much that this journal does not confuse rigorous peer review with author harassment.

Again, I hope that this letter will be taken seriously into consideration, and perhaps even responded to.

Best regards,

Romain Brette

 

Response of the editors-in-chief

Dear Dr. Brette,

Thank you for your response to the recent decision on your paper “The Basis of Sharp Spike Onset in Standard Biophysical Models”. The manuscript and your appeal letter have been carefully evaluated by Dr. XXX and the journal’s Editors-in-Chief.

We understand your frustration regarding the length and complexity of the review process, and we would like to apologize for the time taken to reach a final decision.

We would like to provide some further clarification on how the editorial decision was reached. The manuscript addresses the issue – how do cortical neuronal action potentials rise so sharply? – and after an initial evaluation, Dr. XXX found it interesting enough to merit sending out for review, so that the reviewers could assess the technical solidity of the work and the conceptual advance proposed. The paper received mixed reviews, and hence merited a revision. After several rounds of revision, Reviewer 1 remained unconvinced. In order to aid the review process, Dr. YYY volunteered to evaluate the paper in depth, and his opinion concurred with that of Reviewer 1. Dr. XXX also re-read the paper and came to the conclusion that this manuscript is critically close conceptually to the previous PLOS publications – in fact the idea was laid out clearly and beautifully in the 2013 and 2015 PLOS papers. The present manuscript is an implementation of this idea, showing that other biophysically realistic models used to examine the spike sharpness issue show the mechanism that was suggested in the 2013 and 2015 PLOS papers.

We regret that this did not become fully clear before the third revision, and we understand your disappointment with the final outcome.

However, we agree that the findings of the paper are not significant enough for PLOS Computational Biology, and we will not be reconsidering the paper. We are sorry not to be more encouraging, but we hope that you can understand the reasons for this decision.

My new year resolution: to help move science to the post-journal world

I wish I could make the world a better place. I would like to prevent climate change and wars; but that’s not so easy on a personal level. What I can try to do more modestly, as a scientist, is to make the scientific world a better place. We have all heard the phrase “publish or perish”. We all complain that careers are made by publishing in “high-impact” journals that favor storytelling and grand claims, and generally select papers arbitrarily (they do not even predict the impact of the papers they publish); a trend that has grown increasingly strong and has a very negative impact on how research is done, including serious ethical problems. But what do we do concretely about it? For most of us, including myself, not much. We keep on submitting to those journals, and we say we have no choice because that is how we are evaluated (for positions or grants). But who evaluates us? Surely there are some political aspects to it (mostly for funding), but the truth is, we are evaluated by peers. In brief, we are inflicting this madness on ourselves.

So: let us stop complaining and try to change the situation. I have previously exposed a vision of how the academic publishing system could look without journals (by the way, this is not an original thought, fortunately). How to make it happen?

Where we are now

We should be hopeful, because many good things are happening:

  • Preprint servers are getting more and more attention. In biology, a number of journals now accept direct submissions from bioRxiv, including all the PLoS journals, PNAS, eLife, and the Journal of Neuroscience. This ought to boost submissions of preprints.
  • A few journals have started publishing the reviews along with the accepted paper, for example eLife, eNeuro and Nature Communications.
  • More generally, open access to both paper and data is getting more and more common and enforced.
  • A new journal, F1000 Research, now practices post-publication review. The paper is indexed in PubMed once two reviewers have approved it.
  • Very significantly, the Wellcome Trust has opened a journal for its grantees, Wellcome Open Research, based on post-publication review (in partnership with F1000), with this statement “The expectation is that this, and other similar funder platforms that are expected to emerge, will ultimately combine into one central platform that ensures that assessment can only be done at the article level”.
  • Finally: PubPeer, started just a few years ago. A simple idea: let anyone comment on any paper, anonymously or not, and let the authors know and respond. You should install their browser plugin. This is an individual initiative, but it has already made a big impact, in particular by showing that the “big journals” are no better than the others at preventing flaws or fraud. It also addresses the concern that open reviews would be too nice: anyone who finds serious flaws can point them out anonymously, and the authors will have to consider them. PubMed Commons is similar, but with signed comments.

What we can do now on a personal level

  1. Put every single paper you write on a “preprint” server before you submit to a journal.
  2. Put all your data online, see e.g. OpenAIRE.
  3. Remove journal names from the publication list on your website. People who care about them will find out anyway.
  4. Start a literature search routine that does not involve looking at tables of contents; a few ideas in this Twitter thread; you could also have an author alert on Google Scholar.
  5. Write comments on PubPeer, including on preprints.
  6. Send your papers to a journal with open post-publication review. I know this one is difficult, because the community still cares about impact factors. But at least you can favor journals with public reviews (e.g. eLife, Nature Communications; I would prefer the former as it is non-profit). Instead of sending your papers to Frontiers, send them to F1000 Research, or at least eNeuro.

At the local community level, we can advocate for post-publication review. For example, the INCF has opened a channel on F1000 Research. Maybe we could have a computational neuroscience channel there, sponsored by the OCNS. It is too bad that F1000 Research is for-profit rather than institutional, but currently I do not know of other options.

What we can do on a more global scale

Open post-publication review potentially addresses the issue of recognition, but it does not address the issue of visibility. One concern I have about submitting to F1000 Research (for example) is that my intended readership will not know about it. There are so many papers published each year that one does not even have time to read all their titles. This is one role journals have fulfilled: selecting papers worthy of interest for a given community. But since we no longer need journals to publish, editorial selection and publication need not be coupled anymore.

So here is my proposal. We make an independent website that lets any scientist, or possibly any group of scientists, be their own journal. That is, make a selection of papers they find interesting (including preprints). We provide a number of tools to make this as simple as possible: linking to PubPeer and PubMed Commons, searching/browsing, alerting authors whose work is selected, email alerts and RSS feeds, etc. Papers are preferentially linked to the preprint if it exists, so as to completely bypass the journals. We could also let authors suggest their own paper for editorial selection. Basically, we provide all the services a journal typically has. This will become increasingly easy as public open reviews become more widespread.

These new “journals” could be run by an individual scientist, or a lab (e.g. linked to a journal club), or possibly a scientific society or group of scientists. Let us call any of these an “editor”. I would be happy, for example, to follow the selections of a few authors I respect, and that would probably be more valuable to me than the selection made by any journal, of which very few papers typically catch my attention in a given table of contents.
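To make the “editor channel” idea concrete, here is a minimal sketch of one of the tools mentioned above: an RSS feed generated from an editor’s selection of papers. The function name and data layout are hypothetical, chosen only for illustration; the point is how little infrastructure such a channel actually needs (this uses only the Python standard library).

```python
import xml.etree.ElementTree as ET

def build_channel_feed(editor, selections):
    """Build a minimal RSS 2.0 feed for a personal editorial channel.

    `selections` is a list of dicts with 'title' and 'link' keys,
    where 'link' points preferentially to the preprint version,
    so that the channel bypasses journal paywalls.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Selections by {editor}"
    for paper in selections:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = paper["title"]
        ET.SubElement(item, "link").text = paper["link"]
    return ET.tostring(rss, encoding="unicode")
```

Any feed reader could then subscribe to such a channel, exactly as one subscribes to a journal’s table of contents today.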

I am hoping that it goes as follows:

  1. People start using these individual journals, because it provides relevant information to them.
  2. As a result, papers in less well-known journals and preprints start getting more attention, and more citations.
  3. People take the habit of putting their papers on preprint servers because they get immediate attention.
  4. Editors progressively stop selecting papers published in journals because they have already selected them when they were preprints.
  5. As editors are also committee members, journal names start to matter less in evaluating research.
  6. Traditional journals disappear; instead, we have direct publication (formerly known as preprints) + open public reviews, both anonymous and signed.

How to get it started?

One simple idea to get started is to make automatic channels for the existing conventional journals. For each journal, we list the table of contents, linked to preprint versions instead of the journal website, and to PubPeer, etc. If it is convenient, people might start to use it, especially if it allows free access to the papers (legally, since we would use preprints). Then, to get people to use the non-conventional channels, we provide suggestions based on content similarity (i.e. “you may also like...”).
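As a rough illustration of the matching step this would require (linking an entry in a journal table of contents to its preprint version), here is a hedged sketch based on fuzzy title comparison. The function name and the 0.9 similarity threshold are assumptions made for the example; a real system would also match on DOIs and author lists rather than titles alone.

```python
from difflib import SequenceMatcher

def match_preprint(article_title, preprint_titles, threshold=0.9):
    """Return the preprint title most similar to a journal article title,
    or None if nothing exceeds the similarity threshold.

    Titles often differ slightly between preprint and journal versions,
    hence a fuzzy ratio rather than exact comparison.
    """
    best, best_score = None, 0.0
    for candidate in preprint_titles:
        score = SequenceMatcher(None, article_title.lower(),
                                candidate.lower()).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best if best_score >= threshold else None
```

A channel builder would run this over each new table of contents and link to the preprint whenever a confident match is found, falling back to the journal page otherwise.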

How about this resolution?