Are journals necessary filters?

In my previous post, I argued that one reason why many people cling to the idea that papers should be formally peer-reviewed before they are published, cited and discussed, despite the fact that this system is a recent historical addition to the scientific enterprise, is a philosophical misunderstanding about the nature of scientific truth. That is, the characteristic of science, as opposed to religion, is that it is never validated; it can and must be criticized. Therefore, no amount of peer reviewing can ever be a stamp of approval for “proven facts”. Instead what we need is public discussion of the science, not stamps of approvals.

In response to that post, another common argument came up for why many people think it’s important to have journals that select papers after peer review. The reason is that we are flooded with millions of papers and you can’t read everything, so you need some way to know which papers are important, based on peer review. So this is not just about peer-reviewing before publishing, but also about the hierarchy of journals. Journals must make an editorial selection so that you don’t have to waste your time reading low-quality or uninteresting papers. What this means, quite literally, is that you only read papers from “top journals”.

Here I want to show that this argument is untenable: selecting what to read based on journal names is neither what scientists should do nor what they actually do, and the argument is logically inconsistent.

Why is it logically inconsistent? If the argument is correct, then papers accepted in lower-ranked journals should not be read, because they are not worth reading. But in that case, why publish them at all? There seems to be no reason for the existence of journals whose papers nobody reads, on the grounds that nobody has time to read bad papers. If we argue that those journals should exist because in some cases some of their papers are worth reading, for whatever reason, then we must admit that we don’t actually use journals as filters, or that we should not use them as filters (see below).

Is it good scientific practice to use journal names as filters? What this implies is that you ignore any paper, including papers in your field that are potentially relevant to your own studies, that is not published in a “top journal”. For example, you would not cite a relevant study if it’s not from a top journal. It also means that you don’t check whether your own work overlaps with other studies, so you potentially take credit for ideas that you were not the first to have. Is this a professional attitude?

If in fact you don’t totally ignore those lower journals, then you don’t actually use journal names as a filter: you do look at the content of papers independently of the journal they are published in. Which is my final point: using journal names as filters is not the normal practice of scientists (or maybe I’m optimistic?). When you look for relevant papers on your topic of interest, you typically do a search (e.g. on PubMed). Do you only consider papers from “top journals”, blindly discarding all others? Of course not. You first look at the titles to see whether they might be relevant; if so, you read the abstract; if the abstract is promising you might open the paper and skim through it, and possibly read it carefully if you think it is worth it. Then you will look at cited papers, or at papers that cite the interesting paper you just read; or you will read a review; maybe a colleague or your advisor will suggest a few readings. In brief: you do a proper bibliographical search. I cannot believe that any good scientist considers that doing a bibliographical search consists in browsing the tables of contents of top journals.

The only case in which you do use journal names to select papers is when you read the tables of contents of a few selected journals every month. How much of this accounts for the papers that you cite? You can get a rough idea by looking at the cited half-life of papers or journals: for Cell, it’s about 9 years. I personally also follow new papers on bioRxiv using keywords, whereas most new papers in any given journal are irrelevant to me, because journals cover too many topics.

In summary: using journals as filters is not professional, because it means poor scholarship and misattribution of credit. Fortunately, it’s not what scientists normally do anyway.

One related argument that came up in the discussion of my previous post is that having papers reviewed post-publication could not work because it would be too much work, and consequently most papers would not be reviewed, whereas at least in the current system every paper is peer-reviewed. That is wrong in several ways. First, you can have papers published and then peer-reviewed formally and publicly (as in F1000 Research), without this being coupled to editorial selection. Second, if anything, having papers submitted a single time instead of many times to different journals means less work for reviewers, not more. Third, what exactly is the advantage of having each paper peer-reviewed if it is argued that those papers should not be read or cited? In the logic where peer review in “good journals” serves as a filter for important papers, it makes no difference whether the unimportant papers are peer-reviewed or not, so this cannot count as a valid argument against post-publication review.

All this being said, there is still a case for editorial selection after publication, as one of the many ways to discover papers of interest, see for example my free journal of theoretical neuroscience.

The great misunderstanding about peer review and the nature of scientific facts

Last week I organized a workshop on the future of academic publication. My point was that our current system, based on private pre-publication peer review, is archaic. I noted that the way the peer review system is currently organized (where external reviewers judge both the quality of the science and the interest for the journal) represents just a few decades in the history of science. It can hardly qualify as the way science is or should be done. It is a historical feature. For example, only one of Einstein’s papers was formally peer-reviewed; Crick & Watson’s DNA paper was not formally peer-reviewed. Many journals introduced external peer review in the 1960s or 1970s to deal with the growth in the number and variety of submissions (see e.g. Baldwin, 2015); before that, editors would decide whether to publish the papers they received, depending on the number of pages they could print.

Given the possibilities that the internet offers, there seems to be no reason anymore to couple the two current roles of peer review: editorial selection and scientific discussion. Researchers could simply share their work online, get feedback from the community, and then let people recommend papers to their colleagues and compile all sorts of reader’s digests. No time wasted in multiple submissions, no prestige misattributed to publications in glamour journals, which do no better a job than any other journal at spotting errors and fraud. Just the science, and the public discussion of science.

But there is a lot of resistance to this idea: many people insist that papers should be formally approved by peer reviewers before they are published, because otherwise, they claim, the scientific world would be polluted by all sorts of unverified claims. It would not be science anymore, just gossip. I have attributed this attitude to conservatism, first because, as noted above, this system is a rather recent addition to the scientific enterprise, and second because papers are already published before peer review: we call them “preprints”, but really these are scientific papers made public, so by definition they are published. I follow the preprints in my field and I don’t see any particular loss in quality.

However, I think I was missing a key element. The more profound reason why many people, in particular experimental biologists, are so attached to peer review is, in my view, that they hold naive philosophical views about the notion of truth in science. A paper should be peer-reviewed because otherwise you can’t cite it as a true fact. Peer review validates science, thanks to experts who make sure that the claims of the authors are actually true. Of course it can go wrong and reviewers might miss something, but that is the purpose of peer review. This view is reflected in the tendency, especially in biology journals, to choose titles that read like established truths: “Hunger is controlled by HGRase”, instead of “The molecular control of hunger”. Scientists and journalists can then write revealed truths with a verse reference, such as “Hunger is controlled by HGRase (McDonald et al., 2017)”.

The great misunderstanding is that truth is a notion that applies to logical propositions (for example, mathematical theorems), not to empirical claims. This has been well argued by Popper, for example. Truth is by nature a theoretical concept. Everything said is said with words, and in this sense it always refers to theoretical concepts. One can only judge whether observations are congruent with the meaning attributed to the words, and that meaning necessarily has a theoretical nature. There is no such thing as an “established fact”. This is so even for what we might consider direct observations. Take for example the claim “The resting potential of neurons is -70 mV”. This is a theoretical statement. Why? First, because to establish it, I have recorded a number of neurons; if you test it, it will be on a different neuron, one I have not measured, so I am making a theoretical claim. I have also probably tested my neurons with a particular method (not to mention a particular region and species), but my claim makes no reference to the method by which I have made the inference. That would be the “methods” part of my paper, not the conclusion; and when you cite my paper, you will cite it for the conclusion, the “established fact”, not for the methods, which you consider to be the means of establishing the fact. It is the role of the reviewers to check the methods, to check that they do establish the fact.

But these are trivial remarks. It is not just that the method matters. The very notion of an observation always implicitly relies on a theoretical background. When I say that the resting potential is -70 mV, I mean that there is a potential difference of -70 mV across the membrane. But that’s not what I measure. I measure the difference in potential between some point outside the cell and the inside of a patch pipette whose solution is in contact with the cell’s inside. So I am assuming the potential is the same in all points of the cytosol, even though I have not tested it. I am also implicitly modeling the cytosol as a solution, even though the reality is more complex than that, given the mass of charged proteins in it. I am assuming that the extracellular potential is constant. I am assuming that my pipette solution reasonably matches the actual cytosol solution, given that “solution” is only a convenient model. I am implicitly making all sorts of theoretical assumptions, which have a lot of empirical support but are still of a theoretical nature.

I have tried with this example to show that even a very simple “fact” is actually a theoretical proposition, with many layers of assumptions. But of course in general, papers typically make claims that rely less firmly on accepted theoretical grounds, since they must be “novel”. So it is never the case that a paper definitely proves its conclusions. Because conclusions have a theoretical nature, all that can be checked is whether observations are consistent with the authors’ interpretation.

So the goal of peer review can’t be to establish the truth. If it were, then why would reviewers ever disagree? They disagree because they cannot actually judge whether a claim is true; they can only say whether they are personally convinced. This makes the current peer review system extremely poor: all the information we get is that two anonymous people were convinced (and maybe others were not, but we’ll never find out). What would be more useful is an open public discussion, with criticisms, qualifications and alternative interpretations fully disclosed for anyone to read and form their own opinion. In such a system, the notion of a stamp of approval on a paper would simply be absurd: why hide the disapprovals? There is the paper, and there is the scientific discussion of the paper, and that is all there needs to be.

There is some concern these days that peer reviewed research is unreliable. Well, science is unreliable. That is almost what defines it: it can be criticized and revised. Seeing peer review as the system that establishes the scientific truth is not only a historical error, it is a great philosophical error, and a dangerous bureaucratic view of science. We don’t need editorial decisions based on peer review. We need free publication (we have it) and we need open scientific discussion (it’s coming). That’s all we need.

10 simple rules to format a preprint

Submitting papers to preprint servers (e.g. bioRxiv) is finally becoming popular in biology. Unfortunately, many of these papers are formatted in a way that is very inconvenient to read, possibly because authors stick to the format required by journals. Here are 10 simple rules to format your preprints:

  1. Format your preprint in the way you would like to read it. The next rules simply implement this first rule.
  2. Use single spacing. No one is going to write between the lines.
  3. Insert figures and their captions in the text, at the relevant place. It is really annoying to have to continuously go back and forth between the text and the last pages. Putting figures at the end of the paper and captions at yet another place should be punished.
  4. Don’t forget the supplementary material.
  5. We don’t really need 10 rules. In fact the first rule is just fine.

Free the journals

These days, academic journals serve two purposes: to organize peer review, and to make an editorial selection. With the internet, and in particular “preprint” servers (e.g. bioRxiv), journals are no longer necessary for distributing academic papers. It is also worth recalling that the peer review system has not always been organized the way it currently is, i.e. with several external reviewers, multiple revision rounds, etc. For example, Nature only introduced formal peer review in 1967. Before that, the selection was done internally by the editor.

These two missions, organizing peer review and making an editorial selection, are currently coupled. But this is neither necessary nor a good thing. It is obvious that the coupling is not necessary: one could easily imagine a system where papers are submitted to a mega-journal (e.g. PLoS ONE, PeerJ, F1000 Research), which organizes the peer review, and then journals (e.g. Nature or Cell) make their editorial selection based on perceived “impact”, possibly using the reviews. Instead, authors must submit to each journal separately until their paper is accepted, and reviewers are asked both to check the scientific standards, which is the alleged purpose of peer review, and to judge the perceived importance of papers, which is an editorial task. This results in frustration for the authors, unnecessary delays and a tremendous waste of resources. Thus, it is not a good thing.

Once a peer review system is organized (e.g. by mega-journals), the remaining role of journals is editorial selection, and this can be done separately. Once we realize that, it becomes clear that very little infrastructure is needed to run a journal. A journal issue is then just a list of papers selected by an editor, put online with the appropriate links and editorial comments. I propose that every scientist, or lab, or possibly group of interest, starts their own journal, which I will call a “free journal” (see my related posts here and there). Free journals are not tied to any commercial interest; they are self-managed academic journals. Crucially, the editor makes a personal selection based on her readings: papers that she personally thinks are interesting. This means in particular that, in contrast with most journals, the editor is highly qualified. The selection is meaningful, not a collection of thumbs up/down made by disparate reviewers based on vague criteria (“impact”). It also implies that it is totally acceptable for the editorial selection to include “preprints”: the editor is a peer, and therefore any featured paper is by definition peer-reviewed (i.e., as in Nature before 1967). I have made my own free journal of theoretical neuroscience in this spirit. I have collected some technical advice on how to make one, and I would be happy to receive any suggestions.

The bottom line is: it is very easy to make a free journal. On the technical side, one essentially needs a blogging system, e.g. WordPress. This automatically generates an RSS feed (e.g. one post per issue). I also post all the comments associated with the selected papers on PubPeer, with a link to the journal; this way, anyone using their plugin automatically sees that the papers have been selected in the journal when looking at them on PubMed, for example. On the editorial side, it actually represents little work, and I believe that this little amount is actually quite interesting work. Every scientist reads and annotates papers. All that is required to run a free journal, then, is to set aside half a day each month to put your notes together into a post. This helps organize your bibliographical notes, and it is helpful for your colleagues and your students. One could also imagine coupling this writing with the lab’s journal club.
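To illustrate how little infrastructure is involved on the reader’s side as well, here is a minimal sketch of how one could follow a free journal programmatically through its RSS feed, using only the Python standard library. The feed snippet and URLs below are made-up examples, not taken from any actual journal:

```python
import xml.etree.ElementTree as ET

# A made-up WordPress-style RSS 2.0 snippet standing in for a free journal's
# feed (hypothetical titles and URLs, for illustration only).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>A free journal of theoretical neuroscience</title>
    <item>
      <title>Issue 3: spike initiation</title>
      <link>https://example.org/issue-3</link>
    </item>
    <item>
      <title>Issue 2: neural coding</title>
      <link>https://example.org/issue-2</link>
    </item>
  </channel>
</rss>"""

def list_issues(feed_xml):
    """Return (title, link) pairs for each issue (item) in an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(item.findtext("title"), item.findtext("link"))
            for item in channel.findall("item")]

for title, link in list_issues(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

In practice one would fetch the feed from the journal's URL instead of a hard-coded string; the point is simply that a blog platform's default RSS output is already a machine-readable table of contents.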

Please leave me a note if you make your own free journal, and I will be happy to link to it from mine.

My free journal: Brette’s free journal of theoretical neuroscience

My appeal to PLoS Computational Biology

I recently reported that one of my papers was rejected by PLoS Computational Biology after 10 months and 4 revisions, on grounds of general interest. This generated a little buzz. A colleague mentioned it on his blog. As a result, the editor of my paper contacted him directly to tell his version of the story, which my colleague has now reported on his blog.

Unfortunately, the editor’s story is “misleading”, to be polite. It is a shame that the review process is confidential, as it allows the journal to hide what actually happens behind closed doors. Nevertheless, I asked the journal for authorization to publish the content of my appeal and their response, where I explain what happened in more detail (and more accurately). They accepted. I have removed the names of the persons involved. This illustrates one of the flaws of the current peer-review system (see this post for how it could work better).

(Just one note: the editor apparently told my colleague that the third reviewer was a collaborator, so they could not take his review into account. Well, that’s a lie. I know because he chose to sign his review. The “collaboration” was the scientist sending me published data.)

So here it is.

Appeal

Re: Manuscript PCOMPBIOL-D-16-00007R4

So after 10 months and 4 revisions, our paper has been rejected, following the recommendation of one reviewer, because it is not considered of broad enough interest. I quote from the final editorial message: “We regret that the specific hypothesis that your manuscript is geared to dispute does not reach that level of general interest.”

These facts being recalled, it should be obvious that the editorial process has gone very wrong. No technical criticisms remained after revision 2, on July 8th, 4 months ago, and the paper should have been accepted then. I have repeatedly asked the editors to explain why we were required to justify novelty and significance after having been required to do so much work on technical aspects, but the editors have refused to answer this simple query. Frankly, I was expecting a bit more respect for the authors who make this journal, and I do not think that explaining the journal’s policy and decisions is too much to ask. All I know is the view of Michael Eisen, founding editor of this journal, who commented: “I agree - a paper going out for review should mean it is of interest”.

This editorial process has gone beyond anything I have ever witnessed in my career in terms of absurdity and waste. Why scientists (“peers”) would voluntarily make each other’s lives so unnecessarily hard instead of cooperating and debating is beyond my understanding. In the end it appears that the ego of one (important?) reviewer matters more than science, and that is very sad. This being said, I have been notified that appeals are only considered when “a) a reviewer or editor is thought to have made a significant factual error” or “b) his/her objectivity is compromised by a documented competing interest”, and since bureaucracy apparently beats reason and ethics, I will now explain how this applies.

I have already explained at length the factual errors of the first reviewer, who is apparently the only one trusted by the editors. The editorial message repeats some of them (no, we are not criticizing the simulation results of a particular model, but the biophysical interpretation (what goes on physically), and we did so in several state-of-the-art biophysical models, not one). I will therefore focus on case (b), and attach my previous letter to the editors for reference; please also read the responses to reviewers as regards case (a), in particular to reviewer-editor Dr. YYY, who has unfortunately not cared to reply to the point-by-point response that he had required from us. The editorial decisions that led to rejecting the paper on the basis of general interest after 10 months are so bizarre that I am compelled to question senior editor Dr. YYY’s objectivity – I presume that Dr. XXX, who sent the paper out for review in the first place, does consider the paper of interest. The sequence of facts speaks for itself:

- On June 6th (revision #2), the editorial message reads “We understand that Reviewer 2 was very enthusiastic, and Reviewer 3 had relatively minor comments, but we both stress that addressing Reviewer 1's reservations are essential. Indeed, it is only fair to say that it seems to us that it will be challenging to address these comments in the context of the presented results.” The exclusive reliance on one reviewer, and the presumption that we could not address the comments, was rather surprising. Nonetheless, the editorial message that followed was exclusively about the match with experimental data, not about interest (“the reviewer's point about (apparently) unrealistic voltage dependencies of the currents […]”). We did successfully address these comments, pointing out that the reviewer had made factual errors (such as misreading the figure he was commenting on, and discussing the results of an experimental paper he had not opened).

- On July 8th (revision #3), the editorial message now asked us to explain the novelty of our work compared to what we had done in the past (and published in the same journal), blindly following the 3-sentence report of reviewer #1, and making no mention whatsoever of the fact that we had just answered the major (and flawed) criticisms about experimental observations, which had constituted the previous editorial message. At this point we complained that we were being asked to justify the novelty of our study 7 months after submission, especially when it was explicit in the introduction; nonetheless, we complied and explained again.

- On August 25th (revision #4), we were appalled to read that, instead of finally accepting the paper, senior editor Dr. YYY had decided to nominate himself as a reviewer, admitting that “the latest revision is first one he has had the chance to read”. The report was not an assessment of the novelty of the paper, as would have been logical given that this was what the previous editorial message was about. Instead, it was a six-page report full of technical queries, making negative criticisms that had, for the most part, already been addressed in previous reports, and asking for substantial modifications of the paper.

- At this point, I replied to the editorial message and obtained no response; as the message stated “If you would like to discuss anything, please don't hesitate to contact either of us directly”, I emailed Dr. YYY, and he started his response as follows: “To answer your email, allow me to be brief, because this sort of exchange should really be going through the journal, and indeed that will be the case from now on.”. Nonetheless, we exchanged a few emails, in which he offered no explanation; in the end we agreed that I would write a point-by-point response to his six-page review, but not modify the paper. I submitted it, together with a response to the first reviewer, and a letter to the editors, on September 22nd.

- Three weeks later, on October 10th, I received a message asking me to edit the letter so that it could be passed on to the reviewers. Apparently the editors had not noticed the response to reviewers. It still took them three weeks to read a letter, which, considering the history of this paper, does not strike me as very respectful. I complained to Dr. YYY, who replied: “We believe that you have been adequately notified by the PLoS administrative team concerning the status of your revision.” I had to exchange several emails with Dr. XXX, who realized the error. I received no apology from Dr. YYY.

- On November 11th, I received the rejection decision, together with the response of reviewer #1 and, oddly enough, of reviewer #3, to whom I had not replied (since there were no remaining comments). He too was surprised, since he wrote: “I don’t have the expertise, authority or, honestly, the time to judge whether the new comments from Reviewers 1 & 4 are fair, or whether the authors’ responses have fully addressed them – this is clearly a job for the Editors (although hopefully not for the Editor who just became a Reviewer)”. But editor-reviewer #4, Dr. YYY, did not bother replying to my point-by-point response, which he had explicitly required.

- The final decision comes with excuses that are frankly hard to swallow. One is that the editors had failed to see the word “models” in the title. In 10 months and 4 revisions! Who can seriously believe that? And yes, the paper is about models – it is a computational biology journal (note that we also successfully related the models to experimental observations, at the request of the editors). The other excuse is that an anonymous reviewer (reviewer #2) had a conflict of interest and his reviews had to be dismissed. I am of course fine with that decision (let me simply state for the record that none of the reviewers I suggested are in such a position). But this happened in April, more than 6 months ago. Quite appropriately, the editor Dr. XXX asked for another reviewer, who identified himself (Dr. ZZZ). Dr. ZZZ wrote a positive review, and in addition he read our responses to the other reviewers and wrote “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.” At this point, given that no objection had been raised by any reviewer or editor on methods, results or clarity, the paper should have been accepted. Instead, the editors decided to follow a nonsensical comment from reviewer #1 alone: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists”, which was not even consistent with his/her own initial positive assessment (“this is an interesting paper”). Given that Dr. XXX sent the paper out for review in the first place, this decision must originate from Dr. YYY (who at this point had not read the paper, by his own admission). I am compelled to conclude that Dr. YYY has not been objective, and in fact has been actively blocking our paper. Unfortunately, this is not the first time I have witnessed the questionable attitude of Dr. YYY, as he was recently a reviewer for an essay I wrote.
That review process was extremely long and went over multiple rounds with massive lists of requests, in which Dr. YYY basically wanted to rewrite the text to follow his own views and style. During the review process, Dr. YYY contacted me directly by email to discuss the paper, going so far as to ask for co-authorship (“Indeed, the level of suggestions are approaching collaboration on this paper- something I would be happy with but I assume is not what you have in mind.”). In the same email, and while the review process was not over, he asked me for an experimental collaboration – which of course I did not follow up on. I had to ask the editor to intervene to stop the madness – which he did: “Indeed your paper has been unduly delayed and I have asked the reviewer to answer me within 24 hours.” I apologize for disclosing these email excerpts, but I have no other choice, since I am asked to provide documentation. It is clear that, had I imagined that Dr. YYY could be chosen as a reviewer (which seemed unlikely given his recent track record), I would have opposed him. But I did not anticipate that he would nominate himself, or overthrow the editor’s decision without even reading my paper (by his own admission).

Therefore, I am asking that Dr. YYY be replaced by a new senior editor with a more reasonable attitude.

As far as I can see: 1) the three reviewers were initially positive about the interest of the paper; 2) the editor Dr. XXX, who as far as I can tell is the only scientist involved in this process who is a member of the computational biology community, has supported our paper, since he sent it out for review; 3) one reviewer, who seems to be an experimental electrophysiologist (unfortunately he or she has decided to remain anonymous), reversed his or her subjective opinion of the paper’s interest after we pointed out the errors in his/her report, and even then still judged the paper interesting for the computational neuroscience community. I fail to see how the decision was “not trivial to reach”.

Best regards,

Romain Brette

Attached: Letter to the editors from September 16th

 

Letter to the editors, September 16th

Dear Editors,

In the previous revision, I raised serious objections regarding the abusive attitude of reviewer #1. These objections have apparently been completely dismissed, but what I have been most disappointed about is the total lack of response to them. I am writing this letter in the hope that this time it will be given some consideration.

This manuscript was submitted 8 months ago. This is the fourth major revision we have been asked to make. The responses now total more than 25 pages, much longer than the article itself. We have entered a phase where a large part of the responses consists in citing previous revisions where the issues have already been addressed. This revision reaches a new level: a fourth reviewer has been added, who mostly repeats questions that we have already answered in previous revisions. Why a fourth reviewer was considered necessary after 8 months of revision is not clear, when none of the three reviewers has raised any serious concern.

I have officially asked for a detailed explanation of this peculiar decision. The only response I have obtained so far is that there was “a tie” between “conflicting reviews”. So apparently the editorial decision was based on a vote between reviewers. Yet this is what I read on the journal’s website:

If reviewers appear to disagree fundamentally, the editors may choose to share all the reviews with each of the reviewers and by this means elicit additional comment that may help the editors to make a decision. That said, decisions are not necessarily made according to majority rule. Instead, the editors evaluate the recommendations and comments of the reviewers alongside comments by the authors and material that may not have been made available to those reviewers.

If one followed this process, then one would realize that:

- None of the three reviewers has any remaining objection about results, methods, or clarity of the text.

- Reviewers #2 and #3 have an overall very positive assessment of the paper and in particular of its interest. Rev #2: “This is a great revision. The authors have clarified and addressed all my previous concerns. […] I strongly believe the study is publishable as it stands”; Rev #3: “This is a very clear and logically presented manuscript dealing with a key question in fundamental cellular neuroscience”.

- On his/her first report, reviewer #1 also made a positive assessment of the paper and of its interest: “This is a clearly written manuscript that addresses an interesting question regarding the nature of spike initiation. Specifically, the authors propose a plausible explanation […] This is an interesting paper.”.

- After two rounds of technical revisions, in which we pointed out the reviewer’s errors and to which no objection has been made, reviewer #1 changed his mood and now concludes, without any argument: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (sic).

- Reviewer #3 has read our responses to the two other reviewers and concluded: “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.”

From these facts, it appears clearly that there are in fact 3 convergent reviews. All 3 reviewers have concluded that the results and methods are rigorous and the text is well written. All 3 reviewers have found the paper interesting. It may be that reviewer #1 “voted” negatively; however, I would expect the editorial decision to be based on the content of the reviews and responses, which in this case is convergent, and not on the mood of one reviewer, which in this case is inconsistent between reports. It is my understanding that an editorial decision should be based on arguments and facts, not on a reviewer’s emotions.

Nonetheless, we have replied in detail, again, to all criticisms. In particular, we have pointed out the factual errors of reviewer #1. To help the editors, we have underlined the important points. We would appreciate it if the editors checked for themselves whether reviewer #1 is right or not. We have also replied to reviewer/senior editor Dr. YYY, although I deeply regret that this fourth version is “the first one he has had the chance to read”.

Finally, I would like to call your attention to the conclusion of reviewer #1, on which his/her recommendation is based, and which in my opinion requires a clarification from the journal: “Finally, now in their third revision, the authors acknowledge that this work strongly builds on the previous resistive-coupling hypothesis, and tests whether this hypothesis is compatible with sharp spike onset (a view they have already proposed), vs the alternative proposed by Yu, of back propagation. This very specific theoretical result I feel is unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (Please see also our response, pointing out that the said acknowledgement was already clear in the very first version.)

This recommendation makes some important presumptions about this journal’s editorial views. Therefore I would very much like to know if this journal:

- also considers that proposing a hypothesis is more important than testing one, and that only the former should be published;

- considers that interesting computational neuroscience studies do not belong in this journal.

I would also very much like to know whether this journal considers it acceptable for a reviewer to ask for substantial technical revisions when he/she has already decided that the paper should not be published anyway. This has indeed been a lot of work for a decision ultimately based on the mood of one reviewer.

As I have argued in this letter, it is very clear that, given the content of the three reviewers’ reports and of our responses, this manuscript should already have been accepted. After 8 months, 4 revisions, and no serious objection to the manuscript, I can only hope that this journal does not confuse rigorous peer review with author harassment.

Again, I hope that this letter will be given serious consideration, and perhaps even responded to.

Best regards,

Romain Brette

 

Response of the editors-in-chief

Dear Dr. Brette,

Thank you for your response to the recent decision on your paper “The Basis of Sharp Spike Onset in Standard Biophysical Models”. The manuscript and your appeal letter have been carefully evaluated by Dr. XXX and the journal’s Editors-in-Chief.

We understand your frustration regarding the length and complexity of the review process, and we would like to apologize for the time taken to reach a final decision.

We would like to provide some further clarification on how the editorial decision was reached. The manuscript addresses the issue (how do cortical neuronal action potentials rise so sharply?) and, after an initial evaluation, Dr. XXX found it interesting enough to merit sending out for review, so that the reviewers could assess the technical solidity of the work and the conceptual advance proposed. The paper received mixed reviews, and hence merited a revision. After several rounds of revision, Reviewer 1 remained unconvinced. In order to aid the review process, Dr. YYY volunteered to evaluate the paper in depth, and his opinion concurred with that of Reviewer 1. Dr. XXX also re-read the paper and came to the conclusion that this manuscript is critically close conceptually to the previous PLOS publications; in fact the idea was laid out clearly and beautifully in the 2013 and 2015 PLOS papers. The present manuscript is an implementation of this idea, showing that other biophysically realistic models used to examine the spike sharpness issue show the mechanism that was suggested in the 2013 and 2015 PLOS papers.

We regret that this did not become fully clear before the third revision, and we understand your disappointment with the final outcome.

However, we agree that the findings of the paper are not significant enough for PLOS Computational Biology, and we will not be reconsidering the paper. We are sorry not to be more encouraging, but we hope that you can understand the reasons for this decision.

My new year resolution: to help move science to the post-journal world

I wish I could make the world a better place. I would like to prevent climate change and wars, but that is not so easy at a personal level. What I can more modestly try to do as a scientist is to make the scientific world a better place. We have all heard the phrase “publish or perish”. We all complain that careers are made by publishing in “high-impact” journals, which favor story-telling and grand claims and generally select papers arbitrarily (they do not even predict the impact of the papers they publish); this trend has grown increasingly strong and has a very negative impact on how research is done, including serious ethical problems. But what do we do concretely about it? For most of us, myself included, not much. We keep submitting to those journals, and we say we have no choice because that is how we are evaluated (for positions or grants). But who evaluates us? Surely there are some political aspects to it (mostly for funding), but the truth is, we are evaluated by peers. In brief, we are inflicting this madness on ourselves.

So: let us stop complaining and try to change the situation. I have previously laid out a vision of what the academic publishing system could look like without journals (this is not an original thought, fortunately). How do we make it happen?

Where we are now

We should be hopeful, because many good things are happening:

  • Preprint servers are getting more and more attention. In biology, a number of journals now accept direct submissions from bioRxiv, including all the PLoS journals, PNAS, eLife and the Journal of Neuroscience. This ought to boost submissions of preprints.
  • A few journals have started publishing the reviews along with the accepted paper, for example eLife, eNeuro and Nature Communications.
  • More generally, open access to both paper and data is getting more and more common and enforced.
  • A new journal, F1000 Research, now practices post-publication review. The paper is indexed in pubmed once two reviewers have approved it.
  • Very significantly, the Wellcome Trust has opened a journal for its grantees, Wellcome Open Research, based on post-publication review (in partnership with F1000), with this statement “The expectation is that this, and other similar funder platforms that are expected to emerge, will ultimately combine into one central platform that ensures that assessment can only be done at the article level”.
  • Finally: PubPeer, started just a few years ago. A simple idea: let anyone comment on any paper, anonymously or not, and let the authors know and respond. You should install their browser plugin. This is an individual initiative but it has already made a big impact, in particular by showing that the “big journals” are no better than the others at preventing flaws or fraud. It also addresses the concern that open reviews would be too nice: anyone who finds serious flaws can report them anonymously, and the authors will have to consider them. PubMed Commons is similar, but with signed comments.

What we can do now on a personal level

  1. Put every single paper you write on a “preprint” server before you submit to a journal.
  2. Put all your data online, see eg OpenAIRE.
  3. Remove journal names from the publication list on your website. People who care about them will find out anyway.
  4. Start a literature search routine that does not involve looking at tables of contents; a few ideas in this Twitter thread; you could also have an author alert on Google Scholar.
  5. Write comments on Pubpeer; including on “pre-prints”.
  6. Send your papers to a journal with open post-publication review. I know this one is difficult, because the community still cares about impact factors. But at least you can favor those with public reviews (eg eLife, Nature Communications; I would prefer the former as it is non-profit). Instead of sending your papers to Frontiers, send them to F1000 Research; or at least eNeuro.

At the local community level, we can advocate for post-publication review. For example, the INCF has opened a channel on F1000 Research. Maybe we could have a computational neuroscience channel there, sponsored by the OCNS. It is too bad that F1000 Research is for-profit rather than institutional, but currently I do not know of other options.

What we can do on a more global scale

Open post-publication review potentially addresses the issue of recognition, but it does not address the issue of visibility. One concern I have about submitting to F1000 Research (for example) is that my intended readership will not know about it. There are so many papers published each year that one does not even have time to read all of their titles. This is one role journals have fulfilled: to select papers worthy of interest for a given community. But since we no longer need journals to publish, editorial selection and publication need not be coupled anymore.

So here is my proposition. We make an independent website that lets any scientist, or possibly any group of scientists, be their own journal; that is, make a selection of papers they find interesting (including preprints). We provide a number of tools to make this as simple as possible: linking to PubPeer and PubMed Commons, searching/browsing, alerting authors whose work is selected, email alerts and RSS feeds, etc. Papers are preferentially linked to the preprint if it exists, so as to completely bypass the journals. We could also let authors suggest their own papers for editorial selection. Basically, we provide all the services a journal typically offers. This will become increasingly easy as public open reviews become more widespread.

These new “journals” could be run by an individual scientist, a lab (eg linked to a journal club), or possibly a scientific society or group of scientists. Let us call any of these an “editor”. I would be happy, for example, to follow the selections of a few authors I respect, and that would probably be more valuable to me than the selection made by any journal, of which very few papers typically catch my attention in a given table of contents.

I am hoping that it goes as follows:

  1. People start using these individual journals, because it provides relevant information to them.
  2. As a result, papers in less well-known journals and preprints start getting more attention, and more citations.
  3. People take the habit of putting their papers on preprint servers because they get immediate attention.
  4. Editors progressively stop selecting papers published in journals because they have already selected them when they were preprints.
  5. As editors are also committee members, journal names start to matter less in evaluating research.
  6. Traditional journals disappear; instead, we have direct publication (formerly known as preprints) + open public reviews, both anonymous and signed.

How to get it started?

One simple idea to get started is to make automatic channels for the existing conventional journals. For each journal, we list the table of contents, linked to preprint versions instead of the journal website, and to PubPeer, etc. If it is convenient, people might start to use it, especially if it allows free access to the papers (legally, since we would use preprints). Then, to get people to use the non-conventional channels, we provide suggestions based on content similarity (ie “you may also like...”).
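The “you may also like” suggestions could start from something as simple as cosine similarity between bag-of-words vectors of titles. Below is a toy sketch in plain Python; the titles and query are invented for illustration, and a real system would work on abstracts with TF-IDF weighting or embeddings:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the word-count vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented titles, for illustration only
papers = [
    "Sharp spike onset in biophysical neuron models",
    "Resistive coupling and spike initiation in the axon",
    "Bayesian inference for population genetics",
]
query = "What makes spike onset sharp in cortical neurons"

# "You may also like": rank the catalogue by similarity to the paper being read
ranked = sorted(papers, key=lambda p: cosine_similarity(query, p), reverse=True)
print(ranked[0])  # the spike-onset paper comes first
```

Even this crude measure puts the two spike papers ahead of the unrelated one, which is all a first version of such a channel would need.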

How about this resolution?

A vision of the post-journal world

In 2016, the academic publication system worked mostly as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is submitted to a journal, chosen for its prestige.
  3. The editor, possibly a professional editor (ie not a professional scientist), judges that the paper is not worthy of review: go back to (2).
  4. The editor sends the paper to 2-3 reviewers, asking them to review the paper quickly (the more prestigious, the quicker). Reviewers are chosen based on the editor’s expertise and network.
  5. Reviewers have too many reviews to write already: go back to (4).
  6. The reviewers (or one reviewer) find the paper uninteresting, or bad, or they “don’t believe it”: go back to (2).
  7. The reviews are mixed: a third reviewer is asked; go back to (4).
  8. The reviewers find the paper interesting enough but have requests for the authors; they should entirely rewrite their paper; they should do new experiments; they should cite the reviewers; they should remove an uninteresting figure; they should add some discussion points.
  9. After some work, the authors submit a revised paper.
  10. Actually, reviewers didn’t think the paper was that interesting anyway: go back to (2).
  11. The reviewers have more requests: go back to (9).
  12. The paper is accepted: congratulations!
  13. The format of references does not follow the journal’s style: please rewrite the reference list.
  14. The author’s paper now belongs to Elsevier/Springer Nature/etc. It will be sold by the publisher to academic institutions, with a margin of 30-40%.
  15. (recent alternative option) The authors pay publication fees of about $2000. The paper is now freely accessible.
  16. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a letter to the editor.
  17. After careful consideration, the letter is rejected.
  18. Another reader, not an expert in the field, reads the paper in a prestigious journal and believes it is highly regarded in the community, until she talks with colleagues and finds out that the paper is heavily criticized.
  19. The prestigious paper is commented on in the mainstream media. Experts rant about the journalists.
  20. The first author applies for a position. The committee examines CVs of all candidates and selects those with publications in prestigious journals. The senior author sends an excellent letter of recommendation. The committee values it if the senior author is famous.
  21. The senior author applies for a grant. The committee selects candidates with a good “track record”, ie prestigious publications.

This publication model is wasteful, full of perverse incentives, opaque, and misleading. What I want to do here is not one more critique of this system, but rather give a vision of what the publication system could be and, in my opinion, will be, perhaps within a decade. This is a vision of a publication system centered on individual articles, rather than on journals. It is made possible today by information technology (aka “the internet”). What stands in its way is, in a word: conservatism. It works as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is uploaded to an archive site, together with the data. All is freely accessible to anyone.
  3. The authors invite an unlimited number of professional scientists (say, authors of at least one paper) to write a formal, signed, review. The reviews are public.
  4. Any other scientist can contribute a signed or anonymous review or comment, including comments on the public reviews (aka “scientific debate”).
  5. Authors respond to the reviews and may submit a revised version.
  6. Authors decide when to stop the revision process; reviewers express their satisfaction, or not.
  7. Any scientist can recommend the paper if she finds it worth reading. Tools are made available to search for papers, possibly using the number of recommendations, possibly weighted by scientist reputation (all data are freely accessible and third parties provide searching tools).
  8. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a public critique, signed or anonymous, after reexamining the paper’s raw data. The authors respond publicly.
  9. Another reader, not an expert of the field, reads the paper. She notices that many knowledgeable scientists disagree with the conclusions and can read the detailed arguments and the authors’ response.
  10. The first author applies for a position. For every publication, the committee reads the reaction of the community (formal reviews, comments, recommendations) and uses it to evaluate the value of the work along all the dimensions addressed in the reactions (importance, originality, technicality, etc). The committee also asks the candidate what her main contributions are in her view and why. The candidate can point to published reactions to her work that support her claims.
  11. The senior author applies for a grant or a promotion. The committee reads the reaction of the community to the candidate’s previous work. They might also look at review invitations and reviewing activity.
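The article-level search in step 7 could rank papers by recommendations weighted by recommender reputation. A minimal sketch, with an invented weighting scheme and invented names:

```python
# Toy ranking of papers by reputation-weighted recommendations.
# The weighting scheme and all names are invented for illustration.

def paper_score(recommenders, reputation):
    """Sum the reputation weights of a paper's recommenders.
    Unknown scientists default to weight 1.0, so every recommendation counts."""
    return sum(reputation.get(name, 1.0) for name in recommenders)

# Reputation could itself be derived from the reception of one's own papers
reputation = {"alice": 3.0, "bob": 1.5}

papers = {
    "paper A": ["alice", "bob"],          # score 3.0 + 1.5 = 4.5
    "paper B": ["carol", "dave", "eve"],  # score 1.0 * 3 = 3.0
}

ranking = sorted(papers, key=lambda p: paper_score(papers[p], reputation), reverse=True)
print(ranking)  # ['paper A', 'paper B']
```

Since all recommendation data would be freely accessible, third parties could compete on exactly this kind of scoring function, rather than everyone being stuck with a single journal hierarchy.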

In the post-journal world, a paper is submitted a single time (but of course it can be revised), and in contrast with the journal model, the reaction of the community is known. As a consequence, quality stamps (journal names) are obsolete. No review is thrown away. As a consequence, scientists save time on both submitting and reviewing. Scientists are not incentivized to make grand claims. In addition, the system saves several billions of dollars each year (eg US libraries spend $2 billion each year on subscriptions), not counting the time saved on wasted reviews and submissions.

All of this is clearly possible with today’s technology. The next question I will try to address is: how to move quickly to the post-journal world?

Do we need a new computational neuroscience journal?

Where to send computational and theoretical neuroscience papers? A list of journals can be found on Jim Perlewitz’s page. First, many journals belong to one of the scientific publishing companies that make huge profits from the free work of academics (margins around 30-40%), with questionable added value. In computational neuroscience, we have:

  • Journal of Computational Neuroscience (Springer Nature)

  • Frontiers in Computational Neuroscience; Frontiers in Neuroinformatics (Springer Nature)

  • Biological Cybernetics (Springer Nature)

  • The Journal of Mathematical Neuroscience (Springer Nature)

  • Cognitive Neurodynamics (Springer Nature)

  • Neuroinformatics (Springer Nature)

  • Network: Computation in Neural Systems (Taylor & Francis)

Then we have Neural Computation, run by MIT Press, which is non-profit. Unfortunately, papers are behind paywalls. In my case, the library does not have a subscription.

Finally, there is PLoS Computational Biology, a highly respected journal in the field. However, even in this non-profit journal, open access is very expensive ($2250 per paper). I find these costs hard to justify when any scientist can publish on archives for free (eg arXiv or bioRxiv). The solution to this is overlay journals: the papers live on an archive site, and the journal only handles peer review.

I was also recently rather disappointed by PLoS CB, after a paper was rejected 10 months after submission solely on the basis of interest. Based on the reactions I received, this is not an isolated case. In any case, PLoS CB practices editorial pre-selection, which is highly subjective. As for other prestigious journals, its prestige is based precisely on its relatively low acceptance rate. But this is a highly questionable model, because it is wasteful: it wastes the time of authors, who submit the same paper to many journals, and the time of reviewers, who write reports that are thrown away. In my opinion, it is also perverse. It gives a label of quality to individual papers and scientists based on the opinion of just a couple of scientists sampled by one editor, and this insignificant label is then used by various committees (grants, promotions, hiring). It also pushes authors into boosting their claims to pass the editorial barrier. The solution to this is post-publication review (see also Michael Eisen’s post).

So ideally, I would like to see an overlay journal practicing post-publication review in my field. The question now is: should we create one? What would be needed is the technical infrastructure, but I am not sure it is fully available yet (as regards the post-publication review system, indexing etc).

Alternatively, we could use a general post-publication review journal. There is for example F1000 Research. PeerJ also does this partially (publishing the reviews is optional). Both cost about $1000 per article; both are run by companies. But they are less expensive than alternative open access journals (PLoS ONE: $1495; eNeuro: $1950 for members, $2925 otherwise), and more open. Neither journal is ideal, but both represent some progress over conventional journals.

There are two issues with choosing a general journal. First, there is the question of editors. I have no experience with these journals, so I am not sure the relevant expertise exists on their editorial boards; I have seen a couple of computational neuroscientists at PeerJ, though. The second issue is visibility. One idea could be to maintain a list of computational neuroscience papers, to which anyone could submit a reference or preprint, in addition to automatic additions (say, from relevant journals plus keyword alerts). One would simply subscribe to the list (RSS feed or email alert).

Any good ideas are welcome!

Update (6.1.2017)

Some more information and thought following comments on this text:

  • One issue for an overlay journal is getting it indexed in PubMed. This seems quite important for a neuroscience journal, but the indexing requirements appear largely contradictory with the idea of an overlay journal, as PubMed requires XML-tagged data for bibliographic citations. But maybe this could be negotiable? (see "Optional arrangement"). Some more general information about indexing here.
  • Another issue is more technical: setting up an automatic post-publication review system.
  • Finally, a general thought: there is no reason why this should be specific to computational neuroscience. However, it could be a first step towards a more general system.

You are not a good scientist if you use impact factors for evaluating research

A lot has been written about impact factors and why they are a poor indicator of article quality (just google it). Even for the one thing the impact factor is meant to measure, ie future citations, it performs badly; there are now many empirical studies on this. Sadly, in 2016, scientists on hiring or promotion committees still value impact factors very highly, especially in the life sciences. Yes, in a domain of science that is meant to be highly empirical, scientists still prefer to look away from compelling evidence that impact factors should not be used for evaluating research. Recently, in a PhD thesis committee, we were discussing the grade and one member, an experimental biologist, raised the issue that the candidate had not published in the highest-ranked journals. We objected that we were there precisely to evaluate the manuscript itself as a whole, not elements of a CV. He admitted later that he had not read the thesis.

I could simply cite the overwhelming evidence. But I want to make a simpler statistical point. Many things are wrong with using impact factors, but let us consider just the one thing they are meant for, ie predicting the number of future citations of papers. We know that for any journal, the distribution of citations is very broad and highly skewed. According to Seglen (1997), “articles in the most cited half of articles in a journal are cited 10 times as often as the least cited half”. In practice, if you are given the name of the journal in which a paper has been published, you will not know its future number of citations. From a purely statistical point of view, the journal name is just not very informative.
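The breadth of within-journal citation distributions can be illustrated with a toy simulation (all parameters are invented for illustration): draw per-paper citations from lognormal distributions whose medians differ by a factor of 4, roughly mimicking a “high-IF” and a “low-IF” journal, and count how often a random paper from the low-IF journal out-cites one from the high-IF journal.

```python
import math
import random

random.seed(0)

def simulate_citations(median, sigma=1.2, n=10000):
    """Per-paper citation counts drawn from a lognormal distribution.
    sigma=1.2 (log scale) gives the broad, skewed spread typical of
    citation data; all parameters are invented for illustration."""
    return [random.lognormvariate(math.log(median), sigma) for _ in range(n)]

high_if = simulate_citations(median=20)  # "prestigious" journal
low_if = simulate_citations(median=5)    # "ordinary" journal

# How often does a random low-IF paper out-cite a random high-IF paper?
wins = sum(l > h for l, h in zip(low_if, high_if)) / len(low_if)
print(f"a low-IF paper out-cites a high-IF paper in {wins:.0%} of random pairs")
```

With these invented parameters, roughly one pairing in five goes to the low-IF paper despite the fourfold difference in medians: the journal tells you little about any individual paper.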

Examples are sometimes more convincing than hard facts. So, from the paper I just cited, here are the numbers of citations vs. impact factor for all publications of 4 scientists:

[Figure from Seglen (1997): citations vs. journal impact factor for each publication of four scientists]

Now look at my own publication statistics. To be more rigorous I should calculate citations per year, but the point is obvious enough. My top 3 cited papers were published in J. Neurophys (IF 2.8), J. Comp. Neuro (IF 1.8) and Frontiers in Neuroinformatics (IF 1.3). The third paper (on the Brian simulator) was published in 2008; the same year I had a paper in Neuron, which has gathered about 4 times fewer citations. And this is despite the fact that papers published in big journals are obviously much more visible (ie there is an expected boost in citations that is unrelated to article quality).

Is this really surprising? In a big journal, just like in a small one, decisions are based on n = 2 or 3 reviewers selected by one editor, ie a biased sample of size 2-3, following an editorial pre-selection based on n = 1 editor, who in many cases is not a professional scientist.
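The sampling problem can be made concrete with a toy simulation (all numbers invented): model each reviewer's score as the paper's true quality plus independent noise, and ask how often two independent 2-reviewer panels reach opposite decisions on the same borderline paper.

```python
import random

random.seed(1)

def panel_accepts(quality, n_reviewers=2, noise=1.0, threshold=0.0):
    """Accept the paper if the mean of n noisy reviewer scores clears the
    threshold. Scores are true quality plus Gaussian noise; all numbers
    are invented for illustration."""
    scores = [quality + random.gauss(0, noise) for _ in range(n_reviewers)]
    return sum(scores) / len(scores) >= threshold

# For a borderline paper (true quality exactly at the threshold), how often
# do two independent 2-reviewer panels reach opposite decisions?
trials = 10000
disagreements = sum(panel_accepts(0.0) != panel_accepts(0.0) for _ in range(trials)) / trials
print(f"two panels disagree on the same borderline paper {disagreements:.0%} of the time")
```

For a paper at the threshold, each panel's verdict is a coin flip whatever the noise level, so two panels disagree about half the time; only clearly strong or clearly weak papers are judged consistently.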

It’s 2016, and life scientists have recently been warned repeatedly that good statistics require large unbiased samples; yet impact factors, totally insignificant pieces of information, are still broadly used to distribute grants and make careers. So yes, using impact factors as a proxy for the quality of papers or of their authors is a display of poor scientific practice.

To me, the problem is not so much that bad decisions are being made, but that this has a huge influence on the way science is done. Scientists know that they are evaluated in this way, and it shapes their research (especially for students and postdocs) so that it meets the editorial biases of the big journals. In 1952, Hodgkin and Huxley published 120 pages in the Journal of Physiology, basically defining the field of electrophysiology. Who would take that risk today at the age of 35, instead of writing a career-making 4-page Nature paper?