Better than the grant lottery

Funding rates in most research grant systems are currently very low, typically around 10%. This means that 90% of the time spent on writing and evaluating grant applications is wasted. It also means that if each grant spans 5 years, a PI has to write about 2 grant applications per year to stay continuously funded; in practice, to reduce risk, it should be more than 2 per year. This is an enormous waste, and on top of that, it is accepted that below a certain funding rate, grant selection is essentially random (Fang et al., 2016). Such competition also introduces conservative biases (since only consensus applications can make it into the top 10%), for example against interdisciplinary studies. Thus, low funding rates are a problem not only because of waste but also because they introduce distortions.
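As a rough back-of-the-envelope check (writing p for the success rate and T for the grant duration in years, both assumed constant, with one running grant needed at all times), the number of applications a PI must write per year is approximately

$$\frac{1}{p\,T} = \frac{1}{0.1 \times 5} = 2.$$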

For these reasons, a number of scientists have proposed introducing a lottery system (Fang et al., 2016; see also Mark Humphries’ post): after a first selection of, say, the top 20-30%, the winners are picked at random. This would reduce bias without impacting quality, so it would certainly be progress. However, it does not address the problem of waste: 90% of applications would still be written in vain.

First, there is a very elementary improvement: do the random draw before you evaluate the grants, i.e., reject half of the applications outright at random, then select the best 20% of the remainder. This gives exactly the same result, except the cost of evaluation is halved.

Now, I am sure it would feel quite frustrating for an applicant to write a full grant only to be immediately rejected by the flip of a coin. So there is again a very simple improvement: decide who will be rejected before they write the application. Pick 50% of scientists at random and invite them to submit a grant. Again, the result is the same, but in addition you halve the time spent on grant writing.

At this point we might wonder why this initial selection should be random at all. It introduces variance for no good reason: you never know in advance whether you will be allowed to apply for funding next year, which seems arbitrary. So there is an obvious improvement: replace the lottery with a rotation. Every PI is allowed to submit a grant only every two years. Again, this is equivalent on average to the initial lottery system, except with less variance and less waste.

This reasoning leads me to a more general point. There is a simple way to increase the success rate of a grant system: reduce the number of applications. The average funding of labs does not depend on the number of applications; it depends on the total budget and only on the budget. If you bar 50% of scientists from applying in a given round, you do not halve the average budget of every lab. The average budget allocated to each lab is the same, but the success rate is doubled.
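Here is a minimal numerical sketch of this claim (the lab count and yearly budget are made-up numbers, just for illustration):

```python
# Toy model of a grant system with a fixed budget.
# Assumption for illustration: 1000 labs, budget for 100 grants per year.

def success_rate(n_applications, funded_grants):
    """Chance that any given application gets funded."""
    return funded_grants / n_applications

N_LABS = 1000
FUNDED_PER_YEAR = 100

# Everyone applies every year:
rate_all = success_rate(n_applications=N_LABS, funded_grants=FUNDED_PER_YEAR)

# Rotation: only half of the labs may apply in a given year:
rate_rotation = success_rate(n_applications=N_LABS // 2, funded_grants=FUNDED_PER_YEAR)

# Average funding per lab per year is set by the budget alone,
# and is identical in both schemes:
funding_per_lab = FUNDED_PER_YEAR / N_LABS

print(rate_all, rate_rotation, funding_per_lab)  # 0.1 0.2 0.1
```

The success rate doubles under the rotation, while the average funding per lab, which is fixed by the budget, is unchanged.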

The counter-intuitive part is that, individually, you increase your chances of being funded by applying to more calls. But collectively it is exactly the opposite: the global success rate decreases when there are more calls (for the same overall budget), since there are more applications. The success rate is low because of everyone else submitting, not because you are submitting. This is a tragedy of the commons.

There is a simple way to solve it, which is to add constraints. There are different ways to do this: 1) reduce the frequency of calls and merge redundant calls; 2) introduce a rotation (e.g., those born in even years submit in even years); 3) do not allow submission if you are already funded (or at least during the first years of a grant). Any of these constraints mechanically increases the success rate, and thus reduces both waste and bias, with no impact on average funding. It is better than a lottery.

 

P.S.: There is also an obvious and effective way to reduce the problem, which is to increase base funding, so that scientists do not need grants in order to survive (see this and other ideas in a previous post).

Predatory journals: what is the problem?

A recent article in Le Monde warns about a growing phenomenon in scientific publishing: predatory journals (see also the editorial). These are commercial publishers that publish scientific articles online, for a fee, without any scientific ethics, in particular by accepting all articles without peer review. Fake conferences are multiplying in a similar way: companies organize scientific conferences for purely commercial purposes, without any concern for scientific quality.

In response, some institutions have started compiling blacklists of journals to avoid. This is understandable, since the phenomenon has a substantial cost. But this response misses the fundamental problem. We have to face the facts: commercial ethics (the pursuit of profit) is not compatible with scientific ethics (the pursuit of truth). The companies in question are not illegal, as far as I know. They organize conferences that are real; they publish journals that are real. They simply care not about scientific quality but about their profit. We consider this immoral; but a commercial company has no moral dimension: it is simply an organization whose purpose is to generate profit. We cannot expect commercial interests to magically coincide with scientific interests.

  1. The problem with commercial publishing

This is true at both ends of the academic publishing spectrum: for predatory journals as well as for prestigious journals. The Le Monde article speaks of “fake science”; but most cases of scientific fraud have been uncovered in prestigious journals, not in predatory journals, which in any case are not read by the scientific community (see for example Brembs (2018) on the relation between methodological quality and journal prestige). For prestigious commercial journals, the publishers’ commercial strategy is not to maximize the number of articles published but to maximize the perceived prestige of these journals, which then serve as bait to sell the publisher’s journal bundles. In other words, it is a branding strategy. This involves a drastic selection of submitted articles, carried out by professional editors, that is, not by professional scientists, on the basis of the perceived importance of the results, thereby pushing a generation of scientists to inflate the claims of their papers. It involves promoting dubious metrics such as the impact factor to public institutions, and more generally promoting a mythology of prestigious publication, namely the false and dangerous idea that a paper should be judged by the prestige of the journal in which it is published rather than by its intrinsic scientific value, which is assessed by the scientific community, not by a commercial publisher, nor even by two anonymous scientists. Proposing to compile lists of bad journals does not solve the problem, because it implicitly endorses this perverse logic.

One only needs to look at the margins of the big multinational scientific publishers to understand that the commercial model is not appropriate. For Elsevier, for example, margins are on the order of 40%. Simply reading that figure should immediately convince us that scientific publishing should be run by public institutions, or at least non-commercial ones (for example learned societies, as is the case for a number of journals). What is the justification for calling on a commercial operator to run a public service, or any service? The rationale is that competition lowers costs and improves quality. But if margins are 40%, then competition is clearly not operating. Why? Simply because when a scientist submits a paper, they do not choose the journal based on price or even on the service provided (which is in fact essentially provided by unpaid scientists), but based on the journal’s visibility and prestige. So there is no price competition. The worst thing that could happen to a commercial publisher is for scientific articles to be judged on their intrinsic value rather than by the journal in which they are published, because then this singular business model would collapse and journals would have to compete on price and on the services they provide, like any other commercial enterprise. That is the worst thing that could happen to commercial publishers, and the best thing that could happen to the scientific community. This is why commercial and scientific interests diverge.

In any case, we have to face the facts: such enormous margins mean that the commercial model is inefficient. We should therefore stop using commercial journals immediately. This is not very difficult: public institutions are perfectly capable of running scientific journals; such journals exist and have existed for a long time. A recent example is eLife, currently one of the most innovative journals in biology. This should not be very surprising: the core activity of journals, namely reviewing articles, is already done by scientists, including for commercial publishers, who use their services for free. This does not mean that private companies cannot be called on to provide services, for example hosting servers, running websites, or providing infrastructure. But journals must no longer belong to commercial companies, whose interest is to run these journals as brands. Scientific ethics is not compatible with commercial ethics.

How do we do it? In fact, it is rather obvious. Public authorities should cancel all subscriptions to commercial publishers and stop paying publication fees to these publishers. Nowadays it is not difficult to access the scientific literature without going through journals (via preprints, or simply by writing to the authors, who are generally delighted that someone is interested in their work). Part of the money saved can be reinvested in non-commercial scientific publishing.

  2. The myth of peer review

I now want to turn to a subtler but fundamental question of epistemology. What, in the end, is the problem with predatory journals? Clearly, there is the waste of public money. But the Le Monde article also points to scientific problems, namely that false information is spread without having been checked. The editorial indeed speaks of ‘the sacrosanct “peer review”’, which these journals do not perform. But is that really the fundamental problem?

The idea that what gives a scientific article its value is that it has been validated by peer review before publication is a tenacious but mistaken myth. It is wrong from an empirical point of view, and from a theoretical point of view.

From an empirical point of view, at any given time the literature contains contradictory conclusions on a great many topics, published in traditional journals. Recent cases of fraud concern articles that had nevertheless undergone peer review. But the same is true of a much larger number of non-fraudulent articles whose conclusions were later disputed. The history of science is full of contradictory, coexisting scientific theories and of bitter debates between scientists. These debates take place, precisely, after publication, and scientific consensus generally forms rather slowly, practically never on the basis of a single article (see for example Imre Lakatos in the philosophy of science, or Thomas Kuhn). Moreover, scientific results are often circulated within the scientific community before formal publication; this is the case today with preprints, but it was already partly the case before, through conferences. The published article remains the reference because it provides precise details, in particular about methods, but the contribution of the reviewers solicited by journals is in most cases not essential, especially as it is generally not made public.

From a theoretical point of view, there is no reason why peer review should “validate” a scientific result. There is nothing magical about peer review: simply, two, sometimes three scientists give their informed opinion on the manuscript. These scientists are no more expert than those who will read the article once it is published (I mean, of course, the scientific community, not the general public). The fact that an article is published in a journal says little in itself about how the results are received by the community; when an article is rejected by a journal, it is resubmitted elsewhere. The final publication in no way attests to a scientific consensus. Moreover, in the case of empirical studies, reviewers do not actually have the possibility of verifying the results, and in particular of checking that there has been no fraud. All they can do is check that the methods used seem appropriate and that the interpretations seem sensible (two points that are often debatable). To validate the results (though not the interpretations), one would at a minimum need to be able to redo the experiments in question, which requires the necessary time and equipment. This indispensable work is done (or attempted), but it is not done at the time of publication, nor is it commissioned by the journal. It is done after publication, by the scientific community. The work of “verification” (an inappropriate word, since there is no absolute truth in science, which is precisely what distinguishes it from religion) is the long-term work of the scientific community, not the one-off work of a journal.

It is this preconception that needs to be dismantled: that the journal’s internal review somehow “validates” scientific results. It does not, it never has, and it cannot. Scientific validation is the very nature of the scientific enterprise, which is a collective, long-term endeavor. One cannot read an article and conclude “it is true”; to do that, one must integrate it into a body of scientific knowledge and confront its interpretation with different points of view (since any interpretation requires a theoretical framework).

It is precisely this preconception that prestigious journals try, on the contrary, to reinforce. We must resist it. The antidote is to make scientific debate public and transparent, whereas it currently often remains confined to the corridors of laboratories and conferences. Peer review is claimed to validate scientific results, yet the review reports are most of the time not published; and what about the unpublished reports when an article is rejected by a journal? How, then, can we know what the community thinks? Scientific debate must instead be made public. This is, for example, the ambition of sites like PubPeer, which has uncovered a number of frauds but can also be used simply for scientific debate in general. Rather than making publication conditional on the confidential agreement of anonymous scientists, the system should be inverted: publish the article (this is in fact already done through preprints), then solicit the community’s assessments, which will also be published, argued, and discussed by the authors and the rest of the community. This is how scientists, but also the wider public, can get a more accurate picture of the scientific value of published articles. Peer review is a fundamental principle of science, yes, but not the kind carried out in confidence by journals; rather, the kind carried out in the open, with no time limit, by the scientific community.

Are journals necessary filters?

In my previous post, I argued that one reason why many people cling to the idea that papers should be formally peer-reviewed before they are published, cited and discussed, despite the fact that this system is a recent historical addition to the scientific enterprise, is a philosophical misunderstanding about the nature of scientific truth. That is, the defining characteristic of science, as opposed to religion, is that it is never validated; it can and must be criticized. Therefore, no amount of peer reviewing can ever be a stamp of approval for “proven facts”. What we need instead is public discussion of the science, not stamps of approval.

In response to that post, another common reason came up why many people think it is important to have journals that select papers after peer review: we are swamped with millions of papers and you can’t read everything, so you need some way to know which papers are important, based on peer review. Here the argument is not just about peer-reviewing before publishing, but also about the hierarchy of journals. Journals must make an editorial selection so that you don’t waste your time reading low-quality or uninteresting papers. What this means, quite literally, is that you only read papers from “top journals”.

Here I want to show that this argument is untenable: selecting what to read based on journal names is neither what scientists should do nor what they actually do, and the argument is logically inconsistent.

Why is it logically inconsistent? If the argument is correct, then papers accepted in lower-ranked journals should not be read because they are not worth reading. But in that case, why publish them at all? There seems to be no reason for the existence of journals that people do not read because they do not have time to read bad papers. If we argue that those journals should exist because in some cases they contain papers worth reading, for whatever reason, then we must admit that we don’t actually use journals as filters, or that we should not use them as filters (see below).

Is it good scientific practice to use journal names as filters? It implies that you ignore any paper, including papers in your field potentially relevant to your own studies, that is not published in a “top journal”. So, for example, you would not cite a relevant study if it is not from a top journal. It also means that you don’t check whether your own work overlaps with other studies, so you potentially take credit for ideas that you were not the first to have. Is this a professional attitude?

If in fact you don’t totally ignore those lower-ranked journals, then you don’t actually use journal names as a filter; you do look at the content of papers independently of the journal they are published in. Which is my final point: using journal names as filters is not the normal practice of scientists (or maybe I’m being optimistic?). When you look for relevant papers on your topic of interest, you typically do a search (e.g., on PubMed). Do you only consider papers from “top journals”, blindly discarding all others? Of course not. You first look at the titles to see whether they might be relevant; if so, you read the abstract; if the abstract is promising you might open the paper and skim through it, and possibly read it carefully if you think it is worth it. Then you look at cited papers, or at papers that cite the interesting one you just read; or you read a review; maybe a colleague or your advisor suggests a few readings. In brief: you do a proper bibliographical search. I cannot believe that any good scientist considers that doing a bibliographical search consists in browsing the tables of contents of top journals.

The only case where you do use journal names to select papers to read is when you browse the monthly tables of contents of a few selected journals. How much does this account for the papers that you cite? You can get a rough idea by looking at the cited half-life of papers or journals. For Cell, it’s about 9 years. I personally also follow new papers on bioRxiv using keywords, whereas most new papers in journals are irrelevant to me because journals cover too many topics.

In summary: using journals as filters is not professional because it means poor scholarship and misattribution of credit. Fortunately it’s not what scientists normally do anyway.

One related argument that came up in the discussion of my previous post is that reviewing papers post-publication could not work because it would be too much work, so most papers would not be reviewed, whereas at least in the current system every paper is peer-reviewed. That is wrong in several ways. First, you can have papers published and then peer-reviewed formally and publicly (as in F1000 Research), without this being coupled to editorial selection. Second, if anything, having papers submitted once instead of many times to different journals means less work for reviewers, not more. Third, what exactly is the advantage of having each paper peer-reviewed if it is argued that those papers should not be read or cited? In the logic where peer review in “good journals” serves as a filter for important papers, it makes no difference whether the unimportant papers are peer-reviewed or not, so this cannot count as a valid argument against post-publication review.

All this being said, there is still a case for editorial selection after publication, as one of the many ways to discover papers of interest, see for example my free journal of theoretical neuroscience.

The great misunderstanding about peer review and the nature of scientific facts

Last week I organized a workshop on the future of academic publication. My point was that our current system, based on private pre-publication peer review, is archaic. I noted that the way the peer review system is currently organized (where external reviewers judge both the quality of the science and the interest for the journal) represents just a few decades in the history of science. It can hardly qualify as the way science is or should be done. It is a historical feature. For example, only one of Einstein’s papers was formally peer-reviewed; Crick & Watson’s DNA paper was not formally peer-reviewed. Many journals introduced external peer review in the 1960s or 1970s to deal with the growth in the number and variety of submissions (see e.g. Baldwin, 2015); before that, editors would decide whether to publish the papers they received, depending on the number of pages they could print.

Given the possibilities offered by the internet, there seems to be no reason anymore to couple the two current roles of peer review: editorial selection and scientific discussion. Scientists could simply share their work online, get feedback from the community to discuss the work, and then let people recommend papers to their colleagues and compile all sorts of readers’ digests. No time wasted in multiple submissions, no prestige misattributed to publications in glamour journals, which do no better a job than any other journal at catching errors and fraud. Just the science and the public discussion of science.

But there is a lot of resistance to this idea, in the form of a belief that papers should be formally approved by peer reviewers before they are published, because otherwise, so many people claim, the scientific world would be polluted by all sorts of unverified claims; it would not be science anymore, just gossip. I have attributed this attitude to conservatism, first because, as noted above, this system is a rather recent addition to the scientific enterprise, and second because papers already are published before peer review. We call these “preprints”, but really they are scientific papers made public, so by definition they are published. I follow the preprints in my field and I do not see any particular loss of quality.

However, I think I was missing a key element. The more profound reason why many people, in particular experimental biologists, are so attached to peer review is, in my view, that they hold naive philosophical views about the notion of truth in science. On this view, a paper should be peer-reviewed because otherwise you can’t cite it as a true fact; peer review validates science, thanks to experts who make sure that the claims of the authors are actually true. Of course it can go wrong and reviewers might miss something, but that is the purpose of peer review. This view is reflected in the tendency, especially in biology journals, to choose titles that look like established truths: “Hunger is controlled by HGRase”, instead of “The molecular control of hunger”. Scientists and journalists can then write revealed truths with a verse reference, such as “Hunger is controlled by HGRase (McDonald et al., 2017)”.

The great misunderstanding is that truth is a notion that applies to logical propositions (for example, mathematical theorems), not to empirical claims. This has been well argued by Popper, for example. Truth is by nature a theoretical concept. Everything said is said with words, and in this sense it always refers to theoretical concepts. One can only judge whether observations are congruent with the meaning attributed to the words, and that meaning necessarily has a theoretical nature. There is no such thing as an “established fact”. This is true even of what we might consider direct observations. Take for example the claim “The resting potential of neurons is -70 mV”. This is a theoretical statement. Why? First, because to establish it, I have recorded a number of neurons. If you test it, it will be on a different neuron, which I have not measured. So I am making a theoretical claim. I also probably tested my neurons with a particular method (not to mention in a particular region and species). But my claim makes no reference to the method by which I made the inference. That goes in the “methods” part of my paper, not the conclusion. When you cite my paper, you will cite it because of the conclusion, the “established fact”; you will not be referring to the methods, which you consider the means of establishing the fact. It is the role of the reviewers to check the methods, to check that they do establish the fact.

But these are trivial remarks. It is not just that the method matters. The very notion of an observation always implicitly relies on a theoretical background. When I say that the resting potential is -70 mV, I mean that there is a potential difference of -70 mV across the membrane. But that’s not what I measure. I measure the difference in potential between some point outside the cell and the inside of a patch pipette whose solution is in contact with the cell’s inside. So I am assuming the potential is the same in all points of the cytosol, even though I have not tested it. I am also implicitly modeling the cytosol as a solution, even though the reality is more complex than that, given the mass of charged proteins in it. I am assuming that the extracellular potential is constant. I am assuming that my pipette solution reasonably matches the actual cytosol solution, given that “solution” is only a convenient model. I am implicitly making all sorts of theoretical assumptions, which have a lot of empirical support but are still of a theoretical nature.

I have tried with this example to show that even a very simple “fact” is actually a theoretical proposition, with many layers of assumptions. But of course in general, papers typically make claims that rely less firmly on accepted theoretical grounds, since they must be “novel”. So it is never the case that a paper definitely proves its conclusions. Because conclusions have a theoretical nature, all that can be checked is whether observations are consistent with the authors’ interpretation.

So the goal of peer review can’t be to establish the truth. If it were the case, then why would reviewers ever disagree? They disagree because they cannot actually judge whether a claim is true; they can only say whether they are personally convinced. This makes the current peer review system extremely poor, because all the information we get is: two anonymous people were convinced (and maybe others were not, but we’ll never find out). What would be more useful would be to have an open public discussion, with criticisms, qualifications and alternative interpretations fully disclosed for anyone to read and make their own opinion. In such a system, the notion of a stamp of approval on a paper would simply be absurd; why hide the disapprovals? There is the paper, and there is the scientific discussion of the paper, and that is all there needs to be.

There is some concern these days that peer-reviewed research is unreliable. Well, science is unreliable. That is almost what defines it: it can be criticized and revised. Seeing peer review as the system that establishes scientific truth is not only a historical error; it is a great philosophical error, and a dangerous bureaucratic view of science. We don’t need editorial decisions based on peer review. We need free publication (we have it) and we need open scientific discussion (it’s coming). That’s all we need.

10 simple rules to format a preprint

Submitting papers to preprint servers (e.g., bioRxiv) is finally getting popular in biology. Unfortunately, many of these papers are formatted in a way that is very inconvenient to read, possibly because authors stick to the format required by journals. Here are 10 simple rules for formatting your preprints:

  1. Format your preprint in the way you would like to read it. The next rules simply implement this first rule.
  2. Use single spacing. No one is going to write between the lines.
  3. Insert figures and their captions in the text, at the relevant place. It is really annoying to have to go back and forth continuously between the text and the last pages. Putting figures at the end of the paper and captions in yet another place should be punished.
  4. Don’t forget the supplementary material.
  5. We don’t really need 10 rules. In fact the first rule is just fine.

Free the journals

These days, academic journals serve two purposes: to organize peer review, and to make an editorial selection. With the internet, and in particular “preprint” servers (e.g., bioRxiv), journals are no longer necessary for distributing academic papers. It is also worth remembering that the peer review system has not always been organized the way it is now, i.e., with several external reviewers, multiple revision rounds, etc. For example, Nature only introduced formal peer review in 1967. Before that, the selection was done internally by the editor.

These two missions, organizing peer review and making an editorial selection, are currently coupled. But this is neither necessary, nor a good thing. It is obvious that this coupling is not necessary. One could easily imagine a system where papers are submitted to a mega-journal (e.g. PLoS ONE, PeerJ, F1000 Research), which organizes the peer review, and then journals (e.g. Nature or Cell) make their editorial selection based on perceived “impact”, possibly using the reviews. Instead, authors must submit to each journal separately until their paper is accepted, and reviewers are asked both to check the scientific standards, which is the alleged purpose of peer review, and to judge the perceived importance of papers, which is an editorial task. This results in frustration for the authors, unnecessary delays and tremendous waste of resources. Thus, it is not a good thing.

Once a peer review system is organized (e.g., by mega-journals), the remaining role of journals is editorial selection, and this could be done separately. Once we realize that, it becomes clear that very little infrastructure should be needed to run a journal. A journal issue is then just a list of papers selected by an editor, put online with the appropriate links and editorial comments. I propose that every scientist, or lab, or possibly group of interest, start their own journal, which I will call a “free journal” (see my related posts here and there). Free journals are not tied to any commercial interest; they are self-managed academic journals. Crucially, the editor makes a personal selection based on her readings: papers that she personally thinks are interesting. This means in particular that, in contrast with most journals, the editor is highly qualified. The selection is meaningful, not a collection of thumbs up/down made by disparate reviewers based on vague criteria (“impact”). It also implies that it is totally acceptable for the editorial selection to include “preprints”: the editor is a peer, and therefore any featured paper is by definition peer-reviewed (i.e., as in Nature before 1967). I have made my own free journal of theoretical neuroscience in this spirit. I have collected some technical advice on how to make one, and I would be happy to receive any suggestions.

The bottom line is: it is very easy to make a free journal. On the technical side, one essentially needs a blogging system, e.g., WordPress. This automatically generates an RSS feed (e.g., one post per issue). I also post all the comments associated with the selected papers on PubPeer, with a link to the journal; this way, anyone using their plugin automatically sees that the papers have been selected in the journal when looking at them on PubMed, for example. On the editorial side, it actually represents little work, and I believe that this small amount is actually quite interesting work to do. Every scientist reads and annotates papers. All that is required to run a free journal, then, is to set aside half a day a month to put your notes together into a post. This helps organize your bibliographical notes, and it is helpful for your colleagues and your students. One could also imagine coupling this writing with the lab’s journal club.
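To make that monthly half-day concrete, here is a minimal sketch of what the compilation step could look like, assuming you keep your selection as a small list of title/link/comment entries (the entries, field names and output file below are made up for illustration; any blogging system, e.g. WordPress, can then publish the resulting Markdown):

```python
# Minimal sketch: turn a month's reading notes into a "free journal" issue
# formatted as Markdown, ready to paste into a blog post.
from datetime import date

papers = [
    {
        "title": "An interesting preprint on spike initiation",
        "link": "https://www.biorxiv.org/content/example",  # hypothetical URL
        "comment": "Clear analysis; the control experiment is especially convincing.",
    },
    # ... one entry per selected paper
]

def make_issue(papers, issue_date=None):
    """Format the selected papers as one Markdown post (one post per issue)."""
    issue_date = issue_date or date.today()
    lines = [f"# Free journal, issue of {issue_date:%B %Y}", ""]
    for p in papers:
        lines.append(f"## [{p['title']}]({p['link']})")
        lines.append("")
        lines.append(p["comment"])
        lines.append("")
    return "\n".join(lines)

with open("issue.md", "w") as f:
    f.write(make_issue(papers))
```

The same comments can then be copied to PubPeer by hand, as described above; the script only gathers the notes in one place.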

Please leave me a note if you are making your own free journal and I would be happy to link it from my journal.

My free journal: Brette’s free journal of theoretical neuroscience

My appeal to PLoS Computational Biology

I recently reported that one of my papers was rejected by PLoS Computational Biology after 10 months and 4 revisions, on the grounds of general interest. This generated a little buzz. A colleague mentioned it on his blog. As a result, the editor of my paper contacted him directly to give his version of the story, which my colleague has now reported on his blog.

Unfortunately, the editor’s story is “misleading”, to be polite. It is a shame that the review process is confidential, as it allows the journal to hide what actually happens behind closed doors. Nevertheless, I have asked the journal for authorization to publish the content of my appeal and their response, in which I explain what happened in more detail (and more accurately). They accepted. I have removed the names of the persons involved. This illustrates one of the flaws of the current peer-review system (see this post for how it could work better).

(Just one note: the editor has apparently told my colleague that the third reviewer was a collaborator, so they could not take into account his review. Well, that’s a lie. I know because he chose to sign his review. The "collaboration" was the scientist sending me published data.)

So here it is.

Appeal

Re: Manuscript PCOMPBIOL-D-16-00007R4

So after 10 months and 4 revisions, our paper has been rejected, following the recommendation of one reviewer, because it is not considered of broad enough interest. I quote from the final editorial message: “We regret that the specific hypothesis that your manuscript is geared to dispute does not reach that level of general interest.”.

With these facts recalled, it should be obvious enough that the editorial process has gone very wrong. There were no remaining technical criticisms after revision 2, on July 8th, 4 months ago, and the paper should have been accepted then. I have repeatedly asked the editors to explain why we were required to justify novelty and significance after having been required to do so much work on technical aspects. But the editors have refused to answer this simple query. Frankly, I was expecting a bit more respect for the authors who make this journal, and I do not think that explaining the journal’s policy and decisions is too much to ask. All I know is the view of Michael Eisen, founding editor of this journal, who commented “I agree - a paper going out for review should mean it is of interest”.

This editorial process has gone beyond anything I have ever witnessed in my career in terms of absurdity and waste. Why scientists (“peers”) would voluntarily make each other’s lives so unnecessarily hard instead of cooperating and debating is beyond my understanding. In the end it appears that the ego of one (important?) reviewer matters more than science, and that is very sad. This being said, I have been notified that appeals are only considered when “a) a reviewer or editor is thought to have made a significant factual error” or “b) his/her objectivity is compromised by a documented competing interest”, and since bureaucracy apparently beats reason and ethics, I will now explain how this applies.

I have already explained at length the factual errors of the first reviewer, who is apparently the only one trusted by the editors. This editorial message repeats some of them (no, we are not criticizing the simulation results of a particular model, but the biophysical interpretation (what goes on physically), and we did so in several state-of-the-art biophysical models, not one). I will therefore focus on case (b), and attach my previous letter to the editors for reference; please also read the responses to reviewers as regards case (a), in particular the response to reviewer-editor Dr. YYY, who unfortunately did not reply to the point-by-point response he had required from us. The editorial decisions that led to rejecting the paper on the basis of general interest after 10 months are so bizarre that I am compelled to question senior editor Dr. YYY’s objectivity – I presume that Dr. XXX, who sent the paper for review in the first place, does consider the paper of interest. The sequence of facts speaks for itself:

- On June 6th (revision #2), the editorial message reads “We understand that Reviewer 2 was very enthusiastic, and Reviewer 3 had relatively minor comments, but we both stress that addressing Reviewer 1's reservations are essential. Indeed, it is only fair to say that it seems to us that it will be challenging to address these comments in the context of the presented results.”. The exclusive reliance on one reviewer and the presumption that we could not address the comments is rather surprising. Nonetheless, the editorial message that followed was exclusively about the match with experimental data, not about interest (“the reviewer's point about (apparently) unrealistic voltage dependencies of the currents […]”). We did successfully address these comments, pointing out that the reviewer had made factual errors (such as misreading the figure he was commenting, and discussing the results of an experimental paper he had not opened).

- On July 8th (revision #3), the editorial message now asked us to explain the novelty compared to what we had done in the past (and published in the same journal), blindly following the 3-sentence report of reviewer #1, and making no mention whatsoever of the fact that we had just answered the major (and flawed) criticisms on experimental observations, which had constituted the previous editorial message. At this point we complained that we were being asked to justify the novelty of our study 7 months after submission, especially when it was explicit in the introduction; nonetheless, we complied and explained again.

- On August 25th (revision #4), we were appalled to read that, instead of finally accepting the paper, senior editor Dr. YYY had decided to nominate himself as a reviewer, admitting that “the latest revision is first one he has had the chance to read”. The report was not an assessment of the novelty of the paper, as would have been logical since that was what the previous editorial message was about. Instead, it was a six-page report full of technical queries, making negative criticisms that had, for the most part, already been addressed in previous reports, and asking for substantial modifications of the paper.

- At this point, I replied to the editorial message and obtained no response; as the message stated “If you would like to discuss anything, please don't hesitate to contact either of us directly”, I emailed Dr. YYY, and he started his response as follows: “To answer your email, allow me to be brief, because this sort of exchange should really be going through the journal, and indeed that will be the case from now on.”. Nonetheless, we exchanged a few emails, in which he offered no explanation; in the end we agreed that I would write a point-by-point response to his six-page review, but not modify the paper. I submitted it, together with a response to the first reviewer, and a letter to the editors, on September 22nd.

- Three weeks later, on October 10th, I received a message where I was asked to edit the letter so that it could be passed on to the reviewers. Apparently the editors had not noticed the response to reviewers. It still took them three weeks to read a letter, which, considering the history of this paper, does not strike me as very respectful. I complained to Dr. YYY, who replied “We believe that you have been adequately notified by the PLoS administrative team concerning the status of your revision.”. I had to exchange several emails with Dr. XXX who realized the error. I received no apology from Dr. YYY.

- On November 11th, I received the reject decision, together with the response of reviewer #1 and, oddly enough, of reviewer #3, to whom I had not replied (since there was no remaining comment). He was also surprised, since he wrote “I don’t have the expertise, authority or, honestly, the time to judge whether the new comments from Reviewers 1 & 4 are fair, or whether the authors’ responses have fully addressed them – this is clearly a job for the Editors (although hopefully not for the Editor who just became a Reviewer)”. Yet editor-reviewer #4 Dr. YYY did not bother replying to my point-by-point response, which he had explicitly required.

- The final decision comes with excuses that are frankly hard to swallow. One is that the editors had failed to see the word “models” in the title. In 10 months and 4 revisions! Who can seriously believe that? And yes, the paper is about models – it is a computational biology journal (note that we have also successfully related the models to experimental observations, at the request of the editors). The other excuse is that an anonymous reviewer (reviewer #2) had a conflict of interest and his reviews had to be dismissed. I am of course fine with that decision (let me simply state for the record that none of the reviewers I suggested is in such a position). But this happened in April, more than 6 months ago. Quite appropriately, the editor Dr. XXX asked for another reviewer, who identified himself (Dr. ZZZ). Dr. ZZZ wrote a positive review, and in addition he read our responses to the other reviewers and wrote “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.”. At this point, given that no objection had been raised by any reviewer or editor on methods, results or clarity, the paper should have been accepted. Instead, the editors decided to follow a nonsensical comment from reviewer #1 alone: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists”, which was not even consistent with his/her own first positive assessment (“this is an interesting paper”). Given that Dr. XXX sent the paper for review in the first place, this decision must originate from Dr. YYY (who at this point had not read the paper, by his own admission). I am compelled to conclude that Dr. YYY has not been objective, and in fact has been actively blocking our paper.

Unfortunately, this is not the first time I have witnessed the questionable attitude of Dr. YYY, as he was recently a reviewer for an essay I wrote. The review process was extremely long and went over multiple rounds with massive lists of requests, in which Dr. YYY basically wanted to rewrite the text to follow his own views and style. During the review process, Dr. YYY contacted me directly by email to discuss the paper, going so far as to ask for co-authorship (“Indeed, the level of suggestions are approaching collaboration on this paper- something I would be happy with but I assume is not what you have in mind.”). In the same email, and while the review process was not over, he asked me for an experimental collaboration – which of course I have not followed up on. I had to ask the editor to intervene to stop the madness – which he did: “Indeed your paper has been unduly delayed and I have asked the reviewer to answer me within 24 hours.”. I apologize for disclosing these email excerpts, but I have no other choice since I am asked to provide documentation. It is clear that, had I imagined that Dr. YYY could be chosen as a reviewer (which seemed unlikely given his recent track record), I would have opposed him. But I did not anticipate that he would nominate himself, or overthrow the editor’s decision without even reading my paper (by his own admission).

Therefore, I am asking that Dr. YYY is replaced by a new senior editor with a more reasonable attitude.

As far as I can see: 1) the three reviewers were initially positive about the interest of the paper; 2) the editor Dr. XXX, who as far as I can tell is the only scientist involved in this process who is a member of the computational biology community, supported our paper, since he sent it out for review; 3) one reviewer, who seems to be an experimental electrophysiologist (unfortunately he or she has decided to remain anonymous), reversed his/her subjective opinion on the paper’s interest after we pointed out the errors in his/her report, and even then still judged the paper interesting for the computational neuroscience community. I fail to see how the decision is “not trivial to reach”.

Best regards,

Romain Brette

Attached: Letter to the editors from September 16th

 

Letter to the editors, September 16th

Dear Editors,

In the previous revision, I raised serious objections regarding the abusive attitude of reviewer #1. These objections have apparently been completely dismissed, but what I have been most disappointed about is the total lack of response to these objections. I am writing this letter in the hope that this time it will be given some consideration.

This manuscript was submitted 8 months ago. This is the fourth major revision that we have been asked to make. The responses now total more than 25 pages, much longer than the article itself. We have entered a phase where a large part of the responses consists in citing previous revisions where the issues have already been addressed. This revision reaches a new level, where a fourth reviewer has been added and mostly repeats questions that we have already answered in previous revisions. Why a fourth reviewer is considered necessary after 8 months of revision is not clear, when none of the three reviewers has raised any serious concern.

I have officially asked for a detailed explanation of this peculiar decision. The only response I have obtained so far is that there was “a tie” between “conflicting reviews”. So apparently the editorial decision was based on a vote among reviewers. Yet this is what I read on the journal’s website:

If reviewers appear to disagree fundamentally, the editors may choose to share all the reviews with each of the reviewers and by this means elicit additional comment that may help the editors to make a decision. That said, decisions are not necessarily made according to majority rule. Instead, the editors evaluate the recommendations and comments of the reviewers alongside comments by the authors and material that may not have been made available to those reviewers.

If one followed this process, then one would realize that:

- None of the three reviewers has any remaining objection about results, methods, or clarity of the text.

- Reviewer #2 and #3 have an overall very positive assessment of the paper and in particular of its interest. Rev #2: “This is a great revision. The authors have clarified and addressed all my previous concerns. […] I strongly believe the study is publishable as it stands”; Rev# 3: “This is a very clear and logically presented manuscript dealing with a key question in fundamental cellular neuroscience”.

- On his/her first report, reviewer #1 also made a positive assessment of the paper and of its interest: “This is a clearly written manuscript that addresses an interesting question regarding the nature of spike initiation. Specifically, the authors propose a plausible explanation […] This is an interesting paper.”.

- After two rounds of technical revisions, in which we pointed out the reviewer’s errors and to which no objection has been made, reviewer #1 changed his mood and now concludes, without any argument: “unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (sic).

- Reviewer #3 has read our responses to the two other reviewers and concluded: “the revisions of the manuscript in response to the other reviewers' comments seem entirely appropriate.”

From these facts, it is clear that there are in fact 3 convergent reviews. All 3 reviewers have concluded that the results and methods are rigorous and the text is well written. All 3 reviewers have found the paper interesting. It may be that reviewer #1 has “voted” negatively; however, I would expect the editorial decision to be based on the content of the reviews and responses, which in this case is convergent, and not on the mood of one reviewer, which in this case is inconsistent between reports. It is my understanding that an editorial decision should be based on arguments and facts, not on a reviewer’s emotions.

Nonetheless, we have replied in detail, again, to all criticisms. We have pointed out in particular the factual errors of reviewer #1. To help the editors, we have underlined the important points. We would appreciate it if the editors checked for themselves whether reviewer #1 is right or not. We have also replied to reviewer/senior editor Dr. YYY, although I deeply regret that this fourth version is “the first one he has had the chance to read”.

Finally, I would like to call your attention to the conclusion of reviewer #1, on which his/her recommendation is based, and which in my opinion requires clarification from the journal: “Finally, now in their third revision, the authors acknowledge that this work strongly builds on the previous resistive-coupling hypothesis, and tests whether this hypothesis is compatible with sharp spike onset (a view they have already proposed), vs the alternative proposed by Yu, of back propagation. This very specific theoretical result I feel is unlikely to be of broad interest to the computational biology field, but could be of interest to computational neuroscientists” (Please also see our response, pointing out that the said acknowledgement was already clear in the very first version.)

This recommendation makes some important presumptions about this journal’s editorial views. Therefore I would very much like to know if this journal:

- also considers that proposing a hypothesis is more important than testing one, and that only the former should be published;

- considers that interesting computational neuroscience studies do not belong in this journal.

I would also very much like to know whether this journal considers it acceptable for a reviewer to ask for substantial technical revisions when he/she has already decided that the paper should not be published anyway. This has indeed been a lot of work for a decision ultimately based on the mood of one reviewer.

As I have argued in this letter, it is very clear that, given the content of the reports of the 3 reviewers and of our responses, this manuscript should have been accepted already. After 8 months and 4 revisions, and no serious objection to the manuscript, I can only hope very much that this journal does not confuse rigorous peer review with author harassment.

Again, I am hoping that this letter will be seriously taken into consideration, and even perhaps responded to.

Best regards,

Romain Brette

 

Response of the editors-in-chief

Dear Dr. Brette,

Thank you for your response to the recent decision on your paper “The Basis of Sharp Spike Onset in Standard Biophysical Models”. The manuscript and your appeal letter have been carefully evaluated by Dr. XXX and the journal’s Editors-in-Chief.

We understand your frustration regarding the length and complexity of the review process, and we would like to apologize for the time taken to reach a final decision.

We would like to provide some further clarification on how the editorial decision was reached. The manuscript addresses the issue - how do cortical neuronal action potentials rise so sharply? – and after an initial evaluation, Dr. XXX found it interesting enough to merit sending out for review, so that the reviewers could assess the technical solidity of the work and the conceptual advance proposed. The paper received mixed reviews, and hence merited a revision. After several rounds of revision, Reviewer 1 remained unconvinced. In order to aid the review process, Dr. YYY volunteered to evaluate the paper in depth, and his opinion concurred with that of Reviewer 1. Dr. XXX also re-read the paper and came to the conclusion that this manuscript is critically close conceptually to the previous PLOS publications - in fact the idea was laid out clearly and beautifully in the 2013 and 2015 PLOS papers. The present manuscript is an implementation of this idea, showing that other biophysically realistic models used to examine the spike sharpness issue show the mechanism that was suggested in the 2013 and 2015 PLOS papers.

We regret that this did not become fully clear before the third revision, and we understand your disappointment with the final outcome.

However, we agree that the findings of the paper are not significant enough for PLOS Computational Biology, and we will not be reconsidering the paper. We are sorry not to be more encouraging, but we hope that you can understand the reasons for this decision.

My new year resolution: to help move science to the post-journal world

I wish I could make the world a better place. I would like to prevent climate change and wars, but that is not so easy on a personal level. What I can more modestly try to do as a scientist is to make the scientific world a better place. We have all heard the phrase “publish or perish”. We all complain that careers are made by publishing in “high-impact” journals that favor storytelling and grand claims, and generally select papers arbitrarily (they do not even predict the impact of the papers they publish); a trend that has grown increasingly strong and has a very negative impact on how research is done, including serious ethical problems. But what do we do concretely about it? For most of us, including myself, not much. We keep on submitting to those journals, and we say we have no choice because that is how we are evaluated (for positions or grants). But who evaluates us? Surely there are some political aspects to it (mostly for funding), but the truth is, we are evaluated by peers. In brief, we are inflicting this madness on ourselves.

So: let us stop complaining and try to change the situation. I have previously laid out a vision of what the academic publishing system could look like without journals (by the way, this is not an original thought, fortunately). How do we make it happen?

Where we are now

We should be hopeful, because many good things are happening:

  • Preprint servers are getting more and more attention. In biology, a number of journals now accept direct submissions from bioRxiv, including all the PLOS journals, PNAS, eLife, and the Journal of Neuroscience. This ought to boost submissions of preprints.
  • A few journals have started publishing the reviews along with the accepted paper, for example eLife, eNeuro and Nature Communications.
  • More generally, open access to both paper and data is getting more and more common and enforced.
  • A new journal, F1000 Research, now practices post-publication review. The paper is indexed in PubMed once two reviewers have approved it.
  • Very significantly, the Wellcome Trust has opened a journal for its grantees, Wellcome Open Research, based on post-publication review (in partnership with F1000), with this statement: “The expectation is that this, and other similar funder platforms that are expected to emerge, will ultimately combine into one central platform that ensures that assessment can only be done at the article level”.
  • Finally: PubPeer, started just a few years ago. A simple idea: let anyone comment on any paper, anonymously or not, and let the authors know and respond. You should install their browser plugin. This is an individual initiative, but it has already made a big impact, in particular by showing that the “big journals” are no better than the others at preventing flaws or fraud. It also addresses the concern that open reviews would be too nice: anyone who finds serious flaws can report them anonymously, and the authors will have to consider them. PubMed Commons is similar, but with signed comments.

What we can do now on a personal level

  1. Put every single paper you write on a “preprint” server before you submit to a journal.
  2. Put all your data online, see eg OpenAIRE.
  3. Remove journal names from the publication list on your website. People who care about them will find out anyway.
  4. Start a literature search routine that does not involve looking at tables of contents; there are a few ideas in this Twitter thread; you could also set up author alerts on Google Scholar.
  5. Write comments on PubPeer, including on preprints.
  6. Send your papers to a journal with open post-publication review. I know this one is difficult, because the community still cares about impact factors. But at least you can favor journals with public reviews (eg eLife or Nature Communications; I would prefer the former as it is non-profit). Instead of sending your papers to Frontiers, send them to F1000 Research, or at least eNeuro.

At the local community level, we can advocate for post-publication review. For example, the INCF has opened a channel on F1000 Research. Maybe we could have a computational neuroscience channel there, sponsored by the OCNS. It is too bad that F1000 Research is for-profit rather than institutional, but currently I do not know of other options.

What we can do on a more global scale

Open post-publication review potentially addresses the issue of recognition, but it does not address the issue of visibility. One concern I have about submitting to F1000 Research (for example) is that my intended readership will not know about it. There are so many papers published each year that one does not even have time to read all their titles. This is one role journals have fulfilled: selecting papers worthy of interest for a given community. But since we no longer need journals to publish, editorial selection and publication need not be coupled anymore. So here is my proposal. We make an independent website which lets any scientist, or possibly any group of scientists, be their own journal; that is, make a selection of papers they find interesting (including preprints). We provide a number of tools to make this as simple as possible: links to PubPeer and PubMed Commons, searching and browsing, alerts to authors whose work is selected, email alerts and RSS feeds, etc. Papers are preferentially linked to the preprint if it exists, so as to completely bypass the journals. We could also let authors suggest their own paper for editorial selection. Basically, we provide all the services a journal typically provides. This will become increasingly easy as public open reviews become more widespread. These new “journals” could be run by an individual scientist, a lab (eg linked to a journal club), or possibly a scientific society or group of scientists. Let us call any of these an “editor”. I would be happy, for example, to follow the selections of a few authors I respect, and that would probably be more valuable to me than the selection made by any journal, of which very few papers typically catch my attention in a given table of contents.
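To make this more concrete, here is a minimal sketch (in Python) of the kind of data model such a site could be built around. Everything in it is hypothetical and purely illustrative; none of the names refer to an existing service or API.

```python
# Purely illustrative sketch of an "editor channel": a scientist, lab or
# society curating a selection of papers, each linked to its preprint and
# to public comments. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Paper:
    title: str
    authors: List[str]
    doi: str                  # canonical identifier
    preprint_url: str = ""    # preferred link, bypassing the journal version
    pubpeer_url: str = ""     # link to public comments, if any

@dataclass
class Channel:
    """An 'editor': any scientist or group acting as their own journal."""
    name: str
    editors: List[str]
    selection: List[Paper] = field(default_factory=list)

    def select(self, paper: Paper, note: str = "") -> None:
        # In a real system this would also notify the authors and update
        # the channel's email alerts and RSS feed.
        self.selection.append(paper)
        print(f"[{self.name}] selected: {paper.title} ({note})")

# Usage: a lab journal club acting as its own 'journal'.
club = Channel(name="Spike Initiation Journal Club", editors=["R. Brette"])
club.select(Paper(title="Some preprint on spike onset",
                  authors=["A. Author"],
                  doi="10.1101/xxxxxx",
                  preprint_url="https://www.biorxiv.org/content/10.1101/xxxxxx"),
            note="clear analysis, relevant to our readers")
```

The point of the sketch is only that the “editor” function, curating papers that already live on open archives and linking them to public reviews, requires very little machinery once publication itself is decoupled from selection.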

I am hoping that it goes as follows:

  1. People start using these individual journals, because it provides relevant information to them.
  2. As a result, papers in less well-known journals and preprints start getting more attention, and more citations.
  3. People take the habit of putting their papers on preprint servers because they get immediate attention.
  4. Editors progressively stop selecting papers published in journals because they have already selected them when they were preprints.
  5. As editors are also committee members, journal names start to matter less in evaluating research.
  6. Traditional journals disappear; instead, we have direct publication (formerly known as preprints) + open public reviews, both anonymous and signed.

How to get it started?

One simple idea to get it started is to make automatic channels for the existing conventional journals. For each journal, we list the table of contents, linked to preprint versions instead of the journal website, and to PubPeer, etc. If it is convenient, people might start to use it, especially if it gives free access to the papers (legally, since we would use preprints). Then, to get people to use the non-conventional channels, we provide suggestions based on content similarity (ie “you may also like...”). A rough sketch of such an automatic channel is given below.
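As an illustration only, here is one way such an automatic channel could pull a journal's recent table of contents, assuming the public Crossref REST API (api.crossref.org). The ISSN is just an example, and the actual work, matching each article to a preprint version and a PubPeer thread, is left as a placeholder comment.

```python
# Rough sketch of an 'automatic channel': list a journal's latest articles
# via the Crossref REST API. Error handling and preprint matching omitted.
import requests

def journal_channel(issn: str, n: int = 20):
    url = f"https://api.crossref.org/journals/{issn}/works"
    resp = requests.get(url, params={"rows": n, "sort": "published", "order": "desc"})
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title = (item.get("title") or ["(untitled)"])[0]
        doi = item["DOI"]
        # A real implementation would look up a preprint version here
        # (eg by matching title/authors against bioRxiv) and link to it
        # instead of the journal website, plus a PubPeer link for comments.
        print(f"{title}\n  https://doi.org/{doi}\n")

journal_channel("1553-7358")  # example ISSN (PLOS Computational Biology, online)
```

Nothing here is specific to any particular journal; the same function run over a list of ISSNs would give a set of automatic channels to seed the site.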

How about this resolution?

A vision of the post-journal world

In 2016, the academic publication system worked mostly as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is submitted to a journal, chosen for its prestige.
  3. The editor, possibly a professional editor (ie not a professional scientist), judges that the paper is not worthy of review: go back to (2).
  4. The editor sends the paper to 2-3 reviewers, asking them to review the paper quickly (the more prestigious, the quicker). Reviewers are chosen based on the editor’s expertise and network.
  5. Reviewers have too many reviews to write already: go back to (4).
  6. The reviewers (or one reviewer) find the paper uninteresting, or bad, or they “don’t believe it”: go back to (2).
  7. The reviews are mixed: a third reviewer is asked; go back to (4).
  8. The reviewers find the paper interesting enough but have requests for the authors: they should entirely rewrite their paper; they should do new experiments; they should cite the reviewers; they should remove an uninteresting figure; they should add some discussion points.
  9. After some work, the authors submit a revised paper.
  10. Actually, reviewers didn’t think the paper was that interesting anyway: go back to (2).
  11. The reviewers have more requests: go back to (9).
  12. The paper is accepted: congratulations!
  13. The format of references does not follow the journal’s style: please rewrite the reference list.
  14. The author’s paper now belongs to Elsevier/Springer Nature/etc. It will be sold by the publisher to academic institutions, with a margin of 30-40%.
  15. (recent alternative option) The authors pay publication fees of about $2000. The paper is now freely accessible.
  16. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a letter to the editor.
  17. After careful consideration, the letter is rejected.
  18. Another reader, not an expert in the field, reads the paper in a prestigious journal and believes it is highly regarded in the community, until she talks with colleagues and finds out that the paper is heavily criticized.
  19. The prestigious paper is covered in the mainstream media. Experts rant about the journalists.
  20. The first author applies for a position. The committee examines CVs of all candidates and selects those with publications in prestigious journals. The senior author sends an excellent letter of recommendation. The committee values it if the senior author is famous.
  21. The senior author applies for a grant. The committee selects candidates with a good “track record”, ie prestigious publications.

This publication model is wasteful, full of perverse incentives, opaque, and misleading. What I want to do here is not one more critique of this system, but rather give a vision of what the publication system could be and, in my opinion, will be, perhaps within a decade. This is a vision of a publication system centered on individual articles, rather than on journals. It is made possible today by information technology (aka “the internet”). What stands in its way is, in a word: conservatism. It works as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is uploaded to an archive site, together with the data. All is freely accessible to anyone.
  3. The authors invite an unlimited number of professional scientists (say, authors of at least one paper) to write a formal, signed review. The reviews are public.
  4. Any other scientist can contribute a signed or anonymous review or comment, including comments on the public reviews (aka “scientific debate”).
  5. Authors respond to the reviews and may submit a revised version.
  6. Authors decide when to stop the revision process; reviewers express their satisfaction, or not.
  7. Any scientist can recommend the paper if she finds it worth reading. Tools are made available to search for papers, possibly using the number of recommendations, possibly weighted by scientist reputation (all data are freely accessible and third parties provide searching tools; a toy ranking sketch follows this list).
  8. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a public critique, signed or anonymous, after reexamining the paper’s raw data. The authors respond publicly.
  9. Another reader, not an expert of the field, reads the paper. She notices that many knowledgeable scientists disagree with the conclusions and can read the detailed arguments and the authors’ response.
  10. The first author applies for a position. For every publication, the committee reads the reaction of the community (formal reviews, comments, recommendations) and uses it to evaluate the value of the work along all the dimensions addressed in the reactions (importance, originality, technicality, etc). The committee also asks the candidate what her main contributions are in her view and why. The candidate can point to published reactions to her work that support her claims.
  11. The senior author applies for a grant or a promotion. The committee reads the reaction of the community to the candidate’s previous work. They might also look at review invitations and reviewing activity.

In the post-journal world, a paper is submitted a single time (but of course it can be revised), and in contrast with the journal model, the reaction of the community is known. As a consequence, quality stamps (journal names) are obsolete. No review is thrown away, so scientists save time on both submitting and reviewing. Scientists are not incentivized to make grand claims. In addition, the system saves several billion dollars each year (eg US libraries spend $2 billion each year on subscriptions), not counting the time saved on wasted reviews and submissions.

All of this is clearly possible with today’s technology. The next question I will try to address is: how to move quickly to the post-journal world?