A vision of the post-journal world

In 2016, the academic publication system worked mostly as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is submitted to a journal, chosen for its prestige.
  3. The editor, possibly a professional editor (ie not a professional scientist), judges that the paper is not worthy of review: go back to (2).
  4. The editor sends the paper to 2-3 reviewers, asking them to review the paper quickly (the more prestigious the journal, the quicker). Reviewers are chosen based on the editor’s expertise and network.
  5. Reviewers have too many reviews to write already: go back to (4).
  6. The reviewers (or one reviewer) find the paper uninteresting, or bad, or they “don’t believe it”: go back to (2).
  7. The reviews are mixed: a third reviewer is asked; go back to (4).
  8. The reviewers find the paper interesting enough but have requests for the authors; they should entirely rewrite their paper; they should do new experiments; they should cite the reviewers; they should remove an uninteresting figure; they should add some discussion points.
  9. After some work, the authors submit a revised paper.
  10. Actually, reviewers didn’t think the paper was that interesting anyway: go back to (2).
  11. The reviewers have more requests: go back to (9).
  12. The paper is accepted: congratulations!
  13. The format of references does not follow the journal’s style: please rewrite the reference list.
  14. The authors’ paper now belongs to Elsevier/Springer Nature/etc. It will be sold by the publisher to academic institutions, with a profit margin of 30-40%.
  15. (recent alternative option) The authors pay publication fees of about $2000. The paper is now freely accessible.
  16. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a letter to the editor.
  17. After careful consideration, the letter is rejected.
  18. Another reader, not an expert in the field, reads the paper in a prestigious journal and believes it is highly regarded in the community, until she talks with colleagues and finds out that the paper is heavily criticized.
  19. The prestigious paper is covered in the mainstream media. Experts rant about the journalists.
  20. The first author applies for a position. The committee examines CVs of all candidates and selects those with publications in prestigious journals. The senior author sends an excellent letter of recommendation. The committee values it if the senior author is famous.
  21. The senior author applies for a grant. The committee selects candidates with a good “track record”, ie prestigious publications.

This publication model is wasteful, full of perverse incentives, opaque, and misleading. What I want to do here is not write one more critique of this system, but rather to give a vision of what the publication system could be and, in my opinion, will be, perhaps within a decade. This is a vision of a publication system centered on individual articles, rather than on journals. It is made possible today by information technology (aka “the internet”). What stands in its way is, in a word: conservatism. It works as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is uploaded to an archive site, together with the data. Everything is freely accessible to anyone.
  3. The authors invite an unlimited number of professional scientists (say, authors of at least one paper) to write formal, signed reviews. The reviews are public.
  4. Any other scientist can contribute a signed or anonymous review or comment, including comments on the public reviews (aka “scientific debate”).
  5. Authors respond to the reviews and may submit a revised version.
  6. Authors decide when to stop the revision process; reviewers express their satisfaction, or not.
  7. Any scientist can recommend the paper if she finds it worth reading. Tools are made available to search for papers, possibly using the number of recommendations, possibly weighted by scientist reputation (all data are freely accessible and third parties provide search tools); a minimal sketch of such a ranking is given after this list.
  8. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a public critique, signed or anonymous, after reexamining the paper’s raw data. The authors respond publicly.
  9. Another reader, not an expert of the field, reads the paper. She notices that many knowledgeable scientists disagree with the conclusions and can read the detailed arguments and the authors’ response.
  10. The first author applies for a position. For every publication, the committee reads the reaction of the community (formal reviews, comments, recommendations) and uses it to assess the value of the work along all the dimensions addressed in those reactions (importance, originality, technicality, etc). The committee also asks the candidate what, in her view, her main contributions are and why. The candidate can point to published reactions to her work that support her claims.
  11. The senior author applies for a grant or a promotion. The committee reads the reaction of the community to the candidate’s previous work. They might also look at review invitations and reviewing activity.
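
As a rough illustration of step 7, here is a minimal sketch of what a third-party search tool could look like: papers are matched by keyword and ranked by the number of recommendations, optionally weighted by recommender reputation. The data model, the names, and the reputation values are all hypothetical; real tools would of course be more elaborate.

    # Toy sketch of recommendation-based paper search (hypothetical data model).
    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        recommender: str          # identifier of the recommending scientist
        reputation: float = 1.0   # hypothetical weight, eg derived from their own record

    @dataclass
    class Paper:
        title: str
        keywords: set
        recommendations: list = field(default_factory=list)

        def score(self, weighted=True):
            """Number of recommendations, or reputation-weighted sum."""
            if weighted:
                return sum(r.reputation for r in self.recommendations)
            return float(len(self.recommendations))

    def search(papers, query, weighted=True):
        """Return papers matching any query keyword, best-scored first."""
        hits = [p for p in papers if p.keywords & set(query)]
        return sorted(hits, key=lambda p: p.score(weighted), reverse=True)

    if __name__ == "__main__":
        papers = [
            Paper("Spike timing in vivo", {"neuroscience", "electrophysiology"},
                  [Recommendation("alice", 2.0), Recommendation("bob")]),
            Paper("A new clustering method", {"statistics", "clustering"},
                  [Recommendation("carol")]),
        ]
        for paper in search(papers, {"neuroscience"}):
            print(paper.title, paper.score())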

In the post-journal world, a paper is submitted a single time (though of course it can be revised), and in contrast with the journal model, the reaction of the community is known. As a consequence, quality stamps (journal names) are obsolete. No review is thrown away, so scientists save time on both submitting and reviewing. Scientists are not incentivized to make grand claims. In addition, the system saves several billion dollars each year (eg US libraries spend $2 billion each year on subscriptions), not counting the time saved on wasted reviews and submissions.

All of this is clearly possible with today’s technology. The next question I will try to address is: how to move quickly to the post-journal world?

29 thoughts on “A vision of the post-journal world”

  1. 10. The first author applies for a position. The candidate identifies their n (n = a small number) most significant research outputs, detailing the SCIENTIFIC importance and SUBSTANCE of those contributions ("I performed the electrophysiological recordings underlying Fig. 4 and 6", not "I signed a Nature paper"). The committee READS the relevant papers and QUESTIONS the candidate about those contributions, paying particular attention to the METHODOLOGICAL and SCIENTIFIC COMPETENCE of the candidate.

    Of course, the committee can use any and all commentary as a guide for questioning. Rinse and repeat for 11.

    • Sure, but in my experience, committee members are often not experts in the specific domain of the candidate. For example, they cannot necessarily judge the originality of the contributions, or the technical difficulty. I suppose that's why journal names are used as quality stamps.

      • "committee members are often not experts in the specific domain of the candidate"

        That's obviously a problem, but the only solution is to find one or more expert(s). Otherwise we end up with the blind leading the blind or - shudder - Human Resources doing the hiring.

        • The issue is that in a hiring committee one needs to compare candidates with different specialties. So either we find experts for every candidate, and then we're faced with the issue of comparing expert reports, or we have committee members with broad expertise who cannot judge all aspects of the work in as much detail as an expert would.

  2. In addition, there are already several systems for commenting. ArXiv doesn't currently have comments, but bioRxiv does. Preprints (and all published papers) can be commented on at PubPeer (pubpeer.com), which accepts anonymous comments.

    Forget metrics. Read the paper.

    • Yes, there are commenting systems, including pubpeer, which is great. But a comment is not the same as a formal review. I think we need both, made public: invited formal reviews plus anonymous comments.

      Of course we should forget metrics. People use journals for two distinct reasons: to be aware of new papers of interest (eg one follows the new papers in her favorite journal) and to assign value. I see the recommendation system more as a way to search for information (ie, every scientist is an editor), not as a new metric to assign value. To assign value, I should of course read the paper; but if I'm a journalist or just not exactly in the field, then I'd be happy to know what experts have to say about it - again not metrics, but reading the reviews.

      • "invited formal reviews"

        You may be underestimating how difficult it is to obtain in-depth reviews. If they are not confidential they will rarely be critical.

          • Fair point. I should look at them again. I've never yet read a paper in those journals (but have already seen many interesting preprints).

        • "You may be underestimating how difficult it is to obtain in-depth reviews. If they are not confidential they will rarely be critical."

          This is absolutely true. And how about Ph.D. students?

          And I am not convinced that public and anonymous reviews on the Internet would do any good for academia and science in this era of fake news.

          • I didn't get your question about PhD students. What about them?

            "This is absolutely true" seems like a rather definitive answer without any evidence. Have a look at the journals I mentioned above who actually tried it. Also, mainstream journals like eLife, eNeuro, Nature Communications are publishing reviews. This is not "absolutely true" and in fact empirically wrong.

            Finally, post-publication review (as in eg pubpeer) is precisely what prevents "fake news" (ie flawed scientific papers) from being left in journals unchallenged.

  3. Part of the problem (correctly) identified may be solved simply by authors choosing to preferentially submit to journals that are run on a non-commercial basis (e.g. by a society or similar) with non-professional editors. In most fields, there are already many such journals to choose from (and I think professional editors are very rare in general).
    As for the alternative model proposed, I am afraid that only a small fraction of all "published" papers will attract any reviewers at all within a reasonable time, and papers that have not been reviewed or commented on by anyone (or by very few) will be impossible to evaluate (and revise!). Other problems with this model are that it is commonly not possible to make all "data" available to everyone (data are not always in the format of text and numbers, and may belong to someone else), and that if authors are allowed to revise their papers whenever they feel like it there will be no final "citable" version (or at least it will be impossible to know whether a paper is "final" or not).

    • Regarding data availability, I find your statement surprising. All journals require data to be available upon request, and authors sign a formal agreement. Asking for availability at the time of submission is a small requirement. Any kind of data can be shared nowadays. See for example OpenAIRE https://www.openaire.eu/

      • Perhaps we are from different fields, but I have not always run into this requirement. I tend to provide it anyway because, as you say, it is easy and good practice, and it is frequently recommended by the journal, but I think it is not obligatory. Looking at the JBC (http://www.jbc.org/site/misc/ifora.xhtml) just as a random example of a mainstream journal, they want certain types of data to be deposited in databases, but for instance numerical datasets of measurements would not be required for submission.

        • I was not talking about the obligation to deposit data in databases, but about the authors agreeing to send data to anyone who requests it. This seems like the bare minimum.

  4. The first author pays 100 "friends" to write favorable reviews. The system becomes rigged, possibly in a worse way than the current system.

  5. I find it useful to compare scientific papers with software. Both are fruits of creativity that are supposed to become functional bricks in a larger building. Both articles and software have bugs and need peer review. Both can be a relevant contribution to their field, or just a waste of time for the person reading/downloading them.

    In the case of articles, if 3 potential users find too many bugs or find that the article is not relevant, then the article may never become available, and all the time invested in its creation is wasted. Conversely, a buggy, uninteresting, hyped article will be accepted as a final piece of truth if the same 3 users happen to feel that way.

    In the case of software, basically anything we code can become immediately available. In software, peer reviewers are contributors: they report issues and can even fix them. It would be ridiculous and wasteful to have 3 random people curating every single piece of software submitted to GitHub, for example. It would be ridiculous to prevent code from being posted to GitHub just because 3 people decided it. Peer review for code is a never-ending task, and code is never really finished (think of the many versions of OS X, even though Apple probably has thousands of engineers working on it). Also, it doesn't matter if there are millions of pieces of code on GitHub; on the contrary, it's good! And there are many ways of knowing which pieces of code are valuable or popular: you look at the number of users, downloads, issues, and pull requests.
    Actually, issues and pull requests are very interesting: they mean that a voluntary peer reviewer found your contribution interesting, downloaded it, used it, and suggested an improvement. Or even better, found the contribution interesting enough to take the time to help improve it, and became a collaborator by submitting a pull request. That's to me the real, important peer review: collaboration.

    The software industry has deeply changed society in a very short time. I think the way coders work and collaborate is an example for us scientists. The software industry has standards for quality and for recruiting skilled people that are as strict as those of science. And it works perfectly fine without anything like our decrepit peer review system. Validation of an idea, and the prospects of a career, are not determined by publication in glamour journals, but by actual working code that is being used and constantly revised. That's the peer-review system we need. And that's likely the peer-review system Google X, Verily (Google life science), or Apple's Health division will use to judge their researchers.

    • That's an excellent analogy and I had never thought about it! Then maybe it would be worth looking more deeply at how the different challenges are addressed there: selection of scientists; searching for relevant software; credit assignment; review/collaboration etc. Are there even tools we could reuse?
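
      As a minimal sketch of "tools we could reuse" (assuming only the public GitHub REST API and the Python standard library; the repository name below is just an example), one could pull a few of the popularity signals mentioned above:

          # Fetch basic popularity signals for a public GitHub repository.
          import json
          import urllib.request

          def repo_signals(owner, repo):
              url = "https://api.github.com/repos/{}/{}".format(owner, repo)
              with urllib.request.urlopen(url) as response:
                  data = json.load(response)
              # A few of the signals mentioned in the comment above
              return {
                  "stars": data["stargazers_count"],
                  "forks": data["forks_count"],
                  "open_issues": data["open_issues_count"],
                  "watchers": data["subscribers_count"],
              }

          if __name__ == "__main__":
              print(repo_signals("numpy", "numpy"))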

  6. Pingback: Weekend reads: Pseudoscience in the literature; a world without journals; "invisible and abandoned" trials - Retraction Watch

  7. Thanks for the good read - I feel though that it is written in black and white: you only summon negative arguments about the old peer-review system, and positive views of the open system. For this new system you propose, we should address the issues/pitfalls that can emerge, for example:
    - it can become Twitter/Facebook-like, with only the big names having many followers and the rest (especially non-Western) ignored.
    - as mentioned above: more susceptible to politics, with many friends or enemies who can make/destroy your career.
    - too-easy commenting; this may lead to gut feelings being expressed, with poor argumentation.
    - Who is allowed to comment, and when is somebody an expert? How does the non-expert filter between experts, and also by level of education (i.e. 1st-year PhD, 4th-year PhD, postdoc, whatever)? There are always (beginning) scientists who claim expertise after reading/publishing one paper on a topic.
    - My main worry is the susceptibility to group thinking/behavior. Seeing more opinions does not mean that those opinions are right, although people tend to follow the crowd. Scientists are more susceptible to this than they are likely to admit, which is in principle not a problem, but this open communication could lead to more herding behavior.
    - You will also see that some scientists enjoy bashing a given paper, like a school of piranhas that smell blood. There is a difference between being overly critical and being reasonably critical.
    In fact, I would favor a much more personal, private communication after posting a manuscript online, after which the issues raised in the communication are addressed in the paper. This would remove the public-shaming aspect when a mistake has been made (which we all make). In the online paper, one can indicate which parts are under renewed investigation. I think that in principle a paper can be under continuous investigation. For citation, the date of the given version should be included.

    • Thank you for these criticisms, anonymous commenter (just teasing).
      Two general comments first:
      1) The flaws of the current peer-review system are empirical (see for example https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/), while the criticisms of open systems are theoretical, or potential (ie they depend on the specific implementation).
      2) Most of the potential problems that you describe are in fact actual problems of the current publication system. So the question is whether it would be better or worse in an open system.

      Specifically:
      - Only the big names have many followers: that's actually the current system, due to editorial filtering and the single-blind system. In a system where everything is accessible and you can choose to follow recommendations from a larger variety of people, I would expect this to be less of a problem.
      - More susceptible to politics: I don't get this one. In the open system, you can read the reviews, so you know whether they are fair or not. In the closed system, you just know that the person published in Nature. An opaque system is orders of magnitude more prone to corruption than a transparent system.
      - Too-easy commenting and poor argumentation. First, this happens in the current system; I have had many reviews like that. There is no incentive for reviewers to do a good job. Second, this could be a problem with anonymous spontaneous commenting (as in pubpeer, although I don't think argumentation is lacking there), but I don't see why it would be a problem with invited open reviews. Have a look at the reviews in F1000Research maybe.
      - Level of expertise. Well, frankly, that's again a problem with the current system. You don't know who reviews you. For most of my papers, I have had at least one reviewer who was not competent (possibly a bigger problem in interdisciplinary fields). There are several reasons for that: it's sometimes difficult to find someone motivated to review, so you get a non-expert; or the PI gives the review to his/her students. The problem is you never know, and the reader never knows. With an open system, it's easier to address this problem. For example, we could be transparent about the level of expertise; other commenters could also rank the comment (which is actually what is done on pubpeer, and on many other online services like Yahoo Answers, etc). On PubMed Commons for example, you have to be an author of a paper indexed in PubMed.
      - Group thinking. I'm not sure that reducing the sample size (to n = 2-3 reviewers) solves this problem. Currently, if you publish in Nature everyone seems to think you are an amazing scientist. That's not better. Anyway, the point of peer review should not be to label papers as good or bad, but to have a scientific discussion. This is exactly what a closed system does not allow.
      - Bashing. Again, not specific to open vs. closed systems. The difference is that in the open system, bad reviews can be tagged as such. In fact, pubpeer for example is moderated.

      This being said, I agree that these are valid concerns (though not specific to open systems) that must be addressed. I think we should experiment rather than speculate. There's the question of signed vs. anonymous reviews, and of invited vs. spontaneous reviews. At this point, I would favor a double system, with 2-3 invited signed reviews and spontaneous anonymous reviews. Signing a review makes the reviewer accountable, so that should address some of the problems you mentioned. Spontaneous anonymous reviews ensure that issues with a paper can be pointed out without the risk of making enemies (in particular if the authors are well-known).

      We should take a broader perspective and have a look at what has been done on the web in other contexts to solve these same issues. Look at Stack Exchange for example. http://stackexchange.com/
      Comments can be ranked by usefulness; reviewers can be evaluated too. There are technical solutions for some of the issues you mention.

      Finally, let us keep in mind that the point is not to build an ideal system. The point is to build a better system. Open scientific discussion is what we should be aiming for, instead of prestige distribution.

  8. How about this:

    1) put paper onto bioRxiv
    2) transfer it to Axios (an independent review service) for a round of standard peer review
    3) revise the paper
    4) send it back to Axios
    5) the Axios editor writes a short decision on the quality of the revisions
    6) put the revision and the 'response to reviewers' back onto bioRxiv
    7) call it done.

    The Axios process also includes reviewer assessments of which journal the paper could appear in, which could be a way to filter the papers for scope and novelty.

    • I didn't know about this service. Interesting. I've read, though, that about half of the papers are reviewed again by the journals (did I get that right?).

      • That's right - if the paper has been heavily revised after our round of review it makes sense for the journal to check whether the new version has fixed the problems.

        When papers are submitted directly to a journal, about half get accepted after 1 round of review, and half after 2.

  9. Pingback: Free the journals | Romain Brette

  10. Pingback: The great misunderstanding about peer review and the nature of scientific facts | Romain Brette
