You are not a good scientist if you use impact factors for evaluating research

A lot has been written about impact factors and why they are a poor indicator of article quality (just google it). Even for the one thing that the impact factor is meant to measure, i.e., future citations, it performs badly. There are now many empirical studies on this. Sadly, in 2016, scientists in hiring or promotion committees still value impact factors very highly, especially in the life sciences. Yes, in a domain of science that is meant to be highly empirical, scientists still prefer to look away from compelling evidence that impact factors should not be used for evaluating research. Recently, in a PhD thesis committee, we were discussing the grade and one member, an experimental biologist, raised the issue that the candidate had not published in the highest-ranked journals. We objected that we were there precisely to evaluate the manuscript itself as a whole, not some elements of a CV. He admitted later that he had not read the thesis.

I could simply cite the overwhelming evidence. But I want to make a simpler statistical point. Many things are wrong with using impact factors, but let us just consider the one thing they are meant for, i.e., predicting the number of future citations of papers. We know that for any journal, the distribution of citations is very broad and highly skewed. According to Seglen (1997), “articles in the most cited half of articles in a journal are cited 10 times as often as the least cited half”. In practice, if you are given the name of the journal in which a paper has been published, you still know very little about its future number of citations. From a purely statistical point of view, the journal name is just not very informative.
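To make this concrete, here is a minimal simulation sketch (my own illustration, not Seglen's data): two hypothetical journals whose papers draw citations from lognormal distributions, with the spread chosen only so that the top-half/bottom-half ratio is roughly the tenfold difference quoted above, and the "big" journal having twice the typical citation rate of the "small" one.

```python
# Illustrative sketch only: the lognormal parameters are assumptions, not fitted to real journals.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.3  # spread chosen so the top half gathers roughly 10x the citations of the bottom half

big = rng.lognormal(mean=np.log(20), sigma=sigma, size=100_000)    # "big" journal, ~2x typical citations
small = rng.lognormal(mean=np.log(10), sigma=sigma, size=100_000)  # "small" journal

# Within-journal skewness (compare with Seglen's ~10x figure)
top = big[big > np.median(big)].mean()
bottom = big[big <= np.median(big)].mean()
print("top half / bottom half citation ratio:", round(top / bottom, 1))

# How often does a random small-journal paper out-cite a random big-journal paper?
print("P(small-journal paper cited more):", round((small > big).mean(), 2))
```

In this toy model, a randomly chosen paper from the lower-impact journal out-cites a randomly chosen paper from the higher-impact journal roughly a third of the time; that is the sense in which the journal name carries little information about an individual paper.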

Examples are sometimes more convincing than hard facts. So here, from the paper I just cited, is the number of citations vs. journal impact factor for all publications of 4 scientists:

[Figure: number of citations vs. journal impact factor for the publications of four scientists, from Seglen (1997)]

Now look at my own publication statistics. To be more rigorous, I should calculate citations per year, but the point is obvious enough. My top 3 cited papers were published in: J. Neurophys. (IF 2.8); J. Comp. Neuro. (IF 1.8); Frontiers in Neuroinformatics (IF 1.3). This third paper (on the Brian simulator) was published in 2008, and the same year I had a paper in Neuron, which has gathered about 4 times fewer citations. And this is in spite of the fact that papers published in big journals are obviously much more visible (i.e., there is an expected boost in citations that is unrelated to article quality).

Is this really surprising? In a big journal, just like in a small journal, decisions are based on n = 2 or 3 reviewers selected by one editor, i.e., a biased sample of size 2 or 3; and that comes after an editorial selection based on n = 1 editor, who in many cases is not a professional scientist.

It is 2016, and life scientists have recently been warned repeatedly that good statistics require large unbiased samples; yet impact factors, which carry almost no information about individual papers, are still broadly used to distribute grants and make careers. So yes, using impact factors as a proxy to evaluate the quality of papers or of their authors is a display of poor scientific practice.

To me, the problem with this is not so much that bad decisions are being made, but that it has a huge influence on the way science is done. Scientists know that they are being evaluated in this way, and this shapes their research (especially for students and postdocs) so that it meets the editorial biases of the big journals. In 1952, Hodgkin and Huxley published 120 pages in the Journal of Physiology, basically defining the field of electrophysiology. Who would take the risk of doing this today at the age of 35, instead of writing a career-making 4-page Nature paper?

This week's paper selection (7 December 2016)

 

In praise of post-publication review

We have all suffered from the inadequacy of the current publication system. As readers, we often cannot freely access papers; when we can, papers often become available a very long time after having been written, sometimes years. As authors, we spend a large amount of our time submitting the same paper to many journals, reformatting it, and addressing criticisms that are sometimes contradictory, only to be rejected for subjective reasons (“readership interest”). As reviewers, multiple submissions mean that the same paper gets evaluated several times, only for most of the reports to end up in the trash. Our reviewing work is free and we get no credit for it. As hiring committee members, we have very little information with which to judge CVs, because we do not have access to expert reviews of the papers, which are kept confidential by the journals. As a result, we use journal names as a proxy for quality. As candidates, we suffer from the randomness of editorial selection and from the fact that committee members use journal names as a proxy for quality. As taxpayers, we pay for an extremely inefficient, redundant and expensive publication system, largely to for-profit journals that make huge margins.

There is a solution to these problems; it is called post-publication review. Journals belong to the past. We do not need journals to publish anymore. To publish means to make your work public. You publish when you decide to put your work online. Nowadays we call this a “preprint”. The term itself is awfully misleading: is printing a paper really the goal of the scientific publication system? When was the last time you read a paper in an actual printed journal? If I put my paper on arXiv, bioRxiv or another site, then it is published, because anyone can publicly access it.

An objection is: scientific papers must be peer-reviewed. But why? Because we want to know what the community, in particular the most expert scientists, thinks of the paper. Does the current system allow that? No, because in almost all journals, reviews and author responses are confidential. Peer review can also be useful to correct errors. Great, but why should this correction process stop at publication time? If you want to let other scientists make useful suggestions, it makes much more sense to publish your paper so that anyone can do this. If the whole process is public, then we do not need the journal name to judge the quality of a paper; we can directly read what the community thinks and objects to, whether there might be alternative interpretations, whether the authors have counter-arguments, all useful things that are currently hidden by an opaque review process. Finally, journal names are used as a quality label by hiring committees and grant agencies. But that is bad practice; we know that journal impact factor and paper influence (e.g., future citations) are poorly correlated. Committees use it because they are often not in the exact field of the candidate and therefore have little information to judge the quality of the papers. If reviews were public, and ideally signed, each of the candidate’s papers would come with several detailed recommendations and criticisms, much more informative than the journal name.

The new publication process would be: publish online, then ask for public reviews, answer and revise. How would you get to know the important papers in your field if journals disappeared? Well, there are many ways: keyword- or author-based email alerts, the most positively reviewed papers, papers liked or reviewed by peers you respect, blogs that make a monthly selection of papers (essentially journals), a PageRank-style algorithm over the citation graph, etc.
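For that last item, here is a minimal sketch (my own toy example, not an existing tool) of how a PageRank-style score over a citation or recommendation graph could rank papers without any journal label:

```python
# Toy PageRank over a hypothetical citation graph; paper names and links are made up.
import numpy as np

papers = ["A", "B", "C", "D"]
cites = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}  # paper -> papers it cites

idx = {p: i for i, p in enumerate(papers)}
n = len(papers)

# Column-stochastic link matrix: M[j, i] = probability of following a link from paper i to paper j.
M = np.zeros((n, n))
for p, refs in cites.items():
    for q in refs:
        M[idx[q], idx[p]] = 1.0 / len(refs)

d = 0.85                     # damping factor, as in the original PageRank
rank = np.full(n, 1.0 / n)
for _ in range(100):         # power iteration
    rank = (1 - d) / n + d * M @ rank

print(sorted(zip(papers, rank.round(3)), key=lambda x: -x[1]))
```

In this toy graph, the paper that the others cite most (C) ends up on top, regardless of where anything was published; a real system would of course need to handle dangling nodes and much larger graphs.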

This is not a utopia: it is coming, although quite slowly, because we scientists appear to be a very conservative group of people – still happily reviewing for private journals that make huge profits from our free work, most of which is thrown away. There have been many initiatives in recent years. PLoS pioneered open access. eLife publishes reviews and responses after publication. Nature Communications has just started doing the same. PubPeer allows post-publication comments. F1000Research publishes papers and then organizes post-publication reviews. And now the Wellcome Trust is implementing the same scheme for its grantees.

The question, then, is not whether we can realistically switch to a post-publication process, but rather: what can we do to accelerate the switch?

There are a few ideas in Michael Eisen’s (founder of PLoS) “Mission Bay Manifesto on Science Publishing”. Some are more difficult to carry out than others. Here are some that anyone could easily implement:

  • Remove journal names from your online publication lists and from your presentations. Anyone interested can look them up anyway.
  • Start writing post-publication reviews, for example on PubPeer, of papers you are interested in – not just to point out flaws.
  • Post every paper on arXiv or bioRxiv before sending it to a journal.
  • Think about which journal to submit to. Whenever possible, submit to open-access non-profit journals (the PLoS journals, for example) instead of for-profit ones. As much as you can, submit to journals with public reviews, such as eLife.
  • If you are on hiring committees, try to resist the journal-name criterion.
  • Start a blog and make an annotated selection of papers you like.

Journals belong to the past. We have the opportunity now to build a much better scientific publication system for all parties involved, and we should do it.

 

This week's paper selection (30 November 2016)

Two review papers this week on autonomous systems – should be an inspiration for computational neuroscientists!

 

This week's paper selection (17 November 2016)

Some more patch clamp automation this week!

 

This week's paper selection (10 November 2016)

This week I got interested in automated patch clamp. I would like to write an open-source Python package to do that: contact me if you have relevant material or expertise! (e.g., controlling acquisition boards, video analysis, pressure control)
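Just to make the idea concrete, here is a purely hypothetical sketch of what such a package's interface could look like; none of these classes, ports or pressure values exist anywhere, they only illustrate the three components mentioned above.

```python
# Hypothetical sketch only: class names, ports and pressure values are placeholders, not an existing API.
from dataclasses import dataclass

@dataclass
class PressureController:
    """Sets the pipette pressure (e.g. through a DAQ analog output or a serial command)."""
    port: str

    def set_pressure(self, mbar: float) -> None:
        # A real implementation would talk to the hardware here.
        print(f"[{self.port}] pressure set to {mbar} mbar")

@dataclass
class Camera:
    """Grabs frames for video analysis (pipette tip and cell detection)."""
    device: int = 0

    def find_pipette_tip(self):
        # Placeholder for image processing (e.g. template matching on a frame).
        return (0.0, 0.0)

class AutoPatcher:
    """Coordinates pressure, video and manipulator moves for one patching attempt."""
    def __init__(self, pressure: PressureController, camera: Camera):
        self.pressure = pressure
        self.camera = camera

    def approach_and_seal(self) -> None:
        self.pressure.set_pressure(20)     # positive pressure while approaching the cell
        tip = self.camera.find_pipette_tip()
        print("pipette tip located at", tip)
        self.pressure.set_pressure(-15)    # light suction to form a gigaseal

AutoPatcher(PressureController(port="COM3"), Camera()).approach_and_seal()
```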

 

This week's paper selection (3 November 2016)

This week many interesting reviews from a special issue of Current Biology on evolution of nervous systems (the first 6 papers):

 

This week's paper selection (20 October 2016)