Do we need a new computational neuroscience journal?

Where should we send computational and theoretical neuroscience papers? A list of journals can be found on Jim Perlewitz’s page. First, many journals belong to one of the scientific publishing companies that make huge profits from the free work of academics (margins around 30-40%), with questionable added value. In computational neuroscience, we have:

  • Journal of Computational Neuroscience (Springer Nature)

  • Frontiers in Computational Neuroscience and Frontiers in Neuroinformatics (Springer Nature)

  • Biological Cybernetics (Springer Nature)

  • The Journal of Mathematical Neuroscience (Springer Nature)

  • Cognitive Neurodynamics (Springer Nature)

  • Neuroinformatics (Springer Nature)

  • Network: Computation in Neural Systems (Taylor & Francis)

Then we have Neural Computation, published by MIT Press, which is a non-profit. Unfortunately, papers are behind paywalls; in my case, my institution's library does not have a subscription.

Finally, there is PLoS Computational Biology, a highly respected journal in the field. However, even in this non-profit journal, open access is very expensive ($2250 per paper). I find these costs hard to justify, when any scientist can publish on archives for free (e.g. arXiv or bioRxiv). The solution to this is overlay journals: the papers themselves live on an archive site; the journal only handles peer review.

I was also recently rather disappointed by PLoS CB, after a paper got rejected 10 months after submission on the sole basis of interest. Based on the reactions I received, this is not an isolated case. In any case, PLoS CB practices editorial pre-selection, which is highly subjective. As with other prestigious journals, its prestige is precisely based on its relatively low acceptance rate. But this is a highly questionable model, because it is wasteful: it wastes the time of authors, who submit the same paper to many journals, and the time of reviewers, who write reports that are thrown away. In my opinion, it is also perverse. It gives a label of quality to individual papers and scientists based on the opinion of just a couple of scientists, sampled by one editor, and this insignificant label is then used by various committees (grants, promotions, hiring). It also pushes authors into boosting their claims to pass the editorial barrier. The solution to this is post-publication review (see also Michael Eisen’s post).

So ideally, I would like to see an overlay journal practicing post-publication review in my field. The question now is: should we create one? What would be needed is the technical infrastructure, but I am not sure it is fully available yet (as regards the post-publication review system, indexing, etc.).

Alternatively, we could use a general post-publication review journal. There is, for example, F1000Research. PeerJ also does this partially (publishing the reviews is optional). Both cost about $1000 per article; both are run by companies. But they are less expensive than alternative open access journals (PLoS ONE: $1495; eNeuro: $1950 for members, $2925 otherwise), and more open. Neither of these two journals is ideal, but they represent some progress over conventional journals.

There are two issues with choosing a general journal. First, there is the question of editors. I have no experience with these journals, so I am not sure the relevant expertise is present in their editorial boards; I have seen a couple of computational neuroscientists at PeerJ, though. The second issue is visibility. One idea could be to maintain a list of computational neuroscience papers, to which anyone could submit a reference or preprint, in addition to automatic additions (say, from relevant journals plus keyword alerts). One would simply subscribe to the list (RSS feed or email alert).
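As a rough illustration of the "automatic additions" part, here is a minimal sketch in Python. It assumes the third-party feedparser package and uses arXiv's q-bio.NC RSS feed as an example source; the feed URL and the keyword list are illustrative assumptions, not part of any existing service.

```python
# Minimal sketch of keyword-based collection of preprints from an RSS feed.
# Assumptions: the 'feedparser' package is installed, and the feed URL below
# (arXiv's q-bio.NC listing) is used purely as an example source.
import feedparser

FEED_URL = "http://export.arxiv.org/rss/q-bio.NC"   # example feed, adjust as needed
KEYWORDS = ["spiking", "synaptic plasticity", "neural network model"]  # hypothetical

def matching_entries(feed_url, keywords):
    """Return (title, link) pairs whose title or summary mentions a keyword."""
    feed = feedparser.parse(feed_url)
    hits = []
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(kw.lower() in text for kw in keywords):
            hits.append((entry.get("title", ""), entry.get("link", "")))
    return hits

if __name__ == "__main__":
    for title, link in matching_entries(FEED_URL, KEYWORDS):
        print(f"{title}\n  {link}")
```

The matching references could then be appended to a shared list with its own RSS feed or email digest, to which readers would subscribe.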

Any good ideas are welcome!

Update (6.1.2017)

Some more information and thought following comments on this text:

  • One issue for an overlay journal is getting it indexed in PubMed. This seems quite important for a neuroscience journal, but the indexing requirements seem to be pretty much contradictory with the idea of an overlay journal, since PubMed requires XML-tagged data for the bibliographic citations. But maybe this could be negotiable? (see "Optional arrangement"). Some more general information about indexing here.
  • Another issue is more technical: setting up an automatic post-publication review system.
  • Finally, a general thought: there is no reason why this should be specific to computational neuroscience. However, it could be a first step towards a more generalized system.

You are not a good scientist if you use impact factors for evaluating research

A lot has been written about impact factors and why they are a poor indicator of article quality (just google it). Even for the one thing that the impact factor is meant to measure, i.e., future citations, it performs badly. There are now many empirical studies on this. Sadly, in 2016, scientists in hiring or promotion committees still value impact factors very highly, especially in the life sciences. Yes, in a domain of science that is meant to be highly empirical, scientists still prefer to look away from compelling evidence that impact factors should not be used for evaluating research. Recently, in a PhD thesis committee, we were discussing the grade and one member, an experimental biologist, raised the issue that the candidate had not published in the highest-ranked journals. We countered that we were there precisely to evaluate the manuscript itself as a whole, not some elements of a CV. He admitted later that he had not read the thesis.

I could simply cite the overwhelming evidence. But I want to make a simpler point of statistics. Many things are wrong with using impact factors, but let us just consider the one thing they are meant for, i.e., predicting the number of future citations of papers. We know that for any journal, the distribution of citations is very broad and highly skewed. According to Seglen (1997), “articles in the most cited half of articles in a journal are cited 10 times as often as the least cited half”. In practice, if you are given the name of the journal in which a paper has been published, you still know very little about its future number of citations. From a purely statistical point of view, journal name is just not very significant.
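To make this concrete, here is a small simulation sketch in Python. The lognormal citation distributions and their parameters are my own illustrative assumptions, not fits to real data; the point is only that with broad, skewed distributions, a journal with roughly twice the mean citation rate still tells you little about any individual paper.

```python
# Sketch: with skewed citation distributions, journal identity is a weak
# predictor of an individual paper's citations. The lognormal parameters
# below are illustrative assumptions, not estimates from real data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical journals with broad, skewed citation distributions.
low_if = rng.lognormal(mean=1.5, sigma=1.0, size=n)    # lower mean citations
high_if = rng.lognormal(mean=2.2, sigma=1.0, size=n)   # about twice the mean

print("mean citations, low-IF journal: ", low_if.mean())
print("mean citations, high-IF journal:", high_if.mean())

# How often does a random paper from the low-IF journal outperform
# a random paper from the high-IF journal?
print("P(low-IF paper beats high-IF paper):", (low_if > high_if).mean())
```

With these made-up parameters, roughly a third of the papers from the lower-impact journal out-cite a randomly chosen paper from the journal with twice the impact factor.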

Examples are sometimes more convincing than hard facts. So here, from the paper I just cited, is the number of citations vs. impact factor for all publications of four scientists:

[Figure: number of citations vs. journal impact factor for all publications of four scientists (from Seglen, 1997)]

Now look at my own publication statistics. To be more rigorous, I should calculate citations per year, but the point is obvious enough. My top 3 cited papers were published in: J. Neurophysiol. (IF 2.8), J. Comput. Neurosci. (IF 1.8) and Frontiers in Neuroinformatics (IF 1.3). This third paper (on the Brian simulator) was published in 2008, and the same year I had a paper in Neuron, which has gathered about 4 times fewer citations. And this is in spite of the fact that papers published in big journals are obviously much more visible (i.e., there is an expected boost in citations that is unrelated to article quality).

Is this really surprising? In a big journal, just like in a small journal, decisions are based on n = 2 or 3 reviewers selected by one editor, i.e. a biased sample of size 2 or 3, preceded by an editorial selection based on n = 1 editor, who in many cases is not a professional scientist.
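To get a feel for how noisy such a small sample is, here is a simulation sketch in Python. The model (a paper's "true quality" plus independent reviewer noise, acceptance when the average of n = 2 scores exceeds a selective threshold) and all the numbers are assumptions chosen purely for illustration.

```python
# Sketch: how reproducible are accept/reject decisions based on n = 2 reviewers?
# True quality and reviewer noise are unit normal; all numbers are illustrative
# assumptions, not a fitted model of real peer review.
import numpy as np

rng = np.random.default_rng(1)
n_papers, n_reviewers = 100_000, 2
quality = rng.normal(size=n_papers)            # hidden "true quality" of each paper

def panel_average(n_reviewers):
    """Average score given by an independent panel of noisy reviewers."""
    noise = rng.normal(size=(n_papers, n_reviewers))
    return (quality[:, None] + noise).mean(axis=1)

threshold = 1.0                                 # accepts roughly the top 20%
accepted_a = panel_average(n_reviewers) > threshold
accepted_b = panel_average(n_reviewers) > threshold   # a second, independent panel

overlap = (accepted_a & accepted_b).sum() / accepted_a.sum()
print("acceptance rate:", accepted_a.mean())
print("fraction of panel A's accepted papers also accepted by panel B:", overlap)
```

Under these assumptions, a sizeable fraction of the papers accepted by one panel would be rejected by an equally competent, independent panel: that is the kind of randomness a sample of two or three reviewers implies.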

It’s 2016, and life scientists have recently been warned repeatedly that good statistics require large unbiased samples; yet impact factors, totally insignificant pieces of information, are still broadly used to distribute grants and make careers. So yes, using impact factors as a proxy to evaluate the quality of papers or of their authors is a display of poor scientific practice.

To me the problem with this is not so much that bad decisions are being made, but that it has a huge influence on the way science is done. Scientists know that they are being evaluated in this way, and this influences their research (especially for students and postdocs) so that it meets the editorial biases of the big journals. In 1952, Hodgkin and Huxley published 120 pages in the Journal of Physiology, basically defining the field of electrophysiology. Who would take the risk of doing this today at the age of 35, instead of writing a career-making 4-page Nature paper?

In praise of post-publication review

We have all suffered from the inadequacy of the current publication system. As readers, we often cannot freely access papers; when we can, papers often become available a very long time after having been written, sometimes years. As authors, we spend a large amount of our time submitting the same paper to many journals, reformatting it, addressing criticisms that are sometimes contradictory, only to be rejected for subjective reasons (“readership interest”). As reviewers, multiple submissions mean that the same paper gets evaluated multiple times, only for most of the reports to end up in the trash. Our reviewing work is free and we do not get any credit for it. As hiring committee members, we have very little information to judge CVs, because we do not have access to the expert reviews of the papers, which are kept confidential by the journals. As a result, we use journal names as a proxy for quality. As candidates, we suffer from the randomness of editorial selection and from the fact that committee members use journal names as a proxy for quality. As taxpayers, we pay for an extremely inefficient, redundant and expensive publication system, largely to for-profit journals that make huge margins.

There is a solution to these problems; it’s called post-publication review. Journals belong to the past. We do not need journals to publish anymore. To publish means to make your work public. You publish when you decide to put your work online. Nowadays we call this a “pre-print”. The term itself is awfully misleading: is printing a paper really the goal of the scientific publication system? When was the last time you read a paper in an actual printed journal? If I put my paper on arXiv, bioRxiv or another site, then it is published, because anyone can publicly access it.

An objection is: scientific papers must be peer-reviewed. But why? Because we want to know what the community, in particular the most expert scientists, think of the paper. Does the current system allow that? No, because in almost all journals, reviews and author responses are confidential. Peer review can also be useful to correct errors. Great, but why should this correction process stop at publication time? If you want to let other scientists make useful suggestions, it makes so much more sense to publish your paper so that anyone can do this. If the whole process is public, then we do not need the journal name to judge the quality of a paper; we can directly read what the community thinks, what it objects to, whether there might be alternative interpretations, whether the authors have counter-arguments, all useful things that are currently hidden by an opaque review process. Finally, journal names are used as a quality label by hiring committees and grant agencies. But that’s bad; we know that journal impact factor and paper influence (e.g. future citations) are poorly correlated. Committees use it because they are often not in the exact field of the candidate and therefore have little information to judge the quality of papers. If reviews were public, and ideally signed, each candidate’s papers would come with several detailed recommendations and criticisms, much more informative than the journal name.

The new publication process would be: publish online, then ask for public reviews, answer and revise. How would you get to know the important papers in your field if journals disappeared? Well, there are many ways: keyword- or author-based email alerts, the most positively reviewed papers, papers liked or reviewed by peers you respect, blogs that make a monthly selection of papers (= journals), PageRank-style algorithms, etc.

This is not a utopia: it is coming, although quite slowly, because we scientists appear to be a very conservative group of people, still happily reviewing for private journals that make huge profits from our free work, most of which is thrown away. There have been many initiatives in the last few years. PLoS has pioneered open access. eLife publishes reviews and responses after publication. Nature Communications just started doing the same thing. PubPeer allows post-publication comments. F1000Research publishes papers and then does post-publication review. And now the Wellcome Trust is implementing this same scheme for its grantees.

The question, then, is not whether we can realistically switch to a post-publication process, but rather: what can we do to accelerate the switch?

There are a few ideas in the “Mission Bay Manifesto on Science Publishing” by Michael Eisen (founder of PLoS). Some are more difficult than others. Here are some ideas that anyone could easily implement:

  • Remove journal names from your online publication lists, and from your presentations. Anyone interested can find out anyway.
  • Start writing post-publication reviews, for example on PubPeer, but of papers you are interested in, not just to point out flaws.
  • Post every paper on arXiv or bioRxiv before sending it to a journal.
  • Think about which journal to submit to. Whenever possible, submit to open-access non-profit journals instead of for-profit journals; there are, for example, the PLoS journals. As much as you can, submit to journals with public reviews, for example eLife.
  • If you are on a hiring committee, try to resist the journal-name criterion.
  • Start a blog and make an annotated selection of papers you like.

Journals belong to the past. We have the opportunity now to build a much better scientific publication system for all parties involved, and we should do it.

 

A few ideas to improve the grant system

In France, the success rate for national grants (with the ANR) has gone below 10%. This is not surprising, since the budget has dropped in recent years. Unfortunately, there is not much else available for labs (apart from European grants, which have similar success rates). The situation is also quite difficult at the American NIH (although not as bad). With success rates so low, many scientists complain about the time wasted on writing grants that eventually seem to be more or less arbitrarily selected. This is not just an expression of the frustration of losers: once again, a study has come out showing that peer review of grants is no better than chance when applied to the top grants. The public should be aware of this: writing a grant application takes a significant amount of time (not to mention reviewers' time), and 90% of this taxpayer-funded time is completely wasted, in a process that is no better than a lottery. Does this really make sense?

How did we arrive at such an absurd situation? There are different factors. The first obvious error I see is to fund scientific projects almost entirely through a competitive grant system. Only the best science should be funded, we hear. Makes sense. Really? Given that both the basic infrastructure (administration, space) and the salaries of permanent scientists are paid for anyway, what this advertising slogan means concretely is that only a small proportion of paid scientists will be given the means to work normally. How does that make sense? So the first obvious thing to do is to ensure that there is a baseline that allows labs to work normally (e.g. pay for conferences, publications, basic equipment). Why would you permanently hire a scientist and then not give her the means to work? Absurd, but that's what the CNRS (the main French research organization) currently does. INSERM (the French medical research organization) gives a start-up package to junior scientists; it only covers two years, but it is certainly an improvement.

It is only quite recently that France has had a competitive grant system. Many people didn't (and still don't) like it, partly for the obvious reasons I just discussed. Personally, I don't think it's a bad thing in principle. It has one virtue, which is to shift power from lab directors to individual scientists, who can be more independent (that is, if they get a grant). The problem is not so much that it exists, but that it has entirely replaced any other means of funding.

But a competitive grant system doesn't necessarily need to be based on projects. Great if you have a particular project that requires specific expensive equipment, for example, or a project that requires simultaneous funding of several partners. But in most cases I know, scientists use the grant system to fund the daily work of their lab and pitch it as a “project”, sometimes assembling arbitrary “partners” just because the system requires it. Everyone knows that the best way to get a grant is to already have the results but not have published them yet (the so-called “preliminary results” that grant reviewers like so much). Then you get the grant to fund the “preliminary results” of the next one. It is not always like that; sometimes you really do have a good project idea, although in most cases you will use the budget partly for things that are not in the grant anyway, since it's research, not an engineering project. This happens because the funding system is built this way. Scientists have no choice but to be cynical, which also results in a huge waste of resources. For programmatic funding (i.e. funding political priorities, say research on energy), a project-based grant system might make sense. But for basic research, all we ask of the funding system is to fund individual scientists or labs because it is believed that they will do good science. For experienced scientists, the best way to predict whether they will do good science is probably to just look at what they did before. Anyway, that's what grant reviewers partly do when they judge CVs and “preliminary results”, with the added bias of the story-telling skills that the project system emphasizes. So there should be a competitive system that allocates grants based on previous work. Such a system already exists: it is used for hiring and promotion; it also exists at the level of teams and labs, except that it has little impact on budgets. Let us just expand it and combine it with some long-term budget, and a lot of resources will be saved.

One possible objection is that it is difficult to estimate budgets if there is no specific project. True, but in reality this is exactly the same problem for project-based grants. Reviewers, who are often international scientists, cannot and do not judge budgets. In practice, the institution generally sets orders of magnitude for budgets, and applicants just ask for the maximum budget (there are of course exceptions).

A second objection is that it doesn't work for junior scientists, since they don't have much to show in their CV. I disagree, because they are hired through a highly competitive process, which is based not so much on projects as on their achievements. So by design, the junior scientist did go through a highly selective process and should be granted a budget. This is simply called a start-up package, and it should be generalized. There is a valid concern, however, which is how to compare scientists at very different stages of their careers. But this doesn't sound like an impossible difficulty; one could use, for example, a system similar to the ERC's (starting/consolidator/advanced).

All this being said, I believe there is still a need for project-based grants, but with a smaller perimeter. Projects make sense when they require setting up a collaboration, or buying specific equipment, or within programmatic constraints, not so much in general to fund a single team or individual. I would like to make a final suggestion. If the project-based system still has a very low success rate, so low that selection is random, then there is a simple way to increase it and waste fewer resources. One suggestion made in the above-mentioned paper is to use a lottery. Indeed, if the process is no better than chance, we might as well dispense with the review process and save some time for reviewers. This is an improvement, but it still wastes the time of applicants, and I don't think it would be received very well. There is another way that also saves the time of applicants: let them apply only every other year (say, if you were born in an even year then you can apply only in even years). The number of funded grants would of course stay the same if the budget is the same, but the success rate would automatically increase. Maybe not by a factor of two, but probably not far from it. Note that this is not the same as the individual strategy of applying only every two years: when selection is random, it is in your best interest to apply every year, because each application is independent. There is not even any reason to modify your application between successive years. Whereas if the success rate is forcibly increased by blocking part of the applications, applicants end up with the same expected number of grants while writing half as many applications. In addition, it makes selection less random. This system is clearly better than a lottery, since you know in advance that you will be blocked and don't have to write a grant. You can also prepare better for the next round.
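The arithmetic behind this claim can be spelled out with a toy calculation (a sketch with made-up numbers, assuming a fixed budget and a pool of scientists who would otherwise all apply every year):

```python
# Toy calculation for the alternate-years proposal. All numbers are made up.
grants_per_year = 100
applicants = 1000                    # scientists who would apply every year if allowed

# Current system: everyone applies every year.
success_rate = grants_per_year / applicants            # 0.10 per application
grants_per_person_2y = 2 * success_rate                # 0.20 expected grants, 2 applications written

# Alternating system: only half of the pool may apply in any given year.
success_rate_alt = grants_per_year / (applicants / 2)  # 0.20 per application
grants_per_person_2y_alt = 1 * success_rate_alt        # 0.20 expected grants, 1 application written

print(success_rate, grants_per_person_2y)              # 0.1 0.2
print(success_rate_alt, grants_per_person_2y_alt)      # 0.2 0.2
```

The expected number of grants per scientist over two years is unchanged, but each application has twice the success rate and everyone writes half as many applications; the factor of two is an upper bound, since in practice not everyone applies every single year anyway.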

In summary, I have made four complementary propositions:

  1. Ensure a baseline budget for each individual scientist or team, which allows them to work normally (conferences, publication costs, basic equipment).
  2. Have a competitive grant system based on past work rather than hypothetical future work.
  3. Have a smaller competitive project-based grant system.
  4. Increase the success rates of the competitive systems (possibly both) above chance level by letting scientists apply only every other year.

None of these propositions depends on the total budget, which is clearly too low in France, but that's an obvious point.

Is peer review a good thing?

Everyone knows peer review is a good thing for science. It's almost what makes it science: papers are critically examined by scientists, and so only (or mostly) the good stuff gets published, or at least errors are generally corrected in the process. Of course no system is perfect, but it certainly raises the quality of science.

Well, for someone interested in epistemology like myself, this is quite interesting. Here are a bunch of scientists, possibly most scientists, claiming that the claims of their fellow scientists should be scrutinized for adequate empirical evidence (especially p-values!) and filtered so as to improve the quality of science. And what about that claim itself? Is it backed up by any empirical evidence?

So the situation is interesting, because the widespread belief in peer review is purely theoretical and based on no evidence at all. Sounds a bit like the liar paradox! Some people have looked for evidence, though, but as far as I know what they found is mostly negative (see for example Smith (2010), "Classical peer review: an empty gun").

What is peer review meant to be good for?

  1. to remove errors (verification);
  2. to identify the most significant work and increase its visibility (publicity);
  3. to give proper credit to good work (a label of quality used for grant applications and careers);
  4. to scare scientists into producing better papers (prevention).

1) We all know that any study can be published, whether it's flawed or not, significant or not: there are just so many journals that it is bound to get published somewhere. We also know that errors often make it into the most visible journals. Why wouldn't they? Whether prestigious or not, the journal still relies on a couple of scientists giving an opinion on a paper. Private peer review is constitutively incapable of weeding out errors: if the paper is not selected by the peers, no one else will ever know about those errors, and the paper will simply be published in another journal. A more useful alternative would be for scientists who spot errors or disagree with the interpretations to write their comments publicly under their name, with the authors able to respond, all of this linked to the original publication. If this were constitutive of the publication process, it could perhaps be done in a civilized way (as opposed to the occasional angry letters to the editor).

2) In the most prestigious journals, the selection is not done by peers anyway, so the publicity argument doesn't work. More importantly, there appears to be no reason to apply this editorial filter before peer review. Since the paper is going to be peer-reviewed anyway, why not do the editorial selection afterwards? It would then take the form of a reader's digest. Editorial boards could invite selected authors to write a shortened version of their work, for example. One could imagine many methods of increasing the visibility of selected papers, which do not need to be centralized by journals. For example, each scientist could make a selection of the papers they liked most; someone who values the choices of that scientist could then have a look. An organized online social network might do the job.

3) My feeling is that this is what scientists actually expect of the peer-review system: credit, that is, identifying the best scientists out there. Careers are made and grants are distributed based on the outcomes of the publication system. Note that these outcomes are not based on peer review itself (in particular on the criticisms that may have been raised in the process), but on which peer-reviewed journal the paper lands in. So it's actually not exactly based on peer review, since, as noted above, the most stringent filters are applied not by peers but by editors. But in any case, if what we expect of this process is a “label of quality”, then there appears to be no reason why it should be applied before publication rather than after, and it could be done as outlined above.

4) I think the only vaguely valuable argument in favor of pre-publication peer review is that it influences the way scientists write their papers. In particular, one might argue that even though peer review does not do a great job of spotting errors, it may nevertheless reduce the flaws in publications through its preventive action: scientists are more careful because they want their paper published. This might be true, but why would it be different if peers commented on the paper after publication? I would actually expect the opposite: having flaws spotted in the private circle of journals and peer reviewers is not as bad for one's reputation as having them spotted in the public sphere.

So what is the conclusion? My feeling is that the current peer review system is based on an outdated publication system, in which printed journals had to make a selection of papers for economic reasons. Now that publication is no longer the limiting factor (e.g. you can publish on arXiv), there appears to be no reason to apply filters before publication. Papers could be reviewed, commented on and selected by scientific communities after online publication. This would save a lot of effort and money, produce a richer scientific debate and possibly reduce some biases.