Free the journals

These days, academic journals serve two purposes: to organize peer review, and to make an editorial selection. With the internet, and in particular "preprint" servers (e.g. bioRxiv), journals are no longer necessary for distributing academic papers. It is also worth remembering that the peer review system has not always been organized the way it is now, i.e. with several external reviewers, multiple revision rounds, etc. For example, Nature only introduced formal peer review in 1967. Before that, the selection was done internally by the editor.

These two missions, organizing peer review and making an editorial selection, are currently coupled. But this is neither necessary nor a good thing. It is obvious that the coupling is not necessary. One could easily imagine a system where papers are submitted to a mega-journal (e.g. PLoS ONE, PeerJ, F1000 Research), which organizes the peer review, and then journals (e.g. Nature or Cell) make their editorial selection based on perceived "impact", possibly using the reviews. Instead, authors must submit to each journal separately until their paper is accepted, and reviewers are asked both to check scientific standards, which is the alleged purpose of peer review, and to judge the perceived importance of papers, which is an editorial task. This results in frustration for the authors, unnecessary delays, and a tremendous waste of resources. Thus, it is not a good thing.

Once a peer review system is organized (e.g. by mega-journals), the remaining role of journals is editorial selection, and this could be done separately. Once we realize that, it becomes clear that very little infrastructure is needed to run a journal. A journal issue is then just a list of papers selected by an editor, put online with the appropriate links and editorial comments. I propose that every scientist, or lab, or possibly group of interest, starts their own journal, which I will call a "free journal" (see my related posts here and there). Free journals are not tied to any commercial interest; they are self-managed academic journals. Crucially, the editor makes a personal selection based on her own reading: papers that she personally finds interesting. This means in particular that, in contrast with most journals, the editor is highly qualified. The selection is meaningful, not a collection of thumbs up/down made by disparate reviewers based on vague criteria ("impact"). It also implies that it is totally acceptable for the editorial selection to include "preprints": the editor is a peer, and therefore any featured paper is by definition peer reviewed (i.e., as in Nature before 1967). I have made my own free journal of theoretical neuroscience in this spirit. I have collected some technical advice on how to make one, and I would be happy to receive any suggestions.

The bottom line is: it is very easy to make a free journal. On the technical side, one essentially needs a blogging system, e.g. WordPress. This automatically generates an RSS feed (e.g. one post per issue). I also post the comments associated with the selected papers on PubPeer, with a link to the journal; this way, anyone using their browser plugin automatically sees that the papers have been selected in the journal when looking at them on PubMed, for example. On the editorial side, it actually represents little work, and I believe that this little amount is actually quite interesting work to do. Every scientist reads and annotates papers. All that is required to run a free journal, then, is to set aside half a day each month to put your notes together into a post. This helps organize your bibliographical notes, and it is helpful for your colleagues and your students. One could also imagine coupling this writing with the lab's journal club.
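To make this concrete, here is a minimal sketch (in Python, with made-up titles and URLs) of the kind of RSS 2.0 feed that the blogging engine produces for you automatically, one item per issue; with WordPress you never write this by hand:

```python
# Minimal sketch of the RSS feed a blog engine produces for a free journal,
# one <item> per monthly issue. All titles and URLs below are hypothetical.
from xml.sax.saxutils import escape

issues = [
    ("Free journal of theoretical neuroscience - February 2017",
     "https://example.org/journal/2017-02",
     "Six papers on spike initiation, brain energetics and bibliometrics."),
]

items = "\n".join(
    f"  <item>\n"
    f"    <title>{escape(title)}</title>\n"
    f"    <link>{escape(url)}</link>\n"
    f"    <description>{escape(desc)}</description>\n"
    f"  </item>"
    for title, url, desc in issues
)

feed = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<rss version="2.0"><channel>\n'
    "  <title>My free journal</title>\n"
    "  <link>https://example.org/journal</link>\n"
    "  <description>A personal selection of papers</description>\n"
    f"{items}\n"
    "</channel></rss>"
)

print(feed)  # readers subscribe to this with any RSS client
```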

Please leave me a note if you make your own free journal; I would be happy to link to it from mine.

My free journal: Brette’s free journal of theoretical neuroscience

Update on the book

I am writing a book on the theory of action potentials. As I haven't published a new chapter in about six months, here is a brief update on the book's progress.

It turns out that writing a book is difficult (!). As I write, I get to question some aspects I had taken for granted, and I realize they are actually not that simple; in the process I have learned a lot about biophysics. In turn, this tends to make the book more complicated, and so it requires some additional digestion work. I also realize that some important questions are simply unresolved and require research work. Finally, it is quite difficult to write a book while continuing the normal course of research. I started with a one-hour-per-day habit, but this may not be optimal; I tend to spend the first half-hour just getting back into the matter of the book. I am starting a new routine of two mornings per week. We will see how it goes!

These last months I have been working on the 4th chapter, on the excitability of an isopotential membrane. It will contain in particular some of the material in Platkiewicz & Brette (2010) on the relation between biophysical parameters and threshold. I wanted to use the same theoretical approach for other aspects of the action potential (speed, peak, etc.), and that needed some more work. I realized that some approximations we had made could be improved, but the math becomes slightly more complicated, and it is a challenge to keep this chapter simple. I also wanted to apply the theory to the original Hodgkin-Huxley model, but unfortunately it works only moderately well. One reason is that the model was fitted to the full action potential and not to initiation (as is in fact the case for essentially all empirical neuron models). In particular, Hodgkin and Huxley's experiments show that the Na+ conductance depends exponentially on voltage at low voltage, but their model doesn't (or does only approximately, with a different factor). Another reason is that the model's K+ channel has less delay than in the actual squid axon (which they acknowledged), so there is some interaction with the initial Na+ current. So I will go with a simpler approach, using a simplified excitability model. Neurons are not isopotential anyway.
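To illustrate the mismatch, here is a small sketch (my own check, using the standard modern parameterization of the squid model rather than the original 1952 voltage convention) that fits an exponential to the model's steady-state Na+ activation at low voltage; the curvature of the log-conductance shows the deviation from a pure exponential:

```python
# Sketch: is the HH Na+ conductance exponential at low voltage?
# Uses the standard modern parameterization of the squid model
# (equivalent to, but not in the same voltage convention as, the 1952 paper).
import numpy as np

def m_inf(v):
    alpha = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    beta = 4.0 * np.exp(-(v + 65.0) / 18.0)
    return alpha / (alpha + beta)

v = np.linspace(-75.0, -55.0, 100)   # subthreshold range (mV)
g = m_inf(v) ** 3                    # activation part of g_Na (h ~ constant at rest)

# If g were exactly exponential, log g vs V would be a straight line
# with slope 1/k (k = e-fold slope factor, in mV).
slope, intercept = np.polyfit(v, np.log(g), 1)
print(f"local e-fold slope factor: {1.0 / slope:.1f} mV")

# Curvature of log g reveals the deviation from a pure exponential:
residual = np.log(g) - (slope * v + intercept)
print(f"max deviation from a straight line: {np.abs(residual).max():.3f} (log units)")
```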

I am also planning to reorganize and partly rewrite chapters 2 and 3. I find the current chapter 3 (action potential of an isopotential membrane) a bit too technical. I also want to change chapter 2 (membrane polarization) to talk about alternative theories of the membrane potential (Tasaki, Ling, etc). Then I want to insert a chapter on the ionic channel theory of action potentials, which explains the theory and discusses empirical evidence, before the chapter on the Hodgkin-Huxley model. Generally, I want to simplify the exposition. But given my rather slow progress on the book so far, I will probably postpone this and first write drafts of the following chapters.

Finally, I have worked a bit on energy consumption and pumps, and I found out that the current treatment in the literature is not entirely satisfactory (see for example my comments on a classical paper on the subject). It turns out that it is a pretty complicated problem (especially systems of pumps).

In brief, I am trying to finish a first version of the chapter on excitability of an isopotential membrane, hopefully within one month.

Brette’s micro-journal of theoretical neuroscience - February 2017

This is the first issue of my journal of theoretical neuroscience.


Editorial

This month, I have selected 4 papers on spike initiation (1-4), 1 classical paper on the theory of brain energetics (5), and 1 paper on bibliometrics (6). Three of the papers on spike initiation (1-3) have in common that they are about the relation between geometry (morphology of the neuron and spatial distribution of channels) and excitability. Spikes are initiated in a small region called the axon initial segment (AIS), which is very close to the soma. Thus there is a discontinuity in both the geometry (big soma, thin axon) and the spatial distribution of channels (lots of them in the AIS). This has a great impact on excitability, but it has not been explored very deeply in theory. In fact, as I have discussed in a recent review (Brette, 2015), most theory on excitability (dynamical systems theory) has been developed on isopotential models, and so is largely obsolete. So there is much to do on a theory of spike initiation that takes the soma-AIS system into account.


Articles

1. Evans MD, Tufo C, Dumitrescu AS and MS Grubb. (2017). Myosin II activity is required for structural plasticity at the axon initial segment. (Comment on PubPeer)

A number of studies have shown that the AIS can move over hours or days, under various manipulations such as depolarizing the neuron (as in this study) or stimulating it optogenetically. Two questions remain open: what are the molecular mechanisms involved in this displacement? And is it actually a displacement, or is material simply removed at one end and inserted at the other? The same lab previously addressed the first question, showing the involvement of somatic L-type calcium channels and calmodulin. This study shows that myosin (the motor protein of muscle, although here a type not expressed in muscle) is involved, which strongly suggests that it is an actual displacement; this is in line with previous studies showing that dendrites and axons are contractile structures (e.g. Roland et al. (2014)). This and previous studies are starting to provide the building blocks for a model of activity-dependent structural plasticity of the AIS (working on it!).


2. Hamada M, Goethals S, de Vries S, Brette R, Kole M (2016). Covariation of axon initial segment location and dendritic tree normalizes the somatic action potential. (Comment on PubPeer)

Full disclosure: I am an author of this paper. In the lab, we are currently interested in the relation between neural geometry and excitability. In particular, what is the electrical impact of the location of the axon initial segment (AIS)? Experimentally, this is a difficult question, because manipulations of AIS geometry (distance, length) also induce changes in Nav channels and other channel properties, in particular phosphorylation (Evans et al., 2015). So this is typically a good question for theorists. I have previously shown that moving the AIS away from the soma should make the neuron more excitable (lower spike threshold), everything else being equal (Brette, 2013). Here we look at what happens after axonal spike initiation, when the current enters the soma (I try to avoid the term "backpropagate", see Telenczuk et al., 2016). The basic insight is simple: when the axonal spike is fully developed, the voltage gradient between the soma and the start of the AIS should be roughly 100 mV, and so the axonal current into the soma should be roughly 100 mV divided by the axial resistance between soma and AIS, which is proportional to AIS distance. Next, to charge a big somatodendritic compartment, you need a bigger current. So we predict that big neurons should have a more proximal AIS. This is what the data obtained by Kole's lab show in this paper (along with many other things, as our theoretical work is a small part of the paper; as often, most of the theory ends up in the supplementary material).
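Spelled out as a back-of-the-envelope calculation (the notation here is mine, not the paper's), treating the soma-AIS path as a uniform cylinder:

```latex
% Resistive coupling between soma and AIS: x = AIS distance from the soma,
% d = axon diameter, \rho_i = intracellular resistivity (notation is mine).
\[
  I_{\mathrm{axial}} \approx \frac{\Delta V}{R_a},
  \qquad
  R_a = \frac{4\rho_i\, x}{\pi d^2}
  \quad\Longrightarrow\quad
  I_{\mathrm{axial}} \approx \frac{\pi d^2\, \Delta V}{4\rho_i\, x},
  \qquad \Delta V \approx 100\ \mathrm{mV}.
\]
% Charging the somatodendritic compartment of capacitance C_s:
\[
  C_s\,\frac{dV_s}{dt} \approx I_{\mathrm{axial}}
  \quad\Longrightarrow\quad
  \text{a larger } C_s \text{ requires a larger } I_{\mathrm{axial}},
  \text{ hence a smaller AIS distance } x.
\]
```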


3. Michalikova M, Remme MWH and R Kempter. (2017). Spikelets in Pyramidal Neurons: Action Potentials Initiated in the Axon Initial Segment That Do Not Activate the Soma. (Comment on PubPeer)

Using simulations of detailed models, the authors propose to explain the observation of spikelets in vivo (small all-or-none events) by the failed propagation of axonal spikes to the soma. They show that, under certain circumstances, a spike generated at the distal axonal initiation site may fail to reach the somatic threshold for AP generation, so that only the smaller axonal spike is observed at the soma. The paper provides a nice overview of the topic, and I found the study convincing. There is in fact a direct relation to our paper discussed above (Hamada et al., 2016): this study shows how the axonal spike can fail to trigger the somatic spike, which explains why the AIS needs to be placed at the right position to prevent this. One can argue (speculatively) that if AIS position is indeed tuned to produce the right amount of somatic depolarization, then sometimes this should fail and result in a spikelet (algorithm: if no spikelet, move the AIS distally; if spikelet, move the AIS proximally).
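Here is that parenthetical rule written out as a toy simulation; everything in it is a caricature (the somatic depolarization is simply taken inversely proportional to AIS distance, as in the resistive-coupling argument above, and all numbers are invented):

```python
# Toy homeostatic rule for AIS position (all parameters hypothetical).
# Somatic depolarization from the axonal spike ~ 1/x (resistive coupling);
# a spikelet occurs when it fails to reach the somatic threshold.
import random

threshold_mv = 25.0   # somatic depolarization needed to fire (made up)
coupling = 1000.0     # mV*um: depolarization = coupling / x (made up)
x = 60.0              # initial AIS distance from the soma (um)
step = 1.0            # displacement per event (um)

for trial in range(200):
    noise = random.gauss(0.0, 2.0)       # trial-to-trial variability
    depol = coupling / x + noise         # somatic depolarization (mV)
    if depol < threshold_mv:             # spikelet: axonal spike failed
        x -= step                        # move the AIS proximally
    else:                                # full somatic spike
        x += step                        # move the AIS distally

print(f"AIS settles near x = {x:.0f} um (expected ~ {coupling / threshold_mv:.0f} um)")
```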


4. Mensi S, Hagens P, Gerstner W and C Pozzorini (2016). Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons. (Comment on PubPeer)

I had to love this paper, because the authors essentially confirm experimentally every theoretical prediction we made in our paper on spike threshold adaptation (Platkiewicz and Brette, 2011). What we had done, in essence, was derive the dynamics of the spike threshold from the dynamics of Nav channel inactivation. There were a number of non-trivial predictions, such as the shortening of the effective integration time constant, sensitivity to input variance, the specific way in which the spike threshold depends on membrane potential, and the interaction between spike-triggered and subthreshold adaptation (which we touched upon in the discussion). This study uses a non-parametric model-fitting approach in cortical slices to empirically derive the dynamics of the spike threshold (indirectly, from responses to fluctuating currents), and the results are completely in line with our theoretical predictions.
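For readers who do not know the model, the derived threshold dynamics have roughly the following flavor; this is my own minimal caricature with invented parameters, not the actual equations of either paper:

```python
# Caricature of adaptive spike threshold driven by Nav inactivation:
# the threshold theta relaxes towards a steady-state value that increases
# with membrane potential above some level V_i. All parameters invented.
import numpy as np

dt, T = 0.1, 500.0                # ms
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(0)

# Fluctuating membrane potential (OU-like process, made-up numbers)
v = np.empty_like(t)
v[0] = -65.0
for i in range(1, len(t)):
    v[i] = v[i-1] + dt * (-65.0 - v[i-1]) / 20.0 \
           + 2.0 * np.sqrt(dt) * rng.standard_normal()

# Threshold dynamics: tau_theta * dtheta/dt = theta_inf(V) - theta
theta0, k, v_i, tau_theta = -55.0, 0.8, -60.0, 10.0
theta = np.empty_like(t)
theta[0] = theta0
for i in range(1, len(t)):
    theta_inf = theta0 + k * max(v[i-1] - v_i, 0.0)  # rises with depolarization
    theta[i] = theta[i-1] + dt * (theta_inf - theta[i-1]) / tau_theta

print(f"mean V = {v.mean():.1f} mV, mean threshold = {theta.mean():.1f} mV")
```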


5. Attwell D and SB Laughlin (2001). An energy budget for signaling in the grey matter of the brain (Comment on PubPeer and PubMed Commons).

This is an old but important paper on the energetics of the brain, asking in particular: how much does it cost to maintain the resting potential? How much does it cost to propagate a spike? The paper develops theoretical ideas for making these estimates, and is also a good source of relevant empirical numbers. It is important, though, to look at follow-up studies, which have addressed some issues; for example, action potential efficiency is underestimated in this study. One problem in this study is the estimation of the cost of the resting potential, which I think is simply wrong (see my detailed comment on PubMed Commons; I believe the authors will respond soon). Unfortunately, I think it is really hard to estimate this cost by theoretical means; it would require knowing the resting permeability to the various ions, most importantly in the axon.


6. Brembs B, Button K and M Munafò (2013). Deep Impact: Unintended consequences of journal rank. (Comment on PubPeer)

The authors look at the relation between journal rank (derived from the impact factor) and various indicators, for example reported effect sizes, statistical power, etc. In summary, they find that the only thing journal rank strongly correlates with is the proportion of retractions and fraud. Another interesting finding concerns the predictive power of journal rank on future citations. There is obviously a positive correlation, since the impact factor measures the number of citations, but it is really quite small (see my post on this). Most interestingly, the predictive power started increasing in the 1960s, when the impact factor was introduced. This strongly suggests that, rather than being a quality indicator, the impact factor biases the citations of papers (it increases the visibility of otherwise equally good papers). The paper also shows evidence of manipulation of impact factors by journals (including Current Biology, whose impact factor went from 7 to 12 after its acquisition by Elsevier), and is generally a good source of references on the subject.

My New Year's resolution: to help move science to the post-journal world

I wish I could make the world a better place. I would like to prevent climate change and wars, but that's not so easy at a personal level. What I can try to do more modestly, as a scientist, is to make the scientific world a better place. We have all heard the phrase "publish or perish". We all complain that careers are made by publishing in "high-impact" journals, which favor storytelling and grand claims and generally select papers arbitrarily (they do not even predict the impact of the papers they publish); a trend that has grown increasingly strong and has a very negative impact on how research is done, including serious ethical problems. But what do we do concretely about it? For most of us, including myself: not much. We keep submitting to those journals, and we say we have no choice because that is how we are evaluated (for positions or grants). But who evaluates us? Surely there are some political aspects to it (mostly for funding), but the truth is, we are evaluated by peers. In brief, we are inflicting this madness on ourselves.

So: let us stop complaining and try to change the situation. I have previously laid out a vision of what the academic publishing system could look like without journals (by the way, this is not an original thought, fortunately). How do we make it happen?

Where we are now

We should be hopeful, because many good things are happening:

  • Preprint servers are getting more and more attention. In biology, a number of journals now accept direct submissions from bioRxiv, including all the PLoS journals, PNAS, eLife and the Journal of Neuroscience. This ought to boost submissions of preprints.
  • A few journals have started publishing the reviews along with the accepted paper, for example eLife, eNeuro and Nature Communications.
  • More generally, open access to both paper and data is getting more and more common and enforced.
  • A new journal, F1000 Research, practices post-publication review: the paper is indexed in PubMed once two reviewers have approved it.
  • Very significantly, the Wellcome Trust has opened a journal for its grantees, Wellcome Open Research, based on post-publication review (in partnership with F1000), with this statement “The expectation is that this, and other similar funder platforms that are expected to emerge, will ultimately combine into one central platform that ensures that assessment can only be done at the article level”.
  • Finally: PubPeer, started just a few years ago. A simple idea: let anyone comment on any paper, anonymously or not, and let the authors know and respond. You should install their browser plugin. This is an individual initiative, but it has already made a big impact, in particular by showing that the "big journals" are no better than the others at preventing flaws or fraud. It also addresses the concern that open reviews would be too nice: anyone who finds serious flaws can report them anonymously, and the authors will have to consider them. PubMed Commons is similar, but with signed comments.

What we can do now on a personal level

  1. Put every single paper you write on a "preprint" server before you submit it to a journal.
  2. Put all your data online, see e.g. OpenAIRE.
  3. Remove journal names from the publication list on your website. People who care about them will find out anyway.
  4. Start a literature search routine that does not involve looking at tables of contents; a few ideas in this Twitter thread; you could also set an author alert on Google Scholar.
  5. Write comments on PubPeer, including on preprints.
  6. Send your papers to a journal with open post-publication review. I know this one is difficult, because the community still cares about impact factors. But at least you can favor journals with public reviews (e.g. eLife, Nature Communications; I would prefer the former, as it is non-profit). Instead of sending your papers to Frontiers, send them to F1000 Research, or at least eNeuro.

At the local community level, we can advocate for post-publication review. For example, the INCF has opened a channel on F1000 Research. Maybe we could have a computational neuroscience channel there, sponsored by the OCNS. It is too bad that F1000 Research is for-profit rather than institutional, but currently I do not know of other options.

What we can do on a more global scale

Open post-publication review potentially addresses the issue of recognition, but it does not address the issue of visibility. One concern I have about submitting to F1000 Research (for example) is that my intended readership will not know about it. So many papers are published each year that one does not even have time to read all of their titles. This is one role journals have fulfilled: to select papers worthy of interest for a given community. But since we no longer need journals to publish, editorial selection and publication need not be coupled anymore. So here is my proposition. We make an independent website which lets any scientist, or possibly any group of scientists, be their own journal; that is, make a selection of papers they find interesting (including preprints). We provide a number of tools to make this as simple as possible: linking to PubPeer and PubMed Commons, searching/browsing, alerting authors whose work is selected, email alerts and RSS feeds, etc. Papers are preferentially linked to the preprint if it exists, so as to completely bypass the journals. We could also let authors suggest their own paper for editorial selection. Basically, we provide all the services a journal typically offers. This will become increasingly easy as public open reviews become more widespread. These new "journals" could be run by an individual scientist, or a lab (e.g. linked to a journal club), or possibly a scientific society or group of scientists. Let us call any of these an "editor". I would be happy, for example, to follow the selections of a few authors I respect, and that would probably be more valuable to me than the selection made by any journal, of which very few papers typically catch my attention in a given table of contents.
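To make the proposal concrete, the core data model of such a website would be tiny; a sketch (all names here are hypothetical):

```python
# Sketch of the core data model for editor-run journals (names hypothetical).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Selection:
    title: str
    preprint_url: str        # preferred link: the preprint, not the journal
    pubpeer_url: str         # public reviews and comments
    editorial_comment: str   # why the editor selected the paper

@dataclass
class FreeJournal:
    editor: str
    selections: List[Selection] = field(default_factory=list)

    def select(self, s: Selection) -> None:
        self.selections.append(s)
        # here one would: notify the authors, update the RSS feed,
        # and cross-post the editorial comment to PubPeer
```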

I am hoping that it goes as follows:

  1. People start using these individual journals, because they provide relevant information.
  2. As a result, papers in less well-known journals and preprints start getting more attention, and more citations.
  3. People get into the habit of putting their papers on preprint servers, because the papers get immediate attention there.
  4. Editors progressively stop selecting papers published in journals because they have already selected them when they were preprints.
  5. As editors are also committee members, journal names start to matter less in evaluating research.
  6. Traditional journals disappear; instead, we have direct publication (formerly known as preprints) + open public reviews, both anonymous and signed.

How to get it started?

One simple idea to get started is to make automatic channels for the existing conventional journals. For each journal, we list the table of contents, linked to preprint versions instead of the journal website, and to PubPeer, etc. If it is convenient, people might start to use it, especially if it gives free access to the papers (legally, since we would use preprints). Then, to get people to use the non-conventional channels, we provide suggestions based on content similarity (i.e. "you may also like...").
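A sketch of such an automatic channel (using the real feedparser library; the search-URL patterns for bioRxiv and PubPeer are assumptions, and a serious implementation would match papers by DOI, e.g. through Crossref, rather than by title):

```python
# Sketch: turn a journal's RSS table of contents into an "automatic channel"
# pointing to preprints and public comments instead of the paywalled site.
# The feed URL is hypothetical; matching by title is deliberately crude.
from urllib.parse import quote
import feedparser  # pip install feedparser

JOURNAL_TOC_FEED = "https://example.org/some-journal/rss"  # hypothetical

feed = feedparser.parse(JOURNAL_TOC_FEED)
for entry in feed.entries:
    title = entry.get("title", "")
    print(title)
    # Assumed search-URL patterns (not official APIs):
    print("  preprint? ", "https://www.biorxiv.org/search/" + quote(title))
    print("  comments:  ", "https://pubpeer.com/search?q=" + quote(title))
```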

How about this resolution?

A vision of the post-journal world

In 2016, the academic publication system worked mostly as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is submitted to a journal, chosen for its prestige.
  3. The editor, possibly a professional editor (i.e. not a professional scientist), judges that the paper is not worthy of review: go back to (2).
  4. The editor sends the paper to 2-3 reviewers, asking them to review the paper quickly (the more prestigious the journal, the quicker). Reviewers are chosen based on the editor's expertise and network.
  5. Reviewers have too many reviews to write already: go back to (4).
  6. The reviewers (or one reviewer) find the paper uninteresting, or bad, or they “don’t believe it”: go back to (2).
  7. The reviews are mixed: a third reviewer is asked; go back to (4).
  8. The reviewers find the paper interesting enough but have requests for the authors: they should entirely rewrite the paper, do new experiments, cite the reviewers, remove an uninteresting figure, add some discussion points.
  9. After some work, the authors submit a revised paper.
  10. Actually, reviewers didn’t think the paper was that interesting anyway: go back to (2).
  11. The reviewers have more requests: go back to (9).
  12. The paper is accepted: congratulations!
  13. The format of references does not follow the journal’s style: please rewrite the reference list.
  14. The authors' paper now belongs to Elsevier/Springer Nature/etc. It will be sold by the publisher to academic institutions, at a margin of 30-40%.
  15. (recent alternative option) The authors pay publication fees of about $2000. The paper is now freely accessible.
  16. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a letter to the editor.
  17. After careful consideration, the letter is rejected.
  18. Another reader, not an expert in the field, reads the paper in a prestigious journal and believes it is highly regarded in the community, until she talks with colleagues and finds out that the paper is heavily criticized.
  19. The prestigious paper is covered in the mainstream media. Experts rant about the journalists.
  20. The first author applies for a position. The committee examines the CVs of all candidates and selects those with publications in prestigious journals. The senior author sends an excellent letter of recommendation. The committee values it if the senior author is famous.
  21. The senior author applies for a grant. The committee selects candidates with a good "track record", i.e. prestigious publications.

This publication model is wasteful, full of perverse incentives, opaque, and misleading. What I want to do here is not write one more critique of this system, but rather give a vision of what the publication system could be and, in my opinion, will be, perhaps within a decade. It is a vision of a publication system centered on individual articles, rather than on journals. It is made possible today by information technology (aka "the internet"). What stands in its way is, in a word: conservatism. It works as follows:

  1. Scientists do some research and write a paper about it to share their findings with the world.
  2. The paper is uploaded to an archive site, together with the data. All is freely accessible to anyone.
  3. The authors invite an unlimited number of professional scientists (say, authors of at least one paper) to write a formal, signed, review. The reviews are public.
  4. Any other scientist can contribute a signed or anonymous review or comment, including comments on the public reviews (aka “scientific debate”).
  5. Authors respond to the reviews and may submit a revised version.
  6. Authors decide when to stop the revision process; reviewers express their satisfaction, or not.
  7. Any scientist can recommend the paper if she finds it worth reading. Tools are made available to search for papers, possibly ranked by number of recommendations, possibly weighted by scientist reputation (all data are freely accessible, and third parties provide the search tools).
  8. A knowledgeable reader disagrees with the main conclusion of the paper: she writes a public critique, signed or anonymous, after reexamining the paper’s raw data. The authors respond publicly.
  9. Another reader, not an expert of the field, reads the paper. She notices that many knowledgeable scientists disagree with the conclusions and can read the detailed arguments and the authors’ response.
  10. The first author applies for a position. For every publication, the committee reads the reaction of the community (formal reviews, comments, recommendations) and uses it to assess the work along all the dimensions addressed in those reactions (importance, originality, technicality, etc.). The committee also asks the candidate what her main contributions are in her view and why. The candidate can point to published reactions to her work that support her claims.
  11. The senior author applies for a grant or a promotion. The committee reads the reaction of the community to the candidate’s previous work. They might also look at review invitations and reviewing activity.

In the post-journal world, a paper is submitted a single time (though of course it can be revised), and in contrast with the journal model, the reaction of the community is known. As a consequence, quality stamps (journal names) are obsolete. No review is thrown away. Scientists save time on both submitting and reviewing, and are no longer incentivized to make grand claims. In addition, the system saves several billion dollars each year (e.g. US libraries spend $2 billion each year on subscriptions), not counting the time saved on wasted reviews and submissions.

All of this is clearly possible with today’s technology. The next question I will try to address is: how to move quickly to the post-journal world?

Do we need a new computational neuroscience journal?

Where to send computational and theoretical neuroscience papers? A list of journals can be found on Jim Perlewitz's page. First, many journals belong to one of the scientific publishing companies that make huge profits from the free work of academics (margins around 30-40%), with questionable added value. In computational neuroscience, we have:

  • Journal of Computational Neuroscience (Springer Nature)

  • Frontiers in Computational Neuroscience and Frontiers in Neuroinformatics (Springer Nature)

  • Biological Cybernetics (Springer Nature)

  • The Journal of Mathematical Neuroscience (Springer Nature)

  • Cognitive Neurodynamics (Springer Nature)

  • Neuroinformatics (Springer Nature)

  • Network: Computation in Neural Systems (Taylor & Francis)

Then we have Neural Computation, run by MIT Press, which is non-profit. Unfortunately, papers are behind paywalls; in my case, my library does not have a subscription.

Finally, there is PLoS Computational Biology, a highly respected journal in the field. However, even in this non-profit journal, open access is very expensive ($2250 per paper). I find these costs hard to justify, when any scientist can publish on archives for free (e.g. arXiv or bioRxiv). The solution to this is overlay journals: the papers live on an archive site, and the journal only handles peer review.

I was also recently rather disappointed by PLoS CB, after a paper of mine was rejected 10 months after submission solely on the basis of perceived interest. Judging by the reactions I received, this is not an isolated case. In any case, PLoS CB practices editorial pre-selection, which is highly subjective. As with other prestigious journals, its prestige is based precisely on its relatively low acceptance rate. But this is a highly questionable model, because it is wasteful: it wastes the time of authors, who submit the same paper to many journals, and the time of reviewers, who write reports that are thrown away. In my opinion, it is also perverse. It gives a label of quality to individual papers and scientists based on the opinion of just a couple of scientists, sampled by one editor, and this insignificant label is then used by various committees (grants, promotions, hiring). It also pushes authors into boosting their claims to pass the editorial barrier. The solution to this is post-publication review (see also Michael Eisen's post).

So ideally, I would like to see an overlay journal practicing post-publication review in my field. The question now is: should we create one? What would be needed is the technical infrastructure, but I am not sure it is fully available yet (as regards the post-publication review system, indexing etc).

Alternatively, we could use a general post-publication review journal, for example F1000 Research. PeerJ also does this partially (publishing the reviews is optional). Both cost about $1000 per article; both are run by companies. But they are less expensive than comparable open access journals (PLoS ONE: $1495; eNeuro: $1950 for members, $2925 otherwise), and more open. Neither journal is ideal, but both represent progress over conventional journals.

There are two issues with choosing a general journal. First, there is the question of editors: I have no experience with these journals, so I am not sure the relevant expertise is present on their editorial boards. I have seen a couple of computational neuroscientists at PeerJ, though. The second issue is visibility. One idea could be to maintain a list of computational neuroscience papers, to which anyone could submit a reference or preprint, in addition to automatic additions (say, from relevant journals plus keyword alerts). One would simply subscribe to the list (RSS feed or email alert).

Any good ideas are welcome!

Update (6.1.2017)

Some more information and thoughts following comments on this text:

  • One issue for an overlay journal is getting it indexed in PubMed. This seems quite important for a neuroscience journal, but the indexing requirements seem pretty much contradictory with the idea of an overlay journal, as PubMed requires XML-tagged data for the bibliographic citations (a schematic example follows this list). But maybe this could be negotiable? (see "Optional arrangement"). Some more general information about indexing here.
  • The other issue is more technical: setting up an automatic post-publication review system.
  • Finally, a general thought: there is no reason why this should be specific to computational neuroscience. However, it could be a first step towards a more generalized system.
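For illustration, here is schematically what XML-tagged citation data looks like; the element names are loosely modeled on the kind of record PubMed expects from publishers, and are illustrative rather than an exact rendition of the PubMed DTD (the paper itself is hypothetical):

```python
# Schematic example of XML-tagged citation data for indexing.
# Element names are illustrative, not guaranteed to match the PubMed DTD.
import xml.etree.ElementTree as ET

article = ET.Element("Article")
journal = ET.SubElement(article, "Journal")
ET.SubElement(journal, "JournalTitle").text = "Overlay Journal of Comp. Neuro."
ET.SubElement(journal, "Volume").text = "1"
ET.SubElement(journal, "PubDate").text = "2017"
ET.SubElement(article, "ArticleTitle").text = "A hypothetical paper"
authors = ET.SubElement(article, "AuthorList")
author = ET.SubElement(authors, "Author")
ET.SubElement(author, "LastName").text = "Doe"
ET.SubElement(author, "FirstName").text = "Jane"

print(ET.tostring(article, encoding="unicode"))
```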

You are not a good scientist if you use impact factors for evaluating research

A lot has been written about impact factors, and why they are a poor indicator of article quality (just google it). Even for the one thing the impact factor is meant to measure, i.e. future citations, it performs badly. There are now many empirical studies on this. Sadly, in 2016, scientists on hiring and promotion committees still value impact factors very highly, especially in the life sciences. Yes, in a domain of science that is meant to be highly empirical, scientists still prefer to look away from compelling evidence that impact factors should not be used for evaluating research. Recently, in a PhD thesis committee, we were discussing the grade, and one member, an experimental biologist, raised the issue that the candidate had not published in the highest-ranked journals. We objected that we were there precisely to evaluate the manuscript itself as a whole, not elements of a CV. He admitted later that he had not read the thesis.

I could simply cite the overwhelming evidence. But I want to make a simpler statistical point. Many things are wrong with using impact factors, but let us consider only the one thing it is meant for, i.e. predicting the number of future citations of papers. We know that for any journal, the distribution of citations is very broad and highly skewed. According to Seglen (1997), "articles in the most cited half of articles in a journal are cited 10 times as often as the least cited half". In practice, if you are given the name of the journal in which a paper has been published, you will not know its future number of citations. From a purely statistical point of view, journal name is just not very informative.
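A toy simulation makes the point (all numbers invented; only the skew matters): even when one "journal" has twice the mean citation rate of another, the journal name tells you little about an individual paper.

```python
# Toy illustration: skewed within-journal citation distributions make
# journal identity a weak predictor of a paper's citations. Numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
low = rng.lognormal(mean=1.0, sigma=1.2, size=n)   # "low-IF journal"
high = rng.lognormal(mean=1.7, sigma=1.2, size=n)  # "high-IF journal", ~2x mean

print(f"mean citations: low = {low.mean():.1f}, high = {high.mean():.1f}")
# How often does a random low-journal paper outcite a random high-journal one?
p = np.mean(low > high)
print(f"P(low-journal paper beats high-journal paper) = {p:.2f}")  # roughly 1/3
```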

Examples are sometimes more convincing than hard facts. So here, from the paper I just cited, is the number of citations vs. impact factor for all publications of four scientists:

[Figure from Seglen (1997): number of citations vs. journal impact factor for the publications of four scientists.]

Now look at my own publication statistics. To be more rigorous, I should calculate citations per year, but the point is obvious enough. My top 3 cited papers were published in: J. Neurophys. (IF 2.8), J. Comp. Neuro. (IF 1.8), and Frontiers in Neuroinformatics (IF 1.3). The third paper (on the Brian simulator) was published in 2008, the same year I had a paper in Neuron, which has gathered about 4 times fewer citations. And this is in spite of the fact that papers published in big journals are obviously much more visible (i.e. there is an expected boost in citations that is unrelated to article quality).

Is this really surprising? In a big journal, just as in a small journal, decisions are based on n = 2 or 3 reviewers selected by one editor, i.e. a biased sample of size 2-3, following an editorial pre-selection based on n = 1 editor, in many cases not a professional scientist.

It is 2016, and life scientists have recently been warned repeatedly that good statistics require large unbiased samples; yet impact factors, statistically insignificant pieces of information, are still broadly used to distribute grants and make careers. So yes, using impact factors as a proxy for the quality of papers or of their authors is a display of poor scientific practice.

To me, the problem is not so much that bad decisions are being made, but that this practice has a huge influence on the way science is done. Scientists know that they are being evaluated in this way, and it shapes their research (especially for students and postdocs) so that it meets the editorial biases of the big journals. In 1952, Hodgkin and Huxley published 120 pages in the Journal of Physiology, essentially founding the field of electrophysiology. Who would take that risk today at the age of 35, instead of writing a career-making 4-page Nature paper?

This week's paper selection (7 December 2016)