In my previous post, I argued that one reason many people cling to the idea that papers should be formally peer-reviewed before they are published, cited and discussed, despite the fact that this system is a recent historical addition to the scientific enterprise, is a philosophical misunderstanding about the nature of scientific truth. That is, what characterizes science, as opposed to religion, is that it is never validated; it can and must be criticized. Therefore, no amount of peer review can ever be a stamp of approval for “proven facts”. What we need instead is public discussion of the science, not stamps of approval.
In response to that post came up another common reason why many people think it’s important to have journals that select papers after peer review: we are flooded with millions of papers and you can’t read everything, so you need some way to know which papers are important, and peer review provides it. So this is not just about peer review before publication, but also about the hierarchy of journals. Journals must make an editorial selection so that you don’t waste your time reading low-quality or uninteresting papers. What this means, quite literally, is that you only read papers from “top journals”.
Here I want to show that this argument is untenable: selecting what to read based on journal names is neither what scientists should do nor what they actually do, and the argument is logically inconsistent.
Why is it logically inconsistent? If the argument is correct, then papers accepted in lower-ranked journals should not be read, because they are not worth reading. But in that case, why publish them at all? There seems to be no reason for journals to exist whose papers nobody reads, since nobody has time to read bad papers. If we argue that those journals should exist because in some cases they contain papers worth reading, for whatever reason, then we must admit that we don’t actually use journals as filters, or that we should not use them as filters (see below).
Is it good scientific practice to use journal names as filters? It implies that you ignore any paper that is not published in a “top journal”, including papers in your field that are potentially relevant to your own studies. For example, you would not cite a relevant study if it did not appear in a top journal. It also means that you don’t check whether your own work overlaps with other studies, so you potentially take credit for ideas that you were not the first to have. Is this a professional attitude?
If in fact you don’t totally ignore those lower-ranked journals, then you don’t actually use journal names as a filter: you do look at the content of papers independently of the journal they are published in. Which is my final point: using journal names as filters is not the normal practice of scientists (or maybe I’m being optimistic?). When you look for relevant papers on your topic of interest, you typically do a search (e.g. on PubMed). Do you only consider papers from “top journals”, blindly discarding all others? Of course not. You first look at the titles to see whether they might be relevant; if they are, you read the abstracts; if an abstract is promising, you might open the paper and skim through it, and possibly read it carefully if you think it is worth it. Then you look at the papers it cites, or at the papers that cite the interesting paper you just read; or you read a review; maybe a colleague or your advisor suggests a few readings. In brief: you do a proper bibliographical search. I cannot believe that any good scientist considers that a bibliographical search consists in browsing the tables of contents of top journals.
The only case in which you do use journal names to select papers is indeed when you read the tables of contents of a few selected journals every month. How much of this accounts for the papers that you cite? You can get a rough idea by looking at the cited half-life of papers or journals: for Cell, it’s about 9 years. I personally also follow new papers on bioRxiv using keywords, whereas most new papers in journals are irrelevant to me because journals cover too many topics.
In summary: using journals as filters is not professional, because it implies poor scholarship and misattribution of credit. Fortunately, it’s not what scientists normally do anyway.
One related argument that came up in the discussion of my previous post is that post-publication review could not work because it would be too much work, and consequently most papers would not be reviewed, whereas in the current system at least every paper is peer-reviewed. That is wrong in several ways. First, you can have papers published and then peer-reviewed formally and publicly (as in F1000 Research), without this being coupled to editorial selection. Second, if anything, having each paper submitted a single time instead of many times to different journals implies less work for reviewers, not more. Third, what exactly is the advantage of having every paper peer-reviewed if it is argued that those papers should not be read or cited? In the logic where peer review in “good journals” serves as a filter for important papers, it makes no difference whether the unimportant papers are peer-reviewed or not, so this cannot count as a valid argument against post-publication review.
All this being said, there is still a case for editorial selection after publication, as one of many ways to discover papers of interest; see for example my free journal of theoretical neuroscience.