Peer review is the gold standard for scientific work. Some people argue that the fact that an article has been peer-reviewed is what makes it scientific. I would not go that far. In my opinion, research is scientific because it is conducted according to scientific standards, not because someone else has read it. However, having peers review your work is a quality mechanism that ensures that others agree that those scientific standards have been met.

Too few researchers, too many manuscripts

The peer review institution (yes, we could call it that) requires that there are peers available to review manuscripts. This brings us to the heart of today’s problem. The acceleration in the volume of publications we have seen in recent years, driven by a publish-or-perish culture, is clearly unsustainable. Sure, the number of researchers has increased, but the number of publications has increased even more. That means researchers need to review more manuscripts than they did in the past. At the same time, institutional and peer pressure push them to write more manuscripts than former generations of researchers did. This cannot continue in the long run.

The problem of the “big five” publishers

This blog post mainly concerns the burden of peer review, but we need to consider business models to understand the root of the problem. Due to the capitalist nature of many institutional and/or national reward systems, many private publishing houses have found a market in which to thrive. The biggest players (Reed-Elsevier, Taylor & Francis, Wiley-Blackwell, Springer and Sage) now control more than half of academic publishing. They have made a lucrative deal for themselves. They ask (largely) publicly funded researchers to give them their most valuable research outputs (for free!). Then, they ask their colleagues to spend time reviewing these manuscripts (for free!). Finally, they ask the researchers (or their institutions) to pay a hefty article processing charge (APC) to publish the work. Sometimes, they even ask the institutions (through the libraries) to pay subscription fees for the journals (or large collections of journals) just to read the material.

The academic publishing business model is extremely lucrative for companies and expensive for institutions (and taxpayers). What began as a positive collaborative effort among peers has become the oil in the machinery of big global companies. Why don’t researchers complain? Some do, like myself in this blog post. However, many researchers don’t see the costs; their institutions foot the bill. And many early career researchers are primarily concerned with publishing as much as possible to secure a permanent job.

Solution #1: Publish less

One solution to the problem is for researchers to publish less. This, however, requires a change of both culture and reward systems. My attempt at making a change here has been to develop and promote the Norwegian Career Assessment Matrix (NOR-CAM), which focuses on rewarding other things besides publications, such as data, source code, teaching, and leadership.

Solution #2: Review less

Many researchers (myself included) receive review requests daily. I only have the capacity to review a couple of articles per month, so I reject the large majority of requests. Overcommitted reviewers often end up writing sub-optimal reviews with little helpful feedback. I have written a couple of those myself, and I am not proud of it. Nowadays, I try to refuse a review if I don’t have time to do it properly. I follow this strategy:

  • I refuse to review for MDPI and Elsevier journals. I do not like the business models of these publishing houses. MDPI hosts many journals with questionable review and publishing practices. I know of some acceptable journals on their platform, but I have still decided to decline any requests from MDPI. Elsevier has better quality control but an even more monopolistic business model, so I refuse to review anything coming from them as well.

  • I accept some but reject more requests from Frontiers journals. They have a nice platform and generally reasonable quality control. I also like that they name reviewers on articles, which gives credibility to the review process and credit to the reviewers. However, I think the Frontiers system is too “pushy”, requesting turnaround in just a few weeks and sending lots of automatically generated emails.

  • I prefer to review for community-driven OA journals, and I always try to accept requests from Transactions of the International Society for Music Information Retrieval and Music & Science.

  • When I get a manuscript to review, I immediately check whether data and source code are available (if relevant). If they are not, I send the manuscript back to the editor and ask for them to be made available. If they cannot be, I refuse the review. I cannot do a proper review of empirical research without checking that the data and code are in good shape.

  • In addition to journals (and some books), I generally also accept reviews for most of the major music technology conferences (NIME, SMC, ISMIR, ICMC, DAFx, ICAD, CMMR). These conferences are entirely community-driven, and I have benefited from being part of them.

These guidelines have lessened my reviewer load, and I hope that sharing my approach prompts others to reflect on their own reviewing practices.

Solution #3: Choosing alternative models

While the vast majority of publications are based on the “classic” double-blind peer review model, there are alternatives. I have long advocated for new publication practices.

The best and most concrete example of a new approach is Open Research Europe (ORE), which acts as a combined pre-print archive, peer review system and publishing solution. Unfortunately, only researchers with European Commission funding can use it for now, but the idea is that it will be made available to others.

In ORE, researchers submit a manuscript as they would to a regular journal. The manuscript is made available online immediately, similar to how pre-prints work today. Peer review can be done in a traditional way (albeit not blind), but others can comment on articles as well. The authors can update the manuscript as many times as they like, and the system keeps track of every version.

ORE is a good example of the future of academic publishing. Research results are made available to the community right away and are eventually “stamped” as valid once two peers have endorsed them. Because validation can happen whenever reviewers are ready, the time pressure of peer reviewing is reduced.

In sum

I hope more researchers will oppose the current publication practices that have turned the noble pursuit of knowledge into a commercialized race against time. In its current form, the peer review system is unsustainable and detrimental to the advancement of science. The academic community can reclaim the spirit of genuine scholarly collaboration by embracing alternative models (like ORE) and adopting more conscientious publication and review practices.