Why We Are Paying for Peer Review

We need high-quality reviews of our work to ensure that what we are doing is backed by science.

Illustration by Aparna Nambiar

We set up CSEI as an ‘impact ecosystem builder’ to scale research-backed solutions. One of the key functions we play in building impact ecosystems is ‘research synthesis’: summarising the state of the science to build consensus among actors. One question we have faced is how to establish the legitimacy of a solution, given that the science is not always settled and scientists, even those in our immediate circle, often disagree on the way forward.

We have proposed the ‘living labs’ approach to intervention design for cases where we cannot wait for the science to be settled because the cost of inaction is unacceptable. Sometimes, we must proceed even though we cannot anticipate unintended consequences. Indeed, in socio-environmental problem solving, the science may only get settled when we conduct science at the scale of the problem. This, in and of itself, is not a novel idea: after all, clinical trials for drugs, and their subsequent extension to development and social-sector interventions via Randomised Controlled Trials (RCTs) and quasi-experimental approaches, are well accepted. Such trials are just less common in larger-scale ecological and environmental interventions.

Once the science is done, or even while it is being done, it must be documented publicly. This article isn’t about the scientific method but about what form the documentation of that science must take to ensure widespread acceptance.

Traditional peer review is broken.

Scientific consensus is established by publishing in peer-reviewed journals. The peer review process is a system used to assess the quality of a manuscript before it is published in a journal. Independent researchers in the relevant research area (‘reviewers’) assess submitted manuscripts for originality, significance, and validity. Journal editors then determine whether a manuscript should be published in their journal.

The reward for going through peer review is citability: the paper can be found and cited by subsequent researchers using well-established citation practices. Search engines like Google Scholar and digital archives like bioRxiv are upending the process of search and citation. Nevertheless, inclusion in citation databases like Scopus and Web of Science, which only index peer-reviewed journals, remains an important marker distinguishing ‘scientific’ from ‘grey’ literature.

Read | A call for a commitment to open science

Having served on the editorial boards of reputed peer-reviewed journals, though, I would argue that the process of peer review results in both Type 1 and Type 2 errors: a lot of poor science makes it through peer review, and a lot of solid science gets left out.

First, the explosion of well-funded R1 universities, i.e., those engaged in high-quality research activity, particularly in the developing world, has expanded the ‘club’ of researchers capable of publishing in reputed journals. Further, in many universities, promotion metrics are increasingly tied to journal impact factors, a measure of how frequently the “average article” in a journal is cited in a particular period.

Today, the number of papers being submitted far exceeds the bandwidth of journal editors and reviewers. Reviewers meet deadlines by reviewing late into the night. Consequently, the quality of peer review itself is not up to the mark. With the rise of open-access, pay-to-publish journals, a lot of bad science gets accepted, if not into the most prestigious journals, then at least into journals that confer on a paper the legitimacy of being classified as ‘science’.

Read | Research needs to leave the ivory tower

Second, in a desperate bid to maintain sanity and control quality, journal editors must play the role of gatekeepers, judging whether work is original, eye-catching, or of ‘widespread interest’ before even sending it out for review. As a result, a lot of good taxpayer-funded science is never published in peer-reviewed journals. Most academics are sitting on piles of perfectly good data that will never see the light of day.

This skews science in two ways. First, it forces researchers to ask questions that are “catchy”; in the water science world at least, this translates to research with insights at regional or global scales rather than the local scale, to pass the significance filter. Second, it discourages researchers, who must constantly chase novelty, from replicating prior research or publishing null results, which in the long run is bad for science.

We don’t need traditional peer review to ensure the validity of science

On the whole, the system of peer review, as it is practised today, does not serve our purpose, because many of its functions are only partly relevant in an open science context, where applied researchers are engaged in the production of knowledge to serve society.

By the definition above, peer review serves three separate, distinct functions: assessing originality, significance, and validity. We argue that for work of wide public interest, originality is an irrelevant, arbitrary standard, and significance can only be determined by the beneficiaries of the research. The validity function, though, remains important: the research methodology must be sound, and the conclusions drawn must be supported by the evidence. The question is how to ensure validity without worrying about the other two.

We want to pay for peer review.

To ensure the validity of our conclusions, we are proposing to pay something in the range of $150 (Rs. 10,000) per reviewer per paper for brutally honest peer review that focuses strictly on validity. We are not asking for judgment on whether the work is original or significant; that is for our societal stakeholders to judge. Our logic is that if we have spent something on the order of $30,000 (Rs. 20 lakhs) to generate a piece of research, paying two peer reviewers a total of $300 (Rs. 20,000) constitutes a mere 1% of the research spending and is entirely justifiable.
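For the record, the back-of-the-envelope arithmetic behind that 1% figure, using the amounts quoted above:

\[
\frac{2 \times \$150}{\$30{,}000} = \frac{\$300}{\$30{,}000} = 0.01 = 1\%
\]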

Academics are notoriously underpaid and exploited. Our proposal helps experts we respect, who have spent years, if not decades, of their lives becoming authorities in their subjects, to earn a bit more money. In the Indian context, a retired academic or a young postdoctoral researcher could earn a decent supplemental income this way.

We aren’t upending peer review, just modifying it.

CSEI is not an academic institution. We don’t need anyone to score us, judge us, or act as a gatekeeper for our work. We are paying for peer review to establish the legitimacy of the work for ourselves. We want high-quality reviews to ensure that what we are doing is backed by the best science. It also allows us to ‘show our work’ while assuring our donors and partners of the soundness of our approach. We plan to obtain a Digital Object Identifier (DOI) or an ISBN to ensure the work is searchable and citable (note: neither a DOI nor an ISBN signals anything about the peer-review status of a work).

This, of course, doesn’t solve the problem of external legitimacy. We still want to be cited by others, and we still need the wider community to accept the work. This is where traditional peer review comes in. We presume that once a work has been peer-reviewed, it should find a smoother path to publication in traditional journals. We are thus not rejecting or upending peer review, just unbundling it, to ensure better science for society and to channel more money to experts instead of to big publishing companies.

This still needs more discussion: our collaborators’ promotion reviews won’t change just because we decided to change.

We recognise that paying for peer review is not a complete solution. Many high-impact journals won’t accept papers that have been put out as preprints. And there remains a wider conversation to be had on collaboration in a transdisciplinary setting, where different people are recognised for different types of publications with different levels of rigour: quick reports accessible to the general reader versus denser writing, aimed at a scientific audience, that references prior literature and theoretical frameworks. We also recognise that we may want to co-author with others whose institutions may not accept our approach.

This article aims to start this conversation.

Read | Lakes as living labs

Follow us on Twitter and LinkedIn to stay updated about our work.

To collaborate with us, write to csei.collab@atree.org. We would love to hear from you.


Veena Srinivasan
Centre for Social and Environmental Innovation, ATREE

Researcher @ ATREE. Interested in water resources, urbanization, hydrology, and sustainable development.