Peer Review Is Broken. Here’s How to Fix It.

It’s time to restore the medieval “community of scholars.”

Within academia, there seems to be a growing consensus that the peer-review system—once the backbone of academic scholarship—is broken. But is it irreparably so? Perhaps. At the very least, the breakdown of its current form is worth exploring. However, rather than abandoning the entire endeavor, we believe we have a novel solution. First, though, let us examine where the system went wrong.

In the Middle Ages, most scientific research was self-published, as scholars shared their findings among themselves. But, as the profession grew, that became impractical, and the scientific journal was born as a way of disseminating information. A scholar would have an idea, investigate, summarize his conclusions, and submit the resulting article to a journal. There, the editor or editors would consider it and decide whether to publish the work as-is, request revisions, or reject it altogether. Over time, as the number of scholars continued to proliferate, all of them under increasing pressure to publish, publish, publish—in order to be hired, earn tenure, and qualify for grants—the task of journal editors became overwhelming. There were just too many submissions to give them all fair consideration.

And so editors came up with the idea of farming out the evaluation of submissions to teams of unpaid reviewers: other scholars in the same field or a related field who were (theoretically, at least) qualified to judge the quality of the research under consideration. This would relieve some of the burden on the editors while also bestowing an additional stamp of legitimacy on the finished product. Whether a given piece of scholarship was worthy of publication would be determined not just by one or two people but by a group of “blind” experts. Thus, the label “peer-reviewed” became the gold standard for scholarly research. A publication in a “peer-reviewed journal” has long been considered essentially unassailable, to the point that politicians and media types seem convinced they can win any argument simply by referencing a piece of “peer-reviewed research.”

It was initially a pretty good system, and it worked reasonably well for a long time. But it seems to have now run its course. Tenure requirements have become more quantitative. The internet has lowered barriers to submission, encouraging more scholars to submit more articles to more journals. The number of submissions from Asian, African, and Middle Eastern universities has exploded. Even with more journals and more reviewers, the system has broken down, as all large, complex systems eventually do. We know this because of a problem first identified 20 years ago by Stanford scientist John Ioannidis, one that has since come to be known as the “replication crisis.”

One of the hallmarks of good science is that an experiment can be replicated—that is, another researcher using the same methodology will achieve the same result, meaning the findings are both valid and consistent. But what Ioannidis argued in his seminal 2005 article “Why Most Published Research Findings Are False” (updated in 2022) was that, well, most published research findings are false. The experiments can’t be replicated, calling their validity into question.

Other scholars have since taken issue with Ioannidis’s thesis, especially his use of the word “most.” Social scientists, in particular, argue that experiments involving human beings often can’t be replicated precisely because people are themselves inconsistent. Nevertheless, scholars generally agree that the replication crisis is real, if not quite as widespread as Ioannidis suggests.

What does this have to do with peer review? Obviously, if the system were functioning as intended, with teams of bona fide experts checking and double-checking each other’s work, we might expect that very few flawed studies would slip through. In other words, there wouldn’t be a replication crisis if peer review actually worked.

Unfortunately, the accuracy of the system isn’t even the biggest issue. Like many institutions, it has devolved into a highly politicized echo chamber. Rather than a mechanism for determining and disseminating truth through a system of scholarly checks and balances, peer review has become an instrument for promoting and enforcing orthodoxy. The community of scholars rigorously but collegially testing each other’s hypotheses is gone; in its place, journal editors and reviewers have appointed themselves gatekeepers. Only those who recite the correct passwords are admitted.

Take the field of climate research, for example. For at least a couple of decades now, the scientific consensus has been that anthropogenic climate change poses an existential threat to humanity. Anyone who challenges that orthodoxy, regardless of the quality of his research or the logic of his arguments, finds it very difficult to publish his findings in leading journals. The gatekeepers (read: reviewers) simply won’t allow it.

Or how about transgender ideology? Even before we learned that the World Professional Association for Transgender Health (WPATH) was hiding and manipulating its data, why did very few scholars question the claim that social, medical, or surgical transitioning for minors reduced their suffering? You know the answer: They knew they couldn’t do so without derailing their careers. Even now, we take a professional risk just by pointing this out. That is not science, which advances the search for truth; it is politics, which impedes it.

In all fairness, it’s easy to understand why this happens. We’re not even claiming it’s entirely nefarious. It’s just human nature. Ideas that challenge the prescribed way of thinking have always been unpopular among those doing the prescribing, going back to Copernicus and Martin Luther. New findings and the theories that grow out of them threaten to discredit the theories of the previous generation of scholars—and guess who primarily serve as reviewers? When we say “politics,” we don’t necessarily mean that in the partisan sense but, rather, in the personal sense: Whose ox is being gored?

But, of course, partisan politics—and ideology, specifically—often enter the equation, as well. Even in disciplines that are not as politically fraught as climatology or “gender studies”—such as accounting or marketing—young scholars must still bow to the ideological gods of their seniors. They must pay proper homage to concepts such as “diversity, equity, and inclusion,” “whiteness,” and “marginalized populations,” even if those concepts have nothing whatsoever to do with their research or, worse, are unsupported by their findings. And, of course, if they really want to be published, they will find some way to tie those findings into the political flavor of the month. Hence, we get articles with titles such as “How Branding for Whiteness Disadvantages BIPOC Consumers” or “Addressing Marginalized Populations in Management Research.” (One of these is real; the other we made up. Can you tell which is which?)

So, what now? We believe it is time to return to the medieval “community of scholars” model—with a 21st-century twist. Sure, in most disciplines, it is nearly impossible to get all scholars together to pass around manuscripts (as anyone who has been to a conference can attest), but, with modern technology, scholars can indeed “pass around” their manuscripts, sharing their work in progress with colleagues from across the country and around the world.

Our idea involves creating official online forums for each discipline, where scholars can post essays about their ideas at any stage, laying out the theoretical background, proposing hypotheses, disclosing research findings (including methodology), and extrapolating to implications or predictions. Other scholars in the community can comment on those essays, offering critiques, providing missing information, and suggesting new directions in which to take the research. They can also try the experiments themselves to see if they get the same or similar results and “report back” to the group. Then the original authors can take that information and apply it to their further exploration of the research topic.

One advantage of this approach is that it is iterative, with each scholar building on the efforts of those who came before. Another is that scholars can “publish” regardless of their results. A common criticism of the current peer-review system is that scholars can publish only if they get positive results. Yet negative results are also results and help, in their own way, to advance knowledge. Just as scholars need to know what has been found to be true in order to build on that progress, so they must also know what has been proved to be false so they can avoid the same pitfalls.

Submissions to the forums would be time-stamped, so authors could easily prove ownership of ideas. The posts could be hyperlinked to make follow-up research and citations quick and easy. To discourage bad actors, there would be no anonymity for contributors and commenters. And the forums would be lightly moderated to ensure posts met scholarly standards, with appropriate decorum, civility, and attribution. But all ideas would be entertained. There would be no gatekeeping. Instead, the community would police itself, “ratioing” (to use the social-media term) rather than censoring “bad” ideas.
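To make the mechanics concrete, here is a minimal sketch, in Python, of how such a forum’s records might be organized. Every name in it (ScholarPost, ReplicationReport, rank_posts, and so on) is our hypothetical illustration, not a description of any existing platform: posts are time-stamped and signed, replication attempts are logged alongside the original work, and the community’s reaction orders posts without ever removing them.

```python
# Hypothetical sketch of the forum model described above.
# All class and function names are illustrative, not an existing system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Comment:
    author: str                      # real names only: no anonymity
    text: str
    timestamp: datetime = field(default_factory=_now)

@dataclass
class ReplicationReport:
    author: str
    replicated: bool                 # negative results get reported, too
    notes: str
    timestamp: datetime = field(default_factory=_now)

@dataclass
class ScholarPost:
    author: str
    title: str
    body: str                        # theory, hypotheses, methods, findings
    timestamp: datetime = field(default_factory=_now)  # proves ownership of the idea
    cites: list[str] = field(default_factory=list)     # hyperlinks to earlier posts
    comments: list[Comment] = field(default_factory=list)
    reports: list[ReplicationReport] = field(default_factory=list)
    endorsements: int = 0
    critiques: int = 0

    def ratio(self) -> float:
        """Community signal: critiques outweighing endorsements 'ratio'
        a post downward, but nothing is ever censored or deleted."""
        total = self.endorsements + self.critiques
        return self.endorsements / total if total else 0.5

def rank_posts(posts: list[ScholarPost]) -> list[ScholarPost]:
    # Ordering, not gatekeeping: every post stays visible,
    # sorted by community reception and then by recency.
    return sorted(posts, key=lambda p: (p.ratio(), p.timestamp), reverse=True)
```

The key design choice in this sketch is that the community’s verdict affects only a post’s ranking, never its existence: weak work sinks in the ordering, making “ratioing rather than censoring” literal.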

Obviously, for this new model ultimately to take the place of the current peer-review system, universities would need to embrace it and figure out how to evaluate scholars’ productivity for the purposes of granting tenure and promotion—perhaps based on the number of posts and the community’s reaction to them.

But we believe this is where things are headed, and universities, disciplines, and learned societies would do well to get on board. The current system has outlived its usefulness, becoming a hindrance to the pursuit of truth rather than a means of supporting it.

Rob Jenkins is an associate professor of English at Georgia State University-Perimeter College. The views expressed here are his own. Michael R. Jenkins is an assistant professor of marketing at Mississippi State University. He studies the impact of branding on consumer behavior, with a focus on small businesses. The views expressed here are his own and not those of his employer.