Why ‘Publish or Perish’ is Killing Good Science

In academia, there’s a phrase every researcher hears early in their career: publish or perish. It’s meant as a warning - keep producing papers, or risk your career stalling. But over time, it’s become more than a warning. It’s become the operating system of the scientific world.

And that’s a problem.

The Metrics That Mislead

The publish-or-perish culture is fuelled by quantitative metrics like the H-index and journal Impact Factor. (A researcher’s H-index is h if h of their papers have each been cited at least h times - so ten papers with ten citations apiece score an H-index of 10, while a single landmark paper cited 500 times scores just 1.) On paper, these look like objective measures of success. In reality, they’re proxies for productivity and prestige, not rigour or reproducibility.

These metrics reward:

  • Quantity over quality: multiple thin papers instead of one robust study
  • Strategic publishing: aiming for high-IF journals regardless of fit or readership
  • Safe science: sticking to low-risk projects that guarantee publishable results
  • Or overly “sexy” science: methods so novel and convoluted that nobody else has any realistic hope of reproducing the results

When the goal is to hit a number, science becomes a numbers game.

The Human Cost

This pressure doesn’t just distort the research; it harms the people doing it. Researchers feel compelled to work longer hours, cut corners, or frame results to appear more “significant” than they are. The fear of gaps in one’s publication record can discourage necessary but slow work, like replication studies or long-term data collection.

For early-career researchers, the stakes are even higher. One “quiet year” without publications can mean losing a fellowship, funding, or a permanent job opportunity.

The Quality Crisis

The results of this system are visible:

  • Irreproducible findings clogging the literature
  • Retractions that come too late to stop flawed studies from being cited
  • Neglected datasets that are never shared or fully analysed
  • Missed innovation because risky, unconventional projects don’t make the “safe bet” list

The tragedy is that important work can be left unfinished or unpublished simply because it doesn’t score well on traditional metrics.

How We Can Change It

We need to shift the reward system from counting papers to evaluating their quality, transparency, and impact on the field. That means valuing:

  • Rigorous methodology
  • Openness in data and code
  • Willingness to engage with criticism and improve work
  • Contributions to reproducibility and replication

Paperstars is one attempt to help make that shift. By letting researchers rate and review published papers based on structured, qualitative criteria, we aim to give the community a more honest, nuanced way to assess research - one that reflects substance, not just numbers.

A Better Future for Science

Science should be a marathon, not a sprint. It should reward the researcher who spends years gathering irreplaceable field data, not just the one who can turn around multiple papers in a semester.

If we can move beyond publish or perish, we might find a culture where scientists can take the time to ask better questions, design stronger studies, and build knowledge that actually lasts.

If you’re tired of the paper-counting game, help us change it: Join Paperstars and start reviewing research for what it’s really worth.