Science is facing a "reproducibility
crisis" where more than two-thirds of researchers have tried and failed to
reproduce another scientist's experiments, research suggests.
This is frustrating clinicians and drug
developers who want solid foundations of pre-clinical research to build upon.
From his lab at the Center for Open Science in Charlottesville, Virginia, immunologist Dr Tim Errington runs the Reproducibility Project: Cancer Biology, which attempted to repeat the findings reported in five landmark cancer studies.
"The idea here is to take a bunch of
experiments and to try and do the exact same thing to see if we can get the
same results."
You could be forgiven for thinking that
should be easy. Experiments are supposed to be replicable.
The authors should have done it themselves
before publication, and all you have to do is read the methods section in the
paper and follow the instructions.
Sadly, nothing, it seems, could be further from the truth.
After several years of painstaking, detail-driven work (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.
Two more proved inconclusive, and in the fifth the team completely failed to replicate the result.
"It's worrying because replication is
supposed to be a hallmark of scientific integrity," says Dr Errington.
Concern over the reliability of the results
published in scientific literature has been growing for some time.
According to a survey published in the
journal Nature last summer, more than 70% of researchers have tried and failed
to reproduce another scientist's experiments.
Marcus Munafo is one of them. Now professor
of biological psychology at Bristol University, he almost gave up on a career
in science when, as a PhD student, he failed to reproduce a textbook study on
anxiety.
"I had a crisis of confidence. I
thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out
to be a scientist."
The problem, it turned out, was not with
Marcus Munafo's science, but with the way the scientific literature had been
"tidied up" to present a much clearer, more robust outcome.
"What we see in the published
literature is a highly curated version of what's actually happened," he
says.
"The trouble is that gives you a
rose-tinted view of the evidence because the results that get published tend to
be the most interesting, the most exciting, novel, eye-catching, unexpected
results.
"What I think of as high-risk,
high-return results."
The reproducibility difficulties are not
about fraud, according to Dame Ottoline Leyser, director of the Sainsbury
Laboratory at the University of Cambridge.
That would be relatively easy to stamp out.
Instead, she says: "It's about a culture that promotes impact over
substance, flashy findings over the dull, confirmatory work that most of
science is about."
She says it's about the funding bodies that want to secure the biggest bang for their buck, the peer-reviewed journals that vie to publish the most exciting breakthroughs, the institutes and universities that measure success in grants won and papers published, and the ambition of the researchers themselves.
"Everyone has to take a share of the
blame," she argues. "The way the system is set up encourages less
than optimal outcomes."
For its part, the journal Nature is taking
steps to address the problem.
It's introduced a reproducibility checklist
for submitting authors, designed to improve reliability and rigour.
"Replication is something scientists
should be thinking about before they write the paper," says Ritu Dhand,
the editorial director at Nature.
"It is a big problem, but it's
something the journals can't tackle on their own. It's going to take a
multi-pronged approach involving funders, the institutes, the journals and the
researchers."
But we need to be bolder, according to the
Edinburgh neuroscientist Prof Malcolm Macleod.
"The issue of replication goes to the
heart of the scientific process."
Writing in the latest edition of Nature, he
outlines a new approach to animal studies that calls for independent,
statistically rigorous confirmation of a paper's central hypothesis before
publication.
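In practice, "statistically rigorous" is largely a question of sample size. The sketch below (illustrative numbers, not figures from Macleod's Nature piece) uses the standard normal-approximation power formula to estimate how many animals per group a confirmatory study would need to detect an effect of a given size:

```python
from math import ceil
from statistics import NormalDist

# Hypothetical sketch of the power calculation behind a "statistically
# rigorous" confirmation: how many subjects per group a replication
# needs to detect a given effect. All numbers here are illustrative.

def replication_n(effect_size: float, alpha: float = 0.05,
                  power: float = 0.9) -> int:
    """Per-group sample size for a two-sided, two-sample comparison,
    using the standard normal approximation to the t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 1.28 for 90% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Published effects are often inflated (see the simulation above), so a
# cautious replication powers for a smaller effect than the one reported.
for d in (0.8, 0.5, 0.3):
    print(f"standardised effect d = {d}: {replication_n(d)} per group")
# d = 0.8 ->  33 per group
# d = 0.5 ->  85 per group
# d = 0.3 -> 234 per group
```

The arithmetic is unforgiving: halving the assumed effect size roughly quadruples the animals required, which is why independent confirmations powered for realistic, deflated effects tend to be much larger than the original studies they test.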
"Without efforts to reproduce the
findings of others, we don't know if the facts out there actually represent
what's happening in biology or not."
Without knowing whether the published
scientific literature is built on solid foundations or sand, he argues, we're
wasting both time and money.
"It could be that we would be much
further forward in terms of developing new cures and treatments. It's a
regrettable situation, but I'm afraid that's the situation we find ourselves
in."
You can listen to Tom Feilden's report and
the further discussion on BBC
Radio 4's Today programme.