Thursday, May 11, 2017

Medical studies are almost always bogus | New York Post - By Susannah Cahalan

How many times have you encountered a study — on, say, weight loss — that trumpeted one fad, only to see another study discrediting it a week later?
That’s because many medical studies are junk. It’s an open secret in the research community, and it even has a name: “the reproducibility crisis.”
For any study to have legitimacy, it must be replicated, yet only half of medical studies celebrated in newspapers hold water under serious follow-up scrutiny — and about two-thirds of the “sexiest” cutting-edge reports, including the discovery of new genes linked to obesity or mental illness, are later “disconfirmed.”
Though erring is a key part of the scientific process, this level of failure slows scientific progress, wastes time and resources, and costs taxpayers in excess of $28 billion a year, writes NPR science correspondent Richard Harris in his book “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” (Basic Books).
“When you read something, take it with a grain of salt,” Harris tells The Post. “Even the best science can be misleading, and often what you’re reading is not the best science.”
Take one particularly enraging example: for many years, research on breast cancer was conducted on melanoma cells that had been misidentified as breast cancer cells, which means that thousands of papers published in credible scientific journals were actually studying the wrong cancer. “It’s impossible to know how much this sloppy use of the wrong cells has set back research into breast cancer,” writes Harris.
Another study claimed to have invented a blood test that could detect ovarian cancer, which would mean much earlier diagnosis. The research was hailed as a major breakthrough on morning shows and in newspapers. Further scrutiny, though, revealed that the only reason the blood test “worked” was that the researchers ran the two batches of samples on two separate days: all the women with ovarian cancer on one day, and all the women without the disease the next. Instead of measuring differences caused by the cancer, the blood test had, in fact, measured the day-to-day differences in the machine.
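To see how a batch effect like this can masquerade as a biological signal, here is a minimal, hypothetical simulation (it does not reproduce the study's actual data or method; the group sizes, drift value, and use of a t-test are assumptions for illustration): cases and controls are measured on different days, the instrument drifts between days, and a standard statistical test then “detects” a difference even though the true disease signal is zero.

```python
# Hypothetical illustration of a batch (day-to-day) confound; not real study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # samples per group (assumed for illustration)

# True biology: cases and controls have identical biomarker distributions (no real signal).
cases_true = rng.normal(loc=10.0, scale=1.0, size=n)
controls_true = rng.normal(loc=10.0, scale=1.0, size=n)

# Confounded design: all cases run on day 1, all controls on day 2,
# and the instrument reads slightly higher on day 1 (calibration drift).
day1_drift, day2_drift = 0.8, 0.0
cases_measured = cases_true + day1_drift
controls_measured = controls_true + day2_drift

t, p = stats.ttest_ind(cases_measured, controls_measured)
print(f"p-value = {p:.2e}")  # tiny p-value: the groups look 'significantly' different

# The entire difference is instrument drift, not cancer.
diff = cases_measured.mean() - controls_measured.mean()
print(f"observed group difference = {diff:.2f} (injected drift was {day1_drift - day2_drift:.2f})")
```

Interleaving cases and controls on the same days, or randomizing the run order, would spread the drift across both groups and the spurious difference would disappear.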
So why are so many tests bogus? Harris has some thoughts.
For one, science is hard. Everything from unconscious bias — the way researchers see their data through the rosy lens of their own theses — to the types of beaker they use or the bedding that they keep mice in can cloud results and derail reproducibility.
Then there is the funding issue. During the heyday of the late ’90s and early aughts, research funding increased until Congress decided to hold funding flat for the next decade, creating an atmosphere of intense, some would say unhealthy, competition among research scientists. Now only 17 percent of grants get funded (compared with about a third three decades ago). Add this to the truly terrible job market for postdocs (only 21 percent land tenure-track jobs) and there is a greater incentive to publish splashy, counterintuitive studies, which have a higher likelihood of being wrong, writes Harris.
One effect of this “pressure to publish” situation is intentional data manipulation, where scientists cherry-pick the information that supports a hypothesis while ignoring the data that doesn’t — an all too common problem in academic research, writes Harris.
“There’s a constant scramble for research dollars. Promotions and tenure depend on making splashy discoveries. There are big rewards for being first, even if the work ultimately fails the test of time,” writes Harris.
This will only get worse if funding is cut further, something that seems inevitable under proposed federal budget cuts. “It only exacerbates the problems. With so many scientists fighting for a shrinking pool of money, cuts will only make all of these issues worse,” Harris says.
Luckily, there is a growing group of people working to expose the ugly side of how research is done. One of them is Stanford professor John Ioannidis, considered one of the heroes of the reproducibility movement. He’s written extensively on the topic, including a scathing paper titled “Why Most Published Scientific Research Findings Are False.”
He’s found, for example, that out of tens of thousands of papers touting discoveries of specific genes linked to everything from depression to obesity, only 1.2 percent had truly positive results. Meanwhile, Dr. Ioannidis followed 49 studies that had been cited at least a thousand times; seven of them had been “flatly contradicted” by further research. This included one that claimed estrogen and progestin benefited women after hysterectomies “when in fact the drug combination increased the risk of heart disease and breast cancer.”
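Ioannidis’s broader argument can be captured with back-of-the-envelope arithmetic: when only a tiny fraction of the hypotheses being tested are actually true, most “statistically significant” results are false positives. The sketch below uses illustrative assumptions (a 1-in-1,000 prior, 80 percent power, and a 5 percent false-positive rate), not figures taken from his paper.

```python
# Positive predictive value (PPV) of a "significant" finding under a low prior.
# All three numbers are illustrative assumptions, not taken from Ioannidis's work.
prior = 0.001   # fraction of tested gene-disease links that are actually real
power = 0.80    # chance a real link reaches statistical significance
alpha = 0.05    # chance a non-existent link reaches significance by luck

true_positives = power * prior
false_positives = alpha * (1 - prior)
ppv = true_positives / (true_positives + false_positives)

print(f"Share of 'significant' findings that are real: {ppv:.1%}")
# With these assumptions, roughly 1.6 percent of significant results reflect true
# links: the same order of magnitude as the 1.2 percent figure quoted above.
```

Raise the prior by testing better-motivated hypotheses, or demand replication before publication, and that fraction climbs sharply; leave it low and chase splashy long shots, and most published findings will be wrong.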
Other organizations like Retraction Watch, which tracks discredited studies in real time, and the Cochrane group, an independent network of researchers that pushes for evidence-based medicine, act as industry watchdogs. There is also an internal push for scientists to make their data public so it’s easier to police bad science.
The public can play a role, too. “If we curb our enthusiasm a bit,” Harris writes, “scientists will be less likely to run headlong after dubious ideas.”