Confused by health and science news?

Coffee causes cancer! Coffee prevents cancer! Drinking gallons of orange juice and popping vitamin pills will make you live longer! Drinking gallons of orange juice and popping vitamin pills will not make you live longer! Is lithotripsy, which uses ultrasound to blast kidney stones into tiny bits, better than surgery? It might not be as safe as doctors and patients think it is. Does everybody with slightly elevated cholesterol really need to take high doses of cholesterol-lowering drugs? Is eating whole eggs as dangerous as smoking? These questions represent a microscopic fraction of the mysteries that remain in medicine. The Institute of Medicine estimates that only 4 percent of treatments and tests are backed up by strong scientific evidence; more than half have very weak evidence or none. (1) Which study should one believe? Perhaps, if you wait long enough, the study you choose to believe will be contradicted by some future research. That is not as far-fetched as it might seem at first blush.



John Ioannidis reported that about one-third of studies published in reputable peer-reviewed journals didn’t hold up. He looked at 45 highly cited studies published between 1990 and 2003 and found that subsequent research contradicted the results of seven of them, while another seven turned out to have weaker effects than originally reported. In other words, 14 of the 45, roughly a third, did not withstand the test of time. (2) This translates into a lot of medical misinformation! Ioannidis reviewed high-impact journals including The New England Journal of Medicine, The Journal of the American Medical Association (JAMA), and The Lancet, along with a number of others. Each article had been cited at least 1,000 times, all within a span of 13 years.

Ioannidis is what’s known as a meta-researcher, and he has become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again and in many different ways, that much of what biomedical researchers conclude in published studies, conclusions that doctors keep in mind when they recommend surgery for heart disease or back pain, is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community, where it is heavily cited, and he is a big draw at conferences. (3)

Ioannidis also found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed, in one case at least twelve years after the results were discredited. These findings are worse than they sound. Ioannidis had been examining only the tiny fraction, less than one-tenth of one percent, of published medical research that makes it into the most prestigious journals. In other words, the error rate he found can easily be seen as an extremely optimistic assessment. Throw in the presumably less careful work from lesser journals, and take into account the way results end up being spun and misinterpreted by university and industry PR departments and by journalists, and it’s clear that the wrongness rate would only worsen from there, reports David Freedman. (3)

Others are reporting similar results. Researchers at Spain’s University of Girona went back over the data from forty-four papers in the British Medical Journal and Nature, and found statistical errors in a quarter of the British Medical Journal papers and in 38 percent of the Nature papers. (4)

In a number of cases, the explanation for the discrepancies was precisely what you’d expect: sample size. The smaller the group and the shorter the study, the more likely it was that subsequent, deeper investigation contradicted or altered the original thesis. Any experiment or measurement that examines a large number of test subjects will have a smaller margin of error than one with fewer subjects. Not surprisingly, results of experiments and studies with small samples often appear in the literature, and these results frequently suggest that the observed effects are quite large, at one end or the other of that wide margin of error. When researchers attempt to demonstrate the effect on a larger sample of subjects, the margin of error is smaller and so the effect size seems to shrink or decline. (5)
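To see why small samples invite exaggerated findings, consider a minimal simulation (a sketch of the statistics only, not drawn from any of the studies cited here; the true effect, the noise level, and the sample sizes are all invented for illustration). It estimates the same hypothetical treatment effect from progressively larger samples and prints each estimate with its 95 percent margin of error:

```python
# Sketch: the margin of error of an estimated effect shrinks as 1/sqrt(n).
# All numbers here (true_effect, noise_sd, sample sizes) are hypothetical.
import random
import statistics

random.seed(42)
true_effect = 0.2   # the "real" treatment effect we are trying to measure
noise_sd = 1.0      # person-to-person variability in outcomes

def estimate_effect(n):
    """Estimate the effect from a sample of n subjects."""
    outcomes = [random.gauss(true_effect, noise_sd) for _ in range(n)]
    mean = statistics.fmean(outcomes)
    se = statistics.stdev(outcomes) / n ** 0.5  # standard error of the mean
    return mean, se

for n in (10, 100, 1000, 10000):
    mean, se = estimate_effect(n)
    print(f"n = {n:>5}: estimated effect = {mean:+.3f} +/- {1.96 * se:.3f}")
```

With ten subjects the estimate can land far from the true value and the margin of error is wide; each hundredfold increase in sample size cuts that margin by a factor of about ten.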
Publication bias is also a part of the reason for the decline effect: seemingly significant experimental results are published much more readily than those that suggest no experimental effect or only a small one. People, including journal editors, naturally prefer papers announcing or at least suggesting a dramatic breakthrough to those saying, in effect, “Ehh, nothing much here,” says John Allen Paulos. (5) A short simulation at the end of this article shows how that filtering alone can inflate published effects.

Bias is an inescapable element of research, especially in fields such as biomedicine that strive to isolate cause-and-effect relations in complex systems in which the relevant variables and phenomena can never be fully identified or characterized. Yet if biases were random, multiple studies ought to converge on the truth. Evidence is mounting that biases are not random. A comment in Nature reported that researchers at Amgen were able to confirm the results of only 6 out of 53 landmark studies in preclinical research. (6)

There is also the problem of poor experimental design and of sometimes unknown confounding variables (even different placebos) whose effects can mask or reverse the suspected effect. The human tendency to exaggerate results, and to indulge one’s vanity by sticking with the initial exaggeration, cannot be dismissed either. (5)

So, what should you believe? What should you do? Mara Burney offers this suggestion: “All of this does not mean that medical studies are of no value or that health reports are always wrong. It simply serves as a caution that science is fluid, not static or absolute. Study design, sample size and whether a study is prospective or retrospective in nature will all affect the outcome of a trial. Scientists require more than one study, regardless of how large or well-designed that one may be, before they accept a result, and so should you. Every time that you see a headline claiming that X causes cancer or that Y prevents it, proceed with caution. A little skepticism may be just what the doctor ordered.” (7)
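The publication-bias mechanism Paulos describes can be made concrete with a companion sketch to the one above, again with purely invented numbers: many small trials of a weak hypothetical effect are run, only the ones that happen to look statistically significant get “published,” and larger unfiltered replications are then compared against the published record.

```python
# Sketch: publication bias alone can manufacture a "decline effect".
# All parameters (true_effect, trial sizes, trial counts) are hypothetical.
import random
import statistics

random.seed(7)
true_effect = 0.1   # weak real effect
noise_sd = 1.0      # person-to-person variability

def trial(n):
    """Run one trial of n subjects; return (estimate, looks_significant)."""
    data = [random.gauss(true_effect, noise_sd) for _ in range(n)]
    mean = statistics.fmean(data)
    se = statistics.stdev(data) / n ** 0.5
    return mean, abs(mean) > 1.96 * se  # crude stand-in for editorial selection

# Small original studies: only the "significant" ones get published.
published = [m for m, significant in (trial(25) for _ in range(1000)) if significant]

# Larger follow-up studies: every result is counted, filtered or not.
replications = [trial(400)[0] for _ in range(1000)]

print(f"true effect:              {true_effect:.3f}")
print(f"average published effect: {statistics.fmean(published):.3f}")
print(f"average replication:      {statistics.fmean(replications):.3f}")
```

The published average comes out well above the true effect, because with only 25 subjects an estimate must drift far from zero to clear the significance filter; the larger replications, reported without any filter, cluster near the truth. The published effect then appears to shrink on replication, which is the decline effect in miniature.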
References

  1. Shannon Brownlee, Overtreated (New York: Bloomsbury, 2007), 92
  2. John P. Ioannidis, “Contradicted and initially stronger effects in highly cited clinical research,” JAMA, 294(2), 218, July 13, 2005
  3. David H. Freedman, “Lies, damn lies, and medical science,” The Atlantic, November 2010
  4. David H. Freedman, Wrong (New York: Little, Brown & Company, 2010), 64
  5. John Allen Paulos, “Study vs. study: the decline effect and why scientific ‘truth’ so often turns out wrong,” abcnews.com, January 2, 2011
  6. C. G. Begley and L. M. Ellis, “Drug development: raise standards for preclinical research,” Nature, 483, 531, March 28, 2012
  7. Mara Burney, “Don’t believe everything you read-even in medical journals,” healthfactsandfears.com, American Council on Science and Health, July 15, 2005



Jack Dini

Jack Dini is author of Challenging Environmental Mythology.  He has also written for American Council on Science and Health, Environment & Climate News, and Hawaii Reporter.

