Dr. Ioannidis and his team have been researching the accuracy and reliability of conclusions reached in scientific research articles for years. Here are some quotes from, and a link to, an article he published in 2005:
It can be proven that most claimed research findings are false.
He offers six corollaries, each with a detailed explanation (and scientific references) that I have truncated for brevity in this quote; a quick sketch of the arithmetic behind them follows the link below:
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. . . .
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. . . .
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. . . .
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. . . .
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. . . .
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. . . .
The red emphasis is mine, of course.
Why Most Published Research Findings Are False
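For anyone who wants to see how these corollaries cash out numerically, here is a minimal Python sketch of the positive predictive value (PPV) arithmetic from the paper: PPV = (1 - beta) * R / (R - beta*R + alpha), where R is the pre-study odds that a tested relationship is true, alpha is the type I error rate, and 1 - beta is the study's power. The specific R and power values looped over below are my own illustrative choices, not figures from the paper.

```python
# PPV of a claimed research finding, following the arithmetic in
# Ioannidis (2005): PPV = (1 - beta) * R / (R - beta*R + alpha).
# R = pre-study odds a tested relationship is true, alpha = type I
# error rate, power = 1 - beta.  Example values below are my own.

def ppv(R: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * R   # true relationships that reach significance
    false_positives = alpha      # null relationships that reach significance anyway
    return true_positives / (true_positives + false_positives)

# Corollaries 1 and 2 in action: lower power (small studies, small effects)
# drags the PPV down even with alpha held at the conventional 0.05.
for R in (1.0, 0.25, 0.1):           # pre-study odds of 1:1, 1:4, 1:10
    for power in (0.8, 0.5, 0.2):
        print(f"R={R:<4} power={power:<3} PPV={ppv(R, power=power):.2f}")
```

With R = 0.1 and power = 0.2 (a field testing long-shot hypotheses with small, underpowered studies), the PPV comes out around 0.29, i.e. most "significant" findings would be false, which is exactly the situation the corollaries describe.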
Here is a quote from (and a link to) an article published in 2010 in Science News:
During the past century, though, a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability.
“What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
Statistical significance is not always statistically significant.
Again, the red emphasis is mine.
Odds Are, It's Wrong — Science fails to face the shortcomings of statistics
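You can watch the "crapshoot" the article describes play out in a small simulation: if only a minority of tested hypotheses are real, a p < 0.05 cutoff still produces a stream of false "discoveries". This is only a sketch; the 10% base rate, the effect size, and the per-arm sample size are illustrative assumptions of mine, not numbers from the article.

```python
# Simulate many two-sample experiments, only 10% of which test a real
# effect, and see what fraction of the p < 0.05 "findings" are false.
# Base rate, effect size, and sample size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm = 10_000, 20
true_effect = 0.5                        # standardized mean difference when real
is_real = rng.random(n_studies) < 0.10   # 10% of tested hypotheses are true

false_hits = true_hits = 0
for real in is_real:
    a = rng.normal(0.0, 1.0, n_per_arm)
    b = rng.normal(true_effect if real else 0.0, 1.0, n_per_arm)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        true_hits += real
        false_hits += not real

print(f"significant results: {true_hits + false_hits}")
print(f"fraction of 'findings' that are false: "
      f"{false_hits / (true_hits + false_hits):.2f}")
```

Under these assumptions, well over half of the statistically significant results come from null effects, even though every individual test was "performed correctly" at the 5% level.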
IMO, the safest approach is simply to assume that the conclusions in any report you read are incorrect; you will be right most of the time.
Tex