The wisest and most cautious of us all frequently gives credit to stories which he himself is afterwards both ashamed and astonished that he could possibly think of believing . . . It is acquired wisdom and experience only that teach incredulity, and they very seldom teach it enough. - Adam Smith
BTTT!
I find her columns to be good reads as of late. I like her backstory and perspective as well.
Interesting article.
In my experience, the further one gets from basic research, the less reliable the results are.
In basic research, one designs a set of experiments to test a hypothesis. Every possible factor is kept constant, except for the factor being tested. Even in such a controlled situation, results sometimes are inconclusive, or do not support the hypothesis. At that point, it is time to think about the hypothesis: is it flawed, or was the experiment the correct one to test the hypothesis (sometimes one cannot predict that before doing the experiment)? The hypothesis and/or experiment must be adjusted and tested again.

It is a slow process, but one who is highly critical of one's own results and is willing to change course will eventually get to solid data. I never did prove the hypothesis that I set out to investigate in graduate school, but I got good solid results and some evidence that my overall hypothesis was, at least, on the right track. I have a paper that has been referenced at least 19 times (more than half of scientific articles are never referenced).
Contrast that to the majority of medical "studies." Most MDs know nothing about experimental design, hypothesizing, or interpretation of results. They use formulaic study designs, in which they measure every patient characteristic relevant to their specialty that they can think of, and then turn over mountains of measurements to a statistician. The statistician then tests to see whether any of the measurements are even remotely correlated, and, as the article linked in your post says, at the usual significance threshold roughly one in twenty of those correlations will look "significant" purely by chance. Also, correlation is not causation, a fact ignored by most study authors (else we would not have the current hysteria over how drinking soda causes obesity). And then they interpret the data and come up with conclusions that fit their preconceived bias, instead of conclusions that fit the data. At best, studies should identify areas of potential future research, but instead they are used as the research end-point.
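That one-in-twenty figure is just the significance threshold at work, and a toy simulation makes it concrete (this is my illustration, not from the linked article): when every null hypothesis is actually true, p-values are uniformly distributed, so each of 20 unrelated tests still has a 5% chance of coming up "significant."

```python
import random

random.seed(0)

ALPHA = 0.05      # conventional significance threshold
N_TESTS = 20      # number of unrelated correlations checked per "study"
N_TRIALS = 10_000 # number of simulated studies

# Under a true null hypothesis, a p-value is uniform on [0, 1],
# so drawing random.random() < ALPHA models a spurious "finding."
false_positives = [
    sum(1 for _ in range(N_TESTS) if random.random() < ALPHA)
    for _ in range(N_TRIALS)
]

avg = sum(false_positives) / N_TRIALS
at_least_one = sum(1 for f in false_positives if f >= 1) / N_TRIALS

print(f"average spurious findings per study: {avg:.2f}")   # about 1.0
print(f"studies with >= 1 spurious finding: {at_least_one:.2f}")  # about 0.64
```

The expected count is 20 x 0.05 = 1 spurious result per study, and about 1 - 0.95^20, or roughly 64% of such studies, will report at least one "significant" correlation even when nothing real is there.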
I have major issues with the majority of medical research. Most of it is GIGO: garbage in, garbage out. Basic science has its problems, too, just not as many.