Posted on 01/09/2011 10:42:27 AM PST by jazusamo
First, there were the true believers, like James Hansen, whose belief in the need to eliminate industrial civilization far predates the global warming explanation. (There is a side story to be told there as well. What do the true believers really believe? What do they advocate as ways to reduce humanity's environmental impact?) These true believers seem to be quite willing to ... adapt their scientific results to make sure that people on the outside are as frightened as possible.
There is another, larger group, who may or may not be true believers -- who can know what is in another man's heart? -- but who don't seem to worry too much about their own carbon impact, like Al Gore. (Oh, he buys indulgences from his own company, which is one little mercy -- he could conceivably instead say he would have built a bigger house with more carbon impact, and claimed a carbon credit.) A fair number of these people, though, seem to be set up to make an immense pile of money off the carbon markets, and they all seem to have impeccable political connections. This larger group makes sure that the true believers get big grants, and travel to conferences in Gstaad and Tahiti, and have well-financed platforms from which to speak.
It's that second group we most need to watch. In the old Soviet Union, these people -- the Communist Party members who received positions of power -- were called the nomenklatura. They weren't necessarily the true believers (in fact, a lot of the true Communists, like Beria and Trotsky, ended up dead or in Siberia), but they could mouth the slogans, pass on the Communist Party line, and play the system to get positions and power, dachas, and access to the "special" stores that always had sausage, green vegetables, and toilet paper.
And, of course, there is a third group: the rest of us. We are expected to pay the increased carbon offset costs quietly, cold in our darkened rooms, but warm in our hearts because we're saving the planet.
If you have not yet done so, I urge you to read "Lies, Damned Lies, and Medical Science" in the Atlantic Monthly. It documents the work of a Greek team of clinicians and Ph.D.s, including Professor John Ioannidis, who study whether medical-research studies can be trusted. According to the article, much of this research cannot be trusted.
He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. ... [H]e worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change -- or even to publicly admitting that there's a problem. [snip]
In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science "never minds" are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn't really help fend off Alzheimer's disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.
But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. "Randomized controlled trials," which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. "I realized even our gold-standard research had a lot of problems," he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.
This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. "The studies were biased," he says. "Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there." Researchers headed into their studies wanting certain results -- and, lo and behold, they were getting them.
"Science is at once the most questioning and ... sceptical of activities and also the most trusting," said Arnold Relman, former editor of the New England Journal of Medicine, in 1989. "It is intensely sceptical about the possibility of error, but totally trusting about the possibility of fraud."[1] Never has this been truer than of the 1998 Lancet paper that implied a link between the measles, mumps, and rubella (MMR) vaccine and a "new syndrome" of autism and bowel disease.
Authored by Andrew Wakefield and 12 others, the paper's scientific limitations were clear when it appeared in 1998.[2][3] As the ensuing vaccine scare took off, critics quickly pointed out that the paper was a small case series with no controls, linked three common conditions, and relied on parental recall and beliefs.[4] Over the following decade, epidemiological studies consistently found no evidence of a link between the MMR vaccine and autism.[5][6][7][8] By the time the paper was finally retracted 12 years later,[9] after forensic dissection at the General Medical Council's (GMC) longest ever fitness to practise hearing,[10] few people could deny that it was fatally flawed both scientifically and ethically. But it has taken the diligent scepticism of one man [Brian Deer], standing outside medicine and science, to show that the paper was in fact an elaborate fraud.
The now-discredited paper panicked many parents and led to a sharp drop in the number of children getting the vaccine that prevents measles, mumps and rubella. Vaccination rates in Britain fell as low as 80% by 2004, and measles cases rose sharply in the ensuing years.
In the United States, more cases of measles were reported in 2008 than in any other year since 1997, according to the Centers for Disease Control and Prevention. More than 90% of those infected had not been vaccinated or their vaccination status was unknown, the CDC reported.
"But perhaps as important as the scare's effect on infectious disease is the energy, emotion and money that have been diverted away from efforts to understand the real causes of autism and how to help children and families who live with it," the BMJ editorial states.
Wakefield has been unable to reproduce his results in the face of criticism, and other researchers have been unable to match them. Most of his co-authors withdrew their names from the study in 2004 after learning he had been paid by a law firm that intended to sue vaccine manufacturers -- a serious conflict of interest he failed to disclose.
Really great read, beginning to end. Thanks.
Thanks for the link, it’s a good article and sounds reasonable to me. Both Ioannidis and Snyder seem to view it from a practical standpoint.
"The wisest and most cautious of us all frequently gives credit to stories which he himself is afterwards both ashamed and astonished that he could possibly think of believing ... It is acquired wisdom and experience only that teach incredulity, and they very seldom teach it enough." - Adam Smith
BTTT!
I find her columns to be good reads as of late. I like her back-story and perspective as well.
Interesting article.
In my experience, the further one gets from basic research, the less reliable the results are.
In basic research, one designs a set of experiments to test a hypothesis. Every possible factor is kept constant, except for the factor being tested. Even in such a controlled situation, results sometimes are inconclusive, or do not support the hypothesis. At that point, it is time to think about the hypothesis: is it flawed, or was the experiment the correct one to test the hypothesis (sometimes one cannot predict that before doing the experiment)? The hypothesis and/or experiment must be adjusted and tested again. It is a slow process, but one who is highly critical of one’s own results and is willing to change course will eventually get to solid data. I never did prove the hypothesis that I set out to investigate in graduate school, but I got good solid results and some evidence that my overall hypothesis was, at least, on the right track. I have a paper that has been referenced at least 19 times—more than half of scientific articles are never referenced.
Contrast that to the majority of medical “studies.” Most MDs know nothing about experimental design, hypothesizing, or interpretation of results. They use formulaic study designs, in which they measure every possible patient characteristic relevant to their specialty they can think of, and then turn over mountains of measurements to a statistician. The statistician then tests to see if any of the test results are even remotely correlated—and, as the article linked in your post says, at the usual significance threshold of p < 0.05, roughly one out of every twenty tests of a nonexistent relationship will come up “significant” by chance alone. Also, correlation is not causation, a fact ignored by most study authors (else we would not have the current hysteria over how drinking soda causes obesity). And then they interpret the data and come up with conclusions that fit their preconceived bias, instead of conclusions that fit the data. At best, studies should identify areas of potential future research—but instead, they’re used as the research end-point.
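The one-in-twenty point is easy to check for yourself. Here is a minimal simulation sketch (my own illustration, not from the article): correlate a pure-noise “outcome” against pure-noise “patient characteristics” two thousand times, and about 5% of the tests will clear the p < 0.05 bar even though no real relationship exists anywhere in the data.

```python
# Multiple-comparisons sketch: correlate 2,000 pairs of pure-noise samples
# (think: 100 "studies" each fishing through 20 patient characteristics).
# With a p < 0.05 threshold, about 1 in 20 tests looks "significant" by chance.
import random
import math

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

N_PATIENTS = 50
N_TESTS = 2000
# For n = 50, |r| > ~0.279 corresponds roughly to p < 0.05 (two-sided).
R_CRIT = 0.279

false_positives = 0
for _ in range(N_TESTS):
    outcome = [random.gauss(0, 1) for _ in range(N_PATIENTS)]
    measure = [random.gauss(0, 1) for _ in range(N_PATIENTS)]
    if abs(pearson_r(outcome, measure)) > R_CRIT:
        false_positives += 1

rate = false_positives / N_TESTS
print(f"false-positive rate: {rate:.3f}")  # hovers around 0.05
```

None of these “findings” reflects anything real; the more characteristics a study measures, the more of these phantom correlations it will report unless it corrects for multiple comparisons.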
I have major issues with the majority of medical research. Most of it is GIGO (garbage in, garbage out). Basic science has its problems, too—just not as many.