A Finger On the Scales
Posted on 01/25/2014 4:49:39 AM PST by Kaslin
A conundrum has arisen. An increasing body of research suggests that much of our academic research is unreliable. True, it might be that these academics who have emerged to challenge the old consensus of reliability in academia are just as fraudulent as the research they attack, but it should put at least a dent in the perception that peer-reviewed studies can be touted as ironclad.
The Economist summarized the crisis in academia last year when they looked at how difficult it is to replicate the results of peer reviewed studies.
There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think...
Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. Dr Daniele Fanelli [University of Edinburgh] has looked at 21 different surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008. Only 2% of respondents admitted falsifying or fabricating data, but 28% of respondents claimed to know of colleagues who engaged in questionable research practices.
Study design itself leads to incorrect results, as well - but this has been a problem since the beginning of academia itself. There are many, many problems with treating the work of academics as dogma.
Unfortunately, that's what we tend to do. An appeal to authority is both comforting as an argument and a famous logical fallacy. The work of academics is important and academia itself adds massive value to society, but pundits and politicians bestow an unassailable quality on most of their conclusions.
"Economists agree" is a phrase common in the modern political lexicon, one that's used to wave away objections by deferring to a separate group of people with more education and credentials. It's not always the fallacy that the appeal to authority would imply, but it's used in much the same manner. Both Republicans and Democrats have employed the phrase freely, and it's used by non-politicians as well; on issues like tax cuts, school choice, and even the question of how to measure a consensus.
Inherent in this phrase is the idea that there is debate-ending value to what a group of academics believe on a subject, and that these people are unbiased arbiters of truth. Unsaid is what motivates the academics. Paul Krugman is fond of saying that the facts have a liberal bias. He might be right. Blank-slate unbiased truth-seekers might be drawn to academia and then, once they've really delved in-depth into the facts, declare themselves liberals due to the overwhelming nature of the evidence. Academics self-identify as liberals at a higher rate than any other profession.
There are other explanations. Moral theorist Jonathan Haidt posits that people are not machines of meta-rationality, and they bring prior intuitions into any area of study. While academia might attract personality types that are more amenable to introspection, that doesn't mean that academics are immune to this universal human trait. Just like everyone else, they arrive at their academic careers with preconceived notions of how the world works. Some of them can re-examine themselves and change with evidence. A lot of them do not.
This matters because policymakers in Washington, D.C. often outsource to academics and other experts. What's more, there's a strong urge to claim that there's widespread academic agreement on a topic that is both hotly debated and prone to fluctuation.
Take the letter put together by the Economic Policy Institute and signed by 73 economists - five of them Nobel laureates - urging Congress to pass Barack Obama's proposal to raise the minimum wage. It's a powerful argument for a lawmaker to say that they've got five Nobel Prize winners among dozens of other economists who think that President Obama's proposal is optimal policy in this political environment. But it's likely that many of these economists have based their reasoning on flawed research, and many more are looking through a lens of ideological priors that have colored their take on the evidence.
This isn't a partisan issue. More than 80% of economists have been found to favor free trade and raising the retirement age. Ideological priors and value judgments enter the debate before anyone puts a pen to paper to write a study, regardless of whether the outcome is agreeable to any partisan.
What this means is that there's a high bar to clear. This is not to say that academics can never reach a true consensus on divisive issues. Perhaps it's at some kind of 80% line (and perhaps global warming - an elephant in the room when it comes to this column and much conservative distrust of academics! - clears it), or perhaps it's lower. But the emerging field that casts doubt on the conclusions of a lot of academic research, combined with a Haidt-informed view of ideological priors and what we already know about bias and survey design, means that we must be aware of a finger on the scales. Beware politicians waving bare-majority consensus surveys of scientists, or letters signed by an impressive-sounding number of academics.
But all of that? It might just be my priors talking.
They have abandoned the scientific method. Instead of setting out to test theories, they set out to prove their own beliefs, and it’s easy to do so. All you have to do is ignore any exculpatory evidence, or you can simply cook the books.
Which the 0bama administration is great at
As the author suggests, what good are peer reviews when every single one of your peers are cut from the same Marxist cloth?
Every PhD candidate in the academic world already knows this. There are virtually no truths, only fund-able research.
“There are virtually no truths, only fund-able research.”
I was working for a PhD candidate in college when he suddenly realized that he'd spent all his research funds on a particular project and had not actually done any research. By the due date, later that week, he had reams of data and exactly the expected result. (It's a miracle.)
They start with a desired result and will accept nothing less, even if it means omitting unfavorable findings or including unproven findings.
Really good scientific research is done using a triple-blind approach.
Those who design the study do not gather data, and those who gather data do not analyze it and draw conclusions.
This process, of course, is quite rare. Its purpose is not to prevent intentional fraud, which is probably pretty rare, but to eliminate scientists finding what they expect to find.
We have a great deal of evidence, stretching over more than a century, that this is a HUGE problem in science.
It’s bad enough when such confirmation bias only helps scientists “prove” their own hypotheses. It’s far worse when (to use AGW as an example) finding A results in fame, fortune and the attentions of admiring female students, while finding B results in ostracism and loss of funding for future research.
I dated a Serbian PhD who was shocked at the complete arrogance of most of her American counterparts.
The way Einstein’s theory that mass/gravity bends light was tested was a beautiful example of how science should be done.
He presented his theory and the mathematics he believed would back it up, and then he stepped aside as hundreds of astronomers tested and re-tested the theory. After several years the evidence became overwhelming and forced a fact-based consensus.
If you had a telescope and an eclipse you could test it yourself today and get the same result.
Some of the tricks used by the MMGW crowd are “insider” peer review, in which raw data or the means used to collect it, as well as any data “adjustments”, are provided only to a sympathetic peer, effectively a co-conspirator, who will support it as legitimate.
At the lower level are “incestuous” or “round robin” peer reviews, which involve a small circle of friends; let’s call them A, B, C, D, and E. This is done to create a large volume of science fraud with many different papers that all agree on the falsehoods.
In the first paper, for example, A is the “lead researcher”, with B and C as “co-authors”, and the paper “peer reviewed” by D and E. In the second paper, B is the “lead researcher”, with C and D as “co-authors”, with “peer review” by E and A.
This way, each of them gets credit as a lead researcher at some point, as well as getting peer review credit.
It is all clearly science fraud, and rotten to the core. It is also heavily subsidized by both government and those individuals who seek to profit financially from the “scientific conclusions”.
The long term effect is that it terribly cripples honest scientific pursuit, replacing science with politically correct magic. And to protect their schemes, they attack honest scientists who dispute them.
Much like their philosophical predecessor Trofim Lysenko.
I would have written the title of the article:
A MIDDLE finger on the scales
Now, if it was really the same scientists doing all those "studies", a referee would almost certainly have objected that the paper looked like a fishing expedition and that a Bonferroni adjustment was needed to the significance tests (do 20 tests, divide the significance level by 20, but report it as the original significance level -- so reporting a p<.05 significance would need each individual test to have p<.0025).
On the other hand, if 20 teams each conducted well-designed, honest studies of a different color of jelly bean (in the silly example), the one that studied green would get to publish a paper with their "positive" finding, while the other 19 would have "no findings" and wouldn't get to publish. The chance finding that green jelly beans cause acne would then be one of the irreproducible results in a peer reviewed paper, even though the study had been well-designed and no fraud was involved.
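The arithmetic behind the two comments above is easy to check with a minimal simulation. This sketch (my own illustration, not from the thread) relies on the standard fact that p-values are uniformly distributed under the null hypothesis, and compares the chance of at least one false positive across 20 "jelly bean" tests with and without the Bonferroni correction:

```python
import random

random.seed(42)

TESTS = 20                     # one "jelly bean color" per test
ALPHA = 0.05
BONFERRONI = ALPHA / TESTS     # 0.05 / 20 = 0.0025, as in the comment above
TRIALS = 10_000

naive_hits = 0       # trials with at least one test "significant" at 0.05
corrected_hits = 0   # trials with at least one test significant at 0.0025

for _ in range(TRIALS):
    # Under the null hypothesis (no color causes acne),
    # each test's p-value is uniform on [0, 1).
    pvals = [random.random() for _ in range(TESTS)]
    if min(pvals) < ALPHA:
        naive_hits += 1
    if min(pvals) < BONFERRONI:
        corrected_hits += 1

# Expected analytically: 1 - 0.95**20 ~ 0.64 naive,
# 1 - (1 - 0.0025)**20 ~ 0.05 corrected.
print(f"P(>=1 false positive, naive):     {naive_hits / TRIALS:.3f}")
print(f"P(>=1 false positive, corrected): {corrected_hits / TRIALS:.3f}")
```

So an uncorrected 20-way fishing expedition "finds" something roughly two times out of three even when nothing is there, while the Bonferroni-corrected version holds the overall false-positive rate near the advertised 5% -- which is exactly the publication-bias mechanism the second comment describes.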
Quite frankly, I'm glad I'm in mathematics. You can't fake proofs, and even erroneous proofs sometimes lead to important advances (Fermat and Poincaré providing notable examples of this).
Setting out to prove their own theories is how scientists have always worked and always will work. That’s what drives science forward, sort of like the profit motive drives capitalism. Both systems require a countering force —competition — to work right and that is the problem with science right now I think. It is crony science.
Maybe it’s human nature, and maybe Einstein set the bar too high, but he refused to even test his own theories, believing that he would prejudice the results. He left it to others, and he said that, even if his theories passed the tests conducted by others, they might still not be valid. He was a scientist.
A real scientist does not set out to prove his theories. He sets out to test their validity, and does so skeptically.
A real scientist is a passionate being who loves his theory and will fight for it because he believes that it’s true. The risk is that he is wrong. This risk is mitigated by the other scientist who is passionately committed to his own theory and will engage in intellectual combat to advance and defend it. Both are scientists. It’s in the friction between them that the truth emerges and advances occur.
A real scientist applies the scientific method of inquiry, and draws conclusions. You’re describing a theologian.
Scientists are people not machines.
If they don’t utilize the scientific method of inquiry, they aren’t really scientists.
Negative findings are a valid finding and should be published.
The scientific method is so trivially obvious that it’s barely worth talking about. The interesting part of science happens well downstream of the scientific method, or upstream depending on how you want to look at it. I would guess that 99% of passionate scientific disputes are between scientists for whom the scientific method is a given.
Yup, but they aren’t, especially not in the social and behavioral sciences. You also didn’t see physics papers about the Higgs boson *not* having one of the predicted masses on the basis of no observations consistent with a Higgs at the appropriate energy after tens of thousands of hours of accelerator time.