
Most scientific papers are probably wrong
New Scientist ^ | 8/30/05 | Kurt Kleiner

Posted on 08/30/2005 10:29:44 AM PDT by LibWhacker

To: Doctor Stochastic
Not only do many (if not most) scientists not understand statistics, many statiticians don't either.

Yeah, but what about statisticians?

141 posted on 08/31/2005 6:08:01 AM PDT by general_re ("Frantic orthodoxy is never rooted in faith, but in doubt." - Reinhold Niebuhr)

To: LibWhacker
I am very curious whether this article refers to certain branches of science, or all branches. Also, it sounds like it discusses 'studies' and not hard, experimental sciences. That being said, I've seen lots of mistakes in journal articles in chemistry. The mistakes are normally mixing up reaction products and their properties, or some other dumb mistake that doesn't take away from the core of the research.

Another important point: MOST SCIENTIFIC PAPERS ARE BASED ON STUDENT RESEARCH!

142 posted on 08/31/2005 6:13:25 AM PDT by doc30 (Democrats are to morals what an Etch-A-Sketch is to Art.)

To: Agamemnon

That is a really clueless post.

It's just full of ad hom attacks with no substance.

Here let me do the same thing (we'll see how you like it):

[sarc]
Intelligent Design says that life changes through intelligent guidance and selection. This is exactly what inspired Hitler and eugenics, neither of which can occur without intelligent intervention. Eugenics of course is an example of microevolution, something the creationists and IDiots are keen to support. Also, the Soviet Union was opposed to Darwinian evolution and executed those biologists who supported it. Seems the ID position is one that communists espouse! Therefore we can clearly see that creationists and IDiots are not true conservatives.
[/sarc]

Oh yes, and even ID says we share common ancestry with chimpanzees - perhaps you didn't know that.


143 posted on 08/31/2005 6:20:16 AM PDT by bobdsmith

To: general_re

Statisticians (as opposed to statiticians), are just mathematicians, broken down by age and sex.


144 posted on 08/31/2005 8:38:45 AM PDT by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)

To: Doctor Stochastic
Statisticians (as opposed to statiticians), are just mathematicians, broken down by age and sex.

Can old, oversexed mathematicians be refurbished after they've broken down, or is this a hypothetical situation?

145 posted on 08/31/2005 8:49:47 AM PDT by general_re ("Frantic orthodoxy is never rooted in faith, but in doubt." - Reinhold Niebuhr)

To: general_re

That would depend on the details of "furbish."


146 posted on 08/31/2005 9:31:09 AM PDT by Doctor Stochastic (Vegetabilisch = chaotisch is der Charakter der Modernen. - Friedrich Schlegel)

To: Doctor Stochastic

It's a fact, 90% of statisticians think 90% of statisticians know nothing, lol.


147 posted on 08/31/2005 10:46:46 AM PDT by LibWhacker

To: doc30

It doesn't refer to certain branches, but I think it probably should. In my experience, the hard sciences are head and shoulders above the rest, followed by the social sciences. Medicine is dead last.


148 posted on 08/31/2005 11:15:27 AM PDT by LibWhacker

To: LibWhacker

Why Most Published Research Findings Are False - John P. A. Ioannidis

John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America. E-mail: jioannid@cc.uoi.gr

Competing Interests: The author has declared that no competing interests exist.

Summary

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.

Modeling the Framework for False Positive Findings

Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

It can be proven that most claimed research findings are false

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 - β (where β is the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 - β)R/(R - βR + α). A research finding is thus more likely true than false if (1 - β)R > α. Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 - β)R > 0.05.
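To make the formula concrete, here is a minimal Python sketch of the PPV calculation above (the function name, default values, and printed examples are my own illustration, not part of the paper):

# Positive predictive value of a claimed finding in the 2 x 2 framework:
# PPV = (1 - beta) * R / (R - beta * R + alpha)
def ppv(R, alpha=0.05, power=0.80):
    beta = 1.0 - power  # Type II error rate
    return power * R / (R - beta * R + alpha)

# A finding is more likely true than false iff (1 - beta) * R > alpha.
for R in (2.0, 1.0, 0.1, 0.01):
    print(f"R = {R:5.2f}  PPV = {ppv(R):.3f}  "
          f"more likely true: {0.80 * R > 0.05}")

At 80% power and α = 0.05, pre-study odds below roughly 1:16 already push the PPV under 50%.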

What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables.

Bias

First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been “research findings,” but nevertheless end up presented and reported as such, because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design, data, analysis, and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption, since typically it is impossible to know which relationships are indeed true. In the presence of bias (Table 2), one gets PPV = ([1 - β]R + uβR)/(R + α - βR + u - uα + uβR), and PPV decreases with increasing u, unless 1 - β ≤ α, i.e., 1 - β ≤ 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1. Conversely, true research findings may occasionally be annulled because of reverse bias. For example, with large measurement errors relationships are lost in noise [12], or investigators use data inefficiently or fail to notice statistically significant relationships, or there may be conflicts of interest that tend to “bury” significant findings [13]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However, it is probably fair to say that reverse bias is not as common. Moreover, measurement errors and inefficient use of data are probably becoming less frequent problems, since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless, reverse bias may be modeled in the same way as bias above. Also, reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance.
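The bias-adjusted formula can be sketched the same way (again my own naming; u is the bias proportion defined in this section, and the example values are arbitrary):

def ppv_with_bias(R, u, alpha=0.05, power=0.80):
    # u: proportion of analyses that would not have been "research
    # findings" but are reported as such anyway, because of bias.
    beta = 1.0 - power
    numerator = power * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# PPV erodes quickly as bias grows, even at decent pre-study odds (R = 0.5).
for u in (0.0, 0.1, 0.3, 0.5):
    print(f"u = {u:.1f}  PPV = {ppv_with_bias(R=0.5, u=u):.3f}")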

Testing by Several Independent Teams

Several independent teams may be addressing the same sets of research questions. As research efforts are globalized, it is practically the rule that several research teams, often dozens of them, may probe the same or similar questions. Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding, and this receives unilateral attention. The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 - β^n)/(R + 1 - [1 - α]^n - Rβ^n) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 - β < α, i.e., typically 1 - β < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term β^n is replaced by the product of the terms β_i for i = 1 to n, but inferences are similar.
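The multi-team case admits the same kind of sketch (my own illustration of the Table 3 formula, ignoring bias):

def ppv_multi_team(R, n, alpha=0.05, power=0.80):
    # Chance a claimed finding is true when n independent teams of equal
    # power probe the same question and at least one reports significance.
    beta = 1.0 - power
    return (R * (1.0 - beta ** n)
            / (R + 1.0 - (1.0 - alpha) ** n - R * beta ** n))

# PPV declines steadily as more teams chase the same question.
for n in (1, 2, 5, 10, 25):
    print(f"n = {n:2d}  PPV = {ppv_multi_team(R=0.5, n=n):.3f}")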

Corollaries

A practical example is shown in Box 1. Based on the above considerations, one may deduce several interesting corollaries about the probability that a research finding is indeed true.

Box 1. An Example: Science at Low Pre-Study Odds

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10^-4, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10^-4. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10^-4.

Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10^-4. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10^-4, hardly any higher than the probability we had before any of this extensive research was undertaken!
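As a numerical check (my own, not the paper's, using the formulas reconstructed above with power = 0.60 and α = 0.05), the single-study and bias figures in this box can be reproduced as follows:

R, alpha, power = 1e-4, 0.05, 0.60
beta = 1.0 - power

# Single study: post-study probability and gain over the pre-study probability.
ppv_single = power * R / (R - beta * R + alpha)
print(f"single study PPV = {ppv_single:.2e}")                  # ~12 x 10^-4
print(f"fold increase    = {ppv_single / (R / (R + 1)):.1f}")  # ~12

# The same study in the presence of bias u = 0.10.
u = 0.10
ppv_biased = (power * R + u * beta * R) / (
    R + alpha - beta * R + u - u * alpha + u * beta * R)
print(f"with u = 0.10    = {ppv_biased:.2e}")                  # ~4.4 x 10^-4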

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. Small sample size means smaller power and, for all functions above, the PPV for a true research finding decreases as power decreases towards 1 - β = 0.05. Thus, other factors being equal, research findings are more likely true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized) [14] than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller) [15].

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3–20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1–1.5) [7]. Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information, such as microarrays and other high-throughput discovery-oriented research [4,8,17], should have extremely low PPV.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28].

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29].

These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation.

Most Research Findings Are False for Most Research Designs and for Most Fields

In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to “correct” the low power of single studies, is probably false if R ≤ 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance of being true, if R = 1:10. Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30,31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias.
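The flavor of these simulations can be reproduced with the bias-adjusted formula; the power/R/u combinations below are my own reading of the kinds of designs the paragraph describes, not the paper's exact Table 4:

def ppv(power, R, u, alpha=0.05):
    # Bias-adjusted positive predictive value, as derived above.
    beta = 1.0 - power
    return (power * R + u * beta * R) / (
        R + alpha - beta * R + u - u * alpha + u * beta * R)

scenarios = [  # (description, power, pre-study odds R, bias u)
    ("Adequately powered RCT, 1:1 odds, little bias",  0.80, 1.0,   0.10),
    ("Underpowered early-phase trial",                 0.20, 0.2,   0.20),
    ("Well-powered epidemiological study, R = 1:10",   0.80, 0.1,   0.30),
    ("Discovery-oriented massive testing, R = 1:1000", 0.20, 0.001, 0.80),
]
for name, power, R, u in scenarios:
    print(f"{name}: PPV = {ppv(power, R, u):.4f}")

These land near the figures quoted above: roughly 85%, about one in four, about one in five, and essentially zero, respectively.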

Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.

For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.
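The “null field” idea lends itself to a toy Monte Carlo sketch (entirely my own construction, with made-up numbers): when no true effects exist but a systematic bias inflates reported estimates, the mean claimed effect recovers the bias, not any real signal.

import math
import random

random.seed(1)
bias_log_rr = math.log(1.3)  # hypothetical net bias operating in the field
se = 0.10                    # sampling error of each study's log relative risk

# 60 "nutrient" studies in a null field: every true log RR is 0, so each
# reported estimate is just bias plus noise.
claimed = [bias_log_rr + random.gauss(0.0, se) for _ in range(60)]
mean_rr = math.exp(sum(claimed) / len(claimed))
print(f"mean claimed RR = {mean_rr:.2f}  (true effect 1.0; net bias 1.3)")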

For fields with very low PPV, the few true relationships would not distort this overall picture much. Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.

Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating.

How Can We Improve the Situation?

Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown “gold” standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32–34].

Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized trials [35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials.

Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values (the pre-study odds) where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [36].

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context. .....

Footnotes [snip]

Published online 2005 August 30. doi: 10.1371/journal.pmed.0020124.
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=16060722

Copyright : © 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


149 posted on 08/31/2005 8:22:26 PM PDT by Matchett-PI ("Science without religion is lame, religion without science is blind." - Albert Einstein)

Placemarker
150 posted on 09/01/2005 3:46:24 AM PDT by PatrickHenry (Felix, qui potuit rerum cognoscere causas. The List-O-Links is at my homepage.)

To: Matchett-PI

After reading the entire paper you posted, it seems to me the title is very misleading. It should be "Most study-based medical research papers are probably false." This is very important. Yesterday, Rush decided to trash all of science based on this paper. He basically said science was a bunch of B.S. because half of what they publish is wrong. Thanks, Rush.


151 posted on 09/01/2005 5:39:36 AM PDT by doc30 (Democrats are to morals what an Etch-A-Sketch is to Art.)

To: doc30; oldglory; MinuteGal; JulieRNR21; mcmuffin; sheikdetailfeather; gonzo; Web Offset; blackie; ..

"After reading the entire paper you posted [#149], it seems to me the title is very misleading. It should be "Most study-based medical research papers are probably false." This is very important. Yesterday, Rush decided to trash all of science based on this paper. He basically said science was a buch of B.S. because half of what they publish is wrong. Thanks Rush." ~ doc30

I heard what he said, and your negative "all-or-nothing" take on it --- i.e., "Rush decided to trash all of science ...basically said science was a bunch of B.S. because half of what they publish is wrong" --- is a ridiculous misrepresentation of what he said. Doubt me? As a 24/7 member, I'll be glad to provide you with the transcript if you like. Let me know.

On the contrary, I would bet my bottom dollar that his view of the value of science is more in line with the opinion about it as expressed here:

"...... science has after a fashion demonstrated the soul.

It doesn't have a tremendous amount of evidence there but this is an interesting thing to ponder. When the brains of some people are opened they can touch the brain with electrodes to stimulate different memories and the like.

This is why some people have argued that memories are merely a chemical kind of response and don't have any relationship to a self, a separate soul, a person other than the brain.

But when scientists have stimulated part of the brain and the patient is conscious, the patient can actually tell whether a memory is being stimulated by the scientist or whether the memory is being brought forward out of their own consciousness.

They say, "Hey, you did that. I didn't."

This makes a very powerful point. "You stimulated that memory, I didn't." Who's the "I?"

The "I" was the person inside there, the "I" is the soul.

So there's a distinction between a chemical response that produces a memory and a volitional response that produces a memory. So it is not entirely true that there is not scientific evidence for the existence of the soul because there is some.

But there's another point that's actually quite a bit more important.

That's the fundamental point of whether science is the only road to truth. And there are actually three different ways to refute that. And it's very straightforward.

You can almost sum them up under one concept.

The idea is that if science is the only way to truth then science itself is self-refuting because science is built on a series of truths that cannot be demonstrated by science but must be in place even for science to be valid.

For example, is orderliness in the universe an illusion or is that real?

Is the external world knowable at all?

Are the intellect and the five senses reliable tools to examine the world?

Are values like "be objective" or "report data honestly" appropriate in the scientific endeavor?

Is nature basically uniform?

Do numbers in truth exist?

Do the laws of logic apply to reality?

All of these things are non-scientific questions but they relate to the issue of truth that must necessarily be in place for science even to be practiced.

So the point I'm making is that if you hold the belief that science is the only thing that is a measure of truth, then science is in hot water because science can't justify itself.

Science is not the sole arbiter of truth.

Ethics is another source of truthful information.

Philosophy is another source of truthful information.

History...Do you know that even mathematics is not scientific? Math is used in science, it underlies science, but you cannot prove math scientifically.

So the point is this: it's an empty claim by Dr. Sagan that the soul can't exist because no scientific evidence has been produced to support the idea that there is a soul.

There can be other kinds of evidence that are not merely scientific yet be very valid. ......." ~ Gregory Koukl http://www.str.org/free/commentaries/science/saganand.htm

bttt


152 posted on 09/01/2005 7:38:32 AM PDT by Matchett-PI ("Science without religion is lame, religion without science is blind." - Albert Einstein)

To: Matchett-PI
science has after a fashion demonstrated the soul

I don't think that is the case at all. The demonstration you cited is analogous to a doctor striking a reflex in someone's knee. You know your body kicked, but you know you didn't order it to do so. Why would stimulating memories electrically be any different from this?

Science does have limits. Science requires something be 'observable' or 'detectable'. It also assumes that the universe operates under predictable patterns. Outside those limitations, you aren't doing science, you are entering philosophy.

Please provide me with exactly what he did say. All I remember was him talking about some scientific research, then saying it was probably wrong because half of all scientific papers were wrong. I could tell there was some facetiousness in his voice, but there are many listeners who will take such comments as gospel. So now the scientific community needs to clarify what this report means and where it applies. The title is still misleading because the paper itself refers to a specific subset of research. It also brings out the fact that you must know the error associated with what you are trying to measure, the accuracy of your techniques, the complexity of sorting causality from correlation, and whether the study has sufficient resolution for such a purpose. In the field I work in, I have a very clear understanding of what the test methods and studies can and cannot do, the limits of the physical measurements, and what can be deduced from that data. I have had to tell many colleagues that test results do not confirm what they want to say, or support their conclusions, but are insufficient to prove them, etc.

153 posted on 09/01/2005 8:10:12 AM PDT by doc30 (Democrats are to morals what an Etch-A-Sketch is to Art.)

To: doc30

doc30: "I don't think that is the case at all....[That science has after a fashion demonstrated the soul]"

Note the words, "after a fashion".

I deliberately added that part into my excerpt in #152, because I wanted to see which of the two things he stated would be what you would choose to focus on. You focused on what I figured you'd focus on, and ignored what he [Koukl] said was QUITE A BIT MORE IMPORTANT, to wit: "But there's another point that's actually quite a bit more important. ...". Note that Koukl's opinion on science was what I said would be closer to Rush's more balanced approach - as opposed to the "all or nothing" approach you accused Rush of embracing in your previous post.

doc30: "Please provide me with exactly what he [Rush] did say."

If you have listened to Rush for any length of time, you would KNOW that he differentiates between junk science and valid science. As you see in the URL below, he has the transcript of his commentary from that day listed under the category "JUNK Science update":

NBC Reports Bunch of Barbra Streisand About Hurricane Katrina vanden Heuvel
http://www.rushlimbaugh.com/home/daily/site_083005/content/junk_science_update.member.html

August 30, 2005

BEGIN TRANSCRIPT

RUSH: Here's Mark in Ft. Lauderdale, Florida. Hello, sir, and welcome.

CALLER: Rush, you called it on global warming. NBC last night. (pause) Rush?

RUSH: We have the sound bite of that, too. Mike, grab audio sound bite #5. I wasn't even going to get into this, this was so ridiculous, but since you brought it up and it was a See, I Told You So, here is NBC science correspondent Robert Bazell and his report on the hurricane last night.

BAZELL (Breathlessly): Even with a slight weakening, Katrina was one of the biggest ever, and many scientists say we can expect such storms more often, as global warming increases sea temperatures, around the world!

RUSH: Now, once something like this gets going, folks, there's no stopping it. It's got an inertia of its own, but it isn't true. Hurricane expert Max Mayfield at the National Hurricane Center says it has nothing to do with this. This is part of a normal cycle (story). If you want, I can go get lists for you of the most deadly hurricanes this century, and I can tell you how there are hurricanes long before anybody thought of man made global warming, that had just as much death and destruction as this. (The 1900 Galveston Storm) This is not unprecedented, but most people's historical perspective begins with the day they were born and they judge events within their own lifetime. "Well, it's never been as bad as this. Well, we've never had it as good as this." So to people who have never seen the category four hurricane, "Well, hey, it couldn't have been any worse than this, Rush! Why, it had to be global warming." No. These things have been happening since the beginning of time. There have been worse ones than this when there was nobody talking about global warming, when it hadn't even been created as a political football -- and I have this little story. I knew this is going to come up, so I had this at the top of the stack, right here, folks, and it comes from NewScientist.com. Brace yourselves.


"Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true. John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings. 'We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery,' Ioannidis says. In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong," and this is not an editorial opinion. This is statistically, and it's based on study, and then to follow that, we have this from the Guardian in the UK:

"Some of America's leading scientists have accused Republican politicians of intimidating climate-change experts by placing them under unprecedented scrutiny. A far-reaching inquiry into the careers of three of the US's most senior climate specialists has been launched by Joe Barton, the chairman of the House of Representatives committee on energy and commerce. He has demanded details of all their sources of funding, methods and everything they have ever published." He damn well should! We now know -- even before we knew it; we could assume, accurately so -- that much of this is just opinion. Much of this is bias brought on by the nature of the political leaning of a particular group of scientists or individual scientists, and then you look at where they get their funding and of course it all makes sense to examine what the outcome of their research is. Now, the Guardian says:

"Mr. Barton, a Texan closely associated with the fossil fuel lobby, has spent his 11 years as chairman opposing every piece of legislation designed to combat climate change." You know why? It's not possible! It's a waste of money. It isn't possible. Folks, use logic -- and there's a German government minister, in an Oslo newspaper today (story), who said our failure to sign the Kyoto accords and reduce our pollution is why this hurricane happened and why it's so devastating. That's asinine! It is pure... It's absurd! But just use the logic. You know what this hurricane started out as? Tropical depression ten. Hurricane Katrina vanden Heuvel started at tropical depression ten. By the way, she's not happy. She is not. She's written, she's posted somewhere on a blog (story), she doesn't think this is useful or helpful to name this hurricane after her, and I'll tell you, "Libs, they can dish it out. They cannot take it!" They can dish all day long, but they can't take it. Nevertheless, Hurricane Katrina vanden Heuvel started as tropical depression ten way out there east of the lesser Antilles, and then it dissipated, and the National Hurricane Center said, "Ah, this thing is gone. It was very weak. It's not worth following. We'll keep our eye on it."

It popped up again a week later as tropical depression 12. They said, "It's the same tropical depression, but because there's been some time since it dissipated we're going to name it 12 rather than keep it ten," and so ten then became Katrina, and then it became a tropical storm, and then it became Hurricane Katrina, and we watched it all the way out in the Atlantic, approach Florida, go across Florida into the gulf. All this time, we knew that it was a hurricane, and we knew it was headed for parts of America where it could be very destructive. Could we stop it? Is there one thing we could have done? Could we have all driven hybrids last week? Could we have all shut down all oil production and just walked everywhere? Is there anything we could have done? No, folks, there's nothing we could have done to change the direction of that storm, to lessen the intensity. Nothing we could have done. So, on the basis of logic, what have we done to cause it? What did we do to cause it? What is it that made it start where it started, dissipate, and then come back as a roaring hurricane all the way up to category five shortly before landfall?

Well, what they're saying is, "Sea temperatures, Rush! The sea temperatures out there! They are scalding hot, and as you have said yourself, that's like throwing gasoline on a hot fire."

Well, what's making the ocean temperature higher?

"Well, Rush, global warming."

No, I don't think so. Because global warming, if you look at any of these wackos who predict it, global warming, they talk about the polar ice caps melting. That's where they talk about the warming take place, and what would that do? That would send cooler water south, and this is what the hurricane experts are saying. Global warming would actually reduce the incidence of hurricanes, because it would have -- if it actually happened as these wackos say -- there would be a general cooling in the equatorial regions of the planet of water. We're just in a cycle. We are in a cycle where these have happened before. We're in a sunspot cycle (story). The sun's activity may be a little bit more robust than usual. We can't stop that, either. We just have to accept that these things happen, but this incessant desire to blame ourselves on the part of the left, it's always the Blame-America-First Crowd, the blame the capitalists, blame progress, blame technological advancement. "We are the ones to blame for this." Bush is responsible for it because of the war in Iraq, Bush didn't care about Kyoto? It's absurd. It's all patently false, and yet where is the absurdity reported as fact front and center? On NBC. I'm telling you there's no difference between MoveOn.org, George Soros' groups, and the mainstream media in America today.

END TRANSCRIPT

Read the Articles...

(New Scientist: Most scientific papers are probably wrong)
(NY Times: Storms Vary With Cycles, Experts Say)
(Expatica: US pollution partly to blame for Katrina: German minister)
(HP: Messing with Mother Nature - Katrina vanden Heuvel)
(UK Guardian: Republicans accused of witch-hunt against climate change scientists)
(UK Guardian: Climate change sceptics bet $10,000 on cooler world)

How You Can Help...
• Click here for a list of charitable and corporate efforts that need your help


154 posted on 09/01/2005 9:49:17 AM PDT by Matchett-PI ("Science without religion is lame, religion without science is blind." - Albert Einstein)

To: Matchett-PI
Thank you for the entire transcript. Hope it didn't violate any copyright provisions. My big issue is still with the title of the paper. Rush did exploit it in his intro to global warming. The paper does issue a warning regarding statistical analysis, design of experiments, and sample size, and that is an important point that many people need to be reminded about. This is the part I have the biggest bone to pick with:

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true

He does not explain that this report is basically about epidemiology and related studies, not highly quantitative papers like those produced in chemistry and physics, for example. In my experience with Rush, he can sometimes spot junk science; other times he just doesn't know what he is talking about, takes something that sounds like junk science, and then derides it as such. It may be entertaining and appeal to basic common sense, but that doesn't make his take on things always right.

Regarding your other points, they are of a highly philosophical nature. And I do agree that there are fundamental limitations on science, and I referenced those in my previous post. Science assumes logic applies, science assumes an orderliness (i.e., predictability) in the universe, and it assumes measurability. Those are the basic axioms. This approach, coupled with experimentation, has provided a powerful learning tool for humanity, more powerful than anything else to date. In my own words: if it can interact with the universe, it can be measured. If it can be measured, it can be understood.

If you are looking for an absolute Truth, don't look to science; it is only a tool that extracts fact, not truth.

155 posted on 09/01/2005 10:17:11 AM PDT by doc30 (Democrats are to morals what an Etch-A-Sketch is to Art.)

Placemarker and plug for The List-O-Links.
156 posted on 09/01/2005 7:14:38 PM PDT by PatrickHenry (Felix, qui potuit rerum cognoscere causas. The List-O-Links is at my homepage.)

To: Gondring

For later perusal


157 posted on 09/02/2005 5:47:48 PM PDT by Gondring (I'll give up my right to die when hell freezes over my dead body!)

To: PatrickHenry

Most placemarkers probably don't even link to the thread placemarker.


158 posted on 09/02/2005 5:50:58 PM PDT by js1138 (Great is the power of steady misrepresentation.)


