Carroll points out that female rats were exposed to the same signals and there was no increase in their cancer rates. And a statistically significant increased incidence of brain cancer for male rats was found only for CDMA signals, not GSM.
That's surprising because, while real-world GSM phones emit more radiation than CDMA phones, the experimental radiation exposure levels were held constant between parallel groups. And since the main difference between GSM and CDMA is their data standard, there should only be different impacts if DNA could be corrupted by binary code.
All that suggests the results could simply be a statistically random variation, especially since, as Carroll points out, even the elevated cancer rates were well within the historical range. Another expert called the study statistically underpowered, with a sample size too small to eliminate that kind of random variation. There's no way to refute that explanation until more studies can reproduce this one's results.
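To get a feel for how easily pure chance produces an "elevated" cancer rate in a small animal group, here's a quick simulation. The group size and background tumor rate below are my own illustrative numbers, not the study's actual figures:

```python
import random

random.seed(1)

TRUE_RATE = 0.02    # assumed background brain-cancer rate (illustrative)
GROUP_SIZE = 90     # assumed animals per exposure arm (illustrative)
TRIALS = 10_000

def tumors(n, p):
    """Count tumor-bearing animals in a group of n at true rate p."""
    return sum(random.random() < p for _ in range(n))

# Both groups have the SAME true rate. How often does the "exposed" group
# show 3+ tumors while the control shows none, purely by chance?
spurious = 0
for _ in range(TRIALS):
    exposed, control = tumors(GROUP_SIZE, TRUE_RATE), tumors(GROUP_SIZE, TRUE_RATE)
    if exposed >= 3 and control == 0:
        spurious += 1

print(f"Apparent 'effect' in {spurious / TRIALS:.1%} of simulated null studies")
```

With these numbers, a striking-looking asymmetry shows up in a few percent of studies where nothing is actually happening, which is why replication matters.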
http://fortune.com/2016/05/29/cell-phone-cancer-study/
Why Most Published Research Findings Are False
A research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.
http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
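The "more likely false than true" claim in the quoted abstract follows from simple arithmetic on the positive predictive value, PPV = (1 − β)R / ((1 − β)R + α), where R is the pre-study odds that a tested relationship is real. A minimal sketch with illustrative numbers of my own choosing:

```python
def ppv(R, alpha=0.05, power=0.80):
    """Probability a 'significant' finding is actually true (Ioannidis's PPV)."""
    return power * R / (power * R + alpha)

# Exploratory field: maybe 1 in 20 tested relationships is real, and studies
# are underpowered -- most positive findings are false.
print(f"exploratory:  PPV = {ppv(R=1/20, power=0.2):.2f}")

# Confirmatory setting: strong prior odds and well-powered studies.
print(f"confirmatory: PPV = {ppv(R=1/2, power=0.8):.2f}")
```

Small studies (low power) and long-shot hypotheses (low R) both drag PPV below 50%, which is exactly the list of risk factors in the quoted abstract.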
If the medical profession used higher statistical standards, the quality of the articles would increase, but the number of accepted articles would go down. In a world of publish-or-perish, this is not easy.
Thank you for the mini-review!
For a while, I worked at a hospital in the clinical studies department. Every single day, I read study proposals and study results that were utter garbage—often, entire studies were based on gathering patient data and then using high-powered statistics to compare every data element to every other data element within the set. And every time there was a correlation between two data elements, at P < 0.05 significance, the study authors would happily write up a paper for publishing. I wanted to scream—this is not science, and has no business being published as such. A statistical correlation means absolutely nothing if there is not a mechanism linking the two correlated elements.
In my field of biochemistry, it is possible to have very high quality results with small sample sizes. But that’s because everything is controlled, save for the one or two variables under study. If you want to show that a gene is repressed by exposure to a chemical, you only need three samples per treatment group for statistical significance. I could run an entire experiment in a 6-well plate—then repeat the experiment twice, and I would have data suitable for publication.
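Why three wells per group can be enough in a tightly controlled experiment: when everything but the treatment is held fixed, the within-group variance is tiny relative to the effect, so even df = 4 gives a decisive t statistic. A sketch with hypothetical expression values (arbitrary units, invented for illustration):

```python
import statistics

# Hypothetical qPCR-style readings: treated wells show the gene knocked down.
control = [1.02, 0.95, 1.01]   # three wells per group, as in the text
treated = [0.41, 0.38, 0.45]

def t_stat(a, b):
    """Student's t for two independent samples, pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

t = t_stat(control, treated)
# Critical |t| for p < 0.05 (two-tailed) with df = 4 is 2.776.
print(f"t = {t:.1f}; significant at p < 0.05: {abs(t) > 2.776}")
```

The key difference from the hospital fishing expeditions above: one pre-specified hypothesis, controlled conditions, and a mechanism, not 190 post hoc comparisons.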
BTW, I like the way you spelled your moniker with codons in your tagline.
Thanks!