To: mc5cents
I think what it says is that of all the temperature monitoring stations used to inform government decision-making about global warming, almost 70% exhibited built-in monitoring errors that caused them to report temperatures between 2 and 5 degrees above the actual temperature, while only 2% exhibited built-in errors that caused them to report temperatures even slightly below the actual temperature. That amounts to a 35-to-1 ratio of stations reading warmer than reality to stations reading cooler than reality. If temperature monitoring errors arose from random chance, we would expect something closer to a 1-to-1 ratio; that is, for every station that tended to report temperatures warmer than reality, we would expect to find approximately one station that tended to report temperatures cooler than reality.
You might ask yourself: what would account for the 35-times-greater chance of reported temps being warmer than they should be, compared to what we would expect from an "honest error"?
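To make the "honest error" intuition concrete, here is a minimal sign-test sketch (my own illustration; the station counts are hypothetical, chosen only to match the 70%/2% split described above). If warm and cool errors were equally likely, the chance of seeing a split this lopsided is vanishingly small:

```python
# Hypothetical sign test: if warm vs. cool monitoring errors were equally
# likely (p = 0.5), how surprising is a 70-warm / 2-cool split?
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_signed = 72   # illustrative count of stations with a clear warm or cool error
n_warm = 70     # illustrative count of stations whose error runs warm

p = binom_tail(n_signed, n_warm)
print(f"P(at least {n_warm} of {n_signed} warm by chance) = {p:.3g}")
```

Under a fair coin-flip model the tail probability comes out far below any conventional significance threshold, which is the statistical form of the question posed above.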
14 posted on
01/22/2011 1:27:00 PM PST by
Steely Tom
(Obama goes on long after the thrill of Obama is gone)
To: Steely Tom
almost 70% exhibited built-in monitoring errors that caused them to report temperatures between 2 and 5 degrees above the actual temperature, while only 2% exhibited built-in errors that caused them to report temperatures even slightly below the actual temperature. The errors can go either way. It is true that most of the sites have a heat bias, but the error ratings are not all on the hot side; a 2-to-5-degree cool bias is also possible.
23 posted on
01/22/2011 6:19:06 PM PST by
palmer
(Cooperating with Obama = helping him extend the depression and implement socialism.)
To: Steely Tom
It said error, not bias. At the risk of oversimplifying, root mean squared (RMS) error is the square root of the sum of bias squared plus variance, properly weighted.
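That decomposition can be checked numerically. The sketch below is my own illustration with made-up numbers (a hypothetical true temperature, a fixed warm bias, and Gaussian noise), not data from any survey discussed here:

```python
# Demonstrate RMS error = sqrt(bias^2 + variance) for simulated readings.
import math
import random

random.seed(1)
truth = 15.0   # hypothetical "ground truth" temperature
bias = 2.0     # hypothetical built-in warm error
readings = [truth + bias + random.gauss(0, 1.0) for _ in range(100_000)]

errors = [r - truth for r in readings]
n = len(errors)
mean_err = sum(errors) / n                               # estimates the bias
var_err = sum((e - mean_err) ** 2 for e in errors) / n   # estimates the variance
rmse = math.sqrt(sum(e * e for e in errors) / n)

print(f"RMSE = {rmse:.4f}")
print(f"sqrt(bias^2 + var) = {math.sqrt(mean_err**2 + var_err):.4f}")
```

The two printed values agree exactly (it is an algebraic identity, not an approximation), which is the point of distinguishing error from bias: a station can have a large RMS error from noise alone even with zero bias, or a modest RMS error driven almost entirely by bias.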
If the stations are unbiased (as likely to overestimate as underestimate), or if at least the ensemble of stations is unbiased and the contributions from the various stations are properly weighted, then the measurements (if not the conclusions) are valid. But if all you have is this ensemble, you can only measure the bias and variance of the stations with respect to the ensemble; there is no “ground truth” to compare things to.
I believe the concern is that the ensemble has a time-dependent creep or bias, and there is no way (or at least no effort) to compare it to some form of “ground truth.” It is not at all clear what ground truth even means here.
33 posted on
01/23/2011 7:57:24 AM PST by
Lonesome in Massachussets
(Socialists are to economics what circle squarers are to math; undaunted by reason or derision.)