
ClimateGate Reaction Part 2: The Computer Models
Dude, With Keyboard ^ | November 30, 2009 | Dude, With Keyboard

Posted on 12/15/2009 7:35:22 PM PST by InterceptPoint

NOTE: If any of the following rant strikes you as placing unreasonable limits on climate modelers, then I would like to refer you to The Point. The sigma number of the climate models had better take several fingers to count if we are going to make radical international regulatory changes to our economic activity.

This much we have learned from the Climategate scandal: the computer models used to justify the policy proposals are for crap. Leaving the validity of the underlying science aside, and focusing only on the Climate Research Unit’s computer models, we’ve learned:

1. The starting point, i.e. the raw data, is no longer available for comparison. So we can’t try to “re-create” the analysis that led to the currently used climate models and the catastrophic trends contained therein.

2. All the inputs are “derived” inputs based on various reasoning: some data sets need to be expunged because the scientists view them as anomalies that do not fit their thesis (I’ll let you, dear reader, judge whether that is innocent and sound science or something more self-serving), and other inputs were adjusted to fit some form of normalization requirements. The bottom line is that the historical computer models are not made of raw data, but rather manipulated data (and I really am using that term in a value-neutral manner).

3. The documentation for the code is extremely poor and, in some instances, untraceable.

4. Notations of data manipulation are actually documented in some instances, but as far as can be discerned they are not traced to any reasoning (a sketch of what a traceable adjustment would look like follows this list).

5. These models have yet to correctly predict any weather events or climate trends in the years since they came into regular use (starting, say, with the 1995 IPCC report)

6. The model code and design history (the source code, design documentation, functional and technical specifications, etc.) that are used as the basis for expensive policy proposals and regulatory regimes were never made available for public third-party audits.

7. There is no evidence that the scientific grant givers performed any technical audit of the code quality, system stability, or system accuracy.
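For contrast with points 2 and 4 above, here is a rough sketch, in Python and with entirely made-up station data, of what a traceable adjustment step looks like: the raw readings are never overwritten, and every derived value carries a note of how and why it was produced.

# Hypothetical illustration only: raw data is kept untouched and every
# adjustment is logged with its reason, so the derived series can be audited.

RAW_TEMPS_C = {1991: 14.2, 1992: 13.9, 1993: 14.1, 1994: 14.4}  # fictional station readings

adjustment_log = []  # audit trail: (year, raw, adjusted, reason)

def adjust(year, raw_value, offset, reason):
    """Return an adjusted value and record why the adjustment was made."""
    adjusted = raw_value + offset
    adjustment_log.append((year, raw_value, adjusted, reason))
    return adjusted

derived = dict(RAW_TEMPS_C)  # the raw dictionary itself is never modified
derived[1993] = adjust(1993, RAW_TEMPS_C[1993], -0.3,
                       "station relocated; correction from overlap comparison (fictional)")

for year, raw, adj, reason in adjustment_log:
    print("%d: raw %.1f -> adjusted %.1f (%s)" % (year, raw, adj, reason))

Nothing fancy, but anyone handed the raw dictionary and the log could reproduce the derived series exactly, which is precisely what points 1 and 2 say is now impossible for the CRU record.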

Excuse me while I hop on my high horse.

I work in software. I have eleven years of experience in software quality assurance. I have worked for the two largest software companies on earth. I have been a tester, lead tester and/or test manager on products that performed word processing, enterprise-level document management and online collaboration, and enterprise resource planning (ERP), specifically manufacturing, accounting and logistics software. I have worked in software development outfits of varying size, from small agile groups which were a bit lacking on the organizational side of things, to large groups that used somewhat rigid waterfall methodologies which were high on discipline and detail and low on adaptability. I’ve worked with numerous offshore resources as well as decentralized teams of remote full-time resources.

Moreover, I’ve worked in software development that was required to meet certain government and industry standards from ISO regulations to FDA and GAMP requirements, including working directly with FDA audit consultants. My experience teaches me that:

1. Software development has to be managed and developed by software pros, as opposed to experts in other fields who can do some coding when called upon. The experts define the functionality, business need and underlying logic, but they do not, or should not, do the coding. Otherwise, while you may see innovative solutions and ideas, the execution will typically be quite amateurish and have design flaws up and down the line.

2. Software development that lacks at least some sort of plan > design > document > develop > test > support life cycle is doomed to have significant bugs and ill-thought-out data models.

3. Some sort of document trail on how the code does what it does is vital to long term support.

4. The more variables you throw into a system, the higher the quality threshold must be, the greater the risk of code degradation, and the more vital large regression test cycles become. It would be difficult to overstate the enormous variable load on any climate model (a sketch of a basic regression check follows this list).

5. Open source software certainly has weaknesses but also some enormous strengths. The weaknesses are primarily around how open source software is often created by developers for developers. Their “customers” and “partners” are other developers who also have the ability to improve the software. Open source development can be rough, but it can also be the most dynamic, and it is especially useful the more niche or small the target audience is. It strikes me as obvious that climate computer modeling should HAVE TO follow an open source model.
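To make point 4 above concrete, here is a minimal sketch, in Python and with made-up numbers, of the kind of regression check such a system needs: every code change gets re-run against a frozen baseline, and any drift beyond a stated tolerance fails the build.

# Hypothetical regression check: compare current model output against a
# baseline frozen from a previously audited build; unexplained drift fails.

BASELINE = [14.1, 14.3, 14.2, 14.5]   # output of an audited build (fictional values)
TOLERANCE = 0.01                      # allowed numerical drift per point

def model_run():
    # Stand-in for the real model; returns one value per year.
    return [14.1, 14.3, 14.2, 14.5]

def regression_failures(baseline, current, tol):
    return [(i, b, c) for i, (b, c) in enumerate(zip(baseline, current))
            if abs(b - c) > tol]

failures = regression_failures(BASELINE, model_run(), TOLERANCE)
if failures:
    raise SystemExit("Regression: output drifted at %s" % failures)
print("Regression check passed.")

Multiply that by the number of variables in a climate model and you get some idea of the size of the regression suite that ought to exist.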

The CRU source code does not appear to have been open source in any way, and it was apparently coded (in FORTRAN!!!) by scientists whose primary expertise is in climate science, not software development. They are a group of individuals who tout their expertise at every turn, yet their models lack any evidence of a software development methodology above common hackery. And not to put too fine a point on it, but these models are the basis for the theory that a CO2 caused catastrophe is all but a foregone conclusion without radical international regulatory changes to our economic activity.

Lastly, consider the standards that developers who sell to or implement in the pharmaceutical industry (one of the industries with the highest regulatory requirements for data integrity) must meet:

1. At a minimum, pharma companies and their software vendors must be able to demonstrate a secure and traceable data flow

2. They must demonstrate source code control

3. They must demonstrate change management with a document and source code audit trail from plan/design to implementation, complete with version control and user history (a sketch of such a change record follows this list)

4. Typically, they must have some sort of electronic signature control mechanism or a reliable paper solution that traces system changes, and is legally binding

5. All work processes must be fully documented with regard to system access, system usage, and any change to the system itself
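As a rough illustration of what “traceable” means in practice for points 1 through 3, here is a sketch in Python of a change-control record; the file name, version number and names are hypothetical.

# Hypothetical change-control record: each new version of a controlled
# artifact is checksummed and tied to a documented reason and an approver.

import datetime
import hashlib

def fingerprint(path):
    """SHA-256 checksum of a controlled artifact."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def change_record(path, version, reason, author, approver):
    return {
        "artifact": path,
        "version": version,
        "sha256": fingerprint(path),
        "reason": reason,
        "author": author,
        "approver": approver,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Usage (fictional file and names):
# record = change_record("station_adjustments.csv", "1.4",
#                        "apply documented station-move correction",
#                        "analyst_a", "qa_reviewer_b")
# print(record)

Pharma vendors live with far more than this; there is no sign the CRU code lived with even this much.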

We put very rigid controls on pharmaceutical companies and their software vendors to create systems that are secure, reliable and fully documented. This is seen as a societal good so that we don’t have our medications tampered with, either through incompetence or malicious intent. To put it kindly, there is no evidence that any remotely comparable requirements are enforced on the programmers of climate models that were A) likely paid for by taxpayer funded grants and B) are used as a basis for the theory that a CO2 caused catastrophe is all but a foregone conclusion without radical international regulatory changes to our economic activity.

If I were an opinion journalist or a busybody Senator, I might think some minimum requirements would be called for in climate model development BEFORE we go down the path of radical international regulatory changes to our economic activity:

1. All research and data obtained and developed with a taxpayer funded grant should be made publicly available if it will be used as a basis for public policy

2. Any software used or created to model the scientific evidence for the public policy should be required to meet the bar set for the pharmaceutical industry and other industries of equal import

3. Any predictive applications should prove some level of accuracy over a pre-defined time horizon (in years) before being treated as a basis for public policy. The predictive applications must first be audited for accuracy under a “do nothing” scenario to show their understanding of the situation.

4. They should at least be able to accurately predict the recorded past (a sketch of such a hindcast check follows this list).

5. Predictive applications would then need to be audited annually, post policy implementation, to show that the predicted benefits of the policy were accurate.
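Point 4 is the cheapest test of all: a hindcast. Here is a minimal sketch in Python, with fictional numbers, of what “predict the recorded past” means as a pass/fail check.

# Hypothetical hindcast check: run the model over a period we already have
# records for and require the error to stay under a stated threshold.

RECORDED = [14.0, 14.1, 14.3, 14.2, 14.4]   # observed values (fictional)
HINDCAST = [14.1, 14.0, 14.4, 14.3, 14.3]   # model output for the same years (fictional)
MAX_MEAN_ABS_ERROR = 0.2                    # acceptance threshold, agreed in advance

def mean_abs_error(observed, predicted):
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

error = mean_abs_error(RECORDED, HINDCAST)
print("mean absolute error: %.3f" % error)
if error > MAX_MEAN_ABS_ERROR:
    raise SystemExit("Hindcast failed: the model does not reproduce the recorded past.")

The threshold and the metric would be argued over by the scientists, as they should be; the point is that the check exists, is agreed in advance, and is run by someone other than the model’s authors.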

Again, if this doesn’t seem like a reasonable set of standards, then that’s sort of The Point. Either AGW is a nice theory or an easily provable fact. Only the latter is worth discussing (all together now!) radical international regulatory changes to our economic activity. Of course, this assumes the policy makers who are in line to gain enormous power from the policy proposals actually care about the accuracy of the underlying science. My FDA Validated Magic 8-ball program says “don't count on it, bud”.


TOPICS:
KEYWORDS: climategate; copenhagen; globalwarming
This may have been posted earlier, but if so I missed it. It is simply a devastating critique of what the quasi-scientists and coding amateurs at CRU call software development. Well worth a read.

If you read nothing else, read this line:

"To put it kindly, there is no evidence that any remotely comparable requirements are enforced on the programmers of climate models that were A) likely paid for by taxpayer funded grants and B) are used as a basis for the theory that a CO2 caused catastrophe is all but a foregone conclusion without radical international regulatory changes to our economic activity."

And this:

"If I were an opinion journalist or a busybody Senator, I might think some minimum requirements would be called for in climate model development BEFORE we go down the path of radical international regulatory changes to our economic activity:"

1 posted on 12/15/2009 7:35:22 PM PST by InterceptPoint
[ Post Reply | Private Reply | View Replies]

To: InterceptPoint
The reference to "The Point" in the article refers to an earlier post by Dude, With Keyboard. Here is the link:

ClimateGate Reaction Part 1: The Point

2 posted on 12/15/2009 7:40:59 PM PST by InterceptPoint
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

Bumpty bump!

I started out programming in FORTRAN in the ’80s, and even as a student my routines were better documented.

It’s preposterous!


3 posted on 12/15/2009 7:45:04 PM PST by Incorrigible (If I lead, follow me; If I pause, push me; If I retreat, kill me.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

Having spent six years working in the automotive airbag industry, I dealt with mathematical modeling of relatively simple devices such as electromechanical crash sensors, given three-axis accelerometer data. Quite often, the models had fudge factors introduced into them to get the “right result”. A model of something as infinitely complex as the entire climate of the planet Earth is absolutely impossible to create with any degree of validity. There are simply too many variables to consider. What a complete farce.


4 posted on 12/15/2009 7:47:07 PM PST by Adams
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

bump for later


5 posted on 12/15/2009 7:47:45 PM PST by DBrow
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

Excellent points.


6 posted on 12/15/2009 7:48:04 PM PST by Rocky (Obama's ego: The "I's" have it.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

I’ve used groundwater modeling software for 15 years now. One of the key requirements for a predictive model is to accurately replicate the available history, and only a prediction horizon comparable to the period validly replicated is acceptable. Over and over again, these models FAILED to accurately model historical data patterns; as such, they were invalid for any predictive modeling. In this case the old ‘garbage in - garbage out’ adage applies.


7 posted on 12/15/2009 7:50:59 PM PST by Godzilla (3-7-77)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint
1. Software development has to be managed and developed by software pros as opposed to experts in other fields that can do some coding when called upon.

I don't know if I buy that. A scientist using FORTRAN or MATLAB to perform complex calculations is an entirely different animal than a code jockey writing a computer application. I'm not defending the GW hacks, I'm just saying that the author is mixing apples and oranges.

8 posted on 12/15/2009 7:56:38 PM PST by randog (Tap into America!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint

I’m told it’s not FORTRAN. It’s IDL.


9 posted on 12/15/2009 8:09:00 PM PST by darbymcgill
[ Post Reply | Private Reply | To 2 | View Replies]

To: InterceptPoint
As a software professional with 25+ years of experience, a Masters in Computer Science (Software & Systems), and over a decade of experience in modeling and simulations... I can say I agree with your analysis. These GCMs are worse than worthless; they are dangerously unreliable.

1. The starting point, i.e. the raw data, is no longer available for comparison. So we can’t try to “re-create” the analysis that led to the currently used climate models and the catastrophic trends contained therein.

This is a killer. If the models cannot be run and re-run to verify repeatability, sensitivity analysis, Monte Carlo analysis... Well, they aren't worth the disks they're stored on.
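As a rough illustration of what even a toy repeatability / Monte Carlo check looks like (Python, with a stand-in "model" and made-up parameters), none of which can be done without the original inputs:

# Toy Monte Carlo sketch: perturb an uncertain input, fix the random seed so
# the run is repeatable, and look at the spread of outputs. All numbers are
# made up for illustration.

import random

def toy_model(sensitivity, forcing):
    return sensitivity * forcing   # stand-in for a real model run

random.seed(42)   # fixed seed: anyone re-running this gets the same answer
runs = [toy_model(random.gauss(3.0, 0.5), 1.6) for _ in range(1000)]

print("mean response: %.2f" % (sum(runs) / len(runs)))
print("range: %.2f to %.2f" % (min(runs), max(runs)))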

2. All the inputs are “derived” inputs based on various reasoning: some data sets need to be expunged because the scientists view them as anomalies that do not fit their thesis (I’ll let you, dear reader, judge whether that is innocent and sound science or something more self-serving), and other inputs were adjusted to fit some form of normalization requirements. The bottom line is that the historical computer models are not made of raw data, but rather manipulated data (and I really am using that term in a value-neutral manner).

This is crazy. In serious modeling and simulation, decisions like these have to be documented and justified. They are usually the subject of intense debate among industry experts.

5. These models have yet to correctly predict any weather events or climate trends in the years since they came into regular use (starting, say, with the 1995 IPCC report)

This is one of the more damning bits of evidence against the GCMs. It shows that they are tweaked and tuned to produce a desired long-range outcome, not tuned for best real-world performance.

6. The model code and design history (the source code, design documentation, functional and technical specifications, etc.) that are used as the basis for expensive policy proposals and regulatory regimes were never made available for public third-party audits.

Given the track record of the so-called climate scientists, I wouldn't trust their executables. I'd want the raw data, and all the source for review and analysis, before building and running on my own system. Sorry, their credibility is shot.

7. There is no evidence that the scientific grant givers performed any technical audit of the code quality, system stability, or system accuracy.

Then the so-called scientists cannot even guarantee the GCMs are performing as intended. There could be (almost certainly are) numerous errors in the code.

10 posted on 12/15/2009 8:10:08 PM PST by ThunderSleeps (obama out now! I'll keep my money, my guns, and my freedom - you can keep the change.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: randog
"I don't know if I buy that. A scientist using FORTRAN or MATLAB to perform complex calculations is an entirely different animal than a code jockey writing a computer application. I'm not defending the GW hacks, I'm just saying that the author is mixing apples and oranges.

What you say sounds logical. I've written a lot of code myself that worked just great - so I'm inclined to agree with you. But I don't.

I had the good fortune in a previous life to manage a group of really, really professional software developers. The lead software guy was simply a superb software engineer, and I watched him and about 10 also very good software guys/gals slowly and methodically build a very complex application that was ultimately submitted to and accepted by the U.S. Government (DOD).

They didn't do it like I do.

They spent an amazing amount of time in the design phase. They spent an amazing amount of time in the test phase. The coding in between? That was over in a flash and was mostly done by the junior members of the team. That was the easy part.

What I got out of that experience was a really deep appreciation for the professionalism of real software developers. They document, they control revisions, they set coding standards and formats and they test and test and test. And they peer review until you think there is nothing left to review.

Trust me. The average MatLab hacker (I'm a closet one myself) really can't hold a candle to real programmers.

11 posted on 12/15/2009 8:10:24 PM PST by InterceptPoint
[ Post Reply | Private Reply | To 8 | View Replies]

To: InterceptPoint

The whole global warming stuff is farcical. We are supposed to believe:

1) These geniuses can determine the temperature of the air just above the Earth, the whole Earth, to within a tenth of a degree. And they can detect trends of a tenth of a degree from one year to the next. Thousands of unmanned weather stations around the world, all carefully maintained, I’m sure, are used. And then there is the issue of how many of them are now found in urban areas where their temperature readings are no longer representative. Then some marvelous mathematics are used to account for the fact that they are not equally distributed over land and sea. Then more fudging with indirect data, satellite measurements, etc., etc. Give me a break.

2) We are supposed to believe that man-made CO2 is causing warming. It is 0.04% of the air, and most of it comes from natural sources. Water vapor is also a “greenhouse gas”. At 60 deg F and 50% humidity, water is about 0.9% of the air, or more than 20 times the concentration of CO2 (a back-of-the-envelope check follows this list).

3) These geniuses have developed computer models. Lots of models. And then the IPCC looks at what all of the models predict, and by mathematics and wishful thinking, decides what projection to use. Maybe an average? Maybe a weighted average? Maybe the curve they had in mind before they looked at the model results? And the models, as anyone would expect who has ever tried to model something truly complex, fail miserably at predicting what is actually happening. In order to match a set of real data, the model has to have “fudge factors” built in. The developer is truly happy when his “baby” spits out the data that was used to develop the program in the first place. But the real test is whether the model can predict a new outcome accurately, based on a new set of data. And there’s the rub. The climate models fail miserably at this. No surprise. Scientists are still debating, quietly amongst themselves, what factors to consider: the sun, the currents, volcanoes...?
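For what it's worth, here is the back-of-the-envelope check mentioned in 2) above, in Python, using the Magnus approximation for saturation vapor pressure; the constants are standard textbook values, not measurements of mine.

# Rough check of the water vapor figure: mole fraction of water vapor in air
# at 60 deg F and 50% relative humidity, via the Magnus approximation.

import math

T_C = (60.0 - 32.0) * 5.0 / 9.0   # 60 deg F, about 15.6 C

# Magnus approximation: saturation vapor pressure in hPa for T in deg C.
e_sat = 6.1094 * math.exp(17.625 * T_C / (T_C + 243.04))   # about 17.7 hPa

e = 0.50 * e_sat                  # 50% relative humidity
mole_fraction = e / 1013.25       # against standard sea-level pressure

print("water vapor: about %.2f%% of the air" % (100.0 * mole_fraction))   # ~0.9%
print("roughly %.0f times the 0.04%% CO2 concentration" % (mole_fraction / 0.0004))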

And they want to develop a global economic policy based on this witchcraft?


12 posted on 12/15/2009 8:16:41 PM PST by Rocky (Obama's ego: The "I's" have it.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint
(Sound of grey_whiskers purring.)

Concur.

13 posted on 12/15/2009 8:24:24 PM PST by grey_whiskers (The opinions are solely those of the author and are subject to change without notice.)
[ Post Reply | Private Reply | To 11 | View Replies]

To: InterceptPoint

The computer models have routinely been shown to be flawed, producing temperature increases that range anywhere from 5 degrees Celsius over the next century to as much as 50 degrees Celsius.

Historically, there is no data to prove that such temperature increases are possible. In addition, what gets me is that the very same people who can’t look out the window and tell us what the weather is currently doing, nor predict what the weather will do tomorrow, want to assure us what the weather will be doing in 100 years.

So, the computer models are, for all intents and purposes, a small part of the overall scam. The bigger problem is the politicians and so-called climatologists willing to lie and manipulate data to try to justify the whole global warming scam!!!


14 posted on 12/15/2009 8:35:33 PM PST by DustyMoment (FloriDUH - proud inventors of pregnant/hanging chads and judicide!!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: InterceptPoint; rdl6989; Darnright; According2RecentPollsAirIsGood; livius; DollyCali; FrPR; ...
 


Beam me to Planet Gore !

15 posted on 12/15/2009 8:53:27 PM PST by steelyourfaith (Time to prosecute Al Gore now that fellow scam artist Bernie Madoff is in stir.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: randog
I'm just saying that the author is mixing apples and oranges.

I don't think so. I'm a scientist but I don't write code. I pay guys to do it for me. I help the software guys understand complicated calculations by giving them writeups and/or Excel spreadsheets. When I find mistakes, I call them up and they fix them. Sometimes when our predictive models don't work, I get others to help me, we add more variables to our calculations, and the coders do the rest.

16 posted on 12/15/2009 8:59:43 PM PST by I got the rope
[ Post Reply | Private Reply | To 8 | View Replies]

To: Incorrigible
By the 1980s I'd completely abandoned FORTRAN.

It was clunky and difficult to debug ~ because, alas, it wasn't reported out in commonly understood hex-code.

Yes, some of us learned enough about several languages to read hex. I knew a guy who could code in hex. He was the fellow who did all the programs for program-trading ~ SEE: 1987 Stock Market Crash.

It was shocking to find that these models were in FORTRAN.

17 posted on 12/15/2009 9:03:35 PM PST by muawiyah (Git Out The Way)
[ Post Reply | Private Reply | To 3 | View Replies]

To: InterceptPoint
Documentation is the key to long term system maintenance.

I did work as a COBOL programmer for a couple of years. I had folks calling up and asking what my intentions were with respect to various units 20 years later ~ self-documentation inside COBOL required the coder/programmer give his name.

I always told them to "Look inside the quotes for explanatory literals".

They never called back ~ I suspect they didn't know what I was saying. However, the agency finally had everything rewritten and documented in something more up to date ~ at immense cost ~ but my notes were still meaningful over that time. One of the complaints voiced in the Climategate emails was that changes weren't documented.

All I can say is "How utterly stupid of them"

18 posted on 12/15/2009 9:09:44 PM PST by muawiyah (Git Out The Way)
[ Post Reply | Private Reply | To 11 | View Replies]

To: randog; InterceptPoint
I don't know if I buy that. A scientist using FORTRAN or MATLAB to perform complex calculations is an entirely different animal than a code jockey writing a computer application.

I agree with InterceptPoint; I have worked with scientists, and I am a programmer myself. Scientists are great people: they know their stuff and they can do many things, on paper. But give a scientist an editor, and he will whip out terrible spaghetti code, with cut-and-paste and

10 GOTO (320,330,340,350,360), ARRAY(J,K)+1

all over the place. The computed GOTO is not a fiction; I knew a scientist who was in love with them. All this is because scientists are people of ideas. They don't care about careful coding, classes, namespaces, libraries, test harnesses... they just make a beeline for their desired output, and they seldom think about the lifecycle of their code.

Nobody can verify the correctness of such code. Nobody can even understand it, and the CRU case has plenty of lament on that score in the HARRY_READ_ME.txt file. You can't even make changes without the whole code unraveling, crashing and burning on you. "Magic numbers" are a popular pitfall. A typical scientist can write code, but it's worse than useless code.
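The "magic numbers" pitfall, in miniature (Python just for brevity; the value and the calibration reference are made up):

# Bad: nobody can say where 0.312 came from or when it is safe to change it.
#     corrected = reading * 0.312

# Better: the constant is named, sourced, and lives in exactly one place.
SENSOR_GAIN = 0.312   # from calibration report CAL-2009-07 (hypothetical reference)

def corrected(reading):
    """Apply the documented calibration gain to a raw reading."""
    return reading * SENSOR_GAIN

print(corrected(45.0))

Trivial on its own; multiplied across thousands of lines of undocumented code, it is the kind of mess HARRY_READ_ME.txt laments.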

Most jobs require professionals to do them correctly. Sometimes it's because of tools; sometimes it's because of experience; sometimes it's because of the education that is required to do it right. You would never ask an electrician to build a brick wall (if you want it to be straight :-) You also don't want a scientist to crank out the code. The scientist should do the science - define the requirements, the algorithms, the testing and validation methods - and he will be defending those decisions before other scientists. The coder will be responsible for making sure the program meets the requirements and passes every validation that can be thrown at it.

The code should meet coding guidelines. It should be written in a *proper* language, and not in whatever language the scientist happens to know. The code should be under version control from day 0, and all changes should be properly managed. Builds used to produce scientific results should be tagged in the revision control system, snapshotted (along with binaries) and archived. The data should also be under version control, and similarly managed. There is a lot to software development that scientists do not know, just as there is plenty that coders don't know about science. The CRU case illustrates the worst possible scenario, in which a bunch of scientists went out of control, imagined themselves software developers, and created this train wreck.

19 posted on 12/15/2009 9:12:57 PM PST by Greysard
[ Post Reply | Private Reply | To 8 | View Replies]

To: SirKit

Climategate ping!


20 posted on 12/15/2009 11:31:49 PM PST by SuziQ
[ Post Reply | Private Reply | To 1 | View Replies]
