Reasoning About Climate Uncertainty – Draft
Posted on 03/27/2011 8:58:00 AM PDT by Ernest_at_the_Beach
Here is the main text of the paper (no abstract yet, or reference list).
The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system. Our understanding of the complex climate system is hampered by a myriad of uncertainties, indeterminacy, ignorance, and cognitive biases. Complexity of the climate system arises from the very large number of degrees of freedom, the number of subsystems and complexity in linking them, and the nonlinear and chaotic nature of the atmosphere and ocean. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. The epistemology of computer simulations of complex systems is a new and active area of research among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.
How has the IPCC dealt with the challenge of uncertainty in the complex climate system? Until the time of the IPCC TAR and the Moss-Schneider (2000) Guidance paper, uncertainty was dealt with in an ad hoc manner. The Moss-Schneider guidelines raised a number of important issues regarding the identification and communication of uncertainties. However, the actual implementation of this guidance in the TAR and AR4 adopted a subjective perspective or judgmental estimates of confidence. Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is understood more easily by policy makers.
The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) find that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention by the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.
While the policy makers' desire for a clear message from the scientists is understandable, the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. While the public may not understand the complexity of the science or be culturally predisposed to accept the consensus, they can certainly understand the vociferous arguments over the science portrayed by the media. Better characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the noise and animosity in the media coverage that fuels the public distrust of climate science and acts to stymie the policy process. Moreover, an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how best to target resources to improve understanding. Further, improved understanding and characterization of uncertainty is critical information for the development of robust policy options.
Indeterminacy and framing of the climate change problem
An underappreciated aspect of characterizing uncertainty is associated with the questions that do not even get asked. Wynne (1992) argues that scientific knowledge typically investigates a restricted agenda of defined uncertainties, ones that are tractable, leaving invisible a range of other uncertainties, especially about the boundary conditions of applicability of the existing framework of knowledge to new situations. Wynne refers to this as indeterminacy, which arises from the unbounded complexity of causal chains and open networks. Indeterminacies can arise from not knowing whether the type of scientific knowledge and the questions posed are appropriate and sufficient for the circumstances and the social context in which the knowledge is applied.
In the climate change problem, indeterminacy is associated with the way the climate change problem has been framed. Frames are organizing principles that enable a particular interpretation of an issue. De Boerg et al. (2010) state that: "Frames act as organizing principles that shape in a hidden and taken-for-granted way how people conceptualize an issue." Risbey et al. (2005) argue that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boerg et al. further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.
The decision making framework provided by the UNFCCC Treaty provides the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention. In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument. Further, impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement. In the WG II Report, the focus is on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to certain regions or in certain sectors.
Further, the decision analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/warming and on narrowing the range of expected values, rather than fully exploring the uncertainty and the possibility for black swans (Taleb 2007) and dragon kings (Sornette 2009). The concept of "imaginable surprise" was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems largely to have been ignored by the AR4 report. The AR4 focused on what was known to a significant confidence level. The most visible failing of this strategy was neglect, in the Summary for Policy Makers, of the possible impact of rapid ice sheet melting on sea level rise (e.g. Oppenheimer et al. 2007; Betz 2009). An important issue is to identify the potential black swans associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be misleading to decision makers.
The presence of sharp conflicts with regards to both the science and policy reflects an overly narrow framing of the climate change problem. Until the problem is reframed or multiple frames are considered by the IPCC, the scientific and policy debate will continue to ignore crucial elements of the problem, with confidence levels that are too high.
Uncertainty, ignorance and confidence
The Uncertainty Guidance Paper by Moss and Schneider (2000) recommended a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. This assessment strategy does not include any systematic analysis of the types and levels of uncertainty and the quality of the evidence, and more importantly dismisses indeterminacy and ignorance as important factors in assessing these confidence levels. In the context of the narrow framing of the problem, this uncertainty assessment strategy promotes the consensus into becoming a self-fulfilling prophecy.
The uncertainty guidance provided for the IPCC AR4 distinguished between levels of confidence in scientific understanding and the likelihoods of specific results. In practice, primary conclusions in the AR4 included a mixture of likelihood and confidence statements that are ambiguous. Curry and Webster (2010) have raised specific issues with regards to the statement "Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations." Risbey and Kandlikar (2007) describe ambiguities in actually applying likelihood and confidence, including situations where likelihood and confidence cannot be fully separated, where likelihood levels contain implicit confidence levels, and where interpreting uncertainty across two levels of imprecision is difficult.
Numerous methods of categorizing risk and uncertainty have been described in the context of different disciplines and various applications; for a recent review, see Spiegelhalter and Riesch (2011). Of particular relevance for climate change are schemes for analyzing uncertainty when conducting risk analyses. My primary concerns about the IPCC's characterization of uncertainty are twofold:
Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible to formulate the probability of occurrence of particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood. Wynne (1992) defines risk as knowing the odds (analogous to Walker et al.'s statistical uncertainty), and uncertainty as not knowing the odds but knowing the main parameters (analogous to Walker et al.'s scenario uncertainty).
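The distinction above can be made concrete with a toy sketch. All numbers below are hypothetical, chosen only to illustrate the contrast: under statistical uncertainty the odds are known and summary statistics are meaningful, while under scenario uncertainty averaging discrete possibilities would smuggle in likelihood weights that nothing justifies.

```python
import random

# Statistical uncertainty ("knowing the odds"): outcomes follow a known
# probability distribution, so a mean and interval are well-defined.
random.seed(0)
draws = [random.gauss(2.0, 0.5) for _ in range(10_000)]  # hypothetical N(2.0, 0.5)
mean = sum(draws) / len(draws)
print(f"statistical case: mean ~ {mean:.2f} (interval is meaningful)")

# Scenario uncertainty ("not knowing the odds"): discrete, plausible but
# unverifiable futures with NO a priori likelihoods attached.
scenarios = {"low": 1.5, "mid": 3.0, "high": 4.5}  # hypothetical outcomes
# Averaging these would implicitly assign equal weights; the honest summary
# is just the range of possibilities.
lo, hi = min(scenarios.values()), max(scenarios.values())
print(f"scenario case: possibilities span {lo} to {hi}")
```

The design point is that the scenario case reports a range rather than a mean: any single central number would imply a probability allocation that, by definition of scenario uncertainty, we do not have.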
Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: "[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify." Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: "Each model run is of value as it presents a 'what if' scenario from which we may learn about the model or the Earth system." Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.
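A minimal numerical sketch of the argument above: if every model in a small ensemble shares a structural bias, the ensemble spread measures inter-model noise rather than distance from the truth, so an interval derived from the ensemble can look precise while excluding the true value entirely. All values here are invented for illustration; no real climate model output is involved.

```python
import random
import statistics

random.seed(1)
TRUE_SENSITIVITY = 4.5  # hypothetical "truth", unknown to the models

# Five model runs sharing a common structural bias: all cluster near 3.0.
# Their spread reflects inter-model noise, not real uncertainty.
ensemble = [random.gauss(3.0, 0.2) for _ in range(5)]

mu = statistics.mean(ensemble)
sd = statistics.stdev(ensemble)
lo, hi = mu - 2 * sd, mu + 2 * sd  # naive "95%" interval from the ensemble

print(f"naive ensemble interval: [{lo:.2f}, {hi:.2f}]")
print(f"hypothetical truth {TRUE_SENSITIVITY} inside? {lo <= TRUE_SENSITIVITY <= hi}")
# Treating the runs as samples from a decision-relevant PDF overstates
# confidence; read as modal 'what if' possibilities, each run is still
# informative about the model, just not about the odds.
```

The narrow interval is an artifact of the shared bias and the tiny ensemble; this is the sense in which a PDF built from such runs "implies much greater confidence than the underlying assumptions justify."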
Ignorance is that which is not known; Wynne (1992) finds ignorance to be endemic because scientific knowledge must set the bounds of uncertainty in order to function. Walker et al. (2003) categorize the following different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish between conscious ignorance, where we know we don't know what we don't know, versus unacknowledged or meta-ignorance, where we don't even consider the possibility of error.
While the Kandlikar et al. (2005) uncertainty schema explicitly includes effective ignorance in its uncertainty categorization, the AR4 uncertainty guidance (which is based upon Kandlikar et al.) neglects to include ignorance in the characterization of uncertainty. Hence IPCC confidence levels, determined from the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts, do not explicitly account for indeterminacy and ignorance, although recognized areas of ignorance are mentioned in some parts of the report (e.g. the possibility of indirect solar effects in sect xxxx of the AR4 WG1 Report). Overconfidence is an inevitable result of neglecting indeterminacy and ignorance.
The preface of this is not included... hence I say this is an excerpt.
This is the reason the models are “adjusted” to “work” and get your next grant.
It's a good succinct statement on the IPCC workings and needs to be repeated often.
Judy's problem is she is playing on the opponents' turf by the opponents' rules. The opposition will merely invent a new probability distribution for the solar variability and she will be forced to admit it as evidence.
Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: "Each model run is of value as it presents a 'what if' scenario from which we may learn about the model or the Earth system." Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.
Yuck. Models are GIGO. She is asking if you can run one a bunch of times and produce a probability distribution from the outcomes (or not). That is the wrong question; the correct question is whether the model parameters have any empirical validity. A "hindcast," which supposedly tests models on their ability to match past climate, is garbage being used to validate garbage (GVG). The "global average temperature" used in such tests is a convenient fiction, since the models have absolutely no validity against any direct empirical measurements (e.g. the models predicted positive AO, yet it is more negative, like it was in the 70's).
Finally, Annan is one of the losers on the hockey team (with Michael Mann, Gavin Schmidt and the liar Grant Foster aka Tamino) who routinely publish faulty statistical analyses and other chaff to try to smother any attempt to inject some rigor and sanity into climate science. They are anti-scientists and attempting to debate them scientifically is pointless.