Posted on 04/27/2020 7:00:58 AM PDT by Kaslin
The decisions that are being made during this crisis are far too important and complex to be based on such imprecise data and with such unreliable results.
There's been a lot of armchair analysis about the various models being used to predict outcomes of COVID-19. For those of us who have built spatial and statistical models, all of this discussion brings to mind George Box's dictum: "All models are wrong, but some are useful" (or useless, as the case may be).
The problem with data-driven models, especially when data is lacking, can be easily explained. First of all, in terms of helping decision makers make quality decisions, statistical hypothesis testing and data analysis is just one tool in a large tool box.
It's based on what we generally call reductionist theory. In short, the tool examines parts of a system (usually by estimating an average or mean) and then makes inferences to the whole system. The tool is usually quite good at testing hypotheses under carefully controlled experimental conditions.
For example, the success of the pharmaceutical industry is, in part, due to the fact that they can design and implement controlled experiments in a laboratory. However, even under controlled experimental procedures, the tool has limitations and is subject to sampling error. In reality, the true mean (the true number or answer we are seeking) is unknowable because we cannot possibly measure everything or everybody, and model estimates always have a certain amount of error.
Simple confidence intervals can provide good insight into the precision and reliability, or usefulness, of the part estimated by reductionist models. With the COVID-19 models, the so-called news appears to be using either the confidence interval from one model or actual estimated values (i.e., means) from different models as a way of reporting a range of the predicted number of people who may contract or die from the disease (e.g., 60,000 to 2 million).
Either way, the range in estimates is quite large and useless, at least for helping decision makers make such key decisions about our health, economy, and civil liberties. The armchair analysts' descriptions of these estimates show how clueless they are about even the simplest statistical interpretation.
The fact is, when a model has a confidence interval as wide as those reported, the primary conclusion is that the model is imprecise and unreliable. Likewise, if these wide ranges are coming from estimated means of several different models, it clearly indicates a lack of repeatability (i.e., again, a lack of precision and reliability).
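For readers who want to see this point concretely, here is a minimal Python sketch (with entirely made-up numbers, not epidemiological data) showing how the confidence interval for a mean computed from little data is far wider, and therefore less precise and less reliable, than one computed from ample data:

```python
import random
import statistics

def mean_ci95(sample):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

random.seed(1)
# A synthetic "population" of outcomes -- nothing here is real COVID-19 data.
population = [random.gauss(100, 40) for _ in range(100_000)]

widths = {}
for n in (10, 1000):
    sample = random.sample(population, n)
    lo, hi = mean_ci95(sample)
    widths[n] = hi - lo
    print(f"n={n:4d}: 95% CI for the mean = ({lo:.1f}, {hi:.1f}), width = {hi - lo:.1f}")
```

The small sample produces an interval roughly ten times wider than the large one, which is exactly the kind of imprecision the wide COVID-19 ranges reveal.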
Either way, these types of results are an indication of bias in the data, which can come from many sources (such as not enough data, measurement error, reporting error, using too many variables, etc.). For the COVID-19 models, most of the data appears to come from large population centers like New York. This means the data sample is biased, which makes the entire analysis invalid for making any inferences outside of New York or, at best, areas without similar population density.
It would be antithetical to the scientific method if such data were used to make decisions in, for example, Wyoming or rural Virginia. While these models can sometimes provide decision makers useful information, the decisions that are being made during this crisis are far too important and complex to be based on such imprecise data. There are volumes of scientific literature that explain the limitations of reductionist methods, if the reader wishes to investigate this further.
Considering the limitations of this tool under controlled laboratory conditions, imagine what happens within more complex systems that encompass large areas, contain millions of people, and vary with time (such as seasonal or annual changes). In fact, for predicting outcomes within complex, adaptive, and dynamic systems, where controlled experiments are not possible, data is lacking, and large amounts of uncertainty exist, the reductionist's tool is not useful.
Researchers who speak as if their answers to such complex and uncertain problems are unquestionable and who politicize issues like COVID-19 are by definition pseudo-scientists. In fact, the scientific literature (including research from a Nobel Prize winner) shows that individual experts are no better than laymen at making quality decisions within systems characterized by complexity and uncertainty.
The pseudo-scientists want to hide this fact. They like to simplify reality by ignoring or hiding the tremendous amount of uncertainty inherent in these models. They do this for many reasons: it's easier to explain cause/effect relationships, it's easier to predict consequences (that's why most of their predictions are wrong or always changing), and it's easier to identify victims and villains.
They accomplish this by first asking the wrong questions. For COVID-19, the relevant question is not "How many people will die?" (a divisive and impossible question to answer) but "What can we do to avoid, reduce, and mitigate this disease without destroying our economy and civil rights?"
Secondly, pseudo-scientists hide and ignore the assumptions inherent in these models. The assumptions are the premise of any model; if the assumptions are violated or invalid, the entire model is invalid. Transparency is crucial to a useful model and for building trust among the public. In short, whether a model is useful or useless has more to do with a person's values than science.
The empirical evidence is clear: what's really needed is good thinking by actual people, not technology, to identify and choose quality alternatives. Technology will not solve these issues and should only be used as an aid and a tool (and only if it is as transparent and reliable as possible).
What is needed, and what the scientific method has always required but is nowadays often ignored, is what is called multiple working hypotheses. In layman's terms, this simply means that we include experts and stakeholders with different perspectives, ideas, and experiences.
The type of modeling that is needed to make quality decisions for the COVID-19 crisis is what we modelers call participatory scenario modeling. This method uses Decision Science tools like Bayesian networks and Multiple Objective Decision Analysis that explicitly link data with the knowledge and opinions of a diverse mix of subject matter experts. The method uses a systems, not a reductionist, approach and seeks to help the decision maker weigh the available options and alternatives.
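To illustrate the simplest building block of a Bayesian network (a single parent node feeding a single child node), here is a hedged two-node sketch in Python. All of the probabilities are hypothetical placeholders, not real COVID-19 rates, and this is not the author's model, just the update rule such networks are built on:

```python
# Hypothetical numbers for illustration only -- not real COVID-19 rates.
p_infected = 0.02            # prior: prevalence in the population
p_pos_given_inf = 0.90       # test sensitivity
p_pos_given_not = 0.05       # false-positive rate

# Marginal probability of a positive test (law of total probability).
p_pos = p_pos_given_inf * p_infected + p_pos_given_not * (1 - p_infected)

# Posterior: P(infected | positive test), via Bayes' rule.
p_inf_given_pos = p_pos_given_inf * p_infected / p_pos
print(f"P(infected | positive test) = {p_inf_given_pos:.2f}")
```

A full Bayesian network chains many such conditional tables together, which is how expert judgment and sparse data can be combined explicitly rather than hidden.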
The steps are: frame the question appropriately, develop quality alternatives, evaluate the alternatives, and plan accordingly (i.e., make the decision). The key is participation from a diverse set of subject-matter experts from interdisciplinary backgrounds working together to build scenario models that help decision makers assess the decision options in terms of probability of the possible outcomes.
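The "evaluate the alternatives" step above can be sketched as a simple additive value model, one common Multiple Objective Decision Analysis technique. Every alternative, score, and weight below is a hypothetical placeholder chosen only to show the mechanics:

```python
# Hypothetical criteria weights (sum to 1) reflecting stakeholder priorities.
criteria_weights = {"health_impact": 0.5, "economic_cost": 0.3, "civil_liberty": 0.2}

# Hypothetical alternatives scored 0-10 on each criterion (higher is better).
alternatives = {
    "full lockdown":     {"health_impact": 9, "economic_cost": 2, "civil_liberty": 2},
    "targeted measures": {"health_impact": 7, "economic_cost": 6, "civil_liberty": 7},
    "no intervention":   {"health_impact": 2, "economic_cost": 8, "civil_liberty": 9},
}

def weighted_score(scores, weights):
    """Additive value model: sum of weight * score over all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives.items(),
                key=lambda kv: weighted_score(kv[1], criteria_weights),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, criteria_weights):.1f}")
```

The point of the participatory process is that the weights and scores are debated in the open by a diverse panel, so the trade-offs are transparent instead of buried inside one model.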
Certain models, such as those for COVID-19, require a diverse set of experts, whereas climate-change models require participation from both stakeholders and experts. The participatory nature of the process makes assumptions more transparent, helps people better understand the issues, and builds trust among competing interests.
For COVID-19, we likely need a set of models for medical and economic decisions that augment final decision-support models that help the decision makers weigh their options. No experienced decision maker would (or should) rely on any one model or any one subject-matter expert when making complex decisions with so much uncertainty and so much at stake.
Pseudo-scientists only allow participation from subject-matter experts who agree with their agenda. In other words, they often rig the participatory models. I'm not saying this is occurring with COVID-19, but it has happened before and could happen again.
Amazing
Indeed, models can't be true until all the facts are in.
Human nature drives us to believe in something greater than ourselves. Atheism is a religion as well. Just debate it with an atheist. They are very religious. Atheism requires a leap of faith just like all other religions.
Anyway... a little off topic. Now who is for a lynching. I’ll bring the rope.
... Now who is for a lynching. I'll bring the rope.
Oh, a good mass lynching would be great fun... however
John 8:7
As they morphed with data being collected under the conditions being imposed, they got closer to reality.
The ones that finally predicted about 60K deaths in America seem to be proving out as we approach that number.
Agreed. Public scorning and embarrassment would be the first step.
At this point I'm assuming this idiocy came from a COVID-19 model. :-P
... Public scorning and embarrassment would be the first step.
Then, when unrepentant and denying of culpability
Can we Lynch them then?
Let’s stone them first. :)
Btw, excellent point. They will deny...which aligns with my statement that science is a religion.
Somewhat like when a hurricane approaches and they show models with each on a different track. Thirty models, thirty tracks, yet each using the same barometric pressures, temperatures, ocean currents, wind patterns, and such. And yet computer models are making some of the most important decisions in our national life.
And 1000000000x more complicated is modeling global warming.
The scientists who modeled it and got it wrong should have to explain to all US citizens and the world how and why their models failed. And it should be made clear that Americans should never trust them again. The scientists have lost all credibility and should be forced to go through serious scrutiny every time they make a prediction. The scientists should be punished.
+1.
these types of results are an indication of bias in the data
Not that it affects or negates anything else he says - but that's nonsense. Bias and variance are nearly independent; it's a well-known phenomenon that a model can have low variance but high bias.
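A quick simulation makes this commenter's point concrete: a constant predictor that ignores the data has zero variance but large bias, while a fitted model has low bias but nonzero variance. Everything below is a synthetic illustration, not real data:

```python
import random
import statistics

random.seed(0)

# Simulated truth: y = 2*x + noise. We compare two predictors of y at x = 5
# (true value 10) across 200 resampled datasets.
def make_data(n=50):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

constant_preds = []   # "biased" model: ignores the data, always predicts 5.0
fitted_preds = []     # "flexible" model: fits a slope through the origin
for _ in range(200):
    xs, ys = make_data()
    constant_preds.append(5.0)
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    fitted_preds.append(slope * 5)

true_val = 2 * 5
print(f"constant: variance={statistics.variance(constant_preds):.4f}, "
      f"bias={statistics.mean(constant_preds) - true_val:+.2f}")
print(f"fitted:   variance={statistics.variance(fitted_preds):.4f}, "
      f"bias={statistics.mean(fitted_preds) - true_val:+.2f}")
```

The constant model repeats itself perfectly (zero variance) yet is always wrong by the same amount (high bias), which is why repeatability alone does not certify a model.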
As they morphed with data being collected under the conditions being imposed, they got closer to reality.
No, no - the original predictions were tools of a Deep State plot, and anyone who disagrees is a FeaRper. /s
Welcome, anti-science crowd!
First of all, "model" is a very generic and overused word.
A predictive model is one that tries to extrapolate future events from present and/or historical data. Dynamic predictive models use algorithms based on processes that can influence change. Models are used in almost everything: predicting consumer demand, weather forecasting, engineering, ensuring the safety of new products...
No model is fully correct (black and white); that's why they call them models. However, they provide guidance, but require expertise in knowing their assumptions, shortcomings, and range of validity. All models should have some verification and test results for their range of applicability. I have been using and developing predictive models for many decades.
I like to think all models lie, and it's the user's job to figure out when and which models are lying. If you don't trust models, you should never board an airplane, drive a car, take a medication, or listen to a weather forecast. However, never trust users/developers of models who have an agenda.
If you are not an expert in medical research, and have not seen the predictions of all the virus models, I doubt you have any claim to judging their validity and utilization in policy making. However, the agenda of many of you is clear. That is, current policy is incorrect and we must throw it out and open up the government. This notion is so "fourth world," and I doubt much would change, other than creating a spike in new cases. The economy was killed by the virus long before the shelter-in-place policy was implemented. "It's the virus, stupid."
Personally, I would like to see the evaluation and range of results and opinions from a team of reputable medical researchers, not just the opinion of some politically motivated doctor who only practices family medicine... and certainly not the opinion of uneducated, biased laymen. Also, data from economic models must be used in deciding which policy to follow, so experts in this field must be included as well. I have confidence that our elected president and his appointed team are having many heated arguments and reviewing difficult decisions. I do trust the President and his team will make the right decision.
For background, I was one of the earlier signers of the petition challenging the climate change/global warming hypothesis. Their agenda was clear and was used to select models that were supportive of their goals. It may be news to some of you, but not all models predict global warming.
Are we in the dark ages yet!
There is a bias in modelling. Suppose you task someone to develop a model of the effects of carbon on global warming. I guarantee you will find the model predicts global warming as a function of carbon. Now, if you task someone to develop a model of future temperatures, you will get a different answer.
Or, IOW, you want a model of sickness and deaths from a virus, that model will over predict. In effect, you will get what you pay for.
As a matter of fact, it is news to me. Thanks for the info!
The problem with academia is there are too many academics...
Yeah, but it might have been 100-200k if not for the lockdowns.
You don't need lockdowns, you need hand washing, social distancing, and masks when distancing is not possible.