Posted on 10/16/2016 1:10:18 PM PDT by Bigtigermike
The media are cooking the polls because they know that for Hillary to win legitimately, she has to turn out something very close to the Obama coalition to win the election outright.
They know that isn't going to happen: Hillary isn't Obama, and the enthusiasm gap isn't there for her; people aren't excited to vote for her. She can't fill up a high school gymnasium. They do know that there is huge enthusiasm among those who are going to vote for Trump. So the media are trying to depress enough Trump supporters into not showing up to vote, to give Hillary some help.
The message: it's over, so why bother!
They also want to provide enough cover so that when the Dems attempt to rig the election with voter fraud, and Trump complains and gears up his lawyers, they can say that he and his supporters are crazy, because Hillary had it in the bag and it's just sour grapes. They want the voting public to be numb to outright fraud and cooked election numbers, so that even if there is 'some' fraud here and there, Hillary is still the winner no matter what.
Not sure about the rest, but I looked up the Washington Post internals for the Florida poll they did in mid-September 2012. They were using D+3 and had Obama up 5. He won by less than 1%.
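To see why a D+3 assumption matters, here's a rough sketch of the arithmetic. The party-loyalty numbers below (90/10 loyalty, independents split evenly) are hypothetical, chosen only to illustrate how a few points of party-ID skew in the sample translate into points on the topline margin; they are not the actual 2012 crosstabs.

```python
# Hypothetical illustration: how a party-ID weighting assumption
# moves a poll's topline margin. Numbers are made up for the sketch.

def topline_margin(mix, dem_support):
    """Weighted Dem-candidate margin in points, given a party-ID mix.

    mix: fractions of the sample that are (D, R, I)
    dem_support: share of each group backing the Dem candidate
    """
    dem_share = sum(m * s for m, s in zip(mix, dem_support))
    rep_share = sum(m * (1 - s) for m, s in zip(mix, dem_support))
    return (dem_share - rep_share) * 100

# Assume 90/10 party loyalty and an even split among independents.
support = (0.90, 0.10, 0.50)

print(topline_margin((0.38, 0.35, 0.27), support))  # D+3 sample: Dem +2.4
print(topline_margin((0.35, 0.35, 0.30), support))  # even party ID: tied
```

Under these assumptions, moving the sample three points toward the Democrats shifts the published margin by about two and a half points, which is roughly the size of the miss the poster describes.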
I have always said that the Lamestream Media were the Propaganda Wing of the DemocRAT party.
All they are doing is pizzing us off!
Would someone who understands the polling please explain to me the mathematical justification for weighting samples at all?
I do have some math skills - I have a basic understanding of statistics and probabilities.
I can understand why they would want to poll likely voters rather than simply random people, but beyond that I don’t understand the logic of weighting the sample toward or away from any particular demographic.
It seems to me that the goal would be to get as random a sample as possible so as to avoid such a weighting. To weight according to a political party identification or voting predisposition seems ridiculous.
Note that I am not asking WHY anyone would want to distort the polls - that much I understand - and I understand weighting the sample would be a good way to do that.
I’m asking what is their legitimate or ostensible reason for weighting the sample.
Thanks!
Extremely so.
Thanks Rebel_Ace.
If I could boil your answer down to a single phrase, would it be fair to say: “Sample weightings are done to correct for the fact that samples can never be truly random.”?
If so, my follow up question would be: If lack of true randomness is the problem, how does introducing more bias and subjectivity (party affiliation) mitigate that?
I would think they would seek to make the sample more random.
Using your example, throwing a single dart at a map and then polling 1000 people at that location is a bad idea, I agree. So throw 1000 darts and select the nearest person - you’ve just made your 1000 person sample a lot more random. But there is still bias, because rural people are far more likely to be selected than urban people - they cover a much larger area of the map per person. So, instead of throwing darts at a map, you select randomly from SS numbers. Perhaps SS numbers introduce some other bias I haven’t thought of - I wouldn’t doubt it.
But I still don’t see how weighting by party affiliation does anything but ADD bias and defeat randomness.
I would think that if pollsters were being scientific and honest, and they concluded they could not obtain a random enough sample, they would have no choice but to simply let the chips fall where they may, and increase their margin of error accordingly.
Doing otherwise would be like determining the average height of US adult males by sampling a selection, but making sure your sample included a certain weighting of tall, medium and short males! It’s absurd.
It totally defeats the purpose of trying to get a random sample!
What am I missing?
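For what it's worth, the standard textbook rationale is called post-stratification: you weight each respondent by (known population share) / (observed sample share) for characteristics you can verify externally, like census demographics, to undo non-response skew. A minimal sketch with made-up numbers (urban/rural split, support rates, and the over-response rate are all hypothetical):

```python
# Minimal post-stratification sketch. All numbers are hypothetical.
# Suppose the census says the population is 50% urban / 50% rural,
# but the phone sample came back 70% urban (urban people answered
# more often). Weighting by (population share / sample share)
# undoes that skew.

population_share = {"urban": 0.50, "rural": 0.50}  # known from census
sample_share     = {"urban": 0.70, "rural": 0.30}  # what we actually got
support          = {"urban": 0.60, "rural": 0.40}  # candidate support per group

# Unweighted estimate simply mirrors the skewed sample.
unweighted = sum(sample_share[g] * support[g] for g in support)

# Post-stratified estimate: weight = population share / sample share.
weights  = {g: population_share[g] / sample_share[g] for g in support}
weighted = (sum(sample_share[g] * weights[g] * support[g] for g in support)
            / sum(sample_share[g] * weights[g] for g in support))

print(round(unweighted, 3))  # biased toward the over-sampled urban group
print(round(weighted, 3))    # recovers the true population average, 0.5
```

The controversy the poster is circling is that this logic is uncontroversial for stable, externally known quantities (age, sex, region), but party ID is itself an opinion that shifts between elections, so weighting a sample to a past election's party mix can bake a stale assumption into the result.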
At the risk of being stubborn (and I HAVE been called that), let me play devil’s advocate:
In a sense, your “correcting for known errors” justification for adjusting samples may work at cross purposes with your “correcting for changes in behavior” justification, or the “out of date records” justification.
Using your examples, let’s say you have data from past years showing distribution for the entire state of 43% D and 38% R. You can’t sample them all, and your limited sample of 1000 results in 53% D and 34% R.
Whoops, you say... but how do you know it is whoops? How do you know this isn’t a reflection of the “changes in behavior” or the “out of date records” effects that you mentioned in your other examples?
If I were a pollster and I got an unexpected result, I’d check my method of sampling for selection bias and try another couple of 1000-person random samples, to see if I at least got consistent results. If I didn’t, I’d conclude the polls are of no predictive value. If I did get consistent results, but not “expected” ones based on prior data, I’d conclude that there was indeed a change from the prior distribution.
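There is actually a quick way to quantify the "whoops" in the earlier example. For a truly random sample of 1,000, the standard error of a proportion is sqrt(p(1-p)/n), about 1.6 points when the true D share is 43%; a 53% D sample would be more than six standard errors out, which is why it signals selection bias (or a genuine shift), not ordinary sampling noise. A small simulation to the same effect:

```python
# How far can a random sample of 1,000 drift from a true 43% D mix
# by chance alone?
import math
import random

p, n = 0.43, 1000
se = math.sqrt(p * (1 - p) / n)
print(round(se, 4))  # ~0.0157, i.e. about 1.6 points

# Simulate 2,000 independent random samples and look at the spread.
random.seed(1)
draws = [sum(random.random() < p for _ in range(n)) / n
         for _ in range(2000)]
print(max(draws))  # worst case across all samples: well below 0.53
```

So the consistency check the poster proposes would work: if repeated genuinely random samples keep landing at 53%, the population really has changed; a single 53% reading against a known 43% baseline is far outside what chance allows.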
I would think the ONLY concern of a strictly scientific statistician would be to make sure the sample was large enough and random enough to represent the entire population with a high level of certainty/probability. True, it could never be truly random, never truly representative and there is no guarantee people will tell the truth or even know the truth about how or whether they will vote.
Still, I would think that adjusting the polls to match either past data or “expectations” would be a big no-no and the very last thing a mathematician would do.
Anyway, even if I never completely “get it”, I have learned a lot from your explanations and I do appreciate your patience.
But when the votes were counted, the former California Governor had defeated Carter by a margin of 51% to 41% in the popular vote, a rout for a U.S. presidential race. In the electoral college, the Reagan victory was a 10-to-1 avalanche that left the President holding only six states and the District of Columbia.
After being so right for so long about presidential elections (the pollsters' findings had closely agreed with the voting results for most of the past 30 years), how could the surveys have been so wrong? The question is far more than technical. The spreading use of polls by the press and television has an important, if unmeasurable, effect on how voters perceive the candidates and the campaign, creating a kind of synergistic effect: the more a candidate rises in the polls, the more voters seem to take him seriously.
With such responsibilities thrust on them, the pollsters have a lot to answer for, and they know it. Their problems with the Carter-Reagan race have touched off the most skeptical examination of public opinion polling since 1948, when the surveyors made Thomas Dewey a sure winner over Harry Truman. In response, the experts have been explaining, qualifying, clarifying, and rationalizing. Simultaneously, they are privately embroiled in as much backbiting, mudslinging and mutual criticism as the tight-knit little profession has ever known. The public and private pollsters are criticizing their competitors' judgment, methodology, reliability and even honesty.
At the heart of the controversy is the fact that no published survey detected the Reagan landslide before it actually happened. Three weeks before the election, for example, TIME's polling firm, Yankelovich, Skelly and White, produced a survey of 1,632 registered voters showing the race almost dead even, as did a private survey by Caddell. Two weeks later, a survey by CBS News and the New York Times showed about the same situation.