Free Republic
Browse · Search
News/Activism
Topics · Post Article


Media oversampling Democrats in polls to convince Trump voters it's over and to cover real rigging
Truth

Posted on 10/16/2016 1:10:18 PM PDT by Bigtigermike

The media are cooking the polls because they know that for Hillary to win legitimately, she has to turn out something very close to the Obama coalition.

They know that isn't going to happen: Hillary isn't Obama, and the enthusiasm gap works against her - people aren't excited to vote for her. She can't fill up a high school gymnasium. They do know that there is huge enthusiasm among those who are going to vote for Trump. So the media are trying to depress enough Trump supporters into not showing up to vote, to give Hillary some help.

The message: it's over, so why bother!

They also want to provide cover for when the Dems attempt to rig the election with voter fraud: when Trump complains and gears up his lawyers, they will say that he and his supporters are crazy, because Hillary had it in the bag and it's just sour grapes. They want the voting public to be numb to outright fraud and cooked election numbers, so that even if there is 'some' fraud here and there, Hillary is still the winner no matter what.


TOPICS: News/Current Events; Politics/Elections; Your Opinion/Questions
KEYWORDS: 2016polls; clinton; deceit; fraud; hillary; liberalmedia; rigged; trump; voterfraud
To: libh8er

Not sure about the rest, but I looked up the Washington Post internals for a Florida poll they did in mid-September 2012. They were using D+3 and had Obama up 5. He won by less than 1%.


41 posted on 10/16/2016 3:29:05 PM PDT by Murp (!!!!!!!!!!!!!!!!)
[ Post Reply | Private Reply | To 11 | View Replies]

To: Repeal 16-17
The internals are so absurd that the polls can't be claimed to be anything other than propaganda.

I have always said that the Lamestream Media were the Propaganda Wing of the DemocRAT party.

42 posted on 10/16/2016 4:31:04 PM PDT by benldguy (Obama delenda est!)
[ Post Reply | Private Reply | To 27 | View Replies]

To: Robert DeLong

All they are doing is pizzing us off!


43 posted on 10/17/2016 7:20:00 AM PDT by Dana1960
[ Post Reply | Private Reply | To 29 | View Replies]

To: LS

Would someone who understands the polling please explain to me the mathematical justification for weighting samples at all?

I do have some math skills - I have a basic understanding of statistics and probabilities.

I can understand why they would want to poll likely voters rather than simply random people, but beyond that I don’t understand the logic of weighting the sample toward or away from any particular demographic.

It seems to me that the goal would be to get as random a sample as possible so as to avoid such a weighting. To weight according to a political party identification or voting predisposition seems ridiculous.

Note that I am not asking WHY anyone would want to distort the polls - that much I understand - and I understand weighting the sample would be a good way to do that.

I’m asking what is their legitimate or ostensible reason for weighting the sample.

Thanks!


44 posted on 10/17/2016 8:28:29 AM PDT by enumerated
[ Post Reply | Private Reply | To 28 | View Replies]

To: enumerated
"...Would someone who understands the polling please explain to me the mathematical justification for weighting samples at all?..."

Good question. There are actually a few reasons one might want to "structure" a sample to better capture reality, given that trying to get a truly "random" sample over an entire region is difficult.

(1) To correct for "localized" sample bias. Consider that you throw a dart at a map, and decide to poll 1000 people at whatever location you hit. The "locale" is chosen at random. Now consider if your dart hits an SEIU union hall. Ugh. While the locale was selected randomly, you are going to get a clearly biased sample set - say, 100% Democrat - which, while true for that locale, is not true of the greater regional reality.

(2) To attempt to capture expected voter turnout. Consider that in 2008, the election of B.H. Obola excited a large number of Black voters who would normally not have participated in the election. Consider now that in 2016 those same voters might not be as energized to show up for Shrillary. So even among those who voted in 2008 and SAY they are going to vote in 2016, the REALITY is that many of them won't. The same analysis can be applied to weak support for Republican candidates. You can tweak the samples to try to legitimately account for this.

(3) To try to correct for bias in the sample collection method. Consider this NON POLITICAL example: Astronomers want to find "exoplanets". They have 2 methods: Star Wobble or Star Transit. Star wobble works with ANY solar system orientation, but ONLY for HUGE, CLOSE planets. Star Transit works for ANY SIZE, ANY DISTANCE planets, but ONLY for solar systems with planes aligned with our line of sight. If you choose "wobble", all you are EVER going to see are HUGE PLANETS. A biased collection method. If you choose "transit", you will miss ALL the systems that are not aligned with ours. So, if you take a poll by land line phone, or cell phone, or on-line on the web, or by mail, the way you choose to contact the people in the sample affects the KIND of sample you are taking.

There are other reasons as well.
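Point (1) can be illustrated with a toy simulation (every number here is invented): polling 1000 people at one randomly chosen locale produces wildly swingier results than polling 1000 independently chosen people, even though both procedures are "random."

```python
import random

random.seed(1)

# Toy population: 10 locales, each politically lopsided in a different
# direction, but 50% Democrat overall (all numbers invented).
locales = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]

def cluster_poll(n=1000):
    """Throw one dart: poll n people at a single random locale."""
    p = random.choice(locales)
    return sum(random.random() < p for _ in range(n)) / n

def independent_poll(n=1000):
    """Throw n darts: each respondent comes from an independent random locale."""
    return sum(random.random() < random.choice(locales) for _ in range(n)) / n

def spread(xs):
    return max(xs) - min(xs)

cluster = [cluster_poll() for _ in range(200)]
indep = [independent_poll() for _ in range(200)]

print(f"cluster polls range over {spread(cluster):.2f}, "
      f"independent polls range over {spread(indep):.2f}")
```

The single-locale polls swing across almost the whole 0-1 range while the independent polls cluster tightly near 50%, which is why a pollster stuck with a clustered sample reaches for weighting.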
45 posted on 10/17/2016 8:49:06 AM PDT by Rebel_Ace (HITLER! There, Zero to Godwin in 5.2 seconds.)
[ Post Reply | Private Reply | To 44 | View Replies]

To: Dana1960

Extremely so.


46 posted on 10/17/2016 9:19:12 AM PDT by Robert DeLong
[ Post Reply | Private Reply | To 43 | View Replies]

To: Rebel_Ace

Thanks Rebel_Ace.

If I could boil your answer down to a single phrase, would it be fair to say: “Sample weightings are done to correct for the fact that samples can never be truly random.”?

If so, my follow up question would be: If lack of true randomness is the problem, how does introducing more bias and subjectivity (party affiliation) mitigate that?

I would think they would seek to make the sample more random.

Using your example, throwing a single dart at a map and then polling 1000 people at that location is a bad idea, I agree. So throw 1000 darts and select the nearest person to each - you’ve just made your 1000-person sample a lot more random. But there is still bias, because rural people are far more likely to be selected than urban people: they cover a much larger area of the map per person. So, instead of throwing darts at a map, you select randomly from SS numbers. Perhaps SS numbers introduce some other bias I haven’t thought of - I wouldn’t doubt it.

But I still don’t see how weighting by party affiliation does anything but ADD bias and defeat randomness.

I would think that if pollsters were being scientific and honest, and they concluded they could not obtain a random enough sample, they would have no choice but to simply let the chips fall where they may, and increase their margin of error accordingly.
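The margin of error mentioned here has a textbook form for a simple random sample; this sketch just applies the standard formula (no real polling data involved):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1000-person sample with an even split carries roughly a 3-point margin
moe = margin_of_error(0.5, 1000)
print(round(100 * moe, 1))  # about 3.1 points
```

Note the formula assumes a simple random sample. The whole argument above is about what happens when the sample is not simple or random - which is exactly when pollsters reach for weighting instead of (or in addition to) a wider stated margin.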

Doing otherwise would be like determining the average height of US adult males by sampling a selection, but making sure your sample included a certain weighting of tall, medium and short males! It’s absurd.

It totally defeats the purpose of trying to get a random sample!

What am I missing?


47 posted on 10/17/2016 10:02:08 AM PDT by enumerated
[ Post Reply | Private Reply | To 45 | View Replies]

To: enumerated
"...What am I missing?..."

Not everything that produces bad raw numbers from a poll has to do with "randomness" (or the lack thereof).

In the examples I gave above, one of the factors to attempt to correct for is anticipated future behavior, in this case, whether or not a PAST voter of a certain category would ACTUALLY REPEAT his or her actions for THIS election. In other words, correcting for "voter enthusiasm".

Another is to correct for well established "known errors" in your sample set. For example, you use a computer to sift through ALL the voter registration records for a large state like Ohio. From these records you learn with high confidence that the overall distribution of Dems and Repubs is 43%D and 38%R for the entire state. Of course, you can't call them all. You take a poll of 1000 randomly selected people, and by chance, you get 53%D and 34%R. Oops, you know that if you want this sample to "represent" the state overall, you will need to "correct" your sample set to bring it into line with the established known ratios...
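The correction described here boils down to multiplying each group's responses by (registered share) / (sampled share). A minimal sketch using the 43/38 and 53/34 figures from the post - the "other" shares and the within-group candidate support numbers are invented for illustration:

```python
# Post-stratification sketch. Registration shares come from the voter file;
# within-group candidate support is invented for illustration.
registered = {"D": 0.43, "R": 0.38, "O": 0.19}   # known state-wide shares
sampled    = {"D": 0.53, "R": 0.34, "O": 0.13}   # what the 1000-person poll drew

# Weight that rescales each group back to its registered share.
# e.g. each sampled Democrat counts as 0.43/0.53 ~= 0.81 of a respondent.
weights = {g: registered[g] / sampled[g] for g in registered}

# Invented within-group support for one candidate
support = {"D": 0.90, "R": 0.08, "O": 0.40}

raw      = sum(sampled[g] * support[g] for g in sampled)
# weighting by sampled-share * weight is the same as using registered shares
adjusted = sum(registered[g] * support[g] for g in registered)
print(f"raw {raw:.1%}, weighted {adjusted:.1%}")
```

Whether this step is "honest" is exactly what this thread is arguing about: the multiplication is mechanical, but the choice of target shares (registration records, past turnout, a turnout model) is a judgment call.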

Ahh, but it gets worse...

You know that the State Registration list contains SOME PERCENTAGE of out-of-date records. People have moved, died, changed parties, what have you. So your formerly "written in stone" percentages of 43%D and 38%R will need some statistical correction, and so on and so forth.

An HONEST polling organization will attempt MODEST corrections to get a clearer, more "realistic" outcome than a purely (pseudo) random sample will produce. UNETHICAL polling organizations will cook the data until they get the result their client is expecting.
48 posted on 10/17/2016 11:57:13 AM PDT by Rebel_Ace (HITLER! There, Zero to Godwin in 5.2 seconds.)
[ Post Reply | Private Reply | To 47 | View Replies]

To: Rebel_Ace

At the risk of being stubborn (and I HAVE been called that), let me play devil’s advocate:

In a sense, your “correcting for known errors” justification for adjusting samples may work at cross purposes with your “correcting for changes in behavior” justification, or the “out of date records” justification.

Using your examples, let’s say you have data from past years showing distribution for the entire state of 43% D and 38% R. You can’t sample them all, and your limited sample of 1000 results in 53% D and 34% R.

Whoops, you say... but how do you know it is whoops? How do you know this isn’t a reflection of the “changes in behavior” or the “out of date records” effects that you mentioned in your other examples?

If I were a pollster and I got an unexpected result, I’d check my method of sampling for selection bias and try another couple of 1000-person random samples, to see if I at least got consistent results. If I didn’t, I’d conclude the polls are of no predictive value. If I did get consistent results, just not the “expected” ones based on prior data, I’d conclude that there had indeed been a change from the prior distribution.

I would think the ONLY concern of a strictly scientific statistician would be to make sure the sample was large enough and random enough to represent the entire population with a high level of certainty/probability. True, it could never be truly random, never truly representative and there is no guarantee people will tell the truth or even know the truth about how or whether they will vote.

Still, I would think that adjusting the polls to match either past data or “expectations” would be a big no-no and the very last thing a mathematician would do.

Anyway, even if I never completely “get it”, I have learned a lot from your explanations and I do appreciate your patience.


49 posted on 10/17/2016 1:25:22 PM PDT by enumerated
[ Post Reply | Private Reply | To 48 | View Replies]

To: Bigtigermike
For weeks before the presidential election, the gurus of public opinion polling were nearly unanimous in their findings. In survey after survey, they agreed that the coming choice between President Jimmy Carter and challenger Ronald Reagan was “too close to call.” A few points at most, they said, separated the two major contenders.

But when the votes were counted, the former California Governor had defeated Carter by a margin of 51% to 41% in the popular vote–a rout for a U.S. presidential race. In the electoral college, the Reagan victory was a 10-to-1 avalanche that left the President holding only six states and the District of Columbia.

After being so right for so long about presidential elections–the pollsters’ findings had closely agreed with the voting results for most of the past 30 years–how could the surveys have been so wrong? The question is far more than technical. The spreading use of polls by the press and television has an important, if unmeasurable, effect on how voters perceive the candidates and the campaign, creating a kind of synergistic effect: the more a candidate rises in the polls, the more voters seem to take him seriously.

With such responsibilities thrust on them, the pollsters have a lot to answer for, and they know it. Their problems with the Carter-Reagan race have touched off the most skeptical examination of public opinion polling since 1948, when the surveyors made Thomas Dewey a sure winner over Harry Truman. In response, the experts have been explaining, qualifying, clarifying–and rationalizing. Simultaneously, they are privately embroiled in as much backbiting, mudslinging and mutual criticism as the tight-knit little profession has ever known. The public and private pollsters are criticizing their competition’s judgment, methodology, reliability and even honesty.

At the heart of the controversy is the fact that no published survey detected the Reagan landslide before it actually happened. Three weeks before the election, for example, TIME’S polling firm, Yankelovich, Skelly and White, produced a survey of 1,632 registered voters showing the race almost dead even, as did a private survey by Caddell. Two weeks later, a survey by CBS News and the New York Times showed about the same situation.

50 posted on 10/17/2016 1:37:20 PM PDT by Osage Orange (Cover up after cover up...OUR GOVERNMENT is OUT OF CONTROL)
[ Post Reply | Private Reply | To 1 | View Replies]



