Parties Wonder Which Side's Polls Reflect Reality
Posted on 11/01/2012 12:34:00 PM PDT by Arthurio
November 1, 2012 | 6:00 a.m.
A few days ago, I sat down with Rob Jesmer, the executive director of the National Republican Senatorial Committee. Jesmer is usually tight-fisted about his polling; he doesn't share it with members of the media when the numbers are good for his candidates, which avoids the inevitably uncomfortable dilemma when the numbers are bad for his candidates. But he wanted to open his books, if only for a peek, to demonstrate a phenomenon happening across the political spectrum these days: His polls look nothing like polls Democrats are conducting.
It's a constant refrain from both sides these days. The two parties, the outside groups that are playing such a big role this year, and even some candidates themselves are so dubious about their own numbers that they are employing two pollsters for one race, using one to double-check the other. What flummoxes them even more is that their own party's pollsters are getting similar results, while the other side is offering a completely different take.
Whether it is the presidential contest or battles for critical Senate and House seats, the smartest pollsters in the business have spent the past three weeks looking at exactly the same data and coming to dramatically different conclusions.
That can be explained, in part at least, by the volatile history of the last three election cycles. The nearly decade-long erosion of trust in government, the economic recession, the collapse of the housing bubble, and the partisan brawls in Washington are all factors that contributed to three straight wave election cycles. Elections in 2006 and 2008 overwhelmingly favored Democrats. The 2010 midterms overwhelmingly favored Republicans.
This year, there's no wave cresting the week before Election Day, meaning that Tuesday's results will reflect the will of a deeply and bitterly divided nation, roughly the same thing we saw in 2004. We're about to see the new political normal, but after six years of dramatic waves, no one really knows what normal is supposed to look like.
Democrats argue that history shows an inexorable march toward a larger and more diverse electorate. In every election since 1992, the electorate has grown. Bill Clinton won when 83 million people cast a ballot in 1992; 100 million votes were cast in 2000, and 129 million in 2008.
And while voters tell pollsters they hate politics and politicians, their actions don't tell the same story. One prominent Democratic pollster suggested a counterintuitive cause: The explosion of cable-news channels means politics is available all the time; the constant, unceasing news cycle keeps people engaged; the increased polarization of the electorate means voters identify more closely with their party, their team; and the proliferation of absentee ballots and early-voting access makes it easier than ever to vote. That which voters say they hate most may actually be what's keeping them engaged.
In every cycle since 1992, the number of African-American and Hispanic voters has gone up as a share of the electorate. In 1992, exit polls showed 83 percent of the electorate was white. By 2008, whites made up just 74 percent of the electorate.
There's no question that President Obama's 2008 campaign, which focused on turning out new low-propensity voters in minority communities, helped inflate nonwhite voters' influence in the electorate. In Virginia alone, the nonwhite share of the electorate spiked from 21 percent in 2006 to 30 percent just two years later, a virtually unprecedented leap. The question is how many of those voters come back to the polls in 2012.
Republicans and Democrats alike believe the African-American vote is unlikely to change between 2008 and 2012. But they differ dramatically on the number of Hispanic voters who will show up at the polls, a key factor in critical battleground states like Colorado and Nevada. Republicans believe turnout will be down, depressed by Obama's failure to pursue immigration reform during his first term. Democrats think the booming number of Hispanic residents means their share of the electorate will only increase.
The same argument applies to younger voters. In 2008, 18 percent of the electorate was made up of voters between 18 and 29 years old. That's higher than the percentage has been in recent presidential years, when the youth vote has made up around 15 or 16 percent. Republicans believe the younger share of the electorate will slide slightly, and that Obama will win fewer of those voters anyway.
The manifestation of these disagreements is evident in polling weights. Most Republican pollsters are using something close to a 2008 turnout model, with the same percentage of white, black, and Hispanic voters as the electorate that first elected Obama. Most Democratic pollsters are a little more bullish on minority turnout, which helps explain some of the difference between the two sides.
Add in a population that's changing its habits and pollsters have to contend with additional confusing factors. The number of Americans without landline phones is growing, particularly among younger voters. Those voters are much more difficult to convince to complete a poll, surveyors say.
What concerns Republicans most is the fact that media polls seem to track more closely with Democratic internals than with the GOP's numbers. Internal surveys conducted for Republican candidates like George Allen in Virginia, Richard Mourdock in Indiana, and Josh Mandel in Ohio draw much rosier conclusions than polls conducted for their Democratic counterparts Tim Kaine, Rep. Joe Donnelly, and Sen. Sherrod Brown. And media surveys, at least in Virginia and Ohio, show Kaine and Brown winning (restrictive Indiana laws make polling prohibitively expensive there).
Republicans say their party is a victim of media bias, but not in the standard "Lamestream Media" sort of way. Pollsters on both sides try to persuade public surveyors that their voter-turnout models are more accurate reflections of what's going to happen on Election Day. This year, GOP pollsters and strategists believe those nonpartisan pollsters are adopting Democratic turnout models en masse.
Regardless of the cause, strategists on both sides acknowledge the difference in their internal polling. Republicans believe Democrats are counting far too much on low-propensity voters and a booming minority turnout that isn't going to materialize on Election Day. Democrats believe Republicans are hopelessly reliant on an electorate that looks far more like their party than the nation as a whole. The day after Election Day, somebody's pollsters are going to be proven seriously wrong.
Deep down, both parties secretly worry it's their side that is missing the boat.
I believe you are 90% correct, but there are a few more factors involved.
Cell phones rarely get polled.
Caller ID helps to filter out unwanted calls, and a lot of the poor who do get polled cannot afford it; they are more than willing to participate in the polls if they think it will get them more.
People with jobs are not at home during the day to answer the phone, and at night...they filter those calls from numbers they don't recognize.
In short, polling by phone has gotten to be the most inaccurate method available, yet it is the cheapest.
Frankly, I never thought that Bloomberg was going to be supporting anyone else. As for Governor Christie, let us put a little 'realpolitik' on the table. In spite of Christie's 2009 win, New Jersey is a deep blue Democrat state. Presumably Christie will want to run again in 2013, and after the shellacking given to NJ by Sandy, it is good politics to make nice with the man with the FEMA pen.
Just my opinion, obviously!
On the other hand you are onto something when it comes to members of groups who want their opinion to be more important than it really is ~ let's think about the GBLT, abortion rights and feminazi crowd. They want to get called by a pollster ~ and they answer every single time. Their party line talking points are firmly in their minds, so they have no problem answering the questions. The result is that they have a 100% response rate while everybody else has a 9% response rate. That makes them roughly 11 times more important than the others, per person contacted!
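The arithmetic behind that "11 times" claim can be sketched directly. This is a toy calculation using the commenter's hypothetical response rates (100% for a motivated bloc, 9% for everyone else), not figures from any real survey:

```python
# Toy calculation: if one group answers pollsters at a much higher rate
# than everyone else, each of its members is that many times more likely
# to land in the completed sample, per person contacted.
# Both rates below are the commenter's hypotheticals, not real data.

def oversampling_factor(group_rate, baseline_rate):
    """Ratio of a group's response rate to the baseline response rate."""
    return group_rate / baseline_rate

factor = oversampling_factor(1.00, 0.09)
print(round(factor, 1))
```

Unless the pollster re-weights the sample to correct for differential response, that group's opinions are overrepresented by exactly this factor.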
Recently the gurus behind the GBLT gay marriage initiatives have figured this out. They say the polls say everybody in America wants gay marriage yet every time they run a referendum they lose! Some have suggested duplicitous pollsters are misleading the GBLT crowd so they'll spend all their money and go away.
But you're absolutely correct in surmising that polling by phone is just about the worst method anyone could use. And it's no longer cheap when you need to call 100,000 people to get 9,000 answers so you can set up a baseline on demographics vis-à-vis an opinion on some issue.
I would always intentionally lie if I was ever polled.
Jersey is rarely hit by hurricanes ~ they are literally in shock.
You are right. I lived in Ocean City (NJ) for a year and then another year at Ft. Dix, so I know the state. As well, I have a cousin still living in Cape May and another with a vacation home in OC. Seeing those pictures made me very sad, but at least my relatives appear to have come through with losses but in good spirits!
“The secret to random sample surveys is to keep them random. Once you get into weighting, or attempting to fit results to a turnout model, they are no longer random.”
Absolutely true, but random samples will not work in this type of statistical study. A random sample from a population of humans will have a significant, and often hard to understand or measure, sample selection bias. Sample selection bias is the biggest factor in gutting an otherwise properly constructed statistical study. I remember at school many years ago we had a presentation from a visiting economist about a study she did on the benefits of a government program providing free prenatal vitamins. Early on in the presentation I raised my hand and asked whether she had accounted for sample selection bias (meaning that those who already would have cared about the health of their babies are the ones who would take the time to show up for the free vitamins). I was not trying to be rude; the question just sort of came out. Well, one could tell from the tone in the room for the rest of her presentation that we thought her study was essentially worthless.
A random sample would work if, for example, we want a sample from a series of products being produced at a manufacturing plant. In this case, either a random sample or a periodic sample would be fine.
However, a random sample of voters is not so easy. If we use phones, we will have a bias. If we find voters on the street, we will have a bias, etc. Therefore, the method is to divide the voting population into many sub-categories, get a random sample from each category, and then add the categories together with a weighted average. And then we have the rub: how does one allocate the weights to the various samples? This is where years of experience and prior data are vital to the poll. But still, accurately weighting something that will happen in the future is really not so easy to do.
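The divide-sample-and-reweight procedure described above can be sketched in a few lines. This is a minimal illustration with invented subgroup shares and turnout weights; none of the numbers come from an actual poll:

```python
# Minimal sketch of stratified polling: sample each subgroup randomly,
# then combine the subgroup results with turnout weights.
# Both the weights and the candidate shares below are invented.

def weighted_estimate(subgroups):
    """subgroups: list of (turnout_weight, candidate_share) pairs.
    Returns the weighted-average candidate share across subgroups."""
    total_weight = sum(w for w, _ in subgroups)
    return sum(w * share for w, share in subgroups) / total_weight

# Hypothetical turnout model: 74% white, 13% black, 9% Hispanic,
# 4% other, each paired with an invented candidate share.
model = [(0.74, 0.43), (0.13, 0.95), (0.09, 0.67), (0.04, 0.55)]
print(round(weighted_estimate(model), 3))
```

The "rub" the comment identifies lives entirely in the first element of each pair: two pollsters can agree on every within-group number and still publish different toplines if their turnout weights differ.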
The most accurate method is to get a random sample of voters as they are leaving a voting booth. Of course, even in this case we may have sample selection bias (a Republican voter at a Democratic box may, for example, not want to participate in the survey; or the voting pattern of early or absentee voters may be very different than the pattern on Election Day; etc.).
By the way, in our Texas county a young election worker for a local candidate worked up our early voting turnout numbers. By looking at the turnout at our local voting precinct level (voting box level), and comparing to prior turnout, we can tell that the precincts with a higher percentage of Republicans are turning out at a higher rate than the precincts with a lower percentage of Republicans.
Now, an older method with quite a bit of success was called Augury.
Supposedly based mostly on the flight of birds, the augur, or priest, would turn loose a bunch of birds from a box and watch which way they flew ~ although many have proposed it was more sophisticated than that based on the observation that if you keep a box of birds overnight they will defecate upon release from the box ~ as birds usually do when they take off to fly somewhere.
By doing this in a plaza filled with regular stones the Augur could simply walk around and note how many white spots were in which stones ~ and how they clustered, or didn't cluster. Direction of flight could readily be inferred this way as well ~ if questioned by higher authority.
The Auguries could be used to displace laws considered INAUSPICIOUS!
I believe my observation on the fall of the effluvia has probably not been associated with political prognostication before, but that could have been a bit of secret knowledge among the Augurs.
It's certainly not secret among pollsters ~ that the flow and accommodations of the BS may well be more important than the head counting.
But whatever, as soon as you stray from the question of randomness and introduce the value of eliminating bias from a random poll, you begin degrading the statistical validity of your poll ~ and become, to a degree, no different from the Roman Augurs!
The Roman Augurs over a period of time ~ centuries actually ~ sought to avoid being unduly influenced by NEGATIVE OUTCOMES which might show up from time to time as they applied their art to determine that which was Auspicious and that which was not.
There seem to have been some rules or observations built up that guided the Augur in dealing with the negative outcomes. To wit:
Against the negative auspicia oblativa, the admitted procedures included:
1. actively avoiding seeing them.
2. repudiare: refusing them through an interpretive sleight of hand.
3. non observare: claiming one had not paid attention to them.
4. naming something that in fact had not appeared.
5. choosing the time of the observation (tempestas) at one's will.
6. making a distinction between observation and formulation (renuntiatio).
7. acknowledging the presence of mistakes (vitia).
8. repeating the whole procedure.
I do believe you would recognize these principles at work in modern use of what had been random sample surveys of opinion ~ love that 'interpretive sleight of hand' ~ that sucker is still a major way of dealing with bad news ~ before the election anyway.
I’m worried about overconfidence. Some may not bother to go to the booths at all because they believe the polls show that Romney will win in a landslide and figure they don’t need to vote. I think polls should be outlawed 3 months before an election. The only purpose of polls is to discourage the other side from voting. We’re obsessed with polls. Just look at the breaking news sidebar. Half of it is nothing but polls.
I always lie when on the rare occasions I take a poll.
I’m a conservative Republican who is clearly going to vote for -bama and wants queer marriage. LOL. I believe, given the poll internals of how many “conservatives” are voting -bama that I must not be the only one like me.
You did bring up a very good point though, one I had never considered before: although the bulk of America answers polls at about a 9% rate, voters like the queer lobby and radical abortionists WANT to take part in expressing their feelings to pollsters, and are therefore on the order of 10x oversampled.
I have to remember this and internalize it. I imagine that the best pollsters already have that figured out, but they may not have any good way to re-normalize their polls even though they know it.
Very nice explanation of sample selection bias.
I would like to hear you discuss a little more how to balance the various samples in order to get an accurate reflection of reality.
This is not my area, but the way it is generally done is via proprietary models based on a very carefully guarded history, prior surveys and voting results, correlated with other voting patterns.
For your reading, I just found a very good article summarizing the process:
On a side note: I am convinced that many liberal pollsters are nothing more than political propaganda hacks. On the other hand, I am not sure what to think about various polls. Nate Silver, certainly no dummy but also a potential political hack, has a very accurate track record (and his few misses have been split about equally for and against Republicans), and I am concerned about his analysis. Then again, Nate could be completely genuine in his modeling, but could be missing differences in voter turnout and motivation.
Think of it this way: Say I run a paint department at a hardware store. The paint is all white. When a customer wants a certain color, I use a machine to add different formulas of pigments to make a particular color. Even if I enter the instructions into the machine correctly, if the machine (unknown to me) deposits more of a certain pigment than I thought, the outcome will not be as I expected. That's what polling is like: trying to accurately determine the fractional weighting among up to a hundred different pigments. Yes, within each pigment the sample is random, but those many results need to be mixed together, and it is in this weighted average where the practice is as much art as it is science.
I do know based on our local analysis that higher percentage R precincts seem to be voting at a relatively higher pace than the lower percentage R precincts (certainly a good bump for our local R candidates).
Thanks for the attempt to answer my question. I am sure that many other FReepers are as interested in this as I am.
I read that article you linked. I find it quite interesting in that it appears that pollsters have the same question I have.
Sounds like anyone who figures out a good way to re-normalize election polling properly will make a big name for themselves.
If we were dealing with a bunch of mechanical parts out of a stamping machine, or virtually any other physical system, that re-normalization is not usually very difficult.
Humans are a pain in the neck.
Very true. The lack of discussion of even an outside chance of a -bama landslide is one of the most telling parts of this subject. I am looking forward to a very good GOP day on Tuesday and Wednesday.
Sandy and Benghazi will also add to that very good day, casting a gossamer shadow over reality, though their effect may not be measurable.
I don’t know what it is exactly, but I’m getting some jitters here in the last few days. I guess it’s so important that the idea of Obama pulling it out gives me serious concerns.
Let’s hope your take on it and mine from a few days back prevail.