Free Republic

How was the Newsweek poll conducted? How do you interpret this data?
PR Newswire ^ | 10/03/2004 | Whitman

Posted on 10/03/2004 9:00:21 AM PDT by timbuck2

SAMPLE SIZE/MARGIN OF ERROR FOR DEBATE VIEWERS SUBGROUPS: 770 Debate viewers (those who say they watched at least some of the debate) (plus or minus 4.1)

369 Men (plus or minus 6) 401 Women (plus or minus 6)

265 Republicans (plus or minus 7) 274 Democrats (plus or minus 7) 215 Independents (plus or minus 8)

NOTE: Data are weighted so that sample demographics match Census Current Population Survey parameters for gender, age, education, race and region. Sample sizes listed above are unweighted and should NOT be used to compute percentages.

So how were the percentages generated?

(Excerpt) Read more at prnewswire.com ...


TOPICS: Your Opinion/Questions
KEYWORDS: bush; data; debate; kerry; newsweak; newsweek; polling; polls
To: Blake#1

I don't believe they are authentic. RAT operatives are masquerading as independents to move public opinion and seize a disproportionate platform. Cripes, watch ten minutes of "Washington Journal" on C-SPAN.


21 posted on 10/03/2004 9:25:10 AM PDT by Barlowmaker

To: timbuck2

http://www.nationalreview.com/comment/graham200410011024.asp This is MUST reading by Tim Graham at NR

"Shortly before the debate began, Newsweek national editor Jon Meacham suggested on MSNBC that journalists are tired of Bush being in the lead, and so will try to narrow the race. Meacham foresaw "the possibility that President Bush has peaked about a month too early. Because we all need a narrative to change." Chris Matthews asked: "Is that your prediction?" Meacham replied: "I think it's possible that we're gonna be sitting around saying, 'Well you know Kerry really surprised us.' Because in a way the imperative is to change the story." "


22 posted on 10/03/2004 9:25:12 AM PDT by SE Mom

To: Publius6961

Publius,
First of all, I believe Bush is still up significantly and did not suffer much damage from his performance, which I thought was fine. My point is that if we can hold the pollsters' feet to the fire, they will be less able to hoodwink the public and make them believe that the race is tied after 90 minutes of debate when in fact Bush is still way ahead. You know that the Dem base will get fired up again, fundraising will spike, and more ads will be run. These polls dictate what the MSM will write and only provide them cover to write their biased hit pieces designed to rally the Dem troops.

-T


23 posted on 10/03/2004 9:25:46 AM PDT by timbuck2 ("The true danger is when liberty is nibbled away, for expedients, and by parts." -Edmund Burke)

To: timbuck2

I wouldn't sweat this. The real story is in the state polls.


24 posted on 10/03/2004 9:26:40 AM PDT by meatloaf

To: Owen

Ah! Okay. It is starting to sink in. Only Rasmussen tosses sample responses until he gets what he believes to be the correct GOP/DEM/Indy ratio. Well, Rasmussen just shot up on my list of reputable polls. How can you have a spike in Dem response rate and then state that Bush has fallen behind? It is absolutely ridiculous!

-T


25 posted on 10/03/2004 9:30:26 AM PDT by timbuck2 ("The true danger is when liberty is nibbled away, for expedients, and by parts." -Edmund Burke)

To: timbuck2
Can somebody help me understand these sample

It is not uncommon to set quotas for certain demographic groups that don't necessarily represent the population as a whole. This is done so the groups can be compared to one another: the numbers in each comparison group must be large enough to allow statistically significant comparisons.

In order to extrapolate the data to the population as a whole, each record (respondent) is weighted up or down to reflect its group's actual proportion of the entire population, using census data.

For example, if I set a quota of 100 women and 100 men, and the actual proportion of the population is not 50/50, but 60/40, women respondents would be weighted 1.2 and men would be weighted .8.

When analyzing the data, and comparing women to men, the data is left unweighted. When looking at the entire population, the weights are applied.
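Strider's worked example can be sketched in a few lines of Python (a hypothetical illustration of quota weighting, not any pollster's actual code; the 55%/45% support figures are invented):

```python
# Hypothetical sketch of the quota-weighting example above:
# 100 women and 100 men are interviewed, but the target population
# is 60% women / 40% men, so each group is weighted up or down.
sample_counts = {"women": 100, "men": 100}
population_share = {"women": 0.60, "men": 0.40}

n = sum(sample_counts.values())
# weight = (population share) / (sample share) for each group
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}
print(weights)  # {'women': 1.2, 'men': 0.8}

# Invented support figures: 55% of women and 45% of men back candidate A.
# The weighted topline blends the groups by population share, not sample share.
support = {"women": 0.55, "men": 0.45}
weighted_n = {g: sample_counts[g] * weights[g] for g in sample_counts}
topline = (sum(support[g] * weighted_n[g] for g in weights)
           / sum(weighted_n.values()))
print(round(topline, 2))  # 0.51
```

Comparing women to men uses the unweighted columns; only the topline applies the weights, which is exactly why the unweighted sample sizes in the Newsweek release should not be used to compute percentages.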

That being said, I don't trust polls by Newsweak. Polls are controlled by the people that pay for them.

Their methodology, if applied correctly, appears to be sound, however.

26 posted on 10/03/2004 9:31:33 AM PDT by Strider

To: timbuck2

Newsweak pollsters reachable here:

http://www.psra.com


27 posted on 10/03/2004 9:33:49 AM PDT by jimbo123

To: timbuck2

From: http://polipundit.com/

Poll Methodology - A 2004 Guide


There has been intense interest in the polls this year, and the recent disagreement among them about the state of the race has only heightened the discussion. Some people like to support a poll with results they like, without any sort of examination of why that poll is different from others. And some reject polls on a charge of outright bias or prejudice, which I can understand, given the partisan comments from supposedly objective people like John Zogby and Larry Sabato, but I must caution readers to consider the evidence carefully before accepting or rejecting a poll.


Let’s start with the obvious; more information is better, especially if it is relevant to how the numbers were derived. By relevant, I mean two things: the information should show valid evidence to support the poll’s main conclusion, and the information should be consistent with past polls, so that trends and historical benchmarks may be seen. To that end, I found that in terms of methodology, we can separate the polls into three broad types – the polls which provide internal demographic data, the polls whose questions show the mood on the main issues, and those polls which refuse to provide internal data.


The best way to find out how the polls developed their methodologies is to look for that information. Some publish their methodologies at the bottom of their poll releases; others are so proud of their methodologies that they write up special articles to explain their process. Others did not have their methodologies handy, but responded when I asked them how they did their polling. And others, well, they were neither forthcoming nor cooperative, and that speaks for itself. This article allows you to get to know the polls all over again, this time starting from the inside. I figure this guide will help you figure out for yourself whose word is worth listening to, and who is nothing but hooey. I am listing the polls in alphabetical order. All telephone polls referenced employ Random-Digit-Dialing (RDD); RDD is used to pre-select area codes and exchanges, then a randomizer selects the last 3 or 4 digits, depending on the poll. When I say ‘pure’ RDD, I mean that the respondent pool is new for each poll; some polls appear to reuse an initial pool of respondents for future polling, and I will note this where it shows up. All references to “Margin of Error” reflect a standard 95% confidence level by the polls. When I reference ‘NCPP’, I mean the National Council on Public Polls, which publishes guidelines for demographic weighting and internal responsibility that it expects its members to follow. Another national group for pollsters is the American Association for Public Opinion Research (AAPOR), but they appear to be much smaller, and have looser standards than the NCPP. It’s worth noting, though, that neither the NCPP nor the AAPOR appears to have any deterrent in their policies; there is no specified penalty for not meeting their standards, nor any formal auditing authority. That, of course, is one reason I’m doing this review.
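For reference, the margins of error quoted throughout assume a simple random sample at 95% confidence; the standard worst-case formula (at p = 0.5) is 100 * 1.96 * sqrt(p*(1-p)/n). A quick sketch follows; note that published figures often run a bit higher, because weighting and clustering add a design effect that this simple formula ignores:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error, in percentage points,
    for a simple random sample of size n at 95% confidence."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Sample sizes typical of the polls reviewed below:
for n in (1000, 800, 500):
    print(n, round(margin_of_error(n), 1))
# 1000 -> 3.1, 800 -> 3.5, 500 -> 4.4
```

This is why a 1,000-respondent poll typically reports roughly +/- 3 points, and smaller subgroups report larger margins.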


ABC News/Washington Post: This agency uses a call center for its polling. The subcontractor at present is TNS of Horsham, Pa. The poll is performed by telephone, calling roughly 1,200 “randomly selected adults nationwide”, from which self-identified registered voters are polled for the report’s information. The respondent pool is pure RDD for each poll. ABC/WP says their Margin of Error is +/- 3 points. The ABC/WP poll cites results by gender, race, age, education, and sometimes also by income groups. Regarding the weighting of their poll data, ABC says, “Final data are weighted using demographic information from the Census to adjust for sampling and non-sampling deviations from population values. Respondents customarily are classified into one of 48 cells based on age, race, gender and education. Weights are assigned so the proportion in each of these 48 cells matches the actual population proportion according to the Census Bureau’s most recent Current Population Survey.” The weighting appears to be in line with NCPP guidelines.
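The 48-cell scheme ABC describes is ordinary cell-based post-stratification: each cell's weight is its Census share divided by its sample share. A hypothetical sketch with just 4 cells (gender crossed with 2 age bands) and invented proportions:

```python
from collections import Counter

# Hypothetical 4-cell version of ABC's described 48-cell weighting.
# All proportions below are invented for illustration.
respondents = (["F/18-49"] * 30 + ["F/50+"] * 30 +
               ["M/18-49"] * 20 + ["M/50+"] * 20)
census_share = {"F/18-49": 0.28, "F/50+": 0.24,
                "M/18-49": 0.27, "M/50+": 0.21}

n = len(respondents)
sample_share = {cell: c / n for cell, c in Counter(respondents).items()}
# weight = census share / sample share; every respondent in a cell shares it
cell_weight = {cell: census_share[cell] / sample_share[cell]
               for cell in census_share}
for cell in sorted(cell_weight):
    print(cell, round(cell_weight[cell], 2))
```

After weighting, each cell's share of the weighted sample matches the Census share exactly, which is what ABC means by the proportions "matching" the Current Population Survey.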


American Research Group: ARG’s methodology for national polling was not published, but from their primary polling in the spring, ARG stated “The results for this tracking survey are based on completed telephone interviews among a statewide random sample of likely primary voters in [the state]. Telephone households are selected by adding random digits to telephone exchanges selected in a random probability-proportionate-to-size sample of the state. Respondents in the telephone households are randomly selected and then screened for voter registration (including intent to register on or before primary day) and likelihood of voting in the primary.” On September 22, ARG released a nationwide compilation of state polls, which revealed they use a 53-47 weighting of women to men, and a party weighting of 41.4% Democrats, 35.5% Republicans, and 23.1% Independents. These do not conform to 2000 or 2002 exit polling, nor the 2000 Census, and are not in line with NCPP guidelines. ARG’s respondent pool may sometimes be pure RDD, but at other times appears to be a pool reserved from previous polls, in order to track possible opinion shifts in the same pool. ARG does not explain whether this is the case, and did not respond to a request for clarification.


Associated Press/Ipsos : The Associated Press Poll is conducted by Ipsos-Public Affairs. The poll is a telephone poll of randomly selected numbers, with a sample of roughly 1,500 adults nationwide, producing between 1,200 and 1,300 registered voters, whose responses provide the poll’s information, along with a smaller number of self-described ‘likely’ voters, defined as voters who voted in 2000 and rate themselves 8-10 on a 1-10 likelihood-of-voting scale, or who did not vote in 2000 but are a ‘10’ this year. The respondent pool is pure RDD. Ipsos weights its poll, but does not detail the breakdown in its Press Release or Questionnaire, though some demographic information was released in their latest poll. Instead, Ipsos concentrates on the trend of questions measuring the degree of support on key issues, such as Overall Leadership, Foreign Policy, the Economy, Domestic Issues, and Terrorism. Ipsos’ reported Margin of Error is +/- 2.5 points for adults, +/- 2.7 points for registered voters.


Ayres McHenry: “Ayres, McHenry, & Associates belongs to the American Association of Public Opinion Research, and the American Association of Political Consultants, where Ayres serves as a member of the Board of Directors.” That’s all they have. Nothing about weighting or breakdown of samples, which is contrary to the AAPOR’s written Code of Professional Ethics and Practice. Ayres McHenry did not respond to a request for more information. As a Republican-sponsored firm that does not provide any supporting evidence for its statements, this agency should not generally be considered a reliable indicator of voters’ true opinion.


Battleground Poll : The Battleground Poll uses two firms for its interviews and analysis: the Tarrance Group and Lake, Snell, & Perry. The Battleground Polls started in 1991, so they have some history to track. The Battleground Poll did not publish its methodology, but the Tarrance Group was kind enough to answer a request for more information (hat tip to Brian Nienaber); Lake, Snell, & Perry did not respond to a request for information. Overall, the Battleground Poll uses a “stratified” sample design, and pure RDD for respondent pooling. Battleground explains their weighting thus: “quotas are set by state and by gender based on voter registration statistics or voting age population, where applicable. Prior election turnout statistics are also used to set these quotas. For the 2004 Battlegrounds, we have been applying a weight to the demographics of race and party identification. Race is weighted to White=80%, African Americans=10%, Hispanics=6%, and Other races=4%. Party identification is weighted to even with Republican=42.3%, Democrat=42.3%, and Independent=15.4%.” Note that the demographics are consistent with the 2000 Census, and the party weighting presumes parity. Battleground releases the demographic breakdown of their respondents, but does not publish polling results by demographic group. Like the Associated Press, most questions reflect a trend of national mood on the major issues. The sample is 1,000 registered voters who self-describe as “likely”. Tarrance estimates their Margin of Error to be +/- 3.1 points.


CBS News, and CBS News/NY Times : Telephone interviews with adults in the continental United States. Phone calls are randomly generated within system-selected area codes and exchanges. CBS goes to some length to brag about their methodology, and you know what? They should. While CBS and the NY Times tend to over-weight the poll in favor of Democrats, their demographics not only follow NCPP guidelines by matching the 2000 Census, they also publish their demographics regularly, and have for the last five years. If you don’t like their numbers, at least you can take them apart to see where they came from, and this with no subscription fee or doubletalk to hide the trends. Obviously, the “60 Minutes” guys and Dan Rather have nothing to do with the polling at CBS. The polls are consistent and complete, and frankly, very impressive in their detail and history. CBS/NYT generally calls about 1,000 adults in each survey, with around 78-80% registered voters. The respondent pool is pure RDD. Their cited Margin of Error is +/- 3 points.


CNN/USA Today/Gallup: This poll uses random telephone interviews, with around 1,000 adults on average, around 76-80% of them registered voters. Announced Margin of Error is +/- 4 points. Demographic details are available, but generally only to Gallup subscribers. The weighting matches NCPP guidelines. The respondent pool is pure RDD.


Democracy Corps : This Democrat-sponsored polling agency (James Carville is one of the owners, which should tell you a lot) uses Greenberg, Quinlan, Rosner (GQR) for its interview sampling. They do not respond to queries, and do not explain their methodology. Note that NPR uses the same sub-contractor for its polling. This agency should be recognized as partisan and biased by design.


Fox News/Opinion Dynamics : Opinion Dynamics Corporation conducts a national telephone poll of 1,000 self-described ‘likely voters’ from random contacts. Fox includes internal details by gender and party affiliation, but not race. Their website says “Generally, Fox News poll results are not weighted. The national probability sample, if conducted properly, should accurately reflect national attitudes. However, particularly because the survey is often conducted over only two nights (limiting the opportunity for callbacks), some demographic deviation is possible. Opinion Dynamics Corporation has a constantly updated database of demographic information about the national samples and, if one should deviate significantly from past averages, weighting is used to bring the sample into conformity with other samples”. In English, that suggests that Fox will weight some polls, but not others, which is a strike against consistency. There is no information to determine whether the respondent pool is pure RDD or pre-selected. The same website admits that Fox weights their polls by gender, 47% Men and 53% Women, even though this is not in line with NCPP guidelines or Census data, nor consistent with exit polls from past elections. Neither Fox News nor Opinion Dynamics responded to a request for clarification.


Gallup: The gold standard of opinion polling. Gallup presents demographic and trend data for every poll they have anything to do with. Whether on their own or in combination with other groups (the CNN/USA Today/Gallup poll, for example), Gallup insists on uniform procedures to ensure consistency. Their respondent pool is pure RDD for the Presidential Trial Heats. Gallup weights their polls in line with NCPP guidelines, and releases internal data on race, gender, party affiliation, age, region, education, economic strata, union/non-union, veteran/non-veteran, religious preference, and sexual orientation. Gallup polls are random telephone interviews, with around 1,000 adults on average, around 76-80% of them registered voters. Announced Margin of Error is +/- 4 points. The downside is that the demographic details are generally only available to Gallup subscribers. With a 69-year track record, Gallup is able to show an impressive record for their predictions and tracking.


Harris: The Harris Poll is one of the oldest polls in the nation, after Gallup. For some reason, though, Harris is not nearly as successful as their older sibling, and I think I know why. They like to ask questions, but they don’t answer them. The Harris Poll is a random telephone poll, as most of the polls are, interviewing roughly 1,000 adults nationwide in each poll, and producing around 80% registered voters from that pool. The respondent pool for their telephone interviews is pure RDD. Harris also has an Interactive Poll, but there is no established benchmark for the accuracy of the Interactive poll, nor do they explain their methodology for it; I suspect it is similar to their telephone poll, since they produce similar results, but cannot confirm this. Harris weights their responses by the NCPP guidelines, for age, gender, race, education, number of adults, number of voice/telephone lines in the household, region and size of place, in order to “align them with their actual proportions in the population”. Harris cites a +/- 3 point Margin of Error. Unfortunately, when it comes to releasing their information, well, they don’t. I’ve been part of the Interactive polling as a respondent, and even then, they are parsimonious with hard data. From the lack of response I’ve had from them, I get the strong impression they are all about chasing the corporate patrons, and only put out the occasional public poll to keep their name in the press. OK, that’s their right, but other polls can chase sponsors without looking like the Information Age version of Ebenezer Scrooge (pre-Ghost Visit). My advice? Ignore these guys, unless they start putting some hard data behind the headlines in their releases.


Investor’s Business Daily/Christian Science Monitor: The Christian Science Monitor is a long-established, well-respected name, but they have no experience in polling. Investor’s Business Daily is a publication I’d never heard of, until they showed up with their releases. They began polling in February 2001, using something they called “Indices” for various factors they considered important. The Indices are developed using random nationwide telephone interviews with approximately 900 adults each month. The respondent pool appears to be pure RDD, but there is no confirmation. They seem very impressed with themselves. I’m not impressed. Since they don’t release much hard data at all, and pretty much diddly to support their claims, and their ‘indices’ don’t seem to follow any established method for determining public opinion, my opinion of IBD is rather like the old Monty Python skit about an especially bad brand of wine: “This is not a poll for enjoying, it’s a poll for laying down, and avoiding”.


Investor’s Business Daily/TIPP : See Investor’s Business Daily/Christian Science Monitor, above.


LA Times : The Los Angeles Times wants to be a big-time newspaper. I write it that way, because while they want the glory, they don’t seem to feel they should have to earn it. The LA Times uses telephone interviews nationwide, of at least 1,500 adults, using pure RDD sampling. They produce a subset of registered voters, at around 77-80% of the adult number. The LA Times says the samples are “weighted slightly to conform with their respective census figures for gender, race, age, and education”, which may or may not be in alignment with NCPP guidelines. The Times’ announced Margin of Error is +/- 3 points. The Times releases details by party alignment and gender, and appears to over-weight Democrats.


Marist College Institute for Public Opinion: Marist is a college up in New York that produces polls on the Presidency. Marist does not release many details, however, including its methodology. Their website notes that “MIPO adheres to the current Code of the National Council on Public Polls and the AAPOR Code of Professional Ethics and Practices”, which at least suggests they use the 2000 Census for their weights, although this does not speak to party alignment or sampling methodology. They haven’t put anything out for a long time, so it may not matter, but if they pop up again, the fact that they don’t back up their statements with supporting data should be a warning sign.


NBC News: NBC News uses a sub-contractor for its polling. Princeton Survey Research Associates (PSRA), which NBC contracts for some of its polls, was kind enough to provide specific details by email at my request (hat tip to Evans Witt). For the July poll taken when Kerry picked Edwards as his running mate, NBC used Braun Research, Inc. for the interviews, with a sample designed by Survey Sampling International, LLC. The sample was relatively small (504 registered voters), but used pure RDD. NBC says that “statistical results are weighted to correct known demographic discrepancies”, and that the weighting “parameters came from a special analysis of the Census Bureau’s 2003 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States that had a telephone”. NBC estimates their Margin of Error at +/- 5 points, due to the smaller sample size compared to normal poll pools. NBC does not release demographic breakdowns of votes in their polls. NBC also examined their response rate, which is an often overlooked factor in poll analysis. NBC states “the response rate estimates the fraction of all eligible respondents in the sample that were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:
Contact rate – the proportion of working numbers where a request for interview was made – of 47 percent
Cooperation rate – the proportion of contacted numbers where a consent for interview was at least initially obtained, versus those refused – of 32 percent
Completion rate – the proportion of initially cooperating and eligible interviews that were completed – of 99 percent
Thus the response rate for this survey was 15 percent.”
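The PSRA response-rate arithmetic quoted above is just the product of the three component rates, which is easy to verify:

```python
# Product of the three component rates quoted by PSRA/NBC.
contact = 0.47      # working numbers where an interview was requested
cooperation = 0.32  # contacts where consent was at least initially obtained
completion = 0.99   # cooperating, eligible interviews completed
response_rate = contact * cooperation * completion
print(f"{response_rate:.0%}")  # 15%
```

A 15% response rate means the weighting has to do a lot of work to make the completed interviews representative of the electorate, which is why disclosure of these component rates matters.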


Newsweek : Like NBC, Newsweek has also used Princeton Survey Research Associates International (PSRA) to do their polls, and again, they are pure RDD telephone interviews, of roughly 1,000 registered voters nationally. They seem to weight by NCPP guidelines. Newsweek did not publish the response rates from respondents, but they are very good about including the demographic response in their releases, including party support, gender, non-whites, geography, and by age groups. Newsweek reports their Margin of Error to be +/- 4 points.


NPR-POS/GQR : NPR uses two sub-contractors for its polls: Greenberg Quinlan Rosner Research (GQR) and Public Opinion Strategies (POS). GQR also does work for the Democracy Corps firm, indicating a possible loose relation between the two polls. NPR presents its results for “likely voters”, defined as “registered voters, voted in the 2000 presidential election or the 2002 congressional elections (or were not eligible) and indicated they were almost certain or certain to vote in 2004”. Pure RDD was used for the pool selection. GQR interviews around 800 voters, and reports a Margin of Error of +/- 3.5 points. NPR does not release demographic responses, and did not respond to a request for further information.


Pew Research Center: Pew conducts its research using the same sub-contractor as NBC News and Newsweek, Princeton Survey Research Associates International. PSRA draws a pure RDD pool of respondents, interviewing a national sample of adults by telephone, roughly 1,000 to 2,000 respondents, of which 78-80% are registered voters. As with other PSRA work, it appears NCPP guidelines are followed for weighting. Pew publishes an extensive report, reflecting not only the national mood on key issues, but also responses by party, gender, age, and regional groups. Like the CBS News poll, I find the political weighting a little bit off, but I can’t complain about their work ethic or standards. Pew is very consistent, and is particularly useful for measuring shifts in demographic trends. Pew estimates their overall Margin of Error at +/- 2.5 points, and +/- 3.5 points for registered voters.


Quinnipiac University: This school in Connecticut performs polls on “politics and public policy in New York, New Jersey, and Connecticut”, as well as the occasional national poll, such as the Presidential Election. They use pure RDD with their on-campus Institute to contact roughly 1,000 registered voters or more nationally by telephone, over a five or six day period. They release results by overall weighted response, party affiliation, by gender, and by black/white racial group responses. Quinnipiac does not detail their methodology for weighting demographics, and did not respond to a request for more information. Quinnipiac estimates their Margin of Error at +/- 3.1 points, less than that if the sample size is larger.


Rasmussen: Rasmussen Research performs more national polls than anyone else right now, with a poll taken every day. Unfortunately, no methodology is released to the general public: not the size of the respondent pool, nor whether the sample is randomly developed; no weighting method is cited; and there is no breakdown of respondents’ answers that might allow analysts to compare Rasmussen’s results with anyone else’s. Scott was kind enough to respond to a request for more information, but only to say that he is very busy right now and will answer later. By the time this article went to publication, no information on his methodology had been provided, so I must regard this poll as unsupported in its claims. There is no evidence to confirm whether the responses are weighted, and if so how, or whether any standardized methods are employed in this poll.


Survey USA: Survey USA is a unique polling agency. On the one hand, they do not perform national polls on the Presidential race, yet they poll in almost every state on the Presidential race. Survey USA has been around since 1992, and they love to punch out state polls; Survey USA and Zogby are in a horse race over who will put out the most state polls this year. I also included Survey USA in this list because they have strong opinions about polling methodology, and they printed an extensive article, far too long for me to copy here, so read it here.


Survey USA uses pure RDD for their telephone polls, usually between 500 and 1,000 self-identified “likely voters”. While Survey USA does not define “Likely Voter” in their methodology, they do take pains to emphasize that their polls “are conducted in the voice of a professional announcer. SurveyUSA is the first research company to appreciate that opinion research can be made more affordable, more consistent and in some ways more accurate by eliminating the single largest cost of conducting research, and a possible source of bias: the human interviewer.” This means they use an automated voice, which is certainly original. Survey USA opines that human error in pronunciation, diction and unintended inflection leads to flaws in the voter response. Survey USA uses weighting in line with NCPP guidelines. The reports are specific to states, but lack demographic breakdowns or votes by demographic group. Survey USA estimates their Margin of Error to be +/- 4.5 points.


It’s interesting to note three additional comments made by Survey USA. First, Survey USA makes a point of the need to verify results, disparaging “call-in” polls as unscientific, and strongly suggests Internet polling is about as useless as the “call-in” polls. Second, Survey USA notes that “only a few large research companies employ their own telephone interviewers. Almost all small pollsters, and even some of the nation’s most prominent firms, outsource all of their interviewing to a third party.” This appears to imply that contractors are not as valid as independent firms, but from my review of the polls, a number of the contractors are equal or superior to better-known established polls, Princeton Survey Research Associates International in particular.


Finally, Survey USA mentioned a practice I had heard about, but which is impossible to prove: “curbstoning”. This is, as Survey USA explains, where a pollster “may not interview a respondent at all, but just make up the answers to questions”. It’s rare, says Survey USA, but the problem is, unless you check your interviewers carefully, you really don’t know whether they are putting down the real response, or what they think their boss wants to hear. I agree that this practice is probably not very common, as I believe poll clients really do want honest numbers so they can see where they stand, but it is important to recognize that the problem exists.


TIME : TIME magazine hires a contractor, Schulman, Ronca, & Bucuvalas (SRBI), to perform their polling, including the interviews. They average roughly 1,000 registered voters, and 850-900 self-described “likely voters”. Pure RDD is used for the contact. SRBI follows NCPP guidelines for demographics and weights party affiliation as follows: likely voters, 34% Republican, 35% Democrat, 22% Independent; registered voters, 31% Republican, 32% Democrat, 26% Independent, which shows a rough parity. TIME does not publish results by demographic group, but instead measures the mood on key questions, reflecting trends by asking the same consistent questions. SRBI estimates their Margin of Error at +/- 3 points for registered voters, +/- 4 points for likely voters.


Wall Street Journal: The Wall Street Journal does not do its own polling, but co-sponsors polls with other groups. Earlier this year, the WSJ was partnered with NBC News, but is now partnered with Zogby. No additional information was available from the Journal.


Zogby: Back in 1996, pollster Zogby hit the bullseye in predicting the results of the Presidential election. In 2000, they were close again, though their aggregate error tied them with 5 other national polls. In 2002, Zogby appeared to show a lean in favor of the Democrats, and he was way off in his mid-term election predictions. This year, at the end of the spring, John Zogby actually came out and predicted John Kerry would win the election, which appeared to indicate his bias had reached the point of full-blown partisanship against the President, reflected in a growing number of opinions made out of personal preference rather than on the evidence. Zogby’s refusal to show his work only magnifies the apparent distortion of his results.


Zogby runs two polls: a telephone poll and an Interactive Internet poll. Unlike almost every other poll, Zogby’s telephone poll is not RDD. Zogby describes his list as follows: “The majority of telephone lists for polls and surveys are produced in the IT department at Zogby International. Vendor-supplied lists are used for regions with complicated specifications, e.g., some Congressional Districts. Customer-supplied lists are used for special projects like customer satisfaction surveys and organization membership surveys. Telephone lists generated in our IT department are called from the 2002 version of a nationally published set of phone CDs of listed households, ordered by telephone number. Residential (or business) addresses are selected and then coded by region, where applicable. An appropriate replicate is generated from this parent list, applying the replicate algorithm repeatedly with a very large parent list, e.g., all of the US. Acquired lists are tested for duplicates, coded for region, tested for regional coverage, and ordered by telephone, as needed.” Zogby notes that “regional quotas are employed to ensure adequate coverage nationwide.” That is, Zogby takes pains to ensure that his respondent pool is not random.


As for his weighting, Zogby states “Reported frequencies and crosstabs are weighted using the appropriate demographic profile to provide a sample that best represents the targeted population from which the sample is drawn from. The proportions comprising the demographic profile are compiled from historical exit poll data, census data, and from Zogby International survey data.”


In other words, Zogby uses his own polls to drive some of his demographic parameters, a practice not approved, much less recommended, by either the NCPP or the AAPOR.
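For readers unfamiliar with how demographic weighting works in general, here is a minimal sketch of post-stratification: each group's weight is its target share divided by its share of the raw sample. The group names and numbers below are invented for illustration (the counts echo subgroup sizes mentioned earlier in this thread); no pollster's actual profile is being reproduced.

```python
# Hypothetical unweighted sample counts by party (invented illustration).
sample_counts = {"Republican": 265, "Democrat": 274, "Independent": 215}
# Hypothetical target population shares the pollster wants to match.
target_share = {"Republican": 0.34, "Democrat": 0.35, "Independent": 0.31}

total = sum(sample_counts.values())  # 754 respondents

# weight = target share / observed sample share
weights = {g: target_share[g] / (sample_counts[g] / total)
           for g in sample_counts}

# After weighting, each group's effective count equals its target share of the total.
for g, w in weights.items():
    print(g, round(w, 2), round(w * sample_counts[g]))
```

The controversy in the post above is not with this mechanism, which is standard, but with where the target shares come from: census figures and exit polls are conventional sources, while feeding a pollster's own prior surveys back in is not.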


All in all, Zogby’s habit of confusing his personal opinion with data-driven conclusions, his admitted practice of manipulating the respondent pool and his demographic weights by standards not accepted anywhere else, and his mixing of Internet polls with telephone interview results force me to reject his polls as unacceptable; they simply cannot be verified. I strongly warn the reader that there is no established benchmark for the Zogby reports, even against previous Zogby polls, because he has changed his practices over his own history.


Except for some specific polls whose practices earned remarks for their excellence or for a distinct lack of it, I have tried not to rank or grade the polls. I would also recommend that readers go through the polls themselves to determine which is most thorough in its work and results. But hopefully this guide will help sort out who is chasing the money and who is serious about their work.



28 posted on 10/03/2004 9:37:32 AM PDT by conservativecorner
[ Post Reply | Private Reply | To 1 | View Replies]

To: Owen

Rasmussen: Rasmussen Research performs more national polls than anyone else right now, with a poll taken every day. Unfortunately, no methodology is released to the general public: not the size of the respondent pool, nor whether the sample is randomly developed; no weighting method is cited; and there is no breakdown of respondents’ answers that might allow analysts to compare Rasmussen’s results with anyone else’s. Scott was kind enough to respond to a request for more information, but only to say that he is very busy right now and will answer later. By the time this article went to press, no information on his methodology had been provided, so I must regard this poll as unsupported in its claims. There is no evidence to confirm whether the response is weighted, and if so how, or whether any standardized methods are employed in this poll.


29 posted on 10/03/2004 9:39:05 AM PDT by conservativecorner
[ Post Reply | Private Reply | To 11 | View Replies]

To: conservativecorner

Great info. Thanks.


30 posted on 10/03/2004 9:39:44 AM PDT by Barlowmaker
[ Post Reply | Private Reply | To 28 | View Replies]

To: timbuck2

Re: Reflection of Census Data

Don't know, but I did find a compelling reason to believe the poll is more skewed than advertised.

Here is the sequence to open the call:
- Call the person.
- Ask them if they are a registered voter.
- Believe them.

Newsweek:
- Called 1144 people.
- 1013 said they are registered.
- Newsweek believed them.

Results:
- About 12% of respondents said they are NOT registered.
- In reality, according to the 2000 Census about 30% of the 18+ population is not registered:

http://64.233.167.104/search?q=cache:dxrXPUtcJJcJ:www.census.gov/prod/2002pubs/p20-542.pdf+registered+voters+as+Percentage+of+adult+population&hl=en
(scroll down to Page 6 of the document)

Newsweek's sample is FATALLY FLAWED, because statistically 18% (30% reality minus 12% reported) of the total sample of 1144 who said they were registered really aren't (or a little over 200 people).

Now you might argue that Newsweek always has this problem with their polls, which would be true. But by having their poll concentrated on the three West coast states (they had 2 hours to call the West Coast and only one to call the Mountain states), they almost had to end up with a sample with a higher percentage of minorities. That same Census bureau source shows that registration rates among minorities are less than the 70% nationwide registration rate (67.5% for Blacks, 52% for Asians, 57% for Hispanics). SO THEY MOST LIKELY SPOKE TO EVEN MORE THAN THE USUAL NUMBER OF LYING UNREGISTERED VOTERS (people who said they are registered but aren't).
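The back-of-envelope arithmetic in this post can be checked in a few lines. The 30% figure is the Census number cited above; this is a rough consistency check, not a formal estimate of misreporting.

```python
called = 1144            # total respondents contacted
said_registered = 1013   # respondents who claimed to be registered

reported_unregistered = 1 - said_registered / called   # share admitting non-registration
census_unregistered = 0.30                             # ~30% of 18+ per the 2000 Census figure cited

gap = census_unregistered - reported_unregistered      # unexplained shortfall
suspect = gap * called                                 # implied over-claimers in the sample

print(round(100 * reported_unregistered, 1))  # about 11.5%
print(round(100 * gap, 1))                    # about 18.5 points
print(round(suspect))                         # a little over 200 people
```

This matches the post's "a little over 200 people" claim, under the strong assumption that the sample should mirror the national registration rate.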

All in all, this Newsweek poll is a disgraceful, screwed-up mess (even more so than usual), and even with all the screw-ups, as noted in #10 above, BUSH's sample-adjusted lead is GREATER than it was three weeks ago.


31 posted on 10/03/2004 9:41:24 AM PDT by litany_of_lies
[ Post Reply | Private Reply | To 12 | View Replies]

To: litany_of_lies
Newsweek's sample is FATALLY FLAWED, because statistically 18% (30% reality minus 12% reported) of the total sample of 1144 who said they were registered really aren't (or a little over 200 people).

Wow!

You just made me feel a whole lot better about this poll.

How quickly did they conduct this poll? There is a lot of pressure on interviewers to get completes, especially when the poll has to be done quickly.

A scenario where the interviewer puts the respondent in the poll whether they say they are registered or not is not hard to believe. I've seen it.

A percentage of the calls are supposed to be monitored by the shift supervisor, but time pressures lead to problems with the results. That's a fact of life in the research business.

32 posted on 10/03/2004 9:54:28 AM PDT by Strider
[ Post Reply | Private Reply | To 31 | View Replies]

To: conservativecorner

I think Rasmussen responded after publication. Dales bought the Rasmussen premium service and says it is legit methodology.


33 posted on 10/03/2004 10:01:30 AM PDT by Owen
[ Post Reply | Private Reply | To 29 | View Replies]

To: Strider

Having done some telemarketing in the past to get people to try to come to focus groups, I can tell you there is a great deal of pressure to find respondents. In our case, we bent the rules or led the person we called on a little bit to get them to say they were a user of a certain product or whatever.

I would think there would be a lot of pressure to take a person at their word when they say they are registered. I think there is also an "embarrassment factor," where people don't want to admit they aren't registered, plus an "importance factor," where people who know they wouldn't otherwise be eligible to give their opinion lie about their registration status so they can (and feel important).


34 posted on 10/03/2004 10:07:27 AM PDT by litany_of_lies
[ Post Reply | Private Reply | To 32 | View Replies]

To: litany_of_lies
Good points.

Yes, focus group recruiting is the worst. There are a lot of short cuts taken. I quit bidding on those contracts, because I couldn't do it right and compete on price with those who would cut corners.

35 posted on 10/03/2004 10:12:33 AM PDT by Strider
[ Post Reply | Private Reply | To 34 | View Replies]

To: Strider

Interesting Strider. And I assume you are saying that Newsweek did not apply any weightings?

-T


36 posted on 10/03/2004 10:15:01 AM PDT by timbuck2 ("The true danger is when liberty is nibbled away, for expedients, and by parts." -Edmund Burke)
[ Post Reply | Private Reply | To 26 | View Replies]

To: timbuck2
I don't know exactly what they did. You can proceed in a number of ways.

The only thing that absolutely has to be done is to weight back to the population as a whole, when looking at the total column, if you over or under sample certain groups.

37 posted on 10/03/2004 10:19:28 AM PDT by Strider
[ Post Reply | Private Reply | To 36 | View Replies]

To: conservativecorner

Just a general comment:

An average of all the polls will likely give you the best snapshot at any given time. Statistical studies have shown that an average of several methods yields the most accurate results. RealClearPolitics uses this approach.
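One intuition for why averaging helps: if several polls were independent and unbiased, pooling them would behave roughly like one poll with the combined sample size, shrinking the margin of error. Real polls share house effects and methodology quirks, so this sketch overstates the benefit; it is an idealized illustration only.

```python
import math

def moe_95(n):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return 1.96 * math.sqrt(0.25 / n)

single = moe_95(1000)        # one poll of 1,000
averaged = moe_95(5 * 1000)  # five pooled polls of 1,000 each (idealized)
print(round(100 * single, 1), round(100 * averaged, 1))
```

In practice the averaged figure is a floor, not an achievable error, because correlated biases across pollsters do not cancel.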

-T


38 posted on 10/03/2004 10:23:01 AM PDT by timbuck2 ("The true danger is when liberty is nibbled away, for expedients, and by parts." -Edmund Burke)
[ Post Reply | Private Reply | To 28 | View Replies]

To: timbuck2
Of course...MSM are working to make skerry look like he's got a chance. These numbers are skewed bigtime. However...this will NOT help skerry and the DNC...it will help the Republicans. If Republicans and others think for one little minute that skerry could win...you'll see more and more of them voting!!!

Appears the RATS are stupid...if they think this helps their guy....they are sadly mistaken... ;o)

39 posted on 10/03/2004 10:24:44 AM PDT by shield (The Greatest Scientific Discoveries of the Century Reveal God!!!! by Dr. H. Ross, Astrophysicist)
[ Post Reply | Private Reply | To 1 | View Replies]

To: timbuck2
How was the Newsweek poll conducted?

It was conducted like this -
In the editorial department @ Newsweak:
Editor to staff: "We need to continue to manufacture stories that will sensationalize and keep the American sheeple in suspense. This is how we get our ratings and sell ragazines. Now go out there and make it happen!"

Staff:"We're on it boss. When do we get a raise?"

40 posted on 10/03/2004 10:35:19 AM PDT by slimer
[ Post Reply | Private Reply | To 1 | View Replies]

