Well, if you are Fox News looking only at Rasmussen.
The reason the "margin of error" is a fool's hope in this case is that it decreases as the number of polls and the number of people polled go up. Even if every one of those polls has a margin of error of, say, 4 points, the average of all of the polls has a much lower margin of error.
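Just to put rough numbers on that, here is a minimal sketch of the arithmetic, assuming simple random samples at 95% confidence; the poll sizes are made up:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a simple random sample of size n,
    # evaluated at the worst-case proportion p = 0.5
    return z * math.sqrt(p * (1 - p) / n)

# One hypothetical poll of 600 voters vs. five such polls pooled together
print(f"single poll of 600:    +/-{margin_of_error(600):.1%}")   # ~4.0%
print(f"five polls, n = 3000:  +/-{margin_of_error(3000):.1%}")  # ~1.8%
```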
The error is cumulative if you are going to try to lump them all together. You don't gain certainty by grouping polls together and averaging the error for each. Averaging error is the real fool's hope.
Also, what % of polls show the incumbent with 50% or more? That is telling in and of itself.
Sorry, but you're simply mistaken.
There is a degree of correlation of error among polls, so it's impossible to say with certainty exactly what the reduction of error is when there are multiple polls. But the more polls there are that point in a given direction, the less chance there is that all of them are off by a significant amount in the other direction.
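As a rough illustration of that last point, with made-up numbers and the (optimistic) assumption that the polls are fully independent:

```python
# Suppose each poll independently has a 16% chance (roughly one standard
# error) of missing in one particular direction. The chance that six
# independent polls ALL miss in that same direction is tiny -- though
# correlated errors among pollsters would raise it.
p_miss_one_way = 0.16
print(p_miss_one_way ** 6)  # ~1.7e-05
```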
Charles is correct and you are absolutely wrong. The margin of error is effectively determined by the number of people in the sample. Put simply, multiple polls reduce the margin of error due to the greater sample size.
Each poll is a different event with separate criteria. You can't lump them together and average the error and call that your overall error. By doing that you are saying you have more certainty, when you don't. You have artificially reduced the error you are likely to encounter. You are wrong, sir.
In particular, I've looked at all states in our database in which there were at least three distinct polling firms that conducted surveys in the window between 10 days and three weeks before the election. Like Real Clear Politics, I used only the most recent poll (the one closest to the 10-day cutoff) if the polling firm surveyed the state multiple times during this period. I used the version of the poll among likely voters if it was available, defaulting to registered voter numbers otherwise.
In the table, I've listed all cases in which the race was within the single digits in the polling average. If you focus on those cases where a candidate held a lead of two to three percentage points, he won the state in six out of six cases, although the sample size was small.
Historically, this two- to three-point range has been something of an inflection point. Poll leads of 1.5 percentage points or less have been very tenuous and have not conveyed much advantage.
On the other hand, there was only one instance in the database where a candidate lost a state when he held a lead of more than 3.5 points in the polling average at this point in time. (Bill Clinton, in 1992, lost Texas despite leading George H.W. Bush there by that margin.)
It is possible to generalize these findings by means of a probit regression model, where the independent variable is the candidate's lead in the polling average and the dependent one is whether he won or lost the state.
That analysis implies that a lead of 2.4 percent in the polling average (Mr. Obama's current edge in Ohio in the FiveThirtyEight model) would translate to a win in the state 82 percent of the time. This percentage is similar to, but slightly higher than, the FiveThirtyEight forecast, which gave Mr. Obama a 76 percent chance of winning Ohio as of Friday.
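For concreteness, here is what that kind of probit model looks like in Python with statsmodels. The data below are fabricated stand-ins, since the historical database itself isn't reproduced in the thread:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Fabricated illustration data: 'lead' is a candidate's lead in the state
# polling average (points), 'won' is whether he carried the state. The
# real model would be fit on the historical database described above.
lead = rng.uniform(-10, 10, 500)
won = (lead + rng.normal(0, 3, 500) > 0).astype(int)

# Probit regression: independent variable = polling lead,
# dependent variable = win/loss
model = sm.Probit(won, sm.add_constant(lead)).fit(disp=0)

# Implied probability of winning given a 2.4-point lead
print(model.predict([[1.0, 2.4]])[0])
```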
It is important to emphasize that this analysis covers cases in which there were at least three distinct polling firms active in a state; you will find more frequent misses in cases where there were just one or two polls.
In Ohio, however, there are not just three polls: roughly a dozen polling firms, rather, have surveyed the state over the past 10 days.
There are no precedents in the database for a candidate losing with a two- or three-point lead in a state when the polling volume was that rich.
Instead, the biggest upsets in states with at least five polls in the average came in 2000, when George W. Bush beat Al Gore in Florida, and in 2008, when John McCain beat Mr. Obama in Missouri. Mr. Obama and Mr. Gore had held leads of 1.3 percentage points in the polling averages of those states.
Sorry, you are the one mistaken. Having multiple polls, each with a result within the error of the poll, does not make the lot of them point with any more certainty to one outcome or the other. All it does is come to an average with that much more uncertainty.
http://krugman.blogs.nytimes.com/2012/09/11/margin-of-error-error/
One point is that the margin of error is a 95 percent confidence interval, which is a pretty strict test. But anyway, the key point missing here is that there have been multiple polls showing an Obama bounce; six if I have it right, the four trackers plus CNN and now ABC. This means that in effect we have a much larger sample than in any one poll, and hence a much smaller margin of error.
http://web.mit.edu/newsoffice/2012/explained-margin-of-error-polls-1031.html
Overall, Berinsky counsels, the best strategy is not to focus on any particular poll, but to look at a rigorous aggregation of poll results, such as those conducted by Pollster.com or Real Clear Politics. Such averages smooth out the variations and errors that may exist in any given poll or sample. In the 2008 election, he says, a simple average pretty much gave you the [actual] result.
"If you average it all together," Berinsky adds, "it all works out."
http://www.oswego.edu/~srp/stats/metapoll.htm
This would be akin to saying that combining the information from all five polls actually results in a margin of error that is WIDER than the margin of error for one of the constituent polls (the CNN poll). That's clearly absurd.
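For reference, the standard meta-analytic way to combine poll estimates is inverse-variance weighting, sketched here with hypothetical poll numbers and an independence assumption:

```python
import math

# Hypothetical poll results: (candidate's share, sample size)
polls = [(0.51, 800), (0.52, 600), (0.50, 1000), (0.53, 500), (0.51, 900)]

# Weight each poll by the inverse of its sampling variance p(1-p)/n
weights = [n / (p * (1 - p)) for p, n in polls]
pooled = sum(w * p for w, (p, _) in zip(weights, polls)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled: {pooled:.1%} +/- {1.96 * pooled_se:.1%}")
# The combined interval is narrower than any single poll's, never wider.
```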
http://www.stats.org/faq_margin.htm
Generally, the margin of error is proportional to the inverse of the square root of the number of people taking it.
http://www.uncp.edu/home/acurtis/Courses/ResourcesForCourses/MarginOfError.html
Margin of error decreases as the sample size increases.
As the number of people surveyed goes up, the margin of error goes down.
As you can see in the table at right, a very small sample of 50 respondents has a ± 14% margin of error while a large sample of 2,000 has a margin of error of ± 2%.
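Those two table entries follow from the usual formula; a quick check, assuming the worst-case proportion p = 0.5 at 95% confidence:

```python
import math

for n in (50, 2000):
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)  # worst-case 95% margin of error
    print(f"n = {n}: +/-{moe:.0%}")
# n = 50: +/-14%, n = 2000: +/-2%
```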
It's not just that there's a margin of error; it's also that the polls are set up to sample likely voters with approximated distributions of race, age, etc. You can't know who will actually vote until the election is over.
The error is not cumulative. Imagine I'm making a measurement in the lab. I take 10 measurements, and I find a mean of 100 with a standard error of 10. An hour later, I measure the same object another 10 times and I get 100 again with a standard error of 10. If my test subject is not expected to have changed, then I have effectively made 20 measurements, with a mean of 100 and a standard error of 7.07 (10/sqrt(2)).
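Put as a quick calculation, assuming the two sessions are independent measurements of an unchanged quantity:

```python
import math

se_session = 10.0   # standard error from one 10-measurement session
n_sessions = 2      # a second identical, independent session

# Pooling k independent sessions divides the standard error by sqrt(k)
print(se_session / math.sqrt(n_sessions))  # ~7.07
```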
Charles, each of these examples is for identical datasets. That is like looking at data from the same poll where they increased the sample size.
As I said, each pollster uses different criteria and different questions to conduct their polls. Each comes up with a calculated polling error which can be different even if the sample size is the same. When you combine all these mixed polls you can't get more confidence given they are all completely separate events.
That's like writing two different computer programs to answer a question. You know the first one is wrong 5% of the time. You know the second one is wrong 5% of the time. When you ask the question of both, the error isn't going to be 5%, it's going to be more. Each answer has a 5% chance of being wrong. So when you combine the answers, there is no way you only have a 5% chance that the answer is wrong, given that each could have been incorrect due to error within each program.
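For what it's worth, the arithmetic for that two-program analogy under an independence assumption (the 5% figures are from the post above):

```python
p_wrong = 0.05  # each program's error rate, per the post above

print(1 - (1 - p_wrong) ** 2)  # P(at least one of the two is wrong) = 9.75%
print(p_wrong ** 2)            # P(both are wrong at once)           = 0.25%
```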
If they are using different methods, then the odds are that any systemic error being made by one will be neutralized by another.
Even if they are all making the same systemic error, the sampling error is lower because of the larger sample size.
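A small simulation makes both points concrete; this is only a sketch, and the bias sizes and sample counts here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
true_share, n, n_polls, trials = 0.52, 600, 10, 20_000

# Per-poll sampling error around the true share
sampling = rng.binomial(n, true_share, (trials, n_polls)) / n - true_share

# Case 1: each pollster has its own house effect (they tend to offset)
own_bias = rng.normal(0, 0.01, (trials, n_polls))
# Case 2: all pollsters share one systemic error
shared_bias = rng.normal(0, 0.01, (trials, 1))

print("single poll:              ", (sampling[:, 0] + own_bias[:, 0]).std())
print("average, offsetting bias: ", (sampling + own_bias).mean(axis=1).std())
print("average, shared bias:     ", (sampling + shared_bias).mean(axis=1).std())
# The average beats a single poll in both cases; shared error just sets
# a floor that averaging alone cannot remove.
```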
Charles posted some good links, one of which even shows the math involved in finding an aggregated error. Why are people still arguing out their ass?