Saturday, October 02, 2004

Polls, polling. I am taking up a new thread because polls seem to be shifting from analytic tools that provide a snapshot of a moment in time into blatantly political instruments.

In grad school, I studied statistics and how to use them and how not to use them with professors like Houston Stokes and Herb Walberg. The current state of reporting on political polls has raised several red flags for me.

The way margin of error and confidence interval are bandied about often presents a "winner" in a poll that really shows a tie. This CJR article takes a fascinating look at Canadian reporting on polls, which includes more information about how a poll can accurately be interpreted.
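To make that concrete, here is a small Python sketch using made-up numbers (a hypothetical 600-person poll, not any real survey) that shows why a 3-point "lead" inside a 4-point margin of error is a tie, not a win:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of error for a simple random sample of size n;
        # p = 0.5 is the worst case that news reports usually quote
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical poll: 600 respondents, Candidate A at 48%, Candidate B at 45%
    n, a, b = 600, 0.48, 0.45
    moe = margin_of_error(n)
    print(f"margin of error: +/- {100 * moe:.1f} points")   # about 4 points
    print(f"reported lead:   {100 * (a - b):.1f} points")   # 3 points
    if abs(a - b) < moe:
        print("the lead is inside the margin of error -- a statistical tie")

Strictly speaking, the error on the gap between two candidates is roughly double the single-number margin, so a lead that sits inside the quoted margin is even less meaningful than it first appears.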

Another troubling issue that I am just beginning to track was brought to me by a student who works for a local radio station. In Illinois, the Senate race and the Presidential race were initially reported as very one-sided, essentially not contested or in doubt. That left local broadcasters (radio and television) facing a revenue shortfall owing to a dearth of political ads. The curious student asked whether radio or television stations could sponsor polls to make the predicted outcomes of certain races appear closer than they really are.

Using legitimate sampling techniques such as stratified samples, pollsters certainly can conduct a poll where no data is manufactured but where the basic assumptions of the sample render the results meaningless or misleading. Without a proper explanation of the underpinnings of sampling, of margin of error, and of confidence intervals, the reporting of such poll results would be biased and unfair, whether deliberately or through sloppy reporting. Anyway, this article provides a basis for understanding how margin of error and confidence intervals ought to be presented to one's audience. More on this topic to follow. CJR Campaign Desk: Archives
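To illustrate that point about stratified samples, here is a hypothetical sketch. The strata, preferences, and sample sizes below are invented, but the mechanism is the one at issue: every response is drawn honestly, yet the strata are sampled in proportions that do not match the electorate and the raw total is reported anyway.

    import random
    random.seed(1)

    # Hypothetical electorate: two strata with opposite preferences.
    # "downstate" favors Candidate A 60/40, "city" favors B 60/40,
    # and each stratum is half of the real population.
    support_for_A = {"downstate": 0.60, "city": 0.40}

    def poll(stratum_sizes):
        # Draw a stratified sample and report the raw, unweighted share for A.
        responses = []
        for stratum, n in stratum_sizes.items():
            responses += [random.random() < support_for_A[stratum]
                          for _ in range(n)]
        return sum(responses) / len(responses)

    # A proportionate sample mirrors the population: A polls near 50%.
    print("proportionate sample:", poll({"downstate": 500, "city": 500}))

    # Oversample downstate without reweighting and A "leads" at around 56%,
    # even though no individual response was manufactured.
    print("oversampled sample:  ", poll({"downstate": 800, "city": 200}))

Every interview in the second run is real; what misleads is the unstated assumption that the sample looks like the electorate, which is exactly the kind of underpinning that rarely gets explained to the audience.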
