Now that election season is finally over, Pat Welsh compares the political polling process to media ratings. What issues face both Arbitron and the political pollsters? What aspects of surveying radio listeners and voters should we keep in mind while reading the results?

As I write this, interest in the U.S. general election is reaching a fever pitch, and there's suspense about who will be the next President, which party will control each chamber of Congress and who will come out on top in thousands of other local races.
As you're reading this, the election will be over and winners and losers will be revealed. But even if the suspense is gone, with all the talk of polls, voter turnout models, margins of error, get-out-the-vote efforts and more still fresh in our minds, this is a good time to talk about how media research and political polls are different – and the same.
The 24-hour news cycle means that we're going to be bombarded with political polls. Various news organizations sponsor some of the most widely publicized ones: The New York Times and CBS conduct one poll together, ABC News and The Washington Post partner for another, and Fox News and NPR field polls of their own. Meanwhile, venerable, unaffiliated organizations such as Pew Research and Gallup do their own widely circulated polls. And the candidates and parties themselves are constantly conducting polls that are not publicized.
Political polling is fundamentally different from media ratings in one important sense: the pollsters have to predict not only who will win, but who will actually vote. After all, almost everyone listens to the radio (93% listen at least once each week) or watches TV. By contrast, even the 2008 Presidential election, considered a high-turnout election, drew only 61.6% of eligible voters.
Still, political polls and media ratings have a lot in common. Below are some of the issues that crop up in polling. Many of them are recognizable to those of us who look at Arbitron numbers regularly. But there are also some unique challenges in political polling.
Declining participation rates – I've been writing about this topic since the late '80s! Political observers say that, among the best polling companies, only 10% of the people they reach will agree to participate. This has only gotten worse in the last few days (as I write this) due to Hurricane Sandy and the devastation it brought to the Northeastern U.S. Millions of people have better things to do at a time like this than talk to a polling company. The same is true for media ratings services.
Cell-phone-only households – Political pollsters grapple with this just as Arbitron has. Any respectable political polling company sets aside a certain percentage of its sample for people who don't have landlines. The trick is to get the percentages right, knowing that they can vary quite a bit by demo and ethnicity. In reality, we don't know for sure how many cell-phone-only households there are, but 30% (overall) is the number most often quoted. A sketch of how such a quota might be set follows below.
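Purely as an illustration, here is a minimal sketch of setting cell-only interview quotas by demo. The share figures are invented for the example; as noted above, nobody knows the real ones for certain.

```python
# Hypothetical cell-phone-only shares by demo -- the real figures are uncertain,
# which is exactly the problem described above.
CELL_ONLY_SHARE = {"18-34": 0.50, "35-54": 0.30, "55+": 0.15}

def cell_only_quota(demo, demo_sample_size):
    """How many interviews in a demo should come from cell-only households."""
    return round(CELL_ONLY_SHARE[demo] * demo_sample_size)

print(cell_only_quota("18-34", 200))  # 100 of 200 young-demo interviews by cell
```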
Different survey, different sample – Most of these polls draw a completely different set of people each time they go into the field. The pollsters try to balance their samples based on age, sex, etc., but each new poll is fielded with a fresh group of respondents. This is just like many ratings services. Think of Arbitron's diary service, which uses a different sample of people each week.
Tracking the same individuals and their changing preferences – Both Nielsen's TV People Meter and Arbitron's Portable People Meter for radio measurement use panels whose members can remain in the survey for a long time. In the case of the PPM, respondents can stay on the panel for as long as two years. Most political polls do not go back to talk to the same people again. Campaigns may do this more in private polling and focus groups, but it's not a high-profile topic.
Voter Turnout Models – The biggest challenge for the political pollsters – one that ratings services don't have to worry about – is predicting who will vote. You always see references to "registered voters" and "likely voters." Pollsters have to infer, based on other responses, whether a poll respondent will actually end up voting. Each has its own formula for determining this, using criteria such as age, geographic location and how often a particular voter says he or she has voted in the past.
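Purely as an illustration – each pollster's real formula is proprietary – here is a minimal sketch of what a likely-voter screen built on those criteria might look like. All point values and the cutoff are invented:

```python
# Hypothetical likely-voter screen -- real pollsters' formulas are proprietary.
def likely_voter_score(age, past_votes, says_certain_to_vote):
    """Score a respondent on turnout criteria like those named above."""
    score = min(past_votes, 4)           # self-reported votes in recent elections
    if says_certain_to_vote:
        score += 2                       # stated intention to vote this time
    if age >= 45:
        score += 1                       # older demos historically turn out more
    return score

def is_likely_voter(age, past_votes, says_certain_to_vote, cutoff=4):
    return likely_voter_score(age, past_votes, says_certain_to_vote) >= cutoff

# A 30-year-old who has voted twice and says they're certain to vote: counted.
print(is_likely_voter(age=30, past_votes=2, says_certain_to_vote=True))  # True
```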
Political Affiliations – Most pollsters wrestle with party affiliation in their samples, just as they do with age and sex. They balance for it to try to make their polls comparable to one another and to the projected breakdown of likely voters. Some create a secret sauce that projects the percentages of Republicans, Democrats and Independents who will actually participate. But there's no agreement on what those percentages should be. Other pollsters believe they should not balance for party affiliation at all and let the chips fall where they may. (A sketch of this kind of weighting follows the ratings example below.)
As we head into the home stretch this year, this issue seems to be the cause of some of the wide discrepancies between poll results. Gallup, in its Presidential tracking poll, has been predicting a slight Republican skew to the electorate. The CBS/New York Times poll uses a Democratic-skewing model.
Think about this the next time you see anomalous ratings results, such as when a country station loses numbers and a rock station gains in the same demo. Did the country station lose numbers to the rocker, or did the sample just skew more toward rock fans?
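To make the balancing act concrete, here's a minimal sketch of weighting a sample to a target party mix. The target percentages are invented, which is exactly what the pollsters disagree about:

```python
from collections import Counter

# Invented target party mix -- in practice, no two pollsters agree on the numbers.
TARGET_MIX = {"Dem": 0.38, "Rep": 0.34, "Ind": 0.28}

def party_weights(respondents):
    """Weight each party so the weighted sample matches TARGET_MIX."""
    counts = Counter(r["party"] for r in respondents)
    n = len(respondents)
    return {party: TARGET_MIX[party] / (counts[party] / n) for party in counts}

sample = [{"party": "Dem"}] * 420 + [{"party": "Rep"}] * 300 + [{"party": "Ind"}] * 280
print(party_weights(sample))  # Dems down-weighted (~0.90), Reps up-weighted (~1.13)
```

Change TARGET_MIX and the same raw interviews yield a different horse race, which is why two well-run polls can disagree.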
Robopolls – These are polls conducted via automated, computer-generated phone calls. They are not used for ratings in the U.S., but political pollsters do use them. The big problem is that the law prohibits making robocalls to a cell phone – which means robopolls can't reach the cell-phone-only households discussed above.
Margin of Error – This is one of my favorite topics. It gets a lot of press in political polls, but almost none in media ratings. Political polls and media ratings are both surveys, and it's understood that even a well-conducted survey yields an estimate, not an exact number. That's where the margin of error comes in. Just as political polls are accompanied by a margin of error, all radio ratings have one too. It's just that no one ever talks about it. But, as I've been saying for years, there's a reason the word estimates appears more often than the word Arbitron in any Arbitron ratings survey.
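For the curious, the arithmetic behind those margins is straightforward. Here's a sketch for a simple random sample of a proportion at 95% confidence (the familiar 1.96 multiplier); real polls and ratings surveys use more complicated designs, so their published margins will differ:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p with sample size n (95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 likely voters showing a candidate at 50 percent:
print(f"{margin_of_error(0.50, 1000):.1%}")  # ~3.1% -- the familiar 'plus or minus 3'
```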
Exit Polls and Election Results – Elections have real results. Pollsters are merely predicting, while media ratings in effect declare the winners themselves. Exit polls, in some ways, are the closest analogs to media ratings. By sampling selected precincts on Election Day, pollsters can find out characteristics of the electorate, such as the percentage of minority voters, how many young voters turned out, etc.
There's both an art and a science to political polling. It has challenges that differ from those of media ratings, but there are enough similarities to make the comparison worthwhile. Those of us who work in media can find new ways of looking at ratings results to glean more information about what happened. And to find out more about political polling and election results, I highly recommend Nate Silver's FiveThirtyEight blog on the New York Times website.


Pat Welsh, Senior Vice President/Digital Content, Pollack Media Group, can be reached at 310-459-8556, fax: 310-454-5046, or at pat@pollackmedia.com