Paragon Media’s Larry Johnson delves into the topic of research for this week’s Programming To Win column. How effective and accurate are your audience samples? Do you have a statistically significant sampling of the population? What about phone research versus online surveys? And is bad research better than no research at all?

by Larry Johnson



After reading a recent report from a well-respected research organization, I can understand how some may be left with a sinking feeling about whether the research they are conducting is accurate. With radio going through hard times, there is a push to drive research costs down by doing Internet surveys. However, according to a March 2010 American Association for Public Opinion Research (AAPOR) report, Internet surveys may not yield valid, reliable results. So the question is whether accuracy is being sacrificed for cost.
Proper research depends on random (or probability) sampling; that is, every member of the population has an equal chance of being selected for the survey. With that definition of random sampling in hand, let's take a look at the AAPOR's findings:

           "The majority of studies comparing results from surveys using nonprobability online panels with those using probability-based methods (most often RDD telephone) often report significantly different results on a wide array of behaviors and attitudes."
So the question becomes: are the results from online panels so different from telephone studies that the non-random (a.k.a. nonprobability) online sample is simply inaccurate? Some may argue that any information is better than no information. That argument is simply wrong; it's where I draw the line. Remember the adage "garbage in, garbage out"? Here's another one: "Bad research is worse than no research."
One must add to this discussion the dilemma about random sampling in a cell phone age.  That dilemma may open up some possible solutions to cash-strapped radio operators when having to decide between phone and online research methods.
The whole idea of a random sample, in which everyone has an equal chance of being selected, has become diluted over the decades. A truly random phone survey, even before the advent of cell phones, dictates using Random Digit Dialing (RDD). RDD is a system in which working prefixes are called randomly, thus including non-listed telephone numbers. RDD is very expensive compared to using lists from sampling companies. Custom research lists (e.g., Adults 25-54 or Women 18-49) contain only listed numbers. By using vendor-provided lists, researchers stray from random sampling in order to control cost. This is probably not a bad financial compromise; over the years, we've seen results from lists parallel radio ratings results.
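To make the RDD idea concrete, here is a toy sketch in Python. The key point is that only the working exchange (area code plus prefix) comes from a known list; the last four digits are generated at random, so unlisted numbers have the same chance of selection as listed ones. The exchanges shown are made-up placeholders, not real sampling frames.

```python
import random

def rdd_sample(working_exchanges, n, seed=None):
    """Generate n phone numbers by Random Digit Dialing:
    pick a working exchange, then randomize the last four digits."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        exchange = rng.choice(working_exchanges)  # e.g. "303-555"
        line = rng.randrange(10000)               # 0000-9999, listed or not
        numbers.append(f"{exchange}-{line:04d}")
    return numbers

# Hypothetical working exchanges for illustration only.
sample = rdd_sample(["303-555", "720-555"], n=5, seed=1)
print(sample)
```

A vendor list, by contrast, would draw only from numbers that appear in directories, which is exactly the shortcut the column describes.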
What is objectionable is that some research companies sample an indefensibly small number of people and then weight the data to try to mirror the population. For example, the sample may include only 13 Males 18-24 who are then weighted to represent the 80 or so Males 18-34 who should have been sampled. This is simply unethical. You should always ask whether your sample is being weighted before you purchase a research project.
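The arithmetic behind that objection can be sketched as follows. All figures are illustrative, extending the column's hypothetical of 13 respondents standing in for roughly 80; the 400-person survey size and 20% population share are my assumptions. Kish's effective-sample-size formula shows the statistical cost: heavily unequal weights shrink the real information in the data.

```python
def cell_weight(population_share, sample_count, total_sample):
    """Weight that inflates a demographic cell to its population share."""
    target_count = population_share * total_sample
    return target_count / sample_count

def effective_sample_size(weights):
    """Kish's approximation: (sum w)^2 / sum(w^2)."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical 400-person survey where one cell should be 20% (80
# people) but only 13 were actually interviewed.
w = cell_weight(population_share=0.20, sample_count=13, total_sample=400)
ess = effective_sample_size([w] * 13 + [1.0] * 387)
print(round(w, 2), round(ess))  # each of the 13 counts ~6x; 400 act like ~248
```

In other words, the 13 interviews are each counted about six times over, and the weighted survey carries the statistical power of a much smaller one.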
For those who find straying from an RDD sample abhorrent, now throw in the fact that cell phone use is expanding exponentially, especially among the young. Arbitron uses RDD sampling, and they've done a good job including cell-phone-only (CPO) households in proportion to the recommendations of the National Health Interview Survey. Arbitron also uses address-based sampling to assure that CPO households are proportionately represented in the ratings. A nice side benefit of address-based sampling is that Arbitron has narrowed the gap between the percentage of each age/gender category (e.g., Men 18-24) they actually interview and the percentage each category constitutes in the population. However, address-based sampling is very expensive and spikes already high research costs even further.
With even traditional telephone sampling out of reach for many cash-strapped radio operators, the push has been to do online surveys. Yet online surveys too often rely on self-selected samples; that is, respondents select themselves to participate. This is a far cry from a random sample in which everyone has an equal chance of being selected. Radio databases are not only self-selected samples, but they also tend to be almost exclusively the station's P1s, providing only a narrow view of hyper-core audience tastes.
The AAPOR critique of online surveying is interesting and, at the same time, troubling. Indeed, anything but an RDD sample (now supplemented with address-based sampling for CPO households) is sacrilege to many researchers.
Yet we are at a time of reckoning.  Does the choice come down to research that allegedly can’t be projected to the population for a future outcome (e.g., ratings and/or election polling) or no research at all?
While online sampling may be fine for projects like a public station surveying their contributors, trying to project online sampling to a general population creates a disconnect.  There’s no doubt that we are in a state of ferment when it comes to providing clients with cost efficient, reliable research.  Here are some recent observations that may shed some light on defensible, cost-efficient research procedures.
The AAPOR report suggests that only Random Digit Dialing (RDD) sampling can be valid. Yet with the rapid proliferation of cell phones and mobile devices (e.g., Blackberries and iPhones), many segments of the population simply aren't reachable by landline. Even Arbitron is revisiting in-person interviewing to make its address-based sampling complete. With young people and Latinos abandoning landlines at an increasing rate, and the cost of reaching them by phone becoming prohibitive, one must ask how representative a telephone sample really is.
Coming from an academic research background, I was shocked to hear what Republican pollster Dr. Frank Luntz had to say in his address to The Commonwealth Club of San Francisco. Luntz said that he preferred Internet sampling to traditional sampling because of the difficulty of completing interviews with hard-to-reach segments of the population in a telephone study. Yikes!
For over a decade we have fielded alternative samples via radio stations’ databases with web surveys to compare the differences.  More recently we have supplemented telephone samples with cell phone surveys and online surveys.  More often than not, there are statistically significant differences between the samples.
In 2010, Paragon completed surveys among young ethnic groups in a major market where we used traditional phone sampling for half the respondents and online Internet sampling for the other half. We ran significance tests to see if the two groups varied in their answers. The major findings of the study held for both online and phone respondents, and there were few significant differences within individual questions. Perhaps the gap between telephone and online samples is not as great among young ethnic groups as it is among older, non-ethnic groups.
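A standard tool for this kind of comparison is a two-proportion z-test on each question, checking whether the phone half and the online half answer differently by more than chance. The sketch below uses made-up counts, not Paragon's actual data.

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical question: 120 of 300 phone respondents vs. 105 of 300
# online respondents favor a format change. Is the difference real?
z, p = two_proportion_z(120, 300, 105, 300)
print(round(z, 2), round(p, 3))  # p > 0.05: not a significant difference
```

A p-value above the usual 0.05 threshold, as here, is what "few significant differences within individual questions" means in practice: the two halves of the sample are telling the same story.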
I too am wary of online samples provided by vendors who traffic in blatantly self-selected samples and in respondents who participate in multiple surveys a month to enhance their income. However, with Internet list providers having honed their samples over the years, it seems that the world of Internet sampling may be at or near the accuracy of phone sampling. Internet sampling is much less expensive. Often the financial reality comes down to doing online sampling or doing no research at all.
Let’s watch similar studies comparing online to telephone sampling to determine if the investments in online research are a good bet for getting valid results.
One could argue that Internet surveys – even with their shortfalls – reach segments of the population that are hard to reach, thus providing a cost-efficient sample as good as a telephone study in the New Media age.
If you want to see the AAPOR report, go to http://aapor.org/AM/Template.cfm?Section=AAPOR_Committee_and_Task_Force_Reports&Template=/CM/ContentDisplay.cfm&ContentID=2223

Larry Johnson is President/North American Radio for Paragon Media Strategies. Reach him at 831-655-5036 or via e-mail at ljohnson@paragonmediastrategies.com.