For this week’s Programming To Win, Richard Harker dives into Nielsen Audio’s radio ratings and reminds us all that the data are estimates. Harker takes a closer look at some recent New York ratings to illustrate his point.
By Richard Harker
You know the drill. Each month Nielsen Audio releases the PPM numbers for forty-eight of the top markets, making a handful of Program Directors very happy, a larger number of Program Directors disappointed, and maybe even a few suicidal.
The rankers seem so clear-cut, so black and white. There’s a number one station followed by number two, and so on. Every station has its place all the way down to the bottom of the page.
What few radio people realize is that the number one ranked station might actually be the fifth most listened-to station. And the seventh-ranked station may have more listeners than even a station in the top five.
In a rush to analyze the numbers, we tend to forget that ratings are estimates. There’s a reason that, years ago, Arbitron was forced to add a warning: “PPM ratings are based on audience estimates and are the opinion of Arbitron and should not be relied on for precise accuracy.” Somehow that admission gets lost as some PDs pop the champagne corks while others step out onto the ledge.
Take a look at the graph shown here. It’s a Nielsen Audio New York ranker, but displayed in a format you’ve probably not seen before: as a graph. The red hash marks represent the official 6+ share estimates for the top eleven stations.
The gray lines above and below the red hash marks are what you should pay attention to. They tell us how high or low the ratings could have been and still be a reasonable estimate of the size of a station’s audience.
I’m about to get a little deep in the statistical weeds here, but stick with me.
The vertical lines above and below each red hash mark represent what’s called the confidence interval, essentially the range of shares you might reasonably expect at a given level of confidence, in this case 95%.
Remember, ratings are estimates. And by definition, estimates can (and often do) differ from the number of listeners you actually have. The size of that potential gap is determined by the number of PPM meters in a market. The more meters in a market, the less the estimates vary from the real audience numbers (which is one reason Nielsen decided to increase the number of meters in smaller markets).
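To make that relationship concrete, here is a rough back-of-the-envelope sketch. It uses a simple binomial approximation, which is my simplification rather than Nielsen’s actual methodology (real PPM reliability math involves weighted in-tab panels and published adjustment factors), but it shows why more meters mean tighter estimates:

```python
import math

def share_confidence_interval(share_pct, meters, z=1.96):
    """Approximate 95% confidence interval for a share estimate.

    A simplified binomial approximation for illustration only;
    Nielsen's actual reliability calculations use weighted in-tab
    panels and published adjustment factors, so real intervals differ.
    """
    p = share_pct / 100.0
    se = math.sqrt(p * (1 - p) / meters) * 100  # standard error, in share points
    return share_pct - z * se, share_pct + z * se

# The same 6.8 share looks very different at different panel sizes:
for meters in (500, 1000, 5000):
    low, high = share_confidence_interval(6.8, meters)
    print(f"{meters} meters: 6.8 share, 95% CI {low:.1f} to {high:.1f}")
```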
So while we talk about a station having (say) a 6.8 share, it would be more accurate to say the station has an estimated share of 6.8, one that might be above a seven or as low as a five.
It might surprise you that a station’s published share estimate can be off by as much as a share or two either way, but that’s the nature of estimates. They can be off, sometimes by a lot.
When we take into account the range of uncertainty for each station’s ratings, some interesting things become clear.
Back to the graph.
The upper numbers show the highest estimate and the lower numbers show the lowest estimate for each station. For example, for the month shown, WAXQ is somewhere between a 3.1 and a 5.3, WFAN is somewhere between a 2.6 and a 4.5, and so on.
Take a look at the horizontal dotted line running from WLTW to WKTU. It shows that the confidence intervals of the top six ranked stations in New York all overlap.
In other words, any of the top six stations could be ranked number one or number six. Statistically speaking, WKTU has the same chance of showing up number one as WLTW. And WLTW has the same chance of being sixth as WKTU.
Think about political polling. A poll might show one candidate ahead of another, but the pollster will note that the race is too close to call because the numbers are “within the margin of error.” The only difference is that Nielsen never calls a race too close to call.
So what can we be certain of when we look at rankers? When two stations’ confidence intervals don’t overlap, we can confidently say that one station has a larger audience than the other.
For example, based on this ranker, we can say with 95% certainty that WCBS-AM has fewer listeners than WLTW. It often takes a gap of about six places in a ranker before you can be relatively certain that one station has more listeners than another.
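The overlap test itself takes only a couple of lines. Here is a sketch using the WAXQ and WFAN intervals from the graph (the function name is mine, purely for illustration):

```python
def intervals_overlap(lo1, hi1, lo2, hi2):
    """True when two confidence intervals share common ground,
    meaning the ratings cannot tell the stations apart."""
    return lo1 <= hi2 and lo2 <= hi1

# Intervals read off the graph above
waxq = (3.1, 5.3)
wfan = (2.6, 4.5)

if intervals_overlap(*waxq, *wfan):
    print("WAXQ vs. WFAN: too close to call")
else:
    print("One station is measurably larger")
```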
Look at the line running from the third station to the eleventh. All nine stations fall within the same margin of error, meaning that, in theory, the third-ranked station in New York could actually be eleventh, or the eleventh-ranked station could be third.
Fortunately, we don’t see huge swings in the 6+ numbers from month to month, so the theoretical confidence intervals are probably a bit overstated. But we do know that confidence intervals widen as sample sizes shrink.
The full week 6+ numbers are the most reliable estimates Nielsen produces because they include all the meters. But what about day-parts or demographics? What happens then?
Nielsen Audio tightly controls the release of ratings, and we are not allowed to show anything but full week 6+ shares, but anyone who has looked at age cells or day-parts knows what happens.
The monthly swings can be severe, with stations continually moving up and down the rankers. Our clients frequently show us five- and six-place rank swings even in key demos from month to month. You’ve probably seen it in your market.
Think about what confidence intervals mean for monthly trends. If a top-ranked station can theoretically have its share estimate move up or down a full share without gaining or losing a single listener, then what about the monthly moves we see?
It means that most monthly changes in share or rank are just wobbles within the expected range of estimate uncertainty. In fact, when we take confidence intervals into account, virtually no station moves enough from one month to the next to indicate real change.
Which means all the analysis that you see explaining why a station went up or down is mostly guessing.
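If you want a quick sanity check on a move in your own market, one conservative rule of thumb (my formulation, not Nielsen’s) is to treat a change as real only when the two months’ confidence intervals fail to overlap:

```python
def is_real_change(old_share, new_share, old_moe, new_moe):
    """Conservative rule of thumb: flag a move as real only when
    the two months' confidence intervals fail to overlap."""
    return abs(new_share - old_share) > (old_moe + new_moe)

# Invented numbers: a half-share "drop" that signals nothing
print(is_real_change(6.8, 6.3, old_moe=1.1, new_moe=1.1))  # False
```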
You can determine the confidence intervals for stations in your market by going to the eBook, clicking on the Methodology tab, and then the Audience Estimate Reliability tab.
It takes a little arithmetic to get the results, but it’s a good exercise to prepare for those inevitable bad months. You can show your GM that you really didn’t have a bad month. You’re still within the margin of error.
And you just might be within the margin of error to be number one!
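For a hypothetical example of that arithmetic, suppose the reliability tables give you a standard error of 0.55 on a 4.2 share (numbers invented so the result roughly reproduces the WAXQ span above). Converting it to a 95% margin of error is a single multiplication:

```python
def margin_of_error(standard_error, z=1.96):
    """Convert a standard error into a 95% margin of error."""
    return z * standard_error

# Invented inputs; pull the real standard error for your station
# from the eBook's Audience Estimate Reliability tab.
share, se = 4.2, 0.55
moe = margin_of_error(se)
print(f"A {share} share could sit anywhere from {share - moe:.1f} to {share + moe:.1f}")
```

That range is exactly the kind of spread behind every red hash mark in the graph.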
Richard Harker is President of Harker Research, a company providing a wide range of research services to radio stations in North America and Europe. Twenty years of research experience combined with Richard’s 15 years as a programmer and general manager helps Harker Research provide practical, actionable solutions to ratings problems. Visit www.harkerresearch or contact Richard at (919) 954-8300.