Programmers are inundated with data nowadays, and in this week’s Programming To Win column, Pat Welsh examines a few of the different types of data out there. What should you take from your ratings numbers? How about M Scores, music research and your website traffic? Welsh suggests what is and isn’t important in each category of data.

By Pat Welsh

Data is one of the biggest buzzwords in business. Especially in the digital media world, data has significant cachet. Data analysts are highly prized and companies talk about how much data they’re collecting on users, advertising engagement, etc. Think of all the data collected about you: online marketers strive to serve you relevant ads, political campaigns want to find your hot-button issues and your grocery store wants to give you personalized pet food coupons.
Radio programming is both an art and a science. In the early years, arguably, the art predominated, while the science has taken over in the last few decades. Programmers have pored over research data of all types, trying to get an edge. Especially as we’ve developed digital tools to build our radio brands, we’ve become swamped with data.
Paradoxically, some of the data we’ve relied on over the last few decades has become less reliable just as we’ve developed these new ways of generating even more of it. For example, all forms of research have become harder to conduct simply because people are increasingly reluctant to participate in any type of survey.
What data do you actively seek out? What’s the most reliable and relevant data that others are using? And what are the potential pitfalls to this data?

Ratings Data
If you want to win an academic debate, you can point out that the word “estimates” appears in the ratings book more often than the word “Arbitron,” and for good reason. Ratings numbers are not set in stone. But, in the real world, we have to live with what shows up in “the book.” Still, you have to be realistic about evaluating these numbers, determining what’s real – requiring action – and what’s just statistical noise.
The advent of the PPM – like every other digital technology – has unleashed an overwhelming torrent of data. With weekly results rolling in, and analysis at the molecular level (okay, minute-by-minute), it’s hard to know what to consider beyond the usual top-line share and cume numbers for prime dayparts and demos.
One of the most interesting reports that I see is the one that shows average meter counts. The numbers tend to correlate pretty well with the share numbers. But the average meter counts – data that you have to pull manually – are available two weeks before the PPM weekly is published.
Average Daily Cume first appeared with the advent of the PPM, but many programmers still don’t use it. It’s fundamentally different from Weekly Cume; because it rises when the same listeners return day after day, it behaves like a loyalty measure, and in some ways it’s closer to TSL than to Weekly Cume. Studies by Arbitron have shown that daily cume is a vital component of successful stations in PPM markets.
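To see why, here’s a toy illustration in Python (with invented listener IDs and a five-day week): two stations can have identical Weekly Cume while their Average Daily Cume reveals very different listening habits.
```python
# Toy illustration with invented data: same Weekly Cume, very different
# Average Daily Cume. Station A's three fans return every day; Station B
# reaches three different samplers once each.
station_a_days = [{"p1", "p2", "p3"}] * 5                   # Mon-Fri, loyal core
station_b_days = [{"p1"}, {"p2"}, {"p3"}, {"p1"}, {"p2"}]   # Mon-Fri, casual cume

def weekly_cume(days):
    """Unique listeners across the whole week."""
    return len(set().union(*days))

def average_daily_cume(days):
    """Mean count of unique listeners per day."""
    return sum(len(day) for day in days) / len(days)

for name, days in [("Station A", station_a_days), ("Station B", station_b_days)]:
    print(name, "- Weekly Cume:", weekly_cume(days),
          "| Avg Daily Cume:", round(average_daily_cume(days), 1))
# Station A - Weekly Cume: 3 | Avg Daily Cume: 3.0
# Station B - Weekly Cume: 3 | Avg Daily Cume: 1.0
```
Both stations reach three people a week, but Station A’s listeners come back daily – exactly the habit that TSL also captures.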
In diary markets there’s (mercifully) less information, but there’s still plenty to wade through. The paradox is that the deeper you dive into it, the less stable the numbers become. Slicing and dicing the data finely means reducing the sample size, sometimes to dangerous levels. One clue that you’re working with tiny numbers is when every station’s share is a whole-number multiple of the smallest one. In other words, when share numbers look something like this: 2.3, 4.6 (2.3 x 2), 6.9 (2.3 x 3), etc.
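Here’s a back-of-the-envelope sketch of why that pattern appears (invented numbers, and ignoring Arbitron’s quarter-hour weighting): once a cell is sliced down to a handful of in-tab diaries, each diary contributes the same fixed number of share points, so every share lands on a multiple of that step.
```python
# Back-of-the-envelope sketch (hypothetical cell, weighting ignored):
# with n in-tab diaries, each diary keeper is worth 100/n share points.
n_diaries = 40
step = 100 / n_diaries            # 2.5 share points per diary
for diaries in (1, 2, 3, 4):
    print(f"{diaries} diaries -> {diaries * step:.1f} share")
# 1 diaries -> 2.5 share
# 2 diaries -> 5.0 share
# 3 diaries -> 7.5 share
# 4 diaries -> 10.0 share
```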
Another simple – but often overlooked – fact is that the sample from one (diary) survey is independent of the next one. Comparing two books to see “where the listeners went” can be futile, because each survey polls different people. If you made big changes, or if the competitive situation changed, then by all means look at what just happened. But sometimes the numbers shift simply because Arbitron is polling different individuals.
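A quick simulation shows how much of that shift can be pure sampling noise (all numbers hypothetical): draw two independent diary samples from the very same market, and the two “books” still disagree.
```python
# Hypothetical simulation: two independent samples from an unchanged market.
# The station's true standing never moves, yet the reported shares do.
import random

random.seed(7)                     # change the seed and watch the books wobble
true_share = 0.10                  # the station's real share of the market
n_diaries = 400                    # in-tab sample per survey

for book in ("Spring book", "Summer book"):
    listeners = sum(random.random() < true_share for _ in range(n_diaries))
    print(f"{book}: {100 * listeners / n_diaries:.1f} share")
```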

Music Data from M Scores
M Scores are produced by overlaying song airplay with PPM data, telling you which songs generate net tune-in or tune-out of meters. This is the kind of information every programmer coveted upon first realizing the granularity of PPM data. But, even a few years into it, many programmers still aren’t sure what M Scores are telling them. As with the PPM itself, we know the what but not the why.
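The exact M Score formula is proprietary, so the following is only a conceptual sketch with invented song names and meter counts: line up minute-by-minute meter counts against the station log and credit each play with the net change in meters over its run.
```python
# Conceptual sketch only - the real M Score math is proprietary.
# Invented data: minute-by-minute meter counts plus a station log.
from collections import defaultdict

meters = {0: 120, 1: 118, 2: 119, 3: 117, 4: 121, 5: 124, 6: 125}
play_log = [("Song A", 0, 3), ("Song B", 3, 6)]   # (song, start min, end min)

net = defaultdict(int)     # net meter change per song
plays = defaultdict(int)   # number of plays per song
for song, start, end in play_log:
    net[song] += meters[end] - meters[start]      # tune-in minus tune-out
    plays[song] += 1

for song in net:
    print(f"{song}: net meter change {net[song]:+d} "
          f"({net[song] / plays[song]:+.1f} per play)")
# Song A: net meter change -3 (-3.0 per play)
# Song B: net meter change +8 (+8.0 per play)
```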
A case in point: I recently looked at three months of weekly M Score rankings for a client. One song from a superstar artist (who shall remain nameless) ranked dead last for three out of four weeks, then ranked #1 the fifth week. What can we make of that? The song was released late last year. Is anyone buying that its popularity suddenly underwent a big renaissance?
Studies are still needed on how M Scores evolve over time, from the time of a song’s release to its ultimate chart success and beyond. That could give us a lot more insight into how to use this data. Ironically, some programmers currently use this data more to enhance the art than the science of creating a music library.

Music Research
This valuable tool was red-lined a long time ago by many companies. If you’re lucky enough to still have traditional call-out, congratulations. If you’ve replaced it with Internet testing, realize what you really have. Call-out is supposed to tell you what the market as a whole likes (or at least what likely prospects like). If you’re drawing from your station database, you’re only getting P-1s; you’re not getting a read on what potential listeners think. Also, it’s not a blind test; respondents know which station is asking, so they aren’t giving an unbiased opinion.
The other problem with all forms of research – including ratings – is the reduced quality of the sample. It’s becoming much more difficult to get people to participate and you have to be on guard for people who cut corners. Most national research firms serving our business are reputable, but some of the smaller local companies that do recruiting have been known to take shortcuts, resulting in poor quality samples.

Web Traffic
I’m fascinated by the fact that web traffic numbers are all over the place. They don’t correlate that well with station ratings. The level of digital competence varies a lot more than the level of programming competence. I’m amazed that more stations don’t look at web stats to program those sites. How often do users hit station blogs, jock profiles, pictures of hot girls, etc.? You already work hard to find which songs are the most popular and play them the most; why not do the same with web content?
Some of this is a self-fulfilling prophecy: the content that’s linked above the fold on the home page is usually the most heavily viewed. But what other nuggets can you get from digging deeper into the page rankings? Which of the high-ranking items don’t live on your home page? Should they be linked on the home page to drive even more engagement?
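As a starting point, the digging can be as simple as this sketch (page paths and view counts are invented): rank pages by views and flag the heavy hitters that aren’t yet linked from the home page.
```python
# Invented example: find heavily viewed pages that lack a home page link.
pageviews = {
    "/": 50_000,
    "/blog/morning-show": 12_000,
    "/photos/concert-gallery": 9_500,
    "/jocks/afternoon-host": 1_200,
    "/contests/rules": 800,
}
home_page_links = {"/blog/morning-show", "/contests/rules"}

candidates = sorted(
    (page for page in pageviews if page != "/" and page not in home_page_links),
    key=pageviews.get,
    reverse=True,
)
for page in candidates:
    print(f"{page}: {pageviews[page]:,} views -> consider a home page link")
# /photos/concert-gallery: 9,500 views -> consider a home page link
# /jocks/afternoon-host: 1,200 views -> consider a home page link
```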

Summary
I’m barely scratching the surface of this topic. The best advice is to try to decide what you really need to know, find out where you can get it, and make sure that what you think you have is reliable. That’s the scientific method…but don’t forget the art.


Pat Welsh, Senior Vice President/Digital Content, Pollack Media Group, can be reached at 310-459-8556, fax 310-454-5046, or at pat@pollackmedia.com.