With all the different ways to take the temperature of a song in this day and age, how important is good old-fashioned music research? Richard Harker examines some of the other options out there, such as PPM data and crowdsourcing, in this week’s Programming To Win column.


By Richard Harker

For over 30 years now, leading music stations have built their music libraries with the help of Auditorium Music Tests (AMTs) for library songs and call-out research for new music.
But given new developments like PPM and the success of Internet music services like Pandora, is music research still as useful today as it was in the past?
Does it still provide broadcast stations with an edge over the competition, even new Internet competitors? To answer that question, let’s look at some of the alternatives.
Pandora’s Music Genome Project
With 80 million registered users and 30 million regular users, Pandora’s success is undeniable. And Pandora doesn’t test its music.
The music chosen for each of Pandora’s 1.4 billion stations is selected using Pandora’s Music Genome Project (MGP). Musicologists code each song in the music library using 400 musicological variables such as key, instrumentation, and tempo.
A user creates a new station by choosing a “seed” artist or song. Using the seed’s MGP codes, the service then chooses which other songs to play based on their profile similarity to the seed.
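The mechanics are easy to sketch. Below is a minimal illustration of similarity-based selection in Python, using made-up attribute vectors; the actual MGP attributes, scales, and weights are proprietary, so treat this as a conceptual sketch rather than Pandora’s algorithm.

```python
import math

# Hypothetical, simplified song profiles. The real Music Genome Project
# codes each song on roughly 400 expert-assigned attributes.
SONGS = {
    "Seed Song":   {"tempo": 0.72, "acoustic": 0.10, "major_key": 1.0, "vocals": 0.8},
    "Candidate A": {"tempo": 0.70, "acoustic": 0.15, "major_key": 1.0, "vocals": 0.9},
    "Candidate B": {"tempo": 0.30, "acoustic": 0.90, "major_key": 0.0, "vocals": 0.2},
}

def cosine_similarity(a, b):
    """Similarity of two attribute vectors: 0 (unrelated) to 1 (identical)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def pick_next(seed, catalog):
    """Pick the catalog song whose profile most closely matches the seed."""
    candidates = [t for t in catalog if t != seed]
    return max(candidates, key=lambda t: cosine_similarity(catalog[seed], catalog[t]))

print(pick_next("Seed Song", SONGS))  # -> Candidate A, the closer profile
```

Note what is missing: nothing in this selection step asks whether anyone actually likes the song, which is exactly the question music research answers.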
Pandora is succeeding using MGP, but is it succeeding because of MGP?
Can broadcast radio better compete against Pandora by emulating it with its own version of coding, or should radio offer a better-designed alternative product?
Online ratings show that Pandora has very low Time Spent Listening (TSL). The service may be extremely popular, but users don’t seem to spend much time with it.
Pandora’s average session length is about 50 minutes. Compare that to Clear Channel’s 70 minutes or Cox Radio’s 90 minutes.
If Pandora’s MGP were a better approach to picking songs than the music testing done by its broadcast competitors, we would expect above average TSL, not brief listening spans well below its broadcast stream competitors.
So should radio stations use musicological similarities rather than popularity to schedule music, thereby eliminating the need to test music? Pandora’s low TSL suggests not.
Radio already beats Pandora TSL with playlists crafted through music research.
Crowdsourcing
An alternative to Pandora’s academic top-down approach to picking songs is turning over control of the music to listeners, crowdsourcing the music.
First popularized by James Surowiecki in his book The Wisdom of Crowds, the idea behind crowdsourcing is that a diverse group of individuals can make more accurate decisions than any individual, even an expert.
In our case, it raises the question whether the collective wisdom of the audience might be better able to choose the right songs to play. If we turn over the decision to our audience, we don’t need to test the music beforehand.
While it is an intriguing alternative to pre-programming the music, the reality of crowdsourcing is considerably different from the concept.
The problem is that within any crowd, active participants make up a very small proportion of the crowd.
For example, a study of Amazon found that only 5% of its users ever vote on any product. What happens instead is that a handful of users rate hundreds of products.
Wikipedia, an often-cited example of crowdsourcing, is anything but. Only 1% of the site’s users edit half of Wikipedia’s content.
The only way crowdsourcing could work for radio is if we had a significant proportion of the audience contributing.
If Amazon can’t get more than 5% of their users to rate their products, if Wikipedia relies on just 1% of its users to do half the work, can we expect the majority of a radio station’s listeners to help crowdsource the music?
Not likely.
Any Program Director who has tried to run an all-request hour knows how difficult this is. Only a small handful of listeners actually end up requesting a song.
Request lines were crowdsourcing before crowdsourcing existed, and they never provided a consistently reliable source of music information.
Jelli is the best-known radio crowdsourcer. Jelli co-founder Jateen Parekh calls it social broadcasting and radio democracy.
In the same way all-request hours are popular despite limited active participation, Jelli works not because a large proportion of the audience participates, but because a large proportion vicariously participates through passive listening.
That’s why Jelli’s radio democracy can be a valuable specialty program, and a good way to differentiate your station.
In the end, however, turning over the music to a handful of active listeners via crowdsourcing is just as risky as an all-request hour that really plays requests.

Letting PPM Decide
Arbitron’s ability to generate minute-by-minute PPM data has created interest in using ratings to make programming decisions.
If the ratings go up, it must mean that listeners like what they are hearing. If the ratings go down, it must mean they dislike what they are hearing. So PPM should be able to tell us what songs to play.
At least that’s the argument, but how true is it?
First there’s the issue of panel sizes. Outside the highest-rated stations in the largest markets, most radio stations have no more than a handful of panelists listening at any given time.
A single panelist turning off his or her radio, walking out of a room where the radio is playing, or even a bus passing the car can create the illusion of massive tune-out.
PPM panels (at least in the largest markets) are theoretically large enough to reliably estimate audience levels in broad demographics during entire dayparts, but as you slice and dice PPM numbers looking at narrower demos and smaller time increments, the reliability of the numbers plummets.
The panel sizes are just too small to provide reliable estimates within brief time periods like the length of a song or a spot set.
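A back-of-the-envelope calculation shows how fast reliability collapses. The sketch below assumes simple random sampling and purely illustrative panel sizes; real PPM estimation involves weighting and is considerably more complex.

```python
import math

def relative_error(panel_size, listening_share):
    """Approximate relative standard error of a listening estimate drawn
    from a simple random panel (binomial sampling approximation)."""
    p = listening_share
    se = math.sqrt(p * (1 - p) / panel_size)
    return se / p

# Illustrative numbers only: a 5% listening share measured across a whole
# daypart (many effective meters) versus a single three-minute song
# (a handful of meters in range at that moment).
for label, n in [("daypart, 400 effective meters", 400),
                 ("one song, 5 effective meters", 5)]:
    print(f"{label}: +/-{relative_error(n, 0.05):.0%} relative error")
```

With hundreds of effective meters the estimate carries roughly a 20% relative error; with five it is close to 200%, which is to say the song-level number is noise.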
But the problems go well beyond panel size.
Theoretically, the meter can identify an encoded source in seconds. However, because of noise interference (that bus) and audibility issues, Arbitron gives the meter up to three minutes to figure out what the source is.
This window means the meter may not register changes in exposure at the moment they actually occur, so what appears to be a reaction to one element may actually be a reaction to the element before or after it. This is why listeners appear to sit through commercials, and seem to react to things minutes after they occur.
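A toy simulation makes the misattribution concrete. The only figure below taken from the text is the three-minute window; the random-delay model is an illustrative assumption, not Arbitron’s actual decoding logic.

```python
import random

# One panelist's true exposure: listening minutes 0-9, tuned out 10-19.
true_exposure = [1] * 10 + [0] * 10

def metered(exposure, max_delay=3):
    """Register each change in exposure only after a random identification
    delay of up to `max_delay` minutes (illustrative model of the window)."""
    recorded = []
    current = exposure[0]
    countdown = 0
    for actual in exposure:
        if actual != current:
            if countdown == 0:
                countdown = random.randint(1, max_delay)  # meter still deciding
            countdown -= 1
            if countdown == 0:
                current = actual  # the change finally registers
        else:
            countdown = 0
        recorded.append(current)
    return recorded

print("actual :", true_exposure)
# Output varies run to run: the tune-out can register minutes late.
print("metered:", metered(true_exposure))
```

Shift the recorded curve by even a minute or two and the apparent “reaction” lands on the wrong song or spot.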
While it would be great if PPM could give us a minute-by-minute report card, neither the device nor the technology behind it was designed to be accurate to a single minute.
So despite the fact that many programmers have embraced the idea that they can make programming decisions based on PPM, it is not the breakthrough programmers hoped for.
The Bottom Line
Radio programmed by professionals using research remains the gold standard, attracting over 200 million listeners, seven times Pandora’s active user base.
Each week listeners consume 3.5 billion hours of broadcast radio across fewer than ten thousand stations, compared to Pandora’s 200 million hours across 1.4 billion stations.
Do the math: at 3.5 billion hours against 200 million, consumption of commercial radio is 17.5 times Pandora’s.
Professionally crafted programming based on research continues to be radio’s edge even today when listeners have more choices than ever.
Music research in the hands of a creative program director will continue to help create a compelling product that can successfully compete against tomorrow’s threats.
Unique players like Pandora, Jelli and other specialty online formats will have an impact beyond their audience size by offering listeners something different. They will raise the expectation bar for every station and every format.
Ultimately, however, the heavy-lifting will be done by formats playing popularity-based music chosen with the help of music research.
While music research has served radio well for over 30 years, it has continually evolved to keep pace with the ever-growing sophistication of radio and its audience.
Music research will continue to evolve to maintain its key role in a radio station’s success. As options proliferate, radio stations will have to target better, super-serving a well-defined audience.
And music research will be ready.
As more listening migrates to the Internet, radio groups will be able to expand their stable of offerings, serving multiple niches across stations rather than a few stations serving broad audiences.
As radio adapts to the new reality of the Internet, research, and music research in particular, will play a critical role in helping radio continue to be audio entertainment’s benchmark.


Richard Harker is President of Harker Research, a company providing a wide range of research services to radio stations in North America and Europe. Twenty years of research experience combined with Richard’s 15 years as a programmer and general manager helps Harker Research provide practical, actionable solutions to ratings problems. Visit www.harkerresearch or contact Richard at (919) 954-8300.