by Ralph Cipolla

Imagine.  You’re among the very fortunate.  Despite challenging economic times and that never-ending budget-slashing, your company has found the money for a music test.
So, that’s the good news.  But the sand traps are everywhere, because it is easy to spend money on a music test and not truly benefit from the investment.  Worse, a poorly executed music test virtually guarantees that management won’t authorize a follow-up AMT (Auditorium Music Test) next year.  There are several points where the process can break down, so think about maximizing your next AMT by being mindful of “The Five S’s.”

✓ Strategy   ✓ Screening   ✓ Selection   ✓ Sorting   ✓ Substantiation

 

  1. Strategy. Maybe it’s not your fault.  You don’t have the market data, so you can’t really know what you should be testing or the demographics of the audience you should be surveying.  In some cases, research dollars would be better spent on a market-wide perceptual study that can help you better understand the opportunities and threats facing your station.  First things first.
  2. Screening. If you have a solid sense for who you should be researching and the types of songs you should be testing, the challenge comes down to screening the AMT strategically.  You should be able to identify your music strategy by confidently answering these questions:

My station’s ideal airplay percentage by decade is: ___% ‘60s, ___% ‘70s, ___% ‘80s, ___% ‘90s, and ___% ‘00s 
My station’s primary (secondary and tertiary) sounds are: _________ 
My station’s 10-year target audience is defined by this age range: ___ to ___
My station’s gender target is ___% Male, ___% Female
According to ratings data, my cume-sharing competitors are: ___________________

 

Information pulled from audience ratings can help answer some of these questions, as can the aforementioned perceptual research and a meeting with your strategic team.  Don’t guess at any of the above metrics because an AMT using shaky parameters is only going to yield less confidence in the results.
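
If you keep your strategy somewhere more durable than a legal pad, it can help to capture these answers in one structured place.  Below is a minimal sketch (in Python, purely illustrative and not part of any ratings or scheduling product) of how the worksheet above might be stored as a profile that the later steps can check against.  Every field name and value is a hypothetical example.

    from dataclasses import dataclass

    @dataclass
    class StrategyProfile:
        era_targets: dict        # ideal airplay share by decade, e.g. {"80s": 0.40}
        primary_sounds: list     # core sounds
        secondary_sounds: list   # secondary/"spice" sounds
        age_range: tuple         # 10-year target demo, e.g. (35, 44)
        gender_split: dict       # e.g. {"male": 0.60, "female": 0.40}
        cume_competitors: list   # cume-sharing stations from the ratings

        def validate(self):
            # The decade shares and the gender split should each total 100%.
            assert abs(sum(self.era_targets.values()) - 1.0) < 0.01
            assert abs(sum(self.gender_split.values()) - 1.0) < 0.01

    # Hypothetical example; substitute your own worksheet answers.
    profile = StrategyProfile(
        era_targets={"60s": 0.05, "70s": 0.30, "80s": 0.40, "90s": 0.20, "00s": 0.05},
        primary_sounds=["classic rock"],
        secondary_sounds=["pop rock"],
        age_range=(35, 44),
        gender_split={"male": 0.60, "female": 0.40},
        cume_competitors=["WAAA", "WBBB"],
    )
    profile.validate()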

 

  3. (Hook) Selection. Too often, programmers get the first two steps right, but then the process of choosing which songs to test becomes a random act.  If, in the past, you have surfed over to your hook provider’s website and started to “click” on titles that seemed interesting to test, rethink that strategy.  There is a trick to building a solid hook selection process, and it comes right out of “GIGO” – or “Garbage In, Garbage Out.”  Here are some guidelines:

    Pick the hits.  A look at your format’s airplay will reveal which songs are consensus tracks – the ones played on just about every station in your format, and which are low-percentage depth tracks with little chance of making your library.  You might also want to examine an adjacent format, especially if a station of that type is creating ratings pressure or presents a growth opportunity.
    Test what’s getting played.  Make sure you pick what you play, and identify tracks that a close competitor is playing that you’re not.  Too often, stations avoid testing portions of their own playlists, on the (faulty) assumption that many of those titles always score well, so why test them again?
    Check prior data.  This trumps format airplay and even local competitor spins.  If you have research data from past AMTs that is no more than two years old, and you are essentially the same station you were back then, take the time to review the history.  If a gold song was 60% unfamiliar in your last test, the chances of it becoming well-known in the past year are slim to none.  If it finished in the bottom quarter in overall popularity scores last time, it won’t likely be a big tester in your new test.  Don’t waste hooks on “dogs.”  (A simple sketch of this prior-data screen appears after this list.)
    Select strategically.  If you have a good sense for your “core” sounds and eras, you can easily focus your selections on songs from those genres and years.  Likewise, be careful with secondary sounds and eras, “spice” tracks, and songs that are off your musical radar, regardless of hit-appeal or market airplay.  Oftentimes, stations test songs that are well out of format, and sometimes they score well.  Then what do you do?  If you’re not going to play a song anyway, don’t test it.
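
As promised above, here is a minimal sketch of the prior-data screen, again in Python and again purely illustrative.  The column names, the 60% unfamiliarity threshold, and the bottom-quartile cutoff are assumptions drawn from the guidelines above, not anyone’s actual research software.

    def screen_candidates(candidates, prior_results,
                          max_unfamiliar=0.60, bottom_quartile=0.25):
        """Drop candidate hooks that the last AMT already flagged as dogs.

        candidates:    list of song titles you are considering testing
        prior_results: dict of title -> {"unfamiliar": 0-1, "popularity_pct": 0-1},
                       where popularity_pct is the song's percentile rank in the
                       prior test (0.0 = worst tester, 1.0 = best tester)
        """
        keep, drop = [], []
        for title in candidates:
            prior = prior_results.get(title)
            if prior is None:
                keep.append(title)  # never tested before: fair game
            elif prior["unfamiliar"] >= max_unfamiliar:
                drop.append((title, "mostly unfamiliar last time"))
            elif prior["popularity_pct"] <= bottom_quartile:
                drop.append((title, "bottom-quartile popularity last time"))
            else:
                keep.append(title)
        return keep, drop

    # Hypothetical usage:
    keep, drop = screen_candidates(
        ["Song A", "Song B", "Song C"],
        {"Song B": {"unfamiliar": 0.65, "popularity_pct": 0.50},
         "Song C": {"unfamiliar": 0.10, "popularity_pct": 0.12}},
    )
    # keep == ["Song A"]; Song B and Song C are dropped with their reasons.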

By going through this process, you can ensure that your hook selections are consistent with your strategy, without wasting hooks (and money) on songs that have failed miserably in the past.  By the same token, your list will be rich with songs that are hits and have consensus airplay.  The more songs that actually score well, the more flexibility you have during the sort.  You might even have enough strong-testers to create a “shuttle system” to keep your station sounding fresh in between tests.

  4. Sorting. You can sort a test by parsing through all those data columns, looking for an excuse to put a song in the library.  Or you can use the data to find reasons to keep bad songs out of your library.  It’s like American Idol – every song gets a chance to earn its way in, but a little Simon Cowell-esque attitude will serve you well.  Force songs to perform up to your criteria or they don’t get on the air.  Sort with this in mind: every single spin makes a marketing statement – it either supports or weakens your brand.
  5. Substantiation. Ronald Reagan said of the Soviets, “Trust, but verify.”  Beyond strategy, screening, selection, and sorting, there is still one vital stage that can screw things up, and that comes down to substantiation.  After the sort, your job is far from done.  Here are the key steps:
  • Report back – Have a concise answer for your GM (who probably approved the test) when she asks, “So, what did we learn from the AMT?”  A simple summary of these items will impress her, while forcing you to look beyond the rows of data and into the overall narrative of the test:

    High-scoring eras, sounds, and artists, and under-performing eras, sounds, and artists – who were the winners and losers?
    The best and worst-testing songs among the total sample, P1s, cumers, other stations’ fans/cumers, gender and age cells – what are the balancing challenges and solutions that can help the music reach its ratings goals?
    Number of titles in and out, and net library change by percentage – so what actually changed, and how has the music library been modified by the test?
    Sample hours, before and after – how is the station different now?  

  • Verify – It sounds fundamental, but it often gets lost in the rush to “get that new test data in before the book starts.”  Before the VP of Programming pays a visit and asks, “What are we doing?”, do the following:

    Test it.  Schedule, edit, and review test logs before it all hits the air.  Confine your mistakes to your office; get it right the first time on-air.
    Inform the staff.  Tell them about any clock changes, music shifts, etc.  They won’t know to drop one song instead of another unless you tell them something has changed.
    Compare airplay to strategy.  It’s never enough to just review music logs.  Sometimes actual airplay bears little resemblance to the clocks or the strategy.  Why?  Jocks mess with the log, drop the wrong song, or flip a song for time purposes.  You can take it to the bank: the songs that get dropped will be the best-testing gold tracks from primary-sound core artists.  (A simple sketch of this check appears after this list.)
    Listen to your station.  Make the time.  All that data, log-checking, and monitoring is great, but there’s no substitute for just listening.  Make sure the post-test version of your station passes your own subjective standard.  You know how your station is supposed to sound.  Don’t allow all those numbers and music scheduling software changes to hijack your station’s sound or identity.
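
Here is the sketch referenced above for the airplay-versus-strategy check: tally what actually made it to air by decade and compare it to the era targets you set back in the Screening step.  The log format, the era targets, and the five-point tolerance are all assumptions for illustration, not anyone’s actual scheduling software.

    from collections import Counter

    def era_drift(aired_log, era_targets, tolerance=0.05):
        """Flag decades whose actual airplay share misses the target.

        aired_log:   list of (title, decade) tuples from actual airplay
        era_targets: dict of decade -> target share, e.g. {"80s": 0.40}
        Returns a dict of decade -> (target share, actual share) for any
        decade that drifts more than `tolerance` from the strategy.
        """
        counts = Counter(decade for _, decade in aired_log)
        total = sum(counts.values()) or 1
        drift = {}
        for decade, target in era_targets.items():
            actual = counts.get(decade, 0) / total
            if abs(actual - target) > tolerance:
                drift[decade] = (target, round(actual, 2))
        return drift

    # Hypothetical usage: if the strategy calls for 40% '80s but dropped songs
    # pushed actual airplay to 28%, era_drift() surfaces the gap before the
    # ratings do.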

That point about simply listening is critical, because this article has been dominated by data-driven ideas and common-sense suggestions.  However, music testing is a tool to help you fine-tune your station – not alter its overall sound.  As the “Brand Manager,” you need to use your head and your gut, employing a combination of art and science.  Don’t let the data overshadow the station’s essence.

An AMT is an investment in the station’s future, but preparation, strategic thinking, and attention to detail will make the difference between a test that only partially achieves its goals and one that truly freshens and energizes the brand.

Jacobs Media consultant Ralph Cipolla is a music architect, specializing in format design and implementation. In his past life, Ralph worked for RCS, in addition to programming some of America’s finest Rock and Classic Rock stations. Since joining Jacobs Media, he has created a suite of music analytical tools, including SmartHook and SmartSort – software programs that help stations strategically select hooks and post-analyze music tests.