In the August 2012 Marketing: Sports posting, SLRG’s Jon Last speaks to the key elements required to measure return against sports marketing objectives.
We rightfully spend a lot of time in the sports marketing business defining and selling the unique touch points and entrée to a valued and often elusive audience provided by aligning brands with athletes and properties. I’ve ranted previously in this space about how too few of us take the time to go beyond simple GRP equivalencies and eyeball counting to evaluate sports marketing efforts. I’ve also acknowledged that doing this correctly isn’t easy, yet I’ve maintained that, done well, it can be quite insightful and value-additive. So, as we get ready for the onslaught of back-to-school preparation, what better opportunity to outline a basic primer for those who want to gain a better understanding of the impact of their activation and use that learning to improve upon it?
Targeting the Right People
In a much earlier posting, I marveled at how one seeking effectiveness measurement can render useless even the most well-conducted research by utilizing convenience samples, allowing respondents to self-select, or relying upon flawed incentives that create response bias. But let’s start with an even more basic tenet of effective measurement: assuring that the sampling frame is properly defined and obtained. That begins with an understanding of who the activating brand cares about reaching and assuring that those are the people being sought in surveys. The survey research world is fraught with “professional respondents”: those who “yes” (or “no”) you to death and those who find ways to speed through surveys. Any measurement worth its salt puts respondent-quality protocols in place to mitigate this.
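By way of illustration only, here is a minimal sketch of the kind of respondent-quality screen described above, written in Python with pandas. The column names (a duration_sec field and a set of rating-grid items) are assumptions for the example, not features of any particular survey platform, and real protocols typically layer on more checks than these two.

```python
# Hypothetical sketch of a basic respondent-quality screen.
# Assumes a DataFrame with a "duration_sec" column and a set of rating-grid columns.
import pandas as pd

def flag_low_quality(responses: pd.DataFrame, grid_cols: list[str]) -> pd.DataFrame:
    """Drop likely speeders and straight-liners from a survey file."""
    out = responses.copy()
    # Speeders: completed in less than a third of the median interview length.
    out["speeder"] = out["duration_sec"] < out["duration_sec"].median() / 3
    # Straight-liners: gave the identical answer to every item in a rating grid.
    out["straightliner"] = out[grid_cols].nunique(axis=1) == 1
    return out[~(out["speeder"] | out["straightliner"])]
```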
Careful sampling and screening should also be accompanied by an appropriate comparative context: a control population of demographically and behaviorally similar people. I recall a test we conducted for a property trying to attract a very high-end brand. They were lamenting the fact that a prior informal survey among their audience had yielded a single-digit purchase-interest score, and they felt that such information would be devastating to share with the brand they were courting. We asked whether that was really a bad number when compared with the other marketing vehicles the brand was utilizing. Of course, that information wasn’t readily available. But when we executed a properly designed study with a control sample that targeted those engaged in other forms of marketing, the “bad number” became a good one…and the sponsorship was sold.
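To make the arithmetic of that anecdote concrete, the small sketch below indexes a target audience’s score against a matched control, with the control set to 100. The figures are invented for illustration and are not the numbers from the study described above.

```python
# Illustrative only: the rates below are hypothetical.

def lift_index(target_rate: float, control_rate: float) -> float:
    """Index the target group's score against the control group (control = 100)."""
    return round(100 * target_rate / control_rate, 1)

# A single-digit purchase-interest score can still outperform the control.
target_purchase_interest = 0.08   # 8% among the property's audience (hypothetical)
control_purchase_interest = 0.05  # 5% among a matched control sample (hypothetical)

print(lift_index(target_purchase_interest, control_purchase_interest))  # 160.0
```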
Aligning Measurement With Objectives
The above observation is a good precursor to the second fundamental construct of how we measure activation impact, and it harkens back to another previous posting… If you don’t know what you are trying to achieve, how can you effectively measure it? These are conversations that need to happen at the outset. And I’d maintain that no two programs and no two sponsors are alike. That’s why I’ve often taken issue with large syndicated measurement efforts and normative scores: even if sponsor X and sponsor Y are in the same product category, let’s hope that each has sought to create a unique and differentiated positioning against target markets that may themselves vary. It’s imperative to understand those differences, as nuanced as they may be, so that the measurement effort can account for them. A good measurement program considers the creative and strategic brief behind a brand’s sports marketing objectives and builds that into the measurement instrument. We’re not doing a good enough job if we don’t consider the relevant message points being communicated, as well as the reasonable expectations of the activation against different audience targets.
Framing the Right Inquiries
I’ll save the dissertation on proper study and question design for another place, but as a starting point there are a number of impact measures beyond reach that no effort should overlook. The basics are to measure for target-audience (vs. control) shifts in brand awareness, perception, and association/alignment with specific key brand-essence elements. But, again, it’s incomplete to seek these if you aren’t doing so within a competitive, sponsor-blind framework similar to what you’d see in the marketplace. Here’s where well-designed measurement can really reap meaningful insights. In a properly designed test, it’s commonplace for us to observe statistically significant shifts in target consumers’ association of an activating brand with key desired attributes versus their association of those same attributes with a competitive brand. Again, too many outside measurement efforts neglect this component and thus inhibit an ability to go beyond the “reach scoreboard” to gain a more constructive understanding of which message points broke through and which require future amplification.
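As a hedged illustration of what a “statistically significant shift” versus control can mean in practice, the sketch below applies a standard two-proportion z-test to hypothetical attribute-association counts. The attribute, sample sizes, and counts are assumptions for the example only; a real study would choose its test and thresholds to fit the design.

```python
# Standard two-proportion z-test on hypothetical target-vs.-control association counts.
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for the difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# e.g., 152 of 400 exposed respondents associate the brand with "premium quality"
# versus 116 of 400 control respondents (hypothetical counts).
z, p = two_proportion_z(152, 400, 116, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```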