In this recent series of posts, I’ve been mapping out a complete re-formulation of the use of Voice of Customer (VoC) and traditional customer opinion research in the organization. It’s a part of the analytics puzzle that is shamefully under-utilized in most enterprises – especially given how inexpensive it is to put the people and technology in place to do a much better job. There are parts of this that I’ve been thinking and writing about for quite some time. Other parts are inspired by recent real-world examples – especially the use the Obama campaign made of customer attitudinal research in 2012. The way the data science team used survey research in that campaign was fundamentally different from (and significantly more effective than) the traditional research paradigm, and it is highly relevant to digital marketing.
The first post in the series described why I think that campaign’s use of survey research was importantly different. In last week’s post, I described the creation of a full Customer Intelligence System. Essentially, that system is designed to apply the principles of transactional data warehousing and reporting to the customer attitudes space. The goal is to achieve standardization of the data across multiple sources (from survey to social to call-center), deeper segmentation, and full enterprise visibility into customer attitudes.
Today, I wanted to tackle one of the unexpected ways you can use a full Customer Intelligence System.
The Targeting Problem
Pre-Semphonic, I spent many years doing database marketing in the Credit-Card industry. I learned that three pillars drive database marketing success: targeting, creative and offer. Probably none is more important than offer. If you’re putting lipstick on a pig, you have to be one fine artist or you have to be able to locate the very small segment of pig admirers to be successful. But a good offer/product will often succeed even with poor creative and haphazard targeting. Offer matters.
But offer is sometimes (not always) outside our domain as analysts, and even a good offer can be dramatically improved with the right targeting and good creative.
As a digital marketer/analyst, it’s your responsibility to optimize each of these pillars. We all know the old adage: “You can’t improve what you can’t measure.” So here’s the question for digital marketers: can you measure, for your digital campaigns, whether the offer, the creative, or the targeting was responsible for their success or failure? If not, how can you hope to optimize each, or even know which component needs optimization?
Digital marketing is, of course, highly measurable. We generally know how many visitors we sourced, how expensive it was to reach them, how many engaged with the content, and how many converted.
Suppose you have Campaign X and Campaign Y. Campaign X is for beauty products and Campaign Y is for sporting goods. You have the following information:
Campaign Reach Clicks Engaged Converted
X 100K 3K 1K 200
Y 100K 3K 500 50
It doesn’t take a genius to see that X is out-performing Y. The real question, though, is what to do about it. Is Y worse than X because the offer isn’t as good, because the targeting isn’t as good, or because the creative isn’t as good?
It’s pretty much impossible to answer that question with this data. Detailed examination of the Engagement behaviors MIGHT shed some light on whether or not the Offer or Creative for Y was a problem. But nothing in the behavioral stream will help you decide whether or not the targeting was good or bad.
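The arithmetic behind that table makes the point. A minimal sketch (using the illustrative campaign numbers from the table above) shows that Y’s click-through rate is identical to X’s while its downstream rates lag – and nothing in these behavioral rates says which pillar is at fault:

```python
# Funnel metrics for the two hypothetical campaigns above.
# Identical reach and clicks; behavioral data alone cannot say
# whether Y's weaker numbers come from offer, creative, or targeting.
campaigns = {
    "X": {"reach": 100_000, "clicks": 3_000, "engaged": 1_000, "converted": 200},
    "Y": {"reach": 100_000, "clicks": 3_000, "engaged": 500, "converted": 50},
}

for name, c in campaigns.items():
    ctr = c["clicks"] / c["reach"]                    # click-through rate
    engagement_rate = c["engaged"] / c["clicks"]      # engaged per click
    conversion_rate = c["converted"] / c["engaged"]   # converted per engaged
    print(f"{name}: CTR {ctr:.1%}, engaged {engagement_rate:.1%}, "
          f"converted {conversion_rate:.1%}")
```

Both campaigns show a 3% click-through rate; the divergence only appears downstream, and no column attributes it to a specific pillar.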
That’s a problem.
I think it’s fair to say that in the vast majority of enterprises I work with – even those spending millions of dollars on digital marketing and using highly sophisticated optimization techniques – no one has any idea whether their digital campaigns are well targeted.
That’s a problem.
And you can probably see where I’m headed for a solution.
Suppose you targeted a very short, tightly focused survey at digital responders to recover one or two core demographics, and perhaps even some exploration of the offer/creative. You might learn, for example, that:
X converts slightly better for its target audience (female) than Y does for its target audience (male). But the difference is slight (8% to 7%). The big difference between the two campaigns is the poor quality of the targeting for Campaign Y: more than half of its responders are outside the target.
By adding demographics to the analysis, we're able to pinpoint what's going wrong with Campaign Y. When the campaign reaches its audience, it works just fine. It's simply not reaching its audience most of the time.
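As a sketch of how the two derived measures work, here is the same calculation in code. The record counts are hypothetical (invented so they reconcile with the funnel table and the 8%/7% in-target conversion rates above); the measures that matter are targeting precision (share of responders inside the target segment) and in-target conversion (did the campaign work when it did hit its audience?):

```python
# Hypothetical survey-recovered demographics joined to campaign responders.
# Counts are illustrative only; they reconcile with the 3,000 clicks and
# 200 / 50 total conversions in the funnel table above.
survey = {
    "X": {"target": "female", "in_target": 2_400, "off_target": 600,
          "in_target_conversions": 192},  # 192 / 2,400 = 8%
    "Y": {"target": "male", "in_target": 700, "off_target": 2_300,
          "in_target_conversions": 49},   # 49 / 700 = 7%
}

for name, s in survey.items():
    responders = s["in_target"] + s["off_target"]
    precision = s["in_target"] / responders              # targeting precision
    in_target_cvr = s["in_target_conversions"] / s["in_target"]
    print(f"{name}: targeting precision {precision:.0%}, "
          f"in-target conversion {in_target_cvr:.1%}")
```

With these numbers, X’s precision is 80% against Y’s roughly 23% – the in-target conversion rates are nearly identical, so precision, not offer or creative, explains the gap.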
This is, obviously, an artificial example. But when we’ve been able to measure targeting precision for campaigns it has been consistently illuminating – often revealing mismatches between expectation and reality.
Digital marketers make lots of bad assumptions – and one of them is that click-through is a sufficient indication of targeting precision to live with. All of our experience suggests otherwise. We’ve frequently seen campaigns with 95% bounce rates. If click-through were proof of targeting, that would never happen.
Using survey data, you can measure targeting precision – the degree to which the respondent audience matches the target audience for a campaign. With behavioral integration, you can also separate out the impact of targeting from creative and offer. You know not only how accurate your targeting was, but whether or not the campaign worked when the right audience DID respond. This is a huge benefit to optimization.
Brand Marketing
There’s another use-case where targeting precision isn’t just an add-on to measuring campaign effectiveness but can be the primary technique. If you’re spending heavily on digital brand marketing – campaigns that have no immediate conversion outcome – your single best optimization point may be targeting precision. Measures of site engagement may be interesting, but for Brand Marketing it is often more important that the RIGHT people see the message than that the WRONG people engage with it.
Many of our clients have logged-in sections of their site. I regularly see digital campaigns targeted at prospects where half or more of the click-throughs log in (existing customers, not prospects at all). Log-in behavior is a useful check on targeting, but what if you don’t have a log-in to look at?
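Where log-in data does exist, the check itself is trivial. A minimal sketch, with hypothetical counts, of the share of clicks on a prospect-targeted campaign that came from logged-in (existing) visitors:

```python
# Hypothetical counts for a campaign aimed at NEW prospects.
# A high logged-in share means the campaign is largely reaching
# existing customers instead of its intended audience.
clicks_total = 3_000
clicks_logged_in = 1_600  # click-throughs that authenticated on-site

existing_customer_share = clicks_logged_in / clicks_total
print(f"{existing_customer_share:.0%} of prospect-campaign clicks "
      f"were existing customers")
```

Anything over a small fraction here signals a targeting problem before you ever look at engagement or conversion.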
Much of the same thinking applies to social campaigns. If you are running marketing campaigns to acquire Facebook Fans, then you need to make sure they really are useful relationships to have. Yes, that does mean more than demographics. But at least demographics are a start.
If your primary focus is digital brand marketing or social acquisition, then you simply cannot do your job well unless you measure targeting precision.
It’s one of the many ways a re-focused survey research program can add tremendous value to your digital marketing.