More thoughts from X Change on VOC Online Survey and Web Analytics Integration
[Before I dive into today’s topic, I wanted to highlight Jared Waxman’s comment on my last survey post. Jared made a really interesting point about a different way to think about sample size. I totally agree with his take, and it’s a different perspective than you usually get.]
My last post on online survey and behavioral integration made the point that the demands of behavioral analysis (which typically involves fairly small subsets of the total population) make large sample sizes an absolute requirement if you really want to take advantage of behavioral integration and not just say you have it. But the news about behavioral integration isn’t all bad – even if your primary interest is on the survey side. Larger sample sizes are a burden. But one of the biggest concerns I heard during X Change was the validity of online survey samples – and that’s something behavioral integration can help establish.
How do you know if your online survey sample really is representative?
From a pure survey perspective, you can take a methodological approach. A good methodology goes a long way toward getting you a good sample. But the online world is not as settled from a methodological perspective as the offline world and I heard lots of concern about the online surveys actually deployed at the enterprise level.
In the traditional survey world, key demographic variables can be used as a check on the validity of your sample. If you’re polling a presidential election and your gender, age or ethnicity breakdowns don’t match the registered-voter (or likely-voter) population, you know you have issues.
But demographic variables in the online world provide no similar reassurance. You have no independent measure of your actual demographics. Even if the online survey demographics match your broader audience profile, that’s no guarantee – it might even be suspicious. Of course, you may be able to match your online survey demographics with traditional research that includes a segmentation of your online audience.
Even a good match here is no guarantee that you aren’t mis-sampling significant types of traffic. You may be missing out, for instance, on your natural search non-customer base and you’d never realize it with this type of check.
If you are doing behavioral integration, however, there is a simple and powerful way to validate your sample. Simply run a set of behavioral profiles against your survey population. Obvious variables to look at include pages viewed, acquisition sources, loyalty, and success event counts. With a large sample size, you can even check on geography.
In addition to the high-level behavioral variables, it's not a bad idea to check usage of key content types. It's possible, for instance, that your "job-seeking" population looks rather like your "shopping" population from a macro-perspective of sourcing, page views or time-on-site. But it would be distressing to find that your increased satisfaction scores are being driven by an influx of job-seekers coming to your site in a down economy. Of course, surveys have an internal check (visit reason) that might catch this; but the behavioral measure is probably a more reliable indicator in this particular case.
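To make the idea concrete, here's a rough sketch in Python of the kind of comparison I mean. Everything here is hypothetical – the record format, the field names and the compare_profiles helper – and it assumes you can export visitor-level behavioral attributes for both your survey respondents and your full population:

    # Sketch: does the survey sample's distribution of a behavioral attribute
    # (e.g. acquisition source) match the full population's? A chi-square test
    # on the two count distributions is a simple way to ask that question.
    from collections import Counter
    from scipy.stats import chi2_contingency

    def compare_profiles(population, sample, attribute):
        pop_counts = Counter(v[attribute] for v in population)
        samp_counts = Counter(v[attribute] for v in sample)
        categories = sorted(set(pop_counts) | set(samp_counts))
        table = [[pop_counts.get(c, 0) for c in categories],
                 [samp_counts.get(c, 0) for c in categories]]
        chi2, p_value, dof, expected = chi2_contingency(table)
        return categories, table, p_value

    # Made-up visitor records for illustration:
    population = [{"source": "natural_search"}] * 500 \
               + [{"source": "direct"}] * 300 \
               + [{"source": "paid_search"}] * 200
    sample = [{"source": "natural_search"}] * 30 \
           + [{"source": "direct"}] * 45 \
           + [{"source": "paid_search"}] * 25

    cats, table, p = compare_profiles(population, sample, "source")
    print(cats, table, f"p={p:.4f}")  # a small p says the sample mix is skewed

You'd run the same check for each of the variables above – sourcing, loyalty, success events, content types – not just one.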
If the behavioral profiles for your survey sample match that of your broader population, it’s an extremely powerful confirmation that your sample is valid. This is a great way to once and for all quash doubts in your organization about whether an online survey is really representative.
This works even if you are pulling a criteria-based sample. If you’ve made the decision that you’re only going to survey visitors with 5+ pages, you can compare the survey population to the total population of 5+ page viewers. As a bonus, you can also get a good sense of what you’re missing when you do that.
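As a quick illustration of the criteria-based variant (again with made-up fields – here a hypothetical per-visitor page count), the same export lets you both build the right comparison baseline and quantify what the criterion excludes:

    # Sketch: restrict the baseline to visitors who met the survey criterion,
    # and measure how much traffic the criterion leaves out entirely.
    visitors = [{"pages": 2, "source": "direct"},
                {"pages": 7, "source": "natural_search"},
                {"pages": 12, "source": "paid_search"},
                {"pages": 1, "source": "natural_search"}]

    eligible = [v for v in visitors if v["pages"] >= 5]   # baseline for the check
    excluded = [v for v in visitors if v["pages"] < 5]    # what the survey never sees

    print(f"{len(excluded) / len(visitors):.0%} of visits fall outside the criterion")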
On the other hand, if you detect significant mismatches in the behavioral profiles, this can indicate problems in your sampling.
If you do find problems in your sample, you shouldn’t assume that you can just weight the population to fix the problem. Suppose that you find your survey sample consistently undercounts your natural search sourced population. It would be a mistake to simply weight what natural search respondents you have up to a representative number and assume you’ve fixed the problem.
You may only be sampling engaged natural searchers or natural searchers from brand terms. These populations may be fundamentally different than the broad natural search population.
You may need to weight by a combination of factors (e.g. pages viewed & source or pages viewed & source & visit number) or figure out how to actually adjust your sample.
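Here's a minimal sketch of what that multi-factor weighting might look like – simple cell weighting across a pages-viewed bucket and a source at once. The field names and bucket coding are hypothetical; each cell's weight is just its population share divided by its sample share:

    # Sketch: cell weighting over a combination of factors. Cells the sample
    # over-represents get weights below 1, under-represented cells above 1.
    from collections import Counter

    def cell_weights(population, sample, keys):
        def cell(v):
            return tuple(v[k] for k in keys)
        pop = Counter(cell(v) for v in population)
        samp = Counter(cell(v) for v in sample)
        n_pop, n_samp = sum(pop.values()), sum(samp.values())
        return {c: (pop[c] / n_pop) / (samp[c] / n_samp)
                for c in samp if pop[c] > 0}

    population = [{"bucket": "5+", "source": "natural_search"}] * 400 \
               + [{"bucket": "1-4", "source": "natural_search"}] * 400 \
               + [{"bucket": "5+", "source": "direct"}] * 200
    sample = [{"bucket": "5+", "source": "natural_search"}] * 80 \
           + [{"bucket": "1-4", "source": "natural_search"}] * 10 \
           + [{"bucket": "5+", "source": "direct"}] * 10

    for c, w in cell_weights(population, sample, ("bucket", "source")).items():
        print(c, f"weight={w:.2f}")

In the made-up numbers above, the under-sampled "1-4 pages, natural search" cell gets a weight of 4.0 – exactly the kind of adjustment a single-factor weight on source alone would get wrong, since it would over-boost the engaged natural searchers you already have plenty of.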
Either way, using behavioral data as an independent confirmation of survey sample is a powerful tool. Not only can it help you figure out if you are getting a good sample, it’s a tremendous tool for PROVING to your organization that you are getting a good sample.
With organizations increasingly relying on the data collected from online opinion research, behavioral validation of the sample population is something you really should consider. After all, it’s really bad if your sample is wrong and people are relying on your numbers. It’s almost as bad if your sample is good and people don’t trust the numbers. And even if your sample is good and people rely on your numbers, an independent confirmation and the peace-of-mind it brings is still worth having.
Is it worth doing a behavioral integration just to validate your sample? If you take your research seriously (and you should), then I think it is. There are many other benefits to integration, but this is an oft-overlooked and quite substantial win that is available without all that much effort or cost. It’s not often I get to write a sentence like that!
Hi Gary,
Good point! In fact, my research indicates that with online surveys there are often "...significant differences between respondents and other visitors: they tend to be more engaged with the website, they see different content and they even come from different geographical areas."
This conclusion is based on an empirical study which compares the behavior of survey respondents (43,154) with the behavior of all visitors (8.6 million) on websites from 12 different countries.
You can read the entire study here:
http://theartofwebanalytics.com/?p=20
Best regards
Christian
Netminers
Posted by: Christian Vermehren | October 22, 2009 at 05:41 AM
Really, really good idea. Never thought of it. Very excited about trying it out.
Posted by: Jacques Warren | November 06, 2009 at 11:09 AM