Before I dive into today's topics, there are a few items worth mentioning:
- If you're going to be out at Omniture Summit and would like to meet, drop me a line!
- X Change 2012 Registration is now OPEN! Click here to register - and remember, it's first come, first served when it comes to Huddle Selection.
- X Change Europe 2012 Registration is also OPEN!
- I'm doing a free Webinar on "Tracking the Full Customer Journey: Data Model for the Digital World" - it's a distillation of my recent whitepaper. Click here to register.
With all that to look forward to, I'm going to look back just a little bit today and revisit one of my eMetrics presentations. Over the past year, we've been working with the HP Store on a variety of research and analytics projects. At eMetrics, HP's Andrew Bakonyi and I co-presented on a subset of those projects (mostly because they are the ones that I've personally been involved with). Our theme was a simple one - the power of online opinion research to support customer and business analysis beyond the Web site.
HP faced some pretty stiff challenges in 2011: Executive team changes, major shifts in business strategy, and fairly dramatic systemic change in the core business. As we navigated those challenges, we found that online opinion research gave us a high-impact, low-cost, very quick method of understanding the scope of the problem and identifying better business and customer communication strategies.
We tackled three different analysis projects from 2011 in the presentation. The first was a rush analysis in the wake of last year's leaks around HP's Executive Team and strategic direction. The potential ramifications for the HP Store were huge. The view from inside was bleak, but we wanted to understand the actual, immediate impact on HP's business and brand. Naturally, in the wake of a crisis, speed of response is critical. We produced a custom survey instrument and deployed it on the HP Store site in the week following the announcements. To gather data quickly, we boosted the ask rate far beyond what we would normally use - collecting 3,000 surveys in five days. One week later, we'd turned around a complete analysis of consumer knowledge, attitudes, and impact around the news.
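To give a feel for the arithmetic behind boosting the ask rate, here's a minimal sketch of how you might size the invitation rate against a completion target. The traffic and response-rate numbers are purely hypothetical assumptions for illustration, not HP's actual figures:

```python
# Hypothetical back-of-the-envelope sizing for a survey ask rate.
# None of these inputs are HP's actual figures; they only illustrate the arithmetic.

target_completes = 3000      # completed surveys needed
collection_days = 5          # days available to collect
daily_visitors = 80000       # assumed unique daily visitors to the store (hypothetical)
response_rate = 0.02         # assumed share of invited visitors who complete (hypothetical)

completes_needed_per_day = target_completes / collection_days
invites_needed_per_day = completes_needed_per_day / response_rate
ask_rate = invites_needed_per_day / daily_visitors

print(f"Completes needed per day: {completes_needed_per_day:.0f}")
print(f"Invitations needed per day: {invites_needed_per_day:.0f}")
print(f"Required ask rate: {ask_rate:.1%} of visitors")
```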
Survey design was probably the most difficult part of this project. Each of the projects Andrew and I discussed required completely new survey instruments. In every case, we essentially threw out the existing site experience survey and used a much shorter, custom-designed survey. These surveys were typically 10-13 questions with significant branching. The only questions we kept from the ongoing site experience survey were the core demographic/firmographic/visit-intent questions. Not only were these important in their own right, but consistency on these questions also gave us the ability to track whether the visitor population was shifting in the wake of the news and whether our sample was representative relative to previous HP site experience surveys.
In the survey, we first explored whether consumers were aware of the HP announcements and, if aware, whether they understood them. For consumers who were aware, we then explored how the news was perceived. Did it raise concerns around company issues (stability, innovation), brand, or product (warranty, reliability)? If it did, how deep were those concerns? For consumers who weren't aware of the news, we wanted to find out what impact exposure would have. So we gave them controlled capsule summaries of key news points and then followed up with impact questions similar to those given to the already-aware group. This was delicate stuff and quite challenging to put together in a way that we felt was exploratory without being leading.
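To make the branching concrete, here's a rough sketch of the aware/unaware routing logic. The question names and blocks are invented for illustration; they're not the actual instrument:

```python
# Illustrative sketch of the aware/unaware branching logic.
# Question and block names are hypothetical; the real instrument differed.

def route_respondent(answers):
    """Return the list of question blocks a respondent should see."""
    blocks = ["core_demographics", "visit_intent"]   # retained from the ongoing survey

    if answers.get("aware_of_announcements") == "yes":
        blocks.append("comprehension_check")         # did they understand the news?
        blocks.append("impact_questions")            # company, brand, product concerns
    else:
        blocks.append("capsule_summary")             # controlled summary of key news points
        blocks.append("impact_questions")            # same impact battery as the aware group

    return blocks

# Example: an unaware visitor sees the capsule summary before the impact battery
print(route_respondent({"aware_of_announcements": "no"}))
```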
One of the nice aspects of opinion research is that it can be a powerful corrective to the "inside" view. What feels like a cataclysm in Santa Clara may be more like a mild temblor in Dubuque. Many folks were only loosely aware of the announcements and, even among the aware, the perception was by no means uniformly negative. We found a similar pattern when we pushed the "unaware" by priming them with the news.
Interesting patterns also emerged when we looked at how and when the news did impact brand perception. As we used to find so often in political opinion research, much of what you choose to hear in the news depends on what you already believe. On the whole, brand advocates (we used questions about ownership and consideration set to define this group) were the least likely to take the news as bad. But there were some issues around which brand advocates tended to react more negatively than average. It's another old lesson from politics - knowing what might alienate your base is every bit as important as knowing how to persuade the undecided, and perhaps even more so.
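For those curious how a segmentation like that gets operationalized, here's a simple illustrative rule. The field names and classification logic are hypothetical stand-ins for the ownership and consideration-set questions we actually used:

```python
# Illustrative advocate segmentation based on ownership and consideration-set answers.
# Field names and rules are hypothetical, shown only to convey the approach.

def classify_respondent(resp):
    owns_brand = resp.get("owns_hp_product", False)
    in_consideration_set = "HP" in resp.get("consideration_set", [])

    if owns_brand and in_consideration_set:
        return "brand_advocate"
    elif in_consideration_set:
        return "considerer"
    else:
        return "non_considerer"

respondents = [
    {"owns_hp_product": True,  "consideration_set": ["HP", "Dell"]},
    {"owns_hp_product": False, "consideration_set": ["Apple"]},
]
print([classify_respondent(r) for r in respondents])
```

Once respondents carry a segment label like this, comparing reactions to the news by segment is a straightforward crosstab.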
The research also gave us considerable insight into which of several factors (warranty, reliability, company stability, product innovation) were most impacted for different types of consumers. Again, it's no surprise that Corporate Buyers, Consumer Laptop Buyers, and Tablet Buyers had quite different levels of awareness, concern, and focus. But in the wake of a crisis, there's a tendency to over-compensate and default to broadcast messaging. The ability to segment communications is perhaps even more important in a crisis than in more tranquil, business-as-usual times.
In the second project, we focused on the extent to which tablets were cannibalizing laptop sales. Tablets are, rightly, considered a new category. But their explosive growth has coincided with a flattening in the global laptop market, and it's an open question how closely the two are related. Our survey instrument for this project focused on the subset (fairly modest, it turned out) of HP Store laptop buyers who expressed some interest in a tablet as an alternative.
In the survey, we then explored their likelihood to choose a tablet over a PC and which of a variety of different features or laptop improvements (smaller and lighter form-factor, touch interface, fun package of standard applications, price discounts, etc.) were most likely to shift the consumer into the laptop camp.
This type of research has multiple functions. First, it's designed to help answer the basic question around the degree of cannibalization. Perhaps even more important, it's designed to help us understand how to position the laptop versus the tablet for key segments of deciders (and when such positioning is even necessary). This type of research is a natural table-setter for site testing. As with the initial project, we created the survey here very quickly and inexpensively (a few weeks' time - this wasn't a crash project). It was much cheaper to deploy the survey than to test multiple creative alternatives on the site. And by exploring the options by segment, we can make site testing significantly more focused and productive.
Indeed, one of the major themes of our presentation was the close relationship between online survey research and site testing. Not only can online research drive testing alternatives, it should provide the core attitudinal framework within which creative is developed.
This project also illustrated one of our other, less sanguine, themes: the limitations of online survey research. Within each of the projects we discussed, there were significant questions left unanswered. In no case could we say anything about the impact on broader consumer attitudes. Since we're sampling on-site, we're getting a population that is inherently skewed. Was there a population so put off by the news from HP that they never visited the store? Are most consumers who buy tablets not even bothering to research laptops? Given our sample limitations, there's no way to know. This doesn't mean that online survey research is useless. It does mean that you have to be aware, honest, and up-front about what you don't know and can't answer.
It's also why we're exploring more aggressive use of off-site survey research with HP - whether from private panels or independent sites like CNET. Supplementing the on-site view with off-site, independent populations becomes much more important when you start using online opinion research for more than site experience questions. But even when you're deploying on a panel, the opportunity to survey your site visitors shouldn't be missed. You can go far deeper with a far larger sample (and behavioral ties) on your own site than is ever possible offsite.
Our third project delved into the drivers of consumer choice around laptop selection. In a market rapidly approaching saturation (at least domestically), it's possible to grow only by taking market share. That puts a tremendous premium on understanding how consumers are categorizing each brand. In this project, we explored a variety of dimensions including brand consideration set, feature consideration set, key choice drivers, and segmentation by consumer type, role, and attitudes.
In some ways, this was the most classic and broadly interesting of the projects we did in 2011 with HP. It was a perfect illustration of the difference between collecting site experience information via online surveys and doing real customer research.
The results helped us categorize the HP brand in relation to a wide set of competitors. They also helped identify which feature sets were important to consumers within any given consideration set (Apple vs. HP buyers are different from ASUS vs. HP buyers or Acer vs. HP buyers). Perhaps most interesting was the set of questions we used to get potential buyers to self-segment along a Price/Value/Image scale. By comparing these self-segmentations with the features buyers also identified as important, we were able to identify some significant mismatches - cases where buyers were reluctant to categorize themselves a certain way despite their actual drivers of choice. Not every image-conscious buyer is necessarily comfortable with a laptop called "ENVY," and not every shopper looking for the cheapest machine will describe themselves as a discount shopper.
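One straightforward way to surface those mismatches is a crosstab of the self-reported segment against a segment inferred from feature-importance answers. The sketch below uses pandas with made-up data just to show the shape of the analysis; the column names and segment labels are illustrative:

```python
import pandas as pd

# Made-up respondent data: self-reported segment vs. the segment implied by
# which features the respondent rated as most important. Illustrative only.
df = pd.DataFrame({
    "self_segment":    ["value", "price", "image", "value", "price", "image"],
    "implied_segment": ["value", "image", "image", "price", "price", "price"],
})

# Off-diagonal cells are the mismatches worth digging into - e.g., shoppers whose
# feature choices look image-driven but who describe themselves as price buyers.
print(pd.crosstab(df["self_segment"], df["implied_segment"]))
```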
Again, this is invaluable information for steering a testing program and understanding what types of creative tests might actually pay dividends.
When Andrew and I reflected on these three projects, a couple of points really stood out. First, the types of survey we used were fundamentally different from traditional site experience survey instruments. Second, each project demanded quite a different set of questions - putting a real premium on flexible survey deployment. Third, online research is very inexpensive and flexible relative to most other research methods. None of the projects here took more than 3-4 weeks of work (not counting collection, of course) and none of them cost more than $10K. When you've got a reasonable technology for creating and deploying online surveys in place, it's a shame not to use it for true customer research. Finally, every project illustrated at least some of the limitations of on-site intercept survey research. When you only sample site visitors, you can only speculate about what you are missing. There are ways around this problem - and we believe that the combination of on-site survey intercepts with third-party or panel-based survey research provides a compelling overall solution for delivering deep online customer research that is fast, flexible, and cost-efficient.
I've summarized some of our broader thoughts around survey opinion research into a short two-page Point-of-View document. If you're interested, just drop me a line and I'll send it your way.