My new blog site is open at http://measuringthedigitalworld.com
For the latest posts, please go to the new site.
Thanks!
Gary
November 08, 2015 in Web Analytics | Permalink | Comments (0)
In the last couple of months, I’ve been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something, you have to be able to make good decisions around it. Most enterprises can’t do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn’t because people don’t understand what’s necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven’t done is change their processes in ways that take advantage of those capabilities.
I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and, really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do, because you’re just sitting there talking into the phone for 50 minutes – but I think this one, especially the back half, went well.] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.
If you’re not, and you really want to see it, drop me a line – I’m told we can get guest registrations set up by request.
At the end of that webinar I got quite a few questions. I didn’t get a chance to answer them all and I promised I would - so that's what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar so do read on even if you didn't catch it (but watch the darn webinar).
Q: Are metrics valuable to stakeholders even if they don't tie in to revenues/cost savings?
Absolutely. In point of fact, revenue isn’t even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can, or should tie back to dollars. There are whole classes of metrics that are important but won’t directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the engagement proxy, which is, in turn, a kind of proxy for revenue. These, too, can be useful, but they are far more dangerous. My advice is to never use a proxy metric unless you’ve done the work to prove it’s a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can’t close the loop on performance with behavioral data, use re-survey methods. It’s absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I’ve argued for a while that rather than metrics, our focus should be on delivering models embedded in tools – this lets people run their business, not just look at history.
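What does doing that validation work look like in practice? At a minimum, something like the check sketched below – with real matched customer data (behavioral or re-surveyed) in place of the invented numbers:

```python
# Minimal sketch (invented data): before adopting an "engagement" proxy, check
# that it actually moves with the downstream value you care about - here, a
# matched / re-surveyed measure of later customer revenue.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical per-customer data: proxy score at signup vs. 12-month revenue
engagement_score = [2, 5, 3, 8, 7, 1, 9, 4, 6, 8]
revenue_12m      = [40, 90, 55, 160, 120, 20, 150, 70, 100, 170]

r = pearson(engagement_score, revenue_12m)
print(f"proxy vs. revenue correlation: r = {r:.2f}")  # only trust the proxy if this holds up
```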
Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.
A very related question – and it’s interesting because I actually didn’t talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics): is reach a valid metric? It can be, but reach shouldn’t be taken as is. As with my answer above, the value of an impression is quite different on every channel. If you’re not doing the work to figure out the value of an impression in a channel, then what’s the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it’s a mistake to try to optimize digital campaigns using reported metrics – even direct conversion and dollars. I just saw a tremendous presentation from Drexel’s Elea Feit at the Philadelphia DAA Symposium that echoed (and improved on) what I’ve been saying for years: namely, that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it’s worth trying to prove whether they are right – whether that’s because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.
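To make the control-group point concrete, here’s a minimal sketch of incremental lift measurement with a true holdout; the audience sizes and conversion counts are invented:

```python
# Minimal sketch (hypothetical counts): measure the incremental lift of a social
# campaign with a true holdout, rather than crediting every tracked conversion.

def incremental_lift(conv_exposed, n_exposed, conv_holdout, n_holdout):
    rate_exposed = conv_exposed / n_exposed
    rate_holdout = conv_holdout / n_holdout
    incremental_rate = rate_exposed - rate_holdout
    return incremental_rate, incremental_rate * n_exposed  # per-person lift, extra conversions

lift_rate, extra_conversions = incremental_lift(
    conv_exposed=1_150, n_exposed=100_000,  # randomly selected audience shown the ads
    conv_holdout=980, n_holdout=100_000,    # matched audience held out from the campaign
)
print(f"incremental conversion rate: {lift_rate:.2%}")
print(f"conversions actually caused by the campaign: {extra_conversions:,.0f}")
```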
Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?
This is a tough one because it’s still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program. Constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.
Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?
I don’t think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it’s all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I’d argue that the methods I’ve described are even more important when you don’t have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can’t be overstated when you don’t have a clear conversion to optimize to.
Q: What is the minimum size of analytics team to be able to successfully implement this at scale?
Another tricky question to answer, but I’ll try not to weasel out of it. Think about it this way: to drive real transformation at enterprise scale, you need at least one analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. For most large enterprises, that’s still an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you’re going to need at least small teams in core digital reporting, analytics, implementation, and probably data science as well. If you’re at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger. It’s not like a single person in a startup can’t drive real change, but that’s just not the way things work in the large enterprise. Large enterprise environments are complex in every respect and it takes a significant number of people to drive effective processes.
Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?
Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrously over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and are doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.
Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?
This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I’ll say, too, that I’m increasingly skeptical of our ability to do this. As a consultant, I’m honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days’ time, understand their strategic goals. Okay…put this way, it’s obviously absurd. And, in fact, that’s really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time, and many have worked on the enterprise side and made exactly those strategic decisions. That’s a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.
Having said that, I’m often personally in a situation where I’m asked to do exactly what I’ve just said is absurd, and chances are, if you’re a third-party measurement company, you have the same problem. You have to get at something that’s very hard and very complex in a very short amount of time, and your expertise (like mine) is in analytics or technology, not insurance or plumbing or publishing or automotive.
Here are a couple of things I’ve found helpful. First, take the journeys yourself. It’s surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands-on, do it. Having a customer’s viewpoint on the journey never hurts and it can give you insight your customers should but often don’t have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I’m often frantically googling acronyms and keywords when I’m leading those executive conversations. Third, check out the competition. If you complete a lead on the client’s website, try it on their top three competitors too. What you’ll see often sets the table nicely for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency toward generality. Big generalities are easy and they sound smart, but they usually don’t mean much of anything. The very best leaders don’t ever retreat into useless generality, but most of us will fall into it all too easily.
Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?
The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a "deep" analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experiment teams, developing VoC systems, etc. We can also provide – and here’s where being part of a big practice really helps – PM and Change Management experts who can help drive a broader transformation strategy. Finally, we can help build the program soup to nuts. Mind you, that doesn’t mean we do everything. I’m a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.
Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is a sustained engagement, though it usually starts out hot and then gets gradually smaller over time.
Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent). Do you have any examples or other papers that address how/if this can be done?
A couple of years back I was on a panel at a conference in San Diego and one of the panelists started every response with "In my book…". It didn’t seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the "book guy". Now I’m going to go all book guy on you. The heart of my book, "Measuring the Digital World," is an attempt to answer this exact question. It’s by far the most detailed explication I’ve ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order it now. So I’m also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:
http://semphonic.blogs.com/semangel/2012/05/digital-segmentation.html
and – even more important - here’s the link to pre-order the book!
That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.
November 04, 2015 in Web Analytics | Permalink | Comments (0)
Tags: agile, agile methods, analytics, controlled experimentation, customer experience, digital, digital analytics, digital transformation, Ernst & Young, EY, Gary Angel, Measuring the digital world, testing, VoC, Voice of Customer, web analytics
The key to effective digital transformation isn’t analytics, testing, customer journeys, or Voice of Customer. It’s how you blend these elements together in a fundamentally different kind of organization and process. In the DAA Webinar (link coming) I did this past week on Digital Transformation, I used this graphic to drive home that point:
I’ve already highlighted experience engineering and integrated analytics in this little series, and the truth is I wrote a post on constant customer research too. If you haven’t read it, don’t feel bad. Nobody has. I liked it so much I submitted it to the local PR machine to be published and it’s still grinding through that process. I was hoping to get that relatively quickly so I could push the link, but I’ve given up holding my breath. So while I wait for VoC to emerge into the light of day, let’s move on to controlled experimentation.
I’ll start with definitional stuff. By controlled experimentation I do mean testing, but I don’t just mean A/B testing or even MVT as we’ve come to think about it. I want it to be broader. Almost every analytics project is challenged by the complexity of the world. It’s hard to control for all the constantly changing external factors that drive or impact performance in our systems. What looks like a strong and interesting relationship in a statistical analysis is often no more than an artifact produced by external factors that aren’t being considered. Controlled experiments are the best tool there is for addressing those challenges.
In a controlled experiment, the goal is to create a test whereby the likelihood of external factors driving the results is minimized. In A/B testing, for example, random populations of site visitors are served alternative experiences and their subsequent performance is measured. Provided the selection of visitors into each variant of the test is random and there is sufficient volume, A/B tests make it very unlikely that external factors like campaign sourcing or day-parting will impact the test results. How unlikely? Well, taking a sample at random doesn’t guarantee a representative one. You can flip a fair coin fifty times and get fifty heads, so even a sample collected in a fully random manner may come out quite biased; it’s just not very likely. The more times you flip, the more likely your sample will be representative.
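A quick simulation shows why volume matters. The traffic mix and sample sizes below are invented, but the pattern is general: as the sample grows, the chance of a badly imbalanced split shrinks fast:

```python
import random

# Minimal sketch (hypothetical numbers): simulate random assignment of visitors
# to variants A and B and check how badly an external factor (mobile share)
# can end up imbalanced between the groups at different sample sizes.

def mobile_share_gap(n_visitors, mobile_share=0.4):
    totals = {"A": 0, "B": 0}
    mobile = {"A": 0, "B": 0}
    for _ in range(n_visitors):
        variant = random.choice("AB")
        totals[variant] += 1
        mobile[variant] += random.random() < mobile_share
    return abs(mobile["A"] / totals["A"] - mobile["B"] / totals["B"])

for n in (100, 1_000, 10_000):
    worst = max(mobile_share_gap(n) for _ in range(500))
    print(f"n={n:>6}: worst mobile-share gap across 500 simulated tests = {worst:.3f}")
```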
Controlled experiments aren’t just the domain of website testing though. They are a fundamental part of scientific method and are used extensively in every kind of research. The goal of a controlled experiment is to remove all the variables in an analysis but one. That makes it really easy to analyze.
In the past, I’ve written extensively on the relationship between analytics and website testing (Kelly Wortham and I did a whole series on the topic). In that series, I focused on testing as we think of it in the digital world – A/B and MV tests and the tools that drive those tests. I don’t want to do that here, because the role for controlled experimentation in the digital enterprise is much broader than website testing. In an omni-channel world, many of the most important questions – and most important experiments – can’t be done using website testing. They require experiments which involve the use, absence or role of an entire channel or the media that drives it. You can’t build those kinds of experiments in your CMS or your testing tool.
I also appreciate that controlled experimentation doesn’t carry with it some of the mental baggage of testing. When we talk testing, people start to think about Optimizely vs. SiteSpect, A/B vs. MVT, landing page optimization and other similar issues. And when people think about A/B tests, they tend to think about things like button colors, image A vs. image B and changing the language in a call-to-action. When it comes to digital transformation, that’s all irrelevant.
It's not that changing the button colors on your website isn't a controlled experiment. It is; it’s just not a very important one. It’s also representative of the kind of random “throw stuff at a wall” approach to experimentation that makes so many testing programs nearly useless.
One of the great benefits of controlled experimentation is that, done properly, the idea of learning something useful is baked into the process. When you change the button color on your Website, you’re essentially framing a research question like this:
Hypothesis: Changing the color of Button X on Page Y from Red to Yellow will result in more clicks of the button per page view
An A/B test will indeed answer that question. However, it won’t necessarily answer ANY other question of higher generality. Will changing the color of any other button on any other page result in more clicks? That’s not part of the test.
Even with something as inane as button colors, thinking in terms of a controlled experiment can help. A designer might generalize this hypothesis to something that’s a little more interesting. For example, the hypothesis might be:
Hypothesis: Given our standard color palette, changing a call-to-action on the page to a higher contrast color will result in more clicks per view on the call-to-action
That’s a somewhat more interesting hypothesis and it can be tested with a range of colors with different contrasts. Some of those colors might produce garish or largely unreadable results. Some combinations might work well for click-rates but create negative brand impressions. That, too, can be tested and might perhaps yield a standardized design heuristic for the right level of contrast between the call-to-action and the rest of a page given a particular color palette.
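Cast that way, any single contrast variant can still be evaluated with the kind of two-proportion test most testing tools run under the hood. Here’s a minimal sketch; the click and view counts are made up:

```python
from math import sqrt
from statistics import NormalDist

# Minimal sketch (hypothetical counts): compare click-through for the current
# call-to-action color against a higher-contrast variant with a two-proportion z-test.

def lift_and_significance(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return (p_b - p_a) / p_a, p_value

lift, p = lift_and_significance(clicks_a=480, views_a=12_000,   # current contrast
                                clicks_b=545, views_b=12_100)   # higher contrast
print(f"relative lift: {lift:.1%}, p-value: {p:.3f}")
```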
The point is, by casting the test as a controlled experiment we are pushed to generalize the test in terms of some single variable (such as contrast and its impact on behavior). This makes the test a learning experience; something that can be applied to a whole set of cases.
This example could be read as an argument for generalizing isolated tests into generalized controlled experiments. That might be beneficial, but it’s not really ideal. Instead, every decision-maker in the organization should be thinking about controlled experimentation. They should be thinking about it as a way to answer questions analytics can’t AND as a way to assess whether the analytics they have are valid. Controlled experimentation, like analytics, is a tool to be used by the organization when it wants to answer questions. Both are most effective when used in a top-down, not a bottom-up, fashion.
As the sentence above makes clear, controlled experimentation is something you do, but it's also a way you can think about analytics - a way to evaluate the data decision-makers already have. I’ve complained endlessly, for example, about how misleading online surveys can be when it comes to things like measuring sitewide NPS. My objection isn’t to the NPS metric, it’s to the lack of control in the sample. Every time you shift your marketing or site functionality, you shift the distribution of visitors to your website. That, in turn, will likely shift your average NPS score – irrespective of any other change or difference. You haven’t gotten better or worse. Your customers don’t like you less or more. You’ve simply sampled a somewhat different population of visitors.
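A tiny worked example makes the mechanism plain. The segment scores and traffic mixes below are invented, but the arithmetic is exactly the sample-mix problem just described:

```python
# Minimal sketch (hypothetical numbers): the blended NPS moves even though
# neither segment's score changes - only the visitor mix shifts.

segments = {"loyal customers": 60, "campaign-driven prospects": -10}  # NPS by segment

def blended_nps(mix):
    return sum(mix[s] * nps for s, nps in segments.items())

before = {"loyal customers": 0.70, "campaign-driven prospects": 0.30}
after  = {"loyal customers": 0.50, "campaign-driven prospects": 0.50}  # new campaign shifts traffic

print(f"blended NPS before: {blended_nps(before):+.0f}")  # +39
print(f"blended NPS after:  {blended_nps(after):+.0f}")   # +25
```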
That’s a perfect example of a metric/report which isn’t very controlled. Something outside what you are trying to measure (your customer’s satisfaction or willingness to recommend you) is driving the observed changes.
When decision-makers begin to think in terms of controlled experiments, they have a much better chance of spotting the potential flaws in the analysis and reporting they have, and making more risk-informed decisions. No experiment can ever be perfectly controlled. No analysis can guarantee that outside factors aren’t driving the results. But when decision-makers think about what it would take to create a good experiment, they are much more likely to interpret analysis and reporting correctly.
I’ve framed this in terms of decision-makers, but it’s good advice for analysts too. Many an analyst has missed the mark by failing to control for obvious external drivers in their findings. A huge part of learning to “think like an analyst” is learning to evaluate every analysis in terms of how to best approximate a controlled experiment.
So if controlled experimentation is the best way to make decisions, why not just test everything? Why not, indeed? Controlled experimentation is tremendously underutilized in the enterprise. But having said as much, not every problem is amenable to or worth experimenting on. Sometimes, building a controlled experiment is very expensive compared to an analysis; sometimes it’s not. With an A/B testing tool, it’s often easier to deploy a simple test than to conduct an analysis of a customer preference. But if you have a hypothesis that involves re-designing the entire website, building all that creative to run a true controlled experiment isn’t going to be cheap, fast or easy.
Media mix analysis is another example of how analysis/experimentation trade-offs come into play. If you do a lot of local advertising, then controlled experimentation is far more effective than mix modeling to determine the impact of media and to tune for the optimum channel blend. But if much of your media buy is national, then it’s pretty much impossible to create a fully controlled experiment that will allow you to test mix hypotheses. So for some kinds of marketing organizations, controlled experimentation is the best approach to mix decisions; for others, mix modeling (analysis in other words – though often supplemented by targeted experimentation) is the best approach.
This may all seem pretty theoretical, so I’ll boil it down to some specific recommendations for the enterprise:
I see lots of organizations that think they are doing a great job testing. Mostly they aren’t even close. You’re doing a great job testing when every decision maker at every level in the organization is thinking about whether a controlled experiment is possible when they have to make a significant decision. When those same decision-makers know how to interpret the data they have in terms of its ability to approximate a controlled experiment. And when building controlled experiments is deeply integrated into the analytics research team and deployed across digital and omni-channel problems.
October 25, 2015 in Web Analytics | Permalink | Comments (0)
Near the end of my last post (describing the concept of analytics across the enterprise), I argued that full spectrum analytics would provide “a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.”
By my count, that admittedly too long sentence contains the word journey four times and clearly puts understanding the customer journey at the heart of analytics understanding in the enterprise.
I think that’s right.
If you think about what senior decision-makers in an organization should get from analytics, nothing seems more important than a good understanding of customers and their journeys. That same understanding is powerful and important at every level of the organization. And by creating that shared understanding, the enterprise gains something almost priceless – the ability to converse consistently and intelligently, top-to-bottom, about why programs are being implemented and what they are expected to accomplish.
This focus on the journey isn’t particularly new. It’s been almost five years since I began describing Two-Tiered Segmentation as fundamental to digital; it’s a topic I’ve returned to repeatedly and it’s the central theme of my book. In a Two-Tiered Segmentation, you segment along two dimensions: who visitors are and what they are trying to accomplish in a visit. It’s this second piece – the visit intent segmentation – that begins to capture and describe customer journey.
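For readers who haven’t seen the idea before, here’s a deliberately toy sketch of what a two-tiered view of a visit looks like. The visitor tiers, page types, and intent rules below are all invented for illustration; the real methodology is far richer:

```python
# Minimal sketch (invented rules and fields): a two-tiered view of a visit -
# who the visitor is (tier 1) crossed with what the visit was trying to
# accomplish (tier 2), inferred from the functional content consumed.

from collections import Counter

def visit_intent(page_types):
    """Score a visit's intent from the functional types of pages it touched."""
    counts = Counter(page_types)
    if counts["support"] >= 2:
        return "get help"
    if counts["product"] >= 2 and counts["cart"] == 0:
        return "research a purchase"
    if counts["cart"] >= 1:
        return "buy"
    return "browse"

def two_tier_cell(visitor_type, page_types):
    return (visitor_type, visit_intent(page_types))

# Example visits: (visitor tier, functional page types viewed in the visit)
visits = [
    ("new prospect", ["home", "product", "product", "spec-sheet"]),
    ("existing customer", ["home", "support", "support"]),
    ("existing customer", ["product", "cart", "checkout"]),
]

print(Counter(two_tier_cell(v, p) for v, p in visits))
```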
But if Two-Tiered Segmentation is the start of a measurement framework for customer journey, it isn’t a complete solution. It’s too digitally focused and too rooted in displayed behaviors - meaning it’s defined solely by the functionality provided by the enterprise not by the journeys your customers might actually want to take. It’s also designed to capture the points in a journey – not necessarily to lay out the broader journey in a maximally intelligible fashion.
Traditional journey mapping works from the other end of the spectrum. Starting with customers and using higher-level interview techniques, it’s designed to capture the basic things customers want to accomplish and then map those into more detailed potential touchpoints. It’s exploratory and specifically geared toward identifying gaps in functionality where customers CAN’T do the things they want or can’t do them in the channels they’d prefer.
While traditional journey mapping may feel like the right solution to creating enterprise-wide journey maps, it, too, has some problems. Because the techniques used to create journey maps are very high-level, they provide virtually no ability to segment the audience. This leads to a “one-size-fits-all” mentality that simply isn’t correct. In the real world, different audiences have significantly different journey styles, preferences and maps, and it’s only through behavioral analysis that enough detail can be exhumed about those segments to create accurate maps.
Similarly, this high-level journey mapping leads to a “golden-path” mentality that belies real world experience. When you talk to people in the abstract, it’s perfectly possible to create the ideal path to completion for any given task. But in the real world, customers will always surprise you. They start paths in odd places, go in unexpected directions, and choose channels that may not seem ideal. That doesn’t mean you can’t service them appropriately. It does mean that if you try to force every customer into a rigid "best" path you'll likely create many bad experiences. This myth of the golden path is something we’ve seen repeatedly in traditional web analytics and it’s even more mistaken in omni-channel.
In an omni-channel world, the goal isn’t to create an ideal path to completion. It’s to understand where the customer is in their journey and adapt the immediate touchpoint to maximize their experience. That’s a fundamentally different mindset – a network approach, not a golden path – and it’s one that isn’t well captured or supported by traditional journey mapping.
There’s one final aspect to traditional journey mapping that I find particularly troublesome – customer experience teams have traditionally approached journey mapping as a one-time, static exercise.
Mistake.
The biggest change digital brings to the enterprise is the move away from traditional project methodologies. This isn’t only an IT issue. It’s not (just) about Agile development vs. Waterfall. It’s about recognizing that ALL projects, in nearly all their constituent pieces, need to work in iterative fashion. You don’t build once and move on. You build, measure, tune, rebuild, measure, and so on. Continuous improvement comes from iteration. And the implication is that analytics, design, testing, and, yes, development should all be set up to support continuous cycles of improvement.
In the well-designed digital organization, no project ever stops.
This goes for journey mapping too. Instead of one huge comprehensive journey map that never changes and covers every aspect of the enterprise, customer journeys need to be evolved iteratively as part of an experience factory approach. Yes, a high-level journey framework does need to exist to create the shared language and approach that the organization can use. But like branches on a tree, the journey map should constantly be evolved in increasingly fine-grained and detailed views of specific aspects of the journey. If you’ve commissioned a one-time customer experience journey mapping effort, congratulations; you’re already on the road to failure.
The right approach to journey mapping isn’t two-tiered segmentation or traditional customer experience maps; it’s a synthesis of the two that blends a high-level framework driven primarily by VoC and creative techniques with more detailed, measurement- and channel-based approaches (like Two-Tiered Segmentation) that deliver highly segmented, network-based views of the journey. The detailed approaches never stop developing, but even the high-level pieces should be continuously iterated. It’s not that you need to constantly re-work the whole framework; it’s that in a large enterprise, there are always new journeys, new content, and new opportunities evolving.
More than anything else, this need for continuous iteration is what’s changed in the world and it’s why digital is such a challenge to the large enterprise.
A great digital organization never stops measuring customer experience. It never stops designing customer experience. It never stops imagining customer experience.
That takes a factory, not a project.
September 08, 2015 in Web Analytics | Permalink | Comments (1)
Tags: agile, analytics, customer experience, customer journeys, digital analytics, digital methodologies, digital segmentation, Ernst & Young, experience engineering, EY, Gary Angel, journey mapping, segmentation, waterfall, web analytics
Enterprises do analytics. They just don’t use analytics.
That’s the first, and for me the most frustrating, of the litany of failures I listed in my last post that drive digital incompetence in the enterprise. Most readers will assume I mean by this assertion that organizations spend time analyzing the data but then do nothing to act on the implications of that analysis. That’s true, but it’s only a small part of what I mean when I say that enterprises don’t use analytics. Nearly every enterprise that I work with or talk to has a digital analytics team ranging in size from modest to substantial. Some of these teams are very strong, some aren’t. But good or not-so-good, in almost every case, their efforts are focused on a very narrow range of analysis. Reporting on and attributing digital marketing, reporting on digital consumption, and conversion rate optimization around the funnel account for nearly all of the work these organizations produce.
Is that really all there is to digital analytics?
Though I’ve been struggling to find the right term (I’ve called it full-stack, full-spectrum and top-down analytics), the core idea is the same – every decision about digital at every level in the enterprise should be analytically driven. C-Level decision-makers who are deciding how much to invest in digital and what types of products or big-initiatives might bear fruit, senior leaders who are allocating budget and fleshing out major campaigns and initiatives, program managers who are prioritizing audiences, features and functionality, designers who are building content or campaign creative; every level and every decision should be supported and driven by data.
That simply isn’t the case at any enterprise I know. It isn’t even close to the case. Not even at the very best of the best. And the problem almost always begins at the top.
How do really senior decision-makers decide which products to invest in and how to carve up budgets? From a marketing perspective, there are organizations that efficiently use mix-modeling to support high-level decisions around marketing spend. That’s a good thing, but it’s a very small part of the equation. Senior decision-makers ought to have constantly before them a comprehensive and data-driven understanding of their customer types and customer journeys. They ought to understand which of those journeys they as a business perform well at and at which they lag behind. They ought to understand which audiences they don’t do well with, and what the keys to success for those audiences are. They ought to have a deep understanding of how previous initiatives have impacted those audiences and journeys – which have been successful and which have failed.
This mostly just doesn’t exist.
Journey mapping in the organization is static, old-fashioned, non-segmented and mostly ignored. There’s no VoC surfaced to decision-makers except NPS – which is entirely useless for actually understanding your customers (instead of understanding what they think about you). There is no monitoring of journey success or failure – either overall or by audience. Where journey maps exist, they exist entirely independent of KPIs and measurement. There is no understanding of how initiatives have impacted either specific audiences or journeys. There is no interesting tracking of audiences in general, no detailed briefings about where the enterprise is failing, no deep-dives into potential target populations and what they care about. In short, C-Level decision-makers get almost no interesting or relevant data on which to base the types of decisions they actually need to make.
Given that complete absence of interesting data, what you typically get is the same old style of decision-making we’ve been at forever. Raise digital budgets by 10% because it sounds about right. Invest in a mobile app because Gartner says mobile is the coming thing. Create a social media command center because company X has one. This isn’t transformation. It isn’t analytics. It isn’t right.
Things don’t get better as you descend the hierarchy of an organization. The senior leaders taking those high-level decisions and fleshing out programs and initiatives lack all of those same things the C-Level folks lack. They don’t get useful VoC, interesting and data-supported journey mapping, comprehensive segmented performance tracking, or interesting analysis of historical performance by initiative either. They need all that stuff too.
Worse, since they don’t have any of those things and aren’t basing their decisions on them, most initiatives are shaped without having a clear business purpose that will translate into decisions downstream around targeting, creative, functionality and, of course, measurement.
If you’re building a mobile app to have a mobile app, not because you need to improve key aspects of a universally understood and agreed upon set of customer journeys for specific audiences, how much less effective will all of the downstream decisions about that app be? From content development to campaign planning to measurement and testing, a huge number of enterprise digital initiatives are crippled from the get-go by the lack of a consistent and clear vision at the senior levels about what they are designed to accomplish.
That lack of vision is, of course, fueled by a gaping hole in enterprise measurement – the lack of a comprehensive, segmented customer journey framework that is the basis for performance measurement and customer research.
Yes, there are pockets in the enterprise where data is used. Digital campaigns do get attributed (sometimes) and optimized (sometimes). Funnels do get improved with CRO. But even these often ardent users of data work, almost always, without the big picture. They have no better framework or data around that big-picture than anyone else and, unlike their counterparts in the C-Suite, they tend to be focused almost entirely on channel level concerns. This leads, inevitably, to a host of rotten but fully data-driven decisions based on a narrow view of the data, the customer, and the business function.
There are, too, vast swathes of the mid and low level digital enterprise where data is as foreign to day-to-day operations as Texas BBQ would be in Timbuktu. The agencies and internal teams that create campaigns, build content and develop tools live their lives gloriously unconstrained by data. They know almost nothing of the target audiences for which the content and campaigns are built, they have no historical tracking of creative or feature delivery correlated to journey or audience success, they get no VoC information about what those audiences lack, struggle with or make decisions using. They lack, in short, the basic data around which they might understand why they are building an experience, what it should consist of, and how it should address the specific target audiences. They generally have no idea, either, how what they build will be measured or which aspects of its usage will be chosen by the organization as Key Performance Indicators.
Take all this together and what it means is that even in the enterprise with a strong digital analytics department, the overwhelming majority of decisions about digital – including nearly all the most important choices – are made with little or no data.
This isn’t a worst-case picture. It’s almost a best-case picture. Most organizations aren’t even dimly aware of how much they lack when it comes to using data to drive digital decision-making. Their view of digital analytics is framed by a set of preconceptions that limit its application to evaluating campaign performance or optimizing funnels.
That’s not full-spectrum analytics. It’s one little ray of light – and that a sickly, purplish hue – cast on an otherwise empty gray void. To transform the enterprise around digital – to be really good at digital with all the competitive advantage that implies – it takes analytics. But by analytics I don’t mean this pale, restricted version of digital analytics that claims for its territory nothing but a small set of choices around which marketing campaign to invest in. I mean, instead, a form of analytics that provides support for decision-makers of every type and at every level in the organization. An analytics that provides a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.
You can’t be great, or even very good, at digital without all this.
A flat-out majority of the enterprises I talk to these days are going on about transforming themselves with digital and all that implies for customer-centricity and agility. I’m pretty sure I know what they mean. They mean creating a siloed testing program and adding five people to their digital analytics team. They mean tracking NPS with their online surveys. They mean the sort of "agile" development that has led the original creators of agile to abandon the term in despair. They mean creating a set of static journey maps which are used once by the web design team and which are never tied to any measurement. They mean, in short, to pursue the same old ways of doing business and of making decisions with a gloss of customer experience, agile development, analytics, and testing that changes almost nothing.
It’s all too easy to guess how transformative and effective these efforts will be.
August 30, 2015 in Web Analytics | Permalink | Comments (2)
Tags: agile, agile web development, analytics, customer experience, customer journey mapping, digital, digital analytics, digital experience, digital marketing, digital marketing optimization, digital segmentation, digital transformation, Ernst & Young, EY, Gary Angel, mobile
With a full first draft of my book in the hands of the publishers, I’m hoping to get back to a more regular schedule of blogging. Frankly, I’m looking forward to it. It’s a lot less of a grind than the "every day after work and all day on the weekends" pace that was needful for finishing "Measuring the Digital World"! I’ve also accumulated a fair number of ideas for things to talk about; some directly from the book and some from our ongoing practice.
The vast majority of “Measuring the Digital World” concerns topics I’ve blogged about many times: digital segmentation, functionalism, meta-data, voice-of-customer, and tracking user journeys. Essentially, the book proceeds by developing a framework for digital measurement that is independent of any particular tool, report or specific application. It’s an introduction not a bible, so it’s not like I covered tons of new ground. But, as will happen any time you try to voice what you know, some new understandings did emerge. I spent most of a chapter trying to articulate how the impact of self-selection and site structure can be handled analytically; this isn’t new exactly, but some of the concepts I ended up using were. Sections on rolling your own experiments with analytics not testing, and the idea of use-case demand elasticity and how to measure it, introduced concepts that crystallized for me only as I wrote them down. I’m looking forward to exploring those topics further.
At the same time, we’ve been making significant strides in our digital analytics practice that I’m eager to talk about. Writing a book on digital analytics has forced me to take stock not only of what I know, but also of where we are in our profession and industry. I really don’t know if “Measuring the Digital World” is any good or not (right now, at least, I am heartily sick of it), but I do know it’s ambitious. Its goal is nothing less than to establish a substantive methodology for digital analytics. That's been needed for a long time. Far too often, analysts don’t understand how measurement in digital actually works and are oblivious to the very real methodological challenges it presents. Their ignorance results in a great deal of bad analysis; bad analysis that is either ignored or, worse, is used by the enterprise.
Even if we fixed all the bad analysis, however, the state of digital analytics in the enterprise would still be disappointing. Perhaps even worse, the state of digital in the enterprise is equally bad. And that’s really what matters. The vast majority of companies I observe, talk to, and work with, aren’t doing digital very well. Most of the digital experiences I study are poorly integrated with offline experiences, lack any useful personalization, have terribly inefficient marketing, are poorly optimized by channel and – if at all complex – harbor major usability flaws.
This isn’t because enterprises don’t invest in digital. They do. They spend on teams, tools and vendors for content development and deployment, for analytics, for testing, and for marketing. They spend millions and millions of dollars on all of these things. They just don’t do it very well.
Why is that?
Well, what happens is this:
Enterprises do analytics. They just don’t use analytics.
Enterprises have A/B testing tools and teams and they run lots of tests. They just don’t learn anything.
Enterprises talk about making data-driven decisions. They don’t really do it. And the people who do the most talking are the worst offenders.
Everyone has gone agile. But somehow nothing is.
Everyone says they are focused on the customer. Nobody really listens to them.
It isn't about doing analytics or testing or voice of customer. It's about finding ways to integrate them into the organization's decision-making. In other words, to do digital well demands a fundamental transformation in the enterprise. It can’t be done on a business as usual basis. You can add an analytics team, build an A/B testing team, spend millions on attribution tools, Hadoop platforms, and every other fancy technology for content management and analytics out there. You can buy a great CMS with all the personalization capabilities you could ever demand. And almost nothing will change.
Analytics, testing, VoC, agile, customer-focus...these are the things you MUST do if you are going to do digital well. It isn’t that people don’t understand what's necessary. Everyone knows what it takes. It’s that, by and large, these things aren't being done in ways that drive actual change.
Having the right methodology for digital analytics is a (small) part of that. It’s a way to do digital analytics well. And digital analytics truly is essential to delivering great digital experiences. You can’t be great – or even pretty good – without it. But that’s clearly not enough. To do digital well requires a deeper transformation; it’s a transformation that forces the enterprise to blend analytics and testing into their DNA, and to use both at every level and around every decision in the digital channel.
That’s hard. But that’s what we’re focusing on this year. Not just on doing analytics, but on digital transformation. We’re figuring out how to use our team, our methods, and our processes to drive change at the most fundamental level in the enterprise - to do digital differently: to make decisions differently, to work differently, to deliver differently and, of course, to measure differently.
As we work through delivering on digital transformation, I plan to write about that journey as well: to describe the huge problems in the way most enterprises actually do digital, to describe how analytics and testing can be integrated deep into the organization, to show how measurement can be used to change the way organizations actually think about and understand their customers, and to show how method and process can be blended to create real change. We want to drive change in the digital experience and, equally, change in the controlling enterprise, for it is from the latter that the former must come if we are to deliver sustained success.
August 23, 2015 in Web Analytics | Permalink | Comments (1)
Tags: analysis, analytics, big data, Digital, digital analytics, digital experience, digital optimization, digital transformation, Ernst & Young, EY, Gary Angel, Hadoop, testing, VoC, voice-of-customer, web analytics
One of our long time team members, Ryan Praskievicz, recently published a terrific blog post reflecting on how he got started in Digital Analytics (with Semphonic). Since all my free time is still tied up trying to get the book draft finished, I’m grateful for the opportunity to point readers his way. If you’re curious about how careers in digital analytics get started (pretty randomly mostly – the way careers often get started) it’s a great read. It’s also worth reading if you’re on the hiring side of things. If hunting for a job feels random, so, too, does the hiring process from the company side. Understanding both sides of the equation has benefits – and this is the type of problem that’s often best understood in a novelistic, anecdotal fashion.
One aspect of our hiring at Semphonic that always both surprised and pleased me was how varied in interest, educational background and outlook the people we ended up hiring were. We mostly hired people like Ryan who had no real experience in the field - which certainly made it seem like a crapshoot. That it worked well on a fairly consistent basis is food for thought when it comes to reflecting on what really matters when you hire someone. Years of experience or the right degree are rarely on the list...
Enjoy Ryan's post here!
July 30, 2015 in Web Analytics | Permalink | Comments (0)
Tags: analytics, analytics hiring, angel, digital analytics, digital analytics careers, digital analytics hiring, Gary Angel, jobs
Way back at the beginning of June I knew we were running very late on getting X Change organized but I still figured to target late October or early November. But it’s become clear that we are just too late to get things organized and do the normal Conference marketing in that timeframe. So for now, we’ve made the decision to push it – probably to next spring. It’s a bummer but in some ways it works better...for me at least.
As regular readers may have noticed, I haven’t blogged much in the past six weeks either. That isn’t because I’ve stopped writing. The thing is, I’m finally writing a book (Measuring the Digital World) and the due date for the first draft is in the middle of August.
Looming is the word I think people use for that kind of deadline.
Like every other first-time author, I’m way behind and trying very hard to find whatever free time I can to write. So in truth I’ve been cranking out what amounts to two or three blogs daily and I decided I just couldn’t afford the time away from the book for anything else – including my regular posts. When I finish (on time I hope), I’ll be back to my regular schedule and I have a load of ideas and material accumulated.
July 16, 2015 in Web Analytics | Permalink | Comments (1)
Tags: digital Analytics, Gary Angel, X Change
Last Friday I took part in the DAA’s latest version of Ask Anything where DAA Members can send in questions and the designated responder (me for the day) does his or her best to say something sensible in return. At the end of the day, Jim Sterne sent in a final question on the role of the analytics warehouse and machine learning that I simply couldn’t resist expanding into full essay form. If you’re confused about the role of machine learning in the warehouse or suspicious of the claims of technology vendors touting the benefits of massive correlation to discover the keys to your business, read on!
Since technically I’m off-duty for Ask Anything (Friday is so yesterday. I’m parked at a Starbucks early on Saturday morning while my youngest daughter takes some bizarre High-School entrance exam called the SSAT – just guessing, but does the first S stand for Scam?) I thought I’d take a little more time and expand the answer to this last question into a full-on blog and post outside the forum as well. I hope that doesn’t violate any DAA contracts or anything – I’d hate to have some data assassin tracking me by my digital exhaust and switching all my 1’s to 0’s…
For those not consuming the entire DAA Ask Anything thread, here’s Jim’s question:
Have we reached a point / have you seen anybody be successful / have you seen anybody really try to build a data lake of customer information and use machine learning to derive correlations from it? All I'm looking for is a tool and a data set that will say:
It turns out people buy more stuff online when it snows. Is this correlation
[X] Worthless? [ ] Curious? [ ] Interesting? [ ] Fascinating? [ ] Actionable ?
The data suggests that sending email #256b after customers of type 83h have viewed product 87254 results in a 298.4% increase in sales. Is this correlation
[ ] Worthless? [ ] Curious? [ ] Interesting? [ ] Fascinating? [X] Actionable ?
Are we getting any closer to turning that corner??
It's a great ask and gets to what is perhaps the central issue around analytics and analytics technology in the enterprise; namely, what’s the role of the analytics warehouse and how seriously should we take the claims of big data machine learning advocates?
I’ll start with a short answer to Jim's first and most direct question and then I’m going to expand on his two sub-questions to cast some light on the nature of that answer.
Have I seen clients build a data lake and get value out of it? Unequivocally yes. Our best clients – the ones really getting value out of digital analytics – are nearly all using some form of advanced analytics warehouse / data lake and doing so very successfully.
Does that analytic value come from machine learning? Rarely. The vast majority of analytic value has come from traditional statistical techniques or from straight algorithmic selections (programmatic or SQL access to the detail data). I actually believe there is much value to be had from non-traditional techniques, but that’s more my theory than proven field fact and I absolutely am not a big fan of the massive correlation approach that I see most commonly advocated by machine learning folks (though I don’t want to cast a broad-brush here – there’s lots of different flavors of machine learning and the term itself is a bit ambiguous).
So let’s tackle Jim’s examples in more detail:
It turns out people buy more stuff online when it snows. Is this correlation
[X] Worthless? [ ] Curious? [ ] Interesting? [ ] Fascinating? [ ] Actionable ?
When I first read and replied to this, I assumed Jim meant that the correlation was random and unexpected. But I realized afterward that to Jim it was more in the nature of the obvious. Either way, I’m going to disagree with Jim’s selection here (though later I’m going to agree with what I take to be his broader point). Weather is important, and while it might be obvious to Jim, it’s left out of analytics and optimization on a routine basis.
As it happens, weather is often a highly predictive and essential variable when it comes to retail models. In Mrs. Fields’ store baking models, for example, weather was the single biggest variable factor. It has a huge impact on whether people will buy a warm cookie or not. It also has a big impact on whether people will shop in store and, of course, the degree to which they might shift that behavior online.
And weather impacts aren’t limited to retail, where people buy online because they can’t get to the store and are going stir-crazy.
I remember from personal experience an interesting case where we were analyzing the PPC campaigns for an Internet site focused on real-time traffic. When we did the analysis, we found (big surprise) that giant storms in the Northeast drove massive increases in site traffic. That may seem obvious. No, that is obvious. But here’s the thing: they weren’t regulating their PPC buys that way. They had a simple fixed daily budget. That meant that on beautiful summer days they were spending the same amount as on blizzard days. Their PPC budgets and daily caps were keeping them from expanding their buys in December, so they were simply losing out on the opportunity to capture more (and, by our measurement, more engaged and valuable) customers. We found other important, local effects (like closures), and by shifting their buying model to something more local and weather-aware we were able to dramatically improve their overall PPC performance. Actionable.
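The fix itself is almost embarrassingly simple once you decide to make it. A sketch, assuming you can pull some kind of morning weather-severity score for each market (the function and the multiplier below are illustrative, not a real bidding API):

    def weather_adjusted_cap(base_daily_cap, severity):
        """Scale a PPC daily cap by storm severity (0 = clear day, 1 = major storm).
        The multiplier is a made-up placeholder; in practice you'd calibrate it
        against observed demand lift and marginal cost per acquisition."""
        return base_daily_cap * (1.0 + 2.0 * severity)

    # A $500 cap on a clear day becomes $1,500 during a blizzard
    print(weather_adjusted_cap(500, 0.0))   # 500.0
    print(weather_adjusted_cap(500, 1.0))   # 1500.0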
We often find retailers’ PPC vendors ignoring weather – and that’s almost always a BAD idea.
Weather matters in all sorts of places and around all sorts of use cases. I’d make a small wager that folks are less likely to shop for life insurance or 529s on beautiful Saturday afternoons than on rainy, dreary ones – and that may matter when it comes to deciding when you drop (and pay for) a display ad. But how many display campaigns in financial services are optimized for weather? A pretty small percentage, I reckon.
Understanding when people do something has real meaning in digital (and, even more, outside digital), and weather is an important part of when they do something.
So I’m going to disagree with Jim’s immediate answer (Worthless). Whether because analysts don’t think about the obvious or because program managers don’t act on it, finding out that people buy more stuff online when it snows is useful and actionable – not least because it lets you amp up your PPC buys and capture your competitors’ mostly offline customers (driven online by being snow-bound and open to new brands) while those competitors aren’t working hard enough to incorporate weather.
On the other hand, I’m on board with what I take Jim’s deeper point to be (and sorry for hijacking it into a bunch of “weather matters” examples – especially since I probably missed the original irony). When Mrs. Fields built their model, when we did our PPC analysis or built our utilities model, we didn’t use machine learning techniques (narrowly defined as massive, undirected analysis of variables to discover important relationships) to stumble on weather as an important variable. We knew it was likely to be significant and we modeled it the old-fashioned way to understand the depth and importance of its impact.
The real question is how many important variables are there that analysts don’t know about, and is it worth randomly assembling data to find them? I’m very, very, very skeptical about this. It’s true that analysts don’t always understand the business they’re modeling very well, and maybe weather is a great example of that. To solve this problem, you can:
If you're reading this and you picked C, you’re probably a salesperson for a technology vendor or a data science consultancy. Can unexpected correlations and important variables sometimes be discovered? Of course. But most businesses actually have a pretty decent understanding of the key factors driving performance even if they can’t describe exactly how those key factors relate or interact. When that’s the case, massive correlation is just a big, truly massive, very impressive waste of time.
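If you want to see why, here’s a tiny simulation (no real data involved) of what “correlate everything against everything” actually buys you: with a few hundred purely random variables, you will reliably find pairs that look significantly correlated by chance alone.

    import numpy as np

    rng = np.random.default_rng(42)
    n_obs, n_vars = 200, 300
    data = rng.normal(size=(n_obs, n_vars))   # pure noise: no real relationships exist

    corr = np.corrcoef(data, rowvar=False)    # correlate every variable with every other
    upper = corr[np.triu_indices(n_vars, k=1)]

    # With n=200, |r| > 0.2 looks "significant" (roughly p < .005) when tested in isolation,
    # yet a couple hundred of these ~45,000 noise-on-noise pairs will clear that bar anyway
    print(f"{(np.abs(upper) > 0.2).sum()} of {upper.size} pairs exceed |r| = 0.2 by chance alone")

Without a prior reason to care about a variable, you have no good way to tell those chance hits apart from the real ones.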
Here are some thoughts that seem to me so basic and obvious that I’m almost embarrassed to write them down, except that I often meet people who don’t seem to grasp them:
I feel a bit like Martin Luther nailing these five (90 short of Martin) theses to the wall of the big data church. To me they seem so obvious that it’s hard to understand how they could possibly be controversial.
Which brings me to Jim's second sub-question and one that I think we can handle quickly because we are in complete agreement:
The data suggests that sending email #256b after customers of type 83h have viewed product 87254 results in a 298.4% increase in sales. Is this correlation
[ ] Worthless? [ ] Curious? [ ] Interesting? [ ] Fascinating? [X] Actionable ?
Yes, clearly right. And by putting these two examples forward, I assume Jim means to sneakily suggest that most of the value in analytics comes from very unsurprising places and only on the tail of considerable work. We are always charmed by stories of sudden analytic insight, swift brilliance and amusing, unexpected correlations. But real business analytics adds value mostly by delving, in patient and disciplined detail, into what we think is probably true.
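And the patient, disciplined version of Jim’s second example looks less like an epiphany and more like this: pull the detail data, define the segment precisely, and measure the lift against a holdout. The table and column names here are hypothetical stand-ins, not anyone’s real schema:

    import pandas as pd

    # Hypothetical detail-level event table sitting in the data lake
    events = pd.read_parquet("customer_events.parquet")

    # Straight algorithmic selection: type-83h customers who viewed product 87254
    segment = events[(events["customer_type"] == "83h") & (events["viewed_product"] == 87254)]

    # Compare those who received email #256b against a randomly held-out control group
    # (got_email_256b is assumed to be a boolean flag set by the campaign tool)
    sent = segment[segment["got_email_256b"]]
    holdout = segment[~segment["got_email_256b"]]

    lift = sent["post_view_sales"].mean() / holdout["post_view_sales"].mean() - 1
    print(f"Observed lift from email #256b: {lift:.1%}")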
It’s the difference between the real, day-to-day practice of science and the “genius” models that dominate public imagination. I’m not a big believer in the genius models, even for true genius. Mostly, I suspect it’s a lot more work than people like to think.
But if I’m not confident how genius works, I am sure that genius is not a strategy.
If you want to build an effective analytics team, the right strategy is to focus your attention on the problems you know matter and the data you think is probably important. Know your business? Absolutely always. Get the data you think you need? Definitely. Massive correlations of stuff that you doubt makes a difference? Occasionally…maybe.
Here’s the link (you must - and should - be a member) to the DAA thread...
June 13, 2015 in Web Analytics | Permalink | Comments (0)
Tags: analytics, Big data, correlation, DAA, digital analytics, Digital Analytics Association, enterprise analytics, Ernst & Young, EY, Gary Angel, machine learning
I won’t pretend to be an expert on UK politics and even less on UK polling. But in the wake of the disastrous performance of UK pollsters in predicting the outcome of the general election there, I think it’s worth reflecting on the lessons to be learned. If you’re not familiar with the broad storyline, it goes something like this: in the days leading up to the election, the polls showed a toss-up between the incumbent Conservatives and the Labour party, with expectations of a divided Parliament and much confusion. It didn’t go down that way – the Conservatives won an outright majority of seats in a fairly decisive victory.
Now, the polling wasn’t as far off as that simple story may imply. There is a powerful disconnect between seats and raw voting percentages (as witnessed strikingly in the Scottish elections). You can win much less than 50% of the vote and still win far more than 50% of the seats, meaning that poll numbers aren’t necessarily reflective of seat wins. But there’s little doubt that the pollsters had the election quite wrong.
After the election, Nate Silver (who I happen to think is pretty frigging great at this stuff) and team weighed in with a series of blogs which discussed the errors in their model. There’s a ton of interesting stuff in this discussion, but of particular interest to me was the following quote:
“Polls, in the U.K. and in other places around the world, appear to be getting worse as it becomes more challenging to contact a representative sample of voters. That means forecasters need to be accounting for a greater margin of error.”
[There’s also a supporting and fascinating follow-up by 538’s Ben Lauderdale which shows how a huge part of the error in the prediction was driven by a seemingly very reasonable choice about which of two alternative questions around party voting likelihood would better represent actual preference. They chose to use (as seems reasonable) the much more specific version of the voting question, but it turned out that the very general version was closer to capturing reality. I’m not sure there’s a clear lesson here (I doubt it’s always true that the general form of the question will work better) EXCEPT that if you have two seemingly similar questions that yield very different results, you’d best beware!]
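Coming back to Silver’s point about margin of error: the arithmetic is simple enough to sketch, and it shows how quickly “a greater margin of error” eats a close race. The design-effect figure below is purely illustrative, my own stand-in and nothing from 538’s model:

    import math

    def margin_of_error(p, n, design_effect=1.0, z=1.96):
        """95% margin of error for a proportion, inflated by a survey design effect."""
        return z * math.sqrt(design_effect * p * (1 - p) / n)

    # A 1,000-person poll at 50/50: about 3.1 points under textbook assumptions,
    # about 4.4 points if hard-to-reach respondents double the effective variance
    print(round(margin_of_error(0.5, 1000) * 100, 1))                     # 3.1
    print(round(margin_of_error(0.5, 1000, design_effect=2.0) * 100, 1))  # 4.4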
If you’re a data analyst, I’d expect these dives into the mechanics of voter modeling to be pretty fascinating. But of far more importance to non-analysts should be the troubling implications of Mr. Silver’s take on polling and its declining accuracy. Because here’s the thing – his comments apply just as surely (maybe even more surely) to opinion research done for commercial purposes. There are real differences between political opinion research and our commercial variants. But both suffer from the growing challenge of getting a good sample.
In fact, when you get right down to it, the biggest difference between polling for political work vs. commercial research is that the political pollsters have a real proof point. When the votes are counted, they know if they were right or wrong.
Your enterprise survey research almost certainly doesn’t have a proof point. If your voice of customer opinion research is fundamentally skewed, how would you even know?
Not only does that tell me that your commercial polling likely has all the same errors as the political polls, it probably means it’s far worse. Why? When you don’t have a proof point, you’re that much less incented to make sure you get the results right, and you have far less opportunity to correct your mistakes.
I strongly suspect that the work teams like Silver’s do is more careful and better than the overwhelming majority of the survey work done in the commercial sector. If that’s true, and if they are having a hard time getting it right, think about what it means for enterprise voice of customer research.
I often get significant push-back from enterprise analysts skeptical of online voice of customer. I get that. And what I’m saying may reasonably be taken as grounds for that skepticism. But those same skeptics aren’t taking aim at the increasing challenge of getting accurate VoC results in the offline world. I’m pretty sure that commercial opinion research is significantly less accurate now than it was twenty years ago (just try random digit dialing these days!), and it may well be harder to get a representative sample offline than online. Certainly, I see no grounds in today’s world for assuming the opposite. Getting a good sample is hard and getting harder. Without a good sample, you are in constant danger of drawing the wrong conclusions from the data. And don’t even start on that tired and utterly incorrect idea that you protect yourself from this by “just looking at trends.”
So what’s the solution?
First, I think people need to reevaluate their biases around offline vs. online surveys. Traditional attitudes about online surveys and their biases revolve around a couple of issues that I think are largely historical. Back in the days when intercept surveys first became popular, it was understood that online samples were unrepresentative of the broad population. True then, but nowadays online populations in the U.S. are probably quite a bit more representative out-of-the-box than what’s easily obtained from most traditional techniques (canvassing, mail, phone and mall intercepts all have huge biases these days). Of course, putting a survey on your own website automatically introduces a significant selection bias. But it’s now routine and easy to pop surveys on 3rd party platforms that eliminate that bias entirely. Many social media platforms, of course, do have significant biases in their user population and likely in your fan base. But given the incredible reach of platforms like Facebook, there’s no reason why you can’t build out excellent samples based on the top social networks. In all these cases, what you retain in the online world is the ability to collect large numbers of respondents very rapidly and at far less cost than with traditional techniques. I’d be the last person in the world to argue that online voice of customer isn’t challenging. But it’s a bit frustrating to see the offline world get a free pass on the same or worse set of sampling problems.
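Whichever channel you collect in, one concrete way to take the bias question seriously is to weight respondents back to a known population distribution. Here’s a minimal post-stratification sketch; the age bands and shares are made up purely for illustration:

    import pandas as pd

    respondents = pd.DataFrame({
        "age_band":  ["18-34", "18-34", "35-54", "55+"],
        "satisfied": [1, 0, 1, 1],
    })

    # Hypothetical shares: what you actually collected vs. the population you want to represent
    sample_share = {"18-34": 0.50, "35-54": 0.25, "55+": 0.25}
    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    respondents["weight"] = respondents["age_band"].map(
        lambda band: population_share[band] / sample_share[band]
    )

    weighted_sat = (respondents["satisfied"] * respondents["weight"]).sum() / respondents["weight"].sum()
    print(f"Unweighted: {respondents['satisfied'].mean():.0%}  Weighted: {weighted_sat:.0%}")

Weighting won’t rescue a sample that simply excludes whole groups, but it does keep an over-collected group from quietly dominating your results.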
Second, people need to find a proof point for their voice of customer data. If you’re going to pay attention to it and let it influence your decision-making, you need to find ways to test its accuracy and predictive power. This isn’t just important for your peace of mind. It’s important because without those proof points you have no way to improve your VoC. Go read Lauderdale’s description of the likelihood-to-vote questions they used and tell me which you would have chosen! Not only will establishing proof points give you a clear path to improving your Voice of Customer research, I venture to suggest that it might also push you to improve the actionability of that research. If your opinion research is too fuzzy to yield predictive models, it probably isn’t very interesting to begin with!
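What a proof point might look like in practice: join stated intent from the survey to what those same respondents actually did later, and see whether the survey predicts anything at all. The files and fields below are hypothetical:

    import pandas as pd

    surveys = pd.read_csv("voc_responses.csv")    # assumed columns: respondent_id, stated_intent (1-5)
    behavior = pd.read_csv("purchases_90d.csv")   # assumed columns: respondent_id, purchased (0/1)

    joined = surveys.merge(behavior, on="respondent_id")

    # If stated intent is worth anything, actual purchase rates should climb with it
    print(joined.groupby("stated_intent")["purchased"].mean())

    # The correlation gives a crude single-number proof point to track survey over survey
    print(joined["stated_intent"].corr(joined["purchased"]))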
There’s no equivalent of Silver and his mates in the enterprise space – you won’t find much discussion around the accuracy of our brand tracking and product surveys. It isn’t because they are better. It’s because nobody knows if they are any good at all and, without being forced to, nobody is anxious to put them to the test.
When it comes to enterprise VoC, perhaps it’s time to call an election.
May 26, 2015 in Web Analytics | Permalink | Comments (1) | TrackBack (0)
Tags: brand awareness, brand tracking, data science, Ernst & Young, EY, Gary Angel, intercept surveys, online surveys, opinion research, polling, social media research, VoC, voice of customer