In the measurement and analytics business, it's natural to assume that the more information we have, the better off we are. I'm not talking here about management reporting, where the demands of presentation and audience attention frequently necessitate a highly abbreviated, laser-focused culling of data. I'm talking about situations where, as a decision-maker, I'm seriously grappling with data to make a decision. And by and large, I share the presumption that the more relevant information I have, the better off I am.
But there do exist cases where I, personally, try to avoid looking at the numbers. And as I think about those cases, they reveal a common pattern that has implications for the practice of web measurement and for analytics in general.
I’ve mentioned before that one number I do my best (and it’s a pretty good best) never to look at is my blog readership. I won’t deny that the number of readers on my blog is a potentially interesting and even important statistic. My blog, after all, is meant to be part of my job, and I’ve always viewed it as a piece of our Semphonic branding.
However, I have good reasons for staying away from readership numbers. First, raw numbers are not that meaningful when it comes to Semphonic branding – the ostensible purpose of my blog. It’s much more important to have a small number of potential clients and influencers reading and liking my blog than to have many, many non-buyers reading it. But I know this, and should, presumably, be able to discount the importance of the statistic while still admitting that it has some real value. So why don’t I study my traffic numbers?
There are several reasons why I ignore blog traffic numbers ranging from the psychological to the historical.
When I first started blogging, I looked at traffic statistics for a while and then stopped because, frankly, it was too disheartening. If you’ve decided on a marketing course and you know it will take time to see results, then looking for results too soon can be needlessly discouraging. This was particularly true in the early days of my blogging, when I was just trying to build enough content to have a respectable site, not really trying to get people to read it.
I think everyone here at Semphonic has found that certain topics will always draw more readers. Talk about Google Analytics and you’re pretty much guaranteed to add traffic to a web analytics blog. If you’re tracking those numbers, it becomes hard not to want to write more about the popular topics. That might seem to make sense, except that Google Analytics is a pretty small part of our business, and talking about it a lot would be both difficult and potentially brand-confusing.
Even for an ad-based media site, popularity isn't necessarily the main goal; skewing your content toward what draws the most traffic can be dangerous. You may find that your editorial is catering to an SEO-driven audience instead of to your natural, engaged audience. In the longer run, that can be disastrous, as there are sharp limits to the value you can drive from a non-engaged, SEO-driven audience.
Finally, I’ve found that paying too much attention to the blog’s measurement and optimization takes the fun out of it for me and makes it feel too much like a chore. I’ve stuck with the blogging because I enjoy the writing. The more I look at numbers, the more worried I get about the numbers and the less I enjoy the writing. In the end, paying too much attention to the measurement can undermine the essential psychology of the enterprise.
What all of these “reasons” have in common is the realization that, as a decision-maker, I can’t always be rational about measurement. My personal involvement in the process means that sometimes I have to decide whether it is riskier to consume the relevant data or to ignore it. Of course, the decision to ignore the data can have profoundly bad impacts. But so, too, can the decision to use it.
I’m not suggesting by this that there is some fundamental problem in web analytics and that we should all actively ignore our data! I never have the slightest qualms about looking at and using measurement data from our web site. Despite being heavily involved in it, I don’t have the same level of personal attachment to the website that I have to my blog – a level of attachment that makes me think it is dangerous to pay much attention to the numbers.
What I am suggesting is that it is sometimes appropriate to make “meta” decisions about the use of measurement. These can involve decisions about which data to use and show, as well as how to organize the measurement function, and they are frequently made in response to cases where there are strong reasons to think the measurement cannot easily be used well.
For example, if you are a content site and you begin running advertising, you should expect a sharp uptick in very negative user feedback and a significant decline in satisfaction. You should know this is going to happen, and if you’ve made the decision to run advertising because it’s essential to your business, you might actually be better off just ignoring your user satisfaction scores for a while. You might be justified, for instance, in dropping all satisfaction trends from the pre-advertising period and simply reporting on trends going forward. In a world where we are all perfectly rational consumers of measurement, this would be neither necessary nor useful.
Indeed, anecdotal negative comments from surveys are a classic case where meta decisions about data usage are probably essential. To an analyst, each negative comment may be useful and interesting. But circulating anecdotal comments can result in organizations continually using them in ways that are fundamentally inappropriate and essentially political. Without a broad understanding of the proper use of anecdotal comments, it may be much more harmful to circulate them than to ignore them.
Meta decisions about data can also drive organizational structure and process. As an example, I’ve long felt that leaving the measurement of PPC programs to the agencies running them is a very poor strategy. Even without any motives of cupidity, the buyer of a program is often too invested in it to be the best judge of how to measure it.
Similarly, large organizations can take advantage of their size to ensure that the groups doing the measurement are to some degree independent of the people behind any given marketing or design decision.
This highly human tendency to abuse rather than use measurement is another reason why one of the most important measurement process steps is forcing every design change to pre-commit to specific measures of success. It’s just far too easy, after the fact, to find SOME measure by which a new tool or page drove a successful outcome. This isn’t usually a cynical attempt to hijack the system; it’s just human nature.
As James Madison famously observed, “If men were angels, no government would be necessary…” He wrote this defending a framework for a significant but carefully controlled national government. And because we are no more perfectly rational than we are angelic of temperament, it is sometimes necessary in web measurement to take account not only of what we can know, but of what we should know!
Very interesting, original, but, Gary, very puzzling post. You address here a cultural dimension of analytics that's highly relevant to the reality of measurement. However, even though it is most probably right, I will have to hide your post from many people I know!
I would be too afraid they would use what you're saying to get back to their old "gut feeling" ways of doing things, i.e. do stuff without being really concerned with the concrete, tangible results. I know it's not what you're saying, and I believe your pointing out of the dangers of misusing numbers is very true, but an organization/manager needs to be very mature about analytics to apply what you say correctly.
I believe adoption is currently the deepest and most important problem with Web Analytics. I think we are at a stage where misusing data, rather than not using it at all, may still be a better risk to take.
Posted by: Jacques Warren | December 08, 2008 at 05:55 AM
Without a technical approach, I think gut feeling is not enough. I agree that not implementing it right is the biggest risk.
Posted by: web analytics guy | December 12, 2008 at 12:00 PM