Act I
"And here’s an idea Dell, when you tell a story, have a point! It makes it so much more interesting for the listener…" Steve Martin’s character to John Candy in ‘Planes, Trains and Automobiles.’
This (two-part, Super Bowl delayed) post is the culmination of a whole series on reporting that began a few weeks ago with a post called "Why 100% Conversion is a very bad thing." Because that post has been followed by several others (not all of them mine), a brief recap is probably in order so that this post can be placed in the context of the overall discussion. In that original post, I argued that it didn’t make sense to think of any single measure as directly actionable. And I took what might be the paradigm case of a meaningful KPI – conversion rate – and showed that it was impossible to interpret any particular value for conversion rate as good (even 100%) and that it was equally impossible to judge any directional change as good. This may seem bizarre, but I think the arguments are almost unassailable. And, in fact, this wasn’t really the issue people focused on. What turned out to be more controversial was the inference I drew from this fact – namely, that no single KPI could possibly be actionable.
Eric Peterson followed up on this with an excellent discussion of how a KPI might be actionable. I agreed with Eric that what he was describing was both actionable and good analysis/reporting, and it helped me clarify my thinking about what was going on. In my brief (for me) reply, I argued that Eric’s example worked because it wasn’t really a single KPI but an analysis in which layers of meaning were gradually added to build a useful context for decision-making. We’ve been thrashing things back and forth since then, and I’m pretty much completely on board with Eric’s latest thoughts – and I hope, in this post, to give a fuller description of what I think reporting should do and how KPI’s fit into this scheme.
So here’s a little story about reporting that I think illustrates two simple concepts: how we gradually come to a complete picture of a business situation, and how no single number can be understood except in the context of a wider view.
Let’s start out with a single number – site revenue last month – and peg it at an even million dollars. It’s an important number and just that one number tells you quite a bit (this is a mid-sized e-commerce site). But, of course, not nearly enough. Is that number good or bad? To begin to form an opinion, we need to know more. So let’s add a second number – annual online sales. Suppose that in the entire calendar year before, the web site did 15 million dollars in sales.
At first glance, this would imply that sales were down last month. That’s bad. But more numbers might well change this picture. Let’s name the month January. And let’s add a year-ago revenue number for January – six hundred thousand dollars. Now we might think the seasonally adjusted number looks pretty good – nearly 70% growth year-over-year.
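For anyone who wants to see the arithmetic spelled out, here’s a minimal sketch in Python using the made-up numbers from this story – nothing more than the two comparisons, side by side:

```python
# Hypothetical figures from the story above - not real data.
last_month_revenue = 1_000_000    # this January
prior_year_annual = 15_000_000    # online sales for the full prior calendar year
prior_year_january = 600_000      # January a year ago

# Against the prior year's average monthly run-rate, the month looks weak...
monthly_run_rate = prior_year_annual / 12               # 1,250,000
vs_run_rate = last_month_revenue / monthly_run_rate - 1
print(f"vs. prior-year monthly run-rate: {vs_run_rate:+.0%}")  # -20%

# ...but against the same month a year ago, it looks strong.
yoy_growth = last_month_revenue / prior_year_january - 1
print(f"year-over-year growth: {yoy_growth:+.0%}")             # +67%
```

Same million-dollar month, two baselines, two opposite readings.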
Suppose I add another number – offline sales growth – 100%. Suddenly the online channel doesn’t look so good – it’s lagging the rest of the business. But the picture need not end there. Let me add another pair of numbers – change in gross margin on sales for online and offline. Online +20% and Offline -30%. Add in a known business fact – the offline channel has been aggressively couponing and the online channel does no couponing – and our view of the situation may change again.
I could go on and on like this, adding information about cost of sales, change in product mix, competitive advertising and product life-cycles. And with each new piece of information, you might change your mind about what the sum of all the numbers I’ve thrown out really means and which one is most important. At no point is it possible – or productive – to say that any single metric "means" anything on its own. The meaning is the story told by the gradual accumulation of understanding that comes from knowing as many of the truly relevant metrics as possible and knowing how those metrics actually fit together.
It’s important to realize that not every metric is relevant. I might throw out lots of metrics that would either not improve your understanding or actually worsen it. Suppose, in the example above, that I knew that sales variations were driven not by seasonal factors but by product introductions. This January there was a product introduction and last January there wasn’t. In that case, the inclusion of a year-ago metric would be worse than useless – it implies a kind of seasonality that I know to be incorrect. By including it, I actually worsened your understanding. And that’s the worst kind of metric – worse by far than one that is simply irrelevant.
Of course, irrelevant or very marginal metrics have a cost too. Each additional item in a report set takes time to absorb and makes the overall story longer and harder to grasp. By including irrelevant information, you may also confuse the decision-maker – because its very inclusion is a kind of claim to meaning.
In my example above, I used some types of variables that have traditionally proven extraordinarily useful for providing context. Historical data, in particular, is probably the most powerful contextual datum in almost any reporting system. It’s the lack of historical data that makes building reports for brand-new (or newly measured) sites so frustrating. Without historical context, reporting is incredibly difficult and nearly always has the potential to be misleading.
These common and powerful context-builders form the heart of most reporting systems. But contextual data comes in all shapes and sizes – much of it specific to the online world and web analytics metrics. Here’s another story:
Traffic on site X was down 20% last month. This seems immediately and obviously bad. Couple it with another datum: seasonally adjusted traffic was down 17%. Couple it with a third datum: there was no change in the product environment. So now we have a basic picture of a steep (and sudden) drop in traffic put in a reasonable time context (month ago and season ago). Problem? Let’s add a fourth datum: PPC sourcing dropped to zero because the two-year program was placed on hiatus while its effectiveness is evaluated. How about a fifth: direct traffic was actually up. Sound good? We may still change our minds – here are two more data items: organic traffic was also off 15%, and organic search accounts for 40% of sales.
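Here again, a minimal sketch may help. The source-level numbers below are invented, chosen only so that they add up to the story (total down 20%, PPC cut to zero, organic off 15%, direct up), but they show how decomposing the headline number by traffic source turns one figure into three very different stories:

```python
# Invented source-level numbers, chosen only to match the story:
# total visits down 20%, PPC cut to zero, organic off 15%, direct actually up.
prior = {"ppc": 15_000, "organic": 50_000, "direct": 35_000}   # month ago
current = {"ppc": 0, "organic": 42_500, "direct": 37_500}      # last month

total_prior, total_current = sum(prior.values()), sum(current.values())
print(f"total traffic change: {total_current / total_prior - 1:+.0%}")  # -20%

# How many points of that -20% does each source account for?
for source in prior:
    contribution = (current[source] - prior[source]) / total_prior
    print(f"{source:>8}: {contribution:+.1%} of the total change")
# ppc: -15.0%, organic: -7.5%, direct: +2.5% -- together, the full -20%
```

In these made-up numbers, the deliberate PPC pause explains most of the drop, the organic decline is the piece that really matters (organic drives 40% of sales in this story), and direct traffic is actually growing.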
Once again, this story could go on and on as a picture is drawn with increasing clarity. In a way, this process is like adding touches to a painting. The initial lines may define the broad outline of the picture – but each new feature has the potential to change the essential message. In the same way, the first few pieces of information often drive the most understanding. But even though they are vitally important to the picture, they aren’t enough to tell the real story.
What all this boils down to is a simple idea. A report is like a story. Indeed, all analysis is really an exercise in storytelling.
So what makes a good story?
In our business, of course, a report has to be accurate and true (within the limits of our systems). Web Analytics is not fiction. The stories we tell are valuable only insofar as they mirror some aspect of reality – however interesting they might otherwise be.
Beyond this obvious level, they must be understandable and useful – but this really dodges the main question. What makes a report useful?
The conventional wisdom in web analytics is that what makes a story useful is actionable KPI’s. But what I hope my examples have shown is that at one level this cannot be true. A story doesn’t become useful because of any single piece – but because each piece contributes to an understanding of the whole. In other words, what I want people to realize is that the evaluation of a report as effective or useful needs to be at the report level – not at the metric level.
A report becomes actionable by using KPI’s to provide the business context within which an action can be identified or deemed worth trying. The more relevant context a report provides, the more likely it is to be actionable. KPI’s are the context builders that make up our view of what’s important and what isn’t. And, of course, a KPI can be relevant or irrelevant. Powerful or nearly useless. So in a way, this view may not really change all that much about the way you think about metrics. What I think is more likely is that it will change how you think about reports.
Building context doesn’t have the force of a demand for "action" – but it does make sense of a report as a unit – and I think it better mirrors how we might actually move from a piece of information to a report and then to an action.
In part II, tomorrow, I'm going to talk about one way to think about building context - and give an example of a simple report based on these ideas.
Incredibly insightful post! And it shows in a very effective way why the tool itself is only 10% of the solution, and why the analyst’s job is to uncover the 90% of context (to paraphrase Avinash’s post about the 10/90 rule). Analysis is a skill and an art in itself; the KPI’s provided by the WA solution are elements of context, but not the whole context in itself.
Posted by: S.Hamel | February 06, 2007 at 04:38 PM
What a fantastic post. Analytics continues to be a cross between the layers of an onion (which must be peeled back to uncover the 'core' or 'why'); and layers of cake and icing (that must be devoured to be truly enjoyed).
Posted by: benry | February 06, 2007 at 08:16 PM