More on Web Analytics Reporting
In the first part of this post I argued that reporting is a kind of storytelling – one that involves the gradual accumulation of contextual information around a single story. I further argued that this view implies that actionability resides at the report level, not the KPI level (KPIs being the tools for building context). In this post, I want to show what this view implies about the relationship between analysis and reporting, and how that abstract understanding gets translated into a real report...
There are lots of ways to think about building context in a report – and the whole history of data analysis is rich in examples of the kinds of data and analysis techniques that provide relevant context and help protect against misleading data. But I want to extend my discussion of one particular way to think about building context that I believe is helpful in producing good reports.
Specifically, I’m going to extend the literary metaphor and push it (maybe farther than I should). I think there is much to be gained (and learned) by thinking about a report as a story. So what makes a good story? First, it has to be about something. As Steve Martin wryly observes, it has to "have a point."
Lots of report sets aren’t about any particular thing. So they end up being a mishmash of data. Even the best KPIs are lost and useless when they aren’t in a report where their relationship to each other or to a specific problem is made clear.
Second, a good story has to have interesting characters – and characters that have a specific role in conveying the plot. It is through characters that a story is illuminated and made significant. In reporting, the characters are our metrics. And the key point is that metrics in a report need to deepen the story. They need to be relevant to that particular story – not to some other – possibly very interesting – tale.
It’s also why good metrics are incredibly valuable in a report set. I’ve been talking about Eric’s Engagement metric – and as a "character" in a report set it has some signal virtues. First, it captures a lot of stuff in one simple measure. To get something similar, I might have to report on five or six different things for every aspect of a report (source by blogs, source by conversion, source by key pages, etc.). That makes for a bulky story – one that may be impossible to read and understand. Second, it captures a really important set of behaviors – one that is likely going to be useful in lots of stories. In a report set, that’s a big win. It’s hard enough for decision-makers to glom onto even basic web analytics terminology (visits, uniques, etc.) – so the fewer and clearer your KPIs, the better.
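To make the "one simple measure" point concrete, here is a minimal sketch of how a composite metric collapses several per-visit indices into a single score. The component names and the equal weighting are illustrative assumptions on my part – this is not the actual definition of Eric's Engagement metric.

```python
def engagement(click_depth, duration, recency, loyalty):
    """Composite score: the average of four indices, each assumed
    to be already normalized to the 0..1 range. (Components and
    equal weights are illustrative assumptions.)"""
    parts = [click_depth, duration, recency, loyalty]
    return sum(parts) / len(parts)

# One hypothetical visitor segment: deep recent visits, low loyalty.
score = engagement(0.4, 0.6, 0.8, 0.2)
print(round(score, 2))  # 0.5
```

The payoff is exactly the one described above: four separate columns become one number a decision-maker can track, at the cost of hiding which component moved.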
For an analyst, metrics are the single most important thing you have. They are the tools you use to tell stories – and the better the tools, the better the stories are likely to be.
Finally, a story has to hold our interest. In reporting, that usually boils down to brevity and having a point. The clearer the metrics and the more directly they can be understood, the better the story is told. And the less time it takes to grasp the story, the better. This is never, of course, more than a subjective exercise. Remember the scene in Amadeus where the Emperor says ‘Too many notes, my dear Mozart’? Just as there is no right answer to how many notes a musical piece should have, there is no right answer to how many metrics a report actually needs. Enough to tell the story. Not one metric more.
And there is a second part to holding our interest – a story is directional. It moves the listener through the story in a specific way. Reports should do that too. When you build a report, it’s essential to think about how the decision-maker can best be led through the story.
All this is very theoretical, of course. So before I close I’m going to show an example of a particular report that I built with this metaphor in mind – and I’ll explain why I think it does its job reasonably well. But before I go there, I want to remark on something that seems to me to emerge clearly even from this very abstract discussion. Namely, that much of what we (and I mean my company as well as our industry) do with reporting is backward – because we build the reports prior to doing any deep-dive analysis. But it is obviously essential, when writing a good report, to have reached an understanding of what the important metrics (key characters) actually are. And how can you have that understanding except in light of a thorough analysis of the system?
Yes, one of the jobs of a report set is invariably to trigger analysis. But this suggests that one of the jobs of analysis is to trigger report sets. This isn’t circular. It says that a report set captures the understanding of a system gained from an analysis. Changes in the report set may be of two kinds – changes that reflect the evolution of the business in ways the conceptual model already understands, and changes that reflect differences in the relationships among the variables in the model. When we see the latter, we need to re-analyze the system because the model itself may need changing. Let me illustrate this with a quick example. Suppose we have a metric that measures traffic quality for a source. Then we have another (independent) metric that measures actual conversion value for that source. If our reports show a source improving in traffic quality and improving in revenue (or declining in both), then we have useful knowledge and no reason to believe our model needs adjusting. But suppose our report shows a source improving in traffic quality but declining in actual revenue. Then we have reason to think our model needs re-study.
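The quality-vs-revenue example above can be sketched as a simple check. Everything here – the source names, the figures, and the two-period trend comparison – is hypothetical; it just shows the trigger condition for re-analysis in code form.

```python
def trend(values):
    """Direction of change from first to last period: +1, -1, or 0."""
    delta = values[-1] - values[0]
    return (delta > 0) - (delta < 0)

def flag_model_restudy(quality_by_source, revenue_by_source):
    """Return sources whose traffic quality and revenue move in
    opposite directions -- the signal that the conceptual model
    behind the report may need re-analysis."""
    flagged = []
    for source in quality_by_source:
        q = trend(quality_by_source[source])
        r = trend(revenue_by_source[source])
        if q != 0 and r != 0 and q != r:
            flagged.append(source)
    return flagged

# Hypothetical two-period data per source.
quality = {"paid_search": [62, 70], "email": [55, 58]}
revenue = {"paid_search": [41000, 35000], "email": [12000, 13500]}

print(flag_model_restudy(quality, revenue))  # ['paid_search']
```

Sources where the two metrics agree ("email" above) are useful knowledge and need no action; a disagreement ("paid_search") is the cue to go back and re-study the system.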
The problem is that almost NOBODY DOES ANALYSIS. So if you require analysis prior to reporting, lots of companies won’t have anything. Plus, analysis takes a long time – and you can’t wait to produce all of your reports. So what do you do? Probably the best solution is to bootstrap your reporting system using resources like the Big Book of Key Performance Indicators (by Eric) or to hire people like us (rare marketing plug here) who can at least take a best guess at what’s relevant. But one of the things this whole discussion has led me to believe is that from now on, when we present an analysis, I’m going to try to include an ongoing report with it that captures our understanding of the system.
In the following report, I chose to tell a story about Traffic. I picked this arbitrarily. I could have chosen a story about Conversion or Engagement or Product Mix. But traffic is one important piece of a business. It represents a measure of opportunity – a way to think about the potential and the efficiency of your business. However, I didn’t limit myself to just traffic numbers. In my view, traffic always needs to be viewed in the context of quality as well. So while I don’t want to futz up my Traffic Story with incredibly detailed metrics about revenue, product mix, etc. – I do want to make sure that traffic growth or decline is understood within the context of quality. In other words, I want my traffic report to reflect a high-level view of quality – but not necessarily to have the detail that a report on a different story (like conversion effectiveness) might need.
So this is a story about traffic. It begins with two key elements of site traffic: how site traffic in total has changed, and how each key segment has shifted as a proportion. So we could quickly identify that overall traffic has grown year over year and month over month, but that most of the growth has come in Support, with a last-month jump in Prospects. When I tell a story about traffic I always want to explicitly add the thought "of what." And the report should make sure the decision-maker does that too.
In this case, I made the decision to only tell the second part of the story (Sourcing) for Prospects. The implicit assumption is that we don’t much care how Returning Customers or Customer Support visitors found us. May not be true, but that’s what’s implied. Here I kept the level very high. A decision about level of detail would obviously depend on the audience and the story. If I’m telling a story about SEM programs for a PPC or SEO manager, the detail is necessarily going to be much greater even for a first-glance report.
For the third part of the story, I made a different kind of decision. This is a story about cross-channel sourcing. And it assumes that there is something interesting to be said about the difference between original and subsequent sourcing. Obviously, that isn’t always the case. But if I was doing an analysis, it’s certainly something I’d look at. And I’ve seen enough cases where it was true that I might build it into the report set regardless. Here’s a case where seeing lots of different sites and studies is a distinct advantage. Sometimes, you’ll build factors into a report because you know they often are significant – even if they don’t mean much to the business right now.
Finally, I bring in quality. Because, as I mentioned, I don’t want the decision-maker to ever forget about the quality of the traffic being sourced. But I also don’t want to clutter up a story about traffic with a gazillion different measures of accomplishment. So I borrowed Eric’s Engagement Metric and showed how it could be used to tell a very terse sub-story about channel quality over time.
I could produce a generalized summary (for a PowerPoint report) like this:
Overall Traffic is [up/down].
The change is driven by [segments x, y].
This change [is/is not] driven by original sourcing.
Where channels [x, y] have changed in overall volume, the quality of their traffic has [increased/decreased/stayed the same].
And that’s my story.
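A templated summary like that can even be filled in mechanically from the report's numbers. Here is a hypothetical sketch of the first two lines; the function name, inputs, and figures are all assumptions for illustration.

```python
def traffic_summary(total_prev, total_curr, driver_segments):
    """Fill in the first two lines of the templated summary
    from prior- and current-period traffic totals plus the
    segments identified as driving the change."""
    direction = "up" if total_curr > total_prev else "down"
    pct = abs(total_curr - total_prev) / total_prev * 100
    drivers = ", ".join(driver_segments)
    return (f"Overall Traffic is {direction} {pct:.1f}%. "
            f"The change is driven by {drivers}.")

# Hypothetical figures matching the story told above.
print(traffic_summary(100_000, 112_000, ["Support", "Prospects"]))
# Overall Traffic is up 12.0%. The change is driven by Support, Prospects.
```

The point isn't automation for its own sake – it's that a well-built template forces the report to commit to a point, period over period.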
There’s nothing all that special about this report. It involves lots of trade-offs and decisions and I make no special claim for any of them. I don’t mean it to be some kind of definitive Traffic Report – especially since I made it up in an analytic vacuum. What I think it does do, is illustrate how the idea of telling a story can shape a report – getting the analyst to focus on what pieces of the puzzle matter, how they need to flow directionally (what does the decision-maker have to understand first), and what pieces may not be worth including.
Ultimately, I think a report set must capture the analyst’s view of what matters, what is likely to matter, and what doesn’t in helping a decision-maker understand the story. Unlike an analysis, the numbers in a report set are going to change constantly. So there’s really no way to ensure that your report tells a useful story at any given point in time. Indeed, once a decision-maker has grasped a report, there’s a pretty good chance that it will only occasionally provide new information. That’s okay too.
To me, what’s important is that a report provide as rich a context for the core concept (traffic, in this case) as possible, in the simplest form and the shortest time. If you can succeed in getting a decision-maker to understand what’s happening with a core area of your business, then your report has done its job.
This has been a gargantuan post, but the topic is huge and I think my previous posts left much to be said. No doubt, this one does too.