Eric P. has an excellent post (http://www.webanalyticsdemystified.com/weblog/) covering several reactions to his Engagement metric, including mine. In fact, there's nothing in it I actually disagree with, but it did raise a few points I'd like to clarify about my own thinking.
Eric makes three points I want to talk about. First, he argues persuasively that it makes sense to include "blogs" as an explicit part of his "Engagement" metric for his site (and the metric is specifically for his site – it isn't meant to be a generic computation for any site). Second, he evaluates Marshall Sponder's referrals to his site using the Engagement metric and comes up with a specific action he can take based on the KPI. Third, in passing, he mentions some previous issues he's had with how broad Marshall's posts sometimes are.
On this last point, I think Eric and I both suffer from envy. I know I do. I write a "corporate" blog so I have to be pretty careful about staying on topic and not writing about politics, religion or popular culture. I often wish I could though! I love web analytics. I really do. But I love books, movies and football too. And Marshall is enviably prolific (as well as amazingly timely – I can promise I did no writing the evening I had dinner and wine with him)!
Alas - I'm getting off topic. So on to the real stuff regarding the Engagement metric, about which I have two thoughts:
1) I pretty much buy Eric's explanation of why blog views are an essential part of the Engagement metric. The only real exception I'd make is the case where I want to use the metric to evaluate the various tools on my site against each other – specifically including the blog. In that case, I don't want blog views to be part of the metric – no matter how valuable they are otherwise – because they flat out skew the metric vis-à-vis the other tools.
2) Eric describes a particular action that he can take based on the Engagement KPI (get Marshall to plug books) – and it makes sense. But it doesn’t so much change my mind as make me think I need to clarify my original post.
I don't want to be taken as meaning that you can never act based on the information you gather. If that were true, then web analytics would be pointless. My comments (see here) are about a particular way of thinking about reporting – namely, that the criterion for including a KPI in a report set is that it be actionable, in the sense that if the KPI changes or takes some particular value you have a known action you will take in response. I call this the Myth of Actionability.
I don't think that those who propose the criterion of actionability mean it to imply that you include a KPI if it might, under some conceivable set of circumstances, suggest a possible action. A watered-down version like that could be met by any metric: it's perfectly possible for me to conceive of an action (and even a plausible one, under some highly unusual circumstances on a particular kind of site) in response to measuring the Exit Rate on odd- versus even-numbered pages in a visit. Rather, I take the demand for actionability to be just that – a demand that you know what you are going to do if the KPI does X – and a belief that the metric shouldn't be reported on if a ready answer isn't available. What I believe I have shown is that such a demand is meaningless, because no single KPI can be actionable except within a broader context of understanding generated by many other relevant numbers.
So while Eric’s Engagement KPI might generate hundreds of potential actions when presented to an intelligent decision-maker or analyst, it won’t infallibly generate a single "appropriate" action and it might not generate any action at all. Nor is it necessary – or even reasonable – that a decision-maker should have to formulate a rule like "If I see that blog B has high engagement but low sales I’m going to ask them to recommend my product and that’s why I want it in my reports" to get an obviously interesting metric included.
The Engagement metric is much richer than most KPIs – by blending a half-dozen separate measurements, it comes close to providing something like a real context for decision-making (you're probably picking up hints that I think a report set needs to have multiple descriptive KPIs to provide an overall context within which an action might make sense). That's why I think it's unusually good. But even so, Eric has snuck in a second metric (Marshall doesn't generate book sales) before coming up with an action. I'll give him a pass on that since he could presumably create an Engagement/Sales ratio as yet another KPI. And in a way, he's even snuck in a third, because the original KPI is simply a measure of engagement but he's crossed it by source – tying it to a particular kind of story, a story about sourcing. This gradual building of a context around a measure is exactly how I think reporting actually works – and how analysis drives toward actionable understanding.
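To make that layering concrete, here is a minimal sketch of how a composite engagement score might be blended from several component measures and then crossed with sales by referral source. This is my own illustration, not Eric's actual formula: the component names, weights, sample data, and the engagement-to-sales ratio are all invented for the example.

```python
# Hypothetical sketch only: Eric's real Engagement metric is site-specific and
# not reproduced here. Component names, weights, and sample data are invented.

def engagement_score(components, weights):
    """Blend several normalized component measures (0-1) into one composite score."""
    return sum(weights[name] * components[name] for name in weights)

# Invented weights for illustration
WEIGHTS = {
    "recency": 0.15,
    "frequency": 0.20,
    "duration": 0.15,
    "blog_views": 0.20,
    "subscriptions": 0.15,
    "feedback": 0.15,
}

# Invented per-source data: normalized component measures plus a sales count
sources = {
    "marshall_blog": {
        "components": {"recency": 0.9, "frequency": 0.8, "duration": 0.7,
                       "blog_views": 0.95, "subscriptions": 0.6, "feedback": 0.5},
        "sales": 2,
    },
    "search": {
        "components": {"recency": 0.4, "frequency": 0.3, "duration": 0.5,
                       "blog_views": 0.2, "subscriptions": 0.3, "feedback": 0.1},
        "sales": 40,
    },
}

for name, data in sources.items():
    eng = engagement_score(data["components"], WEIGHTS)
    # Crossing engagement with sales by source: high engagement but low sales
    # is the kind of pattern that *might* suggest an action, given context.
    ratio = eng / data["sales"] if data["sales"] else float("inf")
    print(f"{name}: engagement={eng:.2f}, sales={data['sales']}, eng/sales={ratio:.3f}")
```

Even with numbers like these in hand, nothing about the ratio dictates a specific action – which is exactly the point of what follows.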
But even with all this, a decision-maker will still not know what action he/she is going to take in response to a blog matching even this extraordinarily advanced KPI (or set of KPIs). The decision-maker will likely feel that the action depends on lots of other factors (many of which can't be captured in a report) like "what the blog is about," "how well I know the blogger," "does the industry really care about the blogger," and even "is the blogger a competitor." Would Eric's action be the same if the blog with a poor Sales/Engagement rate were a competitor's blog, or a blog that had nothing to do with web analytics?
So even with a very rich and complex KPI like Engagement/Sales for blog sources, any particular value of the KPI may lead a decision-maker to do nothing, take some action X, or take some other action Y. And in the vast majority of cases, the decision-maker has no preexisting idea at all what actions might flow from any given KPI with any given set of real-world values in a report set. All of which goes back to the central theme of my post – the demand for actionability in a single KPI as a criterion for inclusion in a report set is misguided.
To demand that a change in a KPI result in a specific action is wrong-headed. To claim that a KPI might be able to generate some action in some conceivable set of circumstances is meaningless (no KPI would ever fail this test). Decision-makers act within a larger context of information in which a single KPI is more or less important depending on the situation and problem. A reporting system needs to be evaluated by the quality, depth and efficiency with which it conveys the important business contexts to the decision-maker. And, in this regard, it will not be best served (or served at all) by demanding that each and every piece of it be somehow tied to a specific actionable lever.
I wrote – and believe – that the Engagement metric is one of the best measures of its sort I’ve ever seen. Where something like it (appropriate for a specific site) can be implemented in a report set, I would strongly recommend doing so. But I wouldn’t include it because it is specifically "actionable."
I still intend in the next post (which I must admit seems daunting) to put forward a more complete description of what I think a report set is supposed to accomplish and how it may be judged. In doing so, I hope to at least partially address Daniel’s comments (particularly "how do we frame the context") on the 100% conversion post.