

Hello again Gary,

You make a very strong point here. I can see it now, and, true, this is a question of wrongly taking correlation for causation, the ultimate crime for an analyst!

If I follow your argument correctly, then there would be a flaw in Eric Peterson and Joseph Carrabis's Fi index as used in their calculation of the Engagement Index, since feedback would be predictive of nothing. I know Eric and Joseph suggest that the simple fact of giving feedback is positive ("...every session in which feedback is gathered is scored positively"), but if it can't predict whether the visitor is more or less satisfied, or even more or less engaged in terms of content consumption (thus impacting their Ci and Di), how can it predict, much less determine, *value*?

What I like about what you're doing here is that it is counterintuitive. One could easily be drawn to conclude that comments *are* a positive sign of engagement: the more the better, thus the more valuable. Whereas they are not... necessarily.

I must say, however, that the measurement path you propose is quite complex (well, it is a complex problem), and I wonder whether you see it as actionable only in a minority of organizations (those using Visual Sciences?).


This is a truly formidable question. I’m not sure I have ever fully understood Eric and Joseph’s work. But here’s my take: their measure of engagement is NOT a conversion proxy or a cause of success. In other words, it is not a stand-in for any other measure of success (at least as I understand it – and if it is intended to fulfill that role, it is surely mis-designed). The engagement metric they are proposing isn’t put forward as a “cause” of success; it’s a definition of success. I take it to embody an “a priori” (prior to experience) claim about what constitutes success on a web site. In other words, nothing in the proposed framework is measured against anything to prove it actually has anything to do with any particular site’s actual success.

This is a very different sort of beast than the type of analysis I was discussing, which is designed to answer a real-world empirical question like: “Will I make more money if I add or remove comment functionality on my site?” I take it that not only can an Engagement metric NOT answer this question, it would also suffer from exactly the same issues of self-selection as any other metric if used as a measure of success. Such questions turn out to be quite tricky to answer when the content has a significant social component.
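To make the self-selection problem concrete, here is a minimal illustrative simulation (entirely hypothetical numbers, not data from this discussion). It models visitors whose latent engagement drives *both* their spending and their likelihood of commenting, while commenting itself has zero causal effect on spend. Commenters still look far more valuable on average, which is exactly the correlation-for-causation trap described above:

```python
import random

random.seed(42)

# Hypothetical model: a latent "engagement" trait drives BOTH spend
# and the probability of leaving a comment. Commenting itself does
# nothing to spend.
visitors = []
for _ in range(10_000):
    engagement = random.random()             # latent trait in [0, 1)
    spend = engagement * 100                 # spend depends only on engagement
    commented = random.random() < engagement # engaged visitors self-select into commenting
    visitors.append((spend, commented))

commenter_spend = [s for s, c in visitors if c]
lurker_spend = [s for s, c in visitors if not c]

avg_commenter = sum(commenter_spend) / len(commenter_spend)
avg_lurker = sum(lurker_spend) / len(lurker_spend)

# Commenters spend roughly twice what lurkers do, yet removing the
# comment box would change revenue not at all: the gap is pure
# self-selection, not a causal effect of comments.
print(f"avg spend, commenters: {avg_commenter:.1f}")
print(f"avg spend, lurkers:    {avg_lurker:.1f}")
```

A naive analyst comparing the two averages would conclude comments create value; the simulation shows the same gap appears even when they create none.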

Is there something wrong with making an “a priori” claim about engagement as a type of success? I’m not sure. As I’ve said before, where sites can measure their actual success per visitor (as is increasingly true for media properties), I am skeptical of the value of an engagement calculation. But where you can’t measure actual success, a well-thought-out framework that makes some basic assumptions about engagement may be your best bet.

The type of analysis I originally proposed is indeed a complicated one. It can be done in most enterprise web analytics systems, but it often requires a certain amount of setup and pre-planning to accomplish well. With a little bit of extra tagging, it can certainly be done in Omniture (for example) though I won’t deny that it will take a good chunk of work.
