
Comments

Hi Gary, good to see you at the big show!

I think you’re very much on target here as you plumb the depths of this argument. We can either say “Engagement is different for every site” and thus end up with nothing very valuable, or we need to focus in on something specific and call that Engagement.

> I’m finding that we are increasingly focused on the problem of predicting visitor value (success or engagement) from early visit behavior. This is a fundamentally different problem than measuring engagement – and it cannot use all of the same factors

To that point, I believe this is the very definition of Engagement: it has to be a Prediction. Engagement is a likelihood of future activity.

Let’s break this down. Using the current activity-based approach, if someone is “Engaged”, what are we really saying? What is the value of Engagement? Well, I guess the visitor was “happy”, right? And what does that mean, really? Is it that we hope they will come back? Isn’t that what people are really thinking?

If so, then Engagement is really a Prediction.

Else, what good is it? If we’re just talking about past activity, Engagement doesn’t look very different from the same stuff we have always measured. Prediction is different.

> many of our lead-gen client sites show a distinct segment of committed brand visitors. These visitors show up and immediately generate an action or a lead. They have very few page views, short time-on-site and may have very few visits. However, actual quantification of lead-value suggests that this is usually a highly valuable segment

Exactly, and this is why looking at past Activity does not make a very robust indicator of Engagement. Rather, you would have “Events” – which could be a single action or multiple actions – and then you have Engagement with each Event. Event-Engagement pairs, if you will. For a single site, I may be Engaged with some features and dis-Engaged from others.
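To make that concrete, here is a minimal sketch in Python (the visitor ID, event names, and field names are all hypothetical, not any vendor's data model) of carrying Engagement per Event rather than as one blended site-wide score:

```python
# A minimal sketch: engagement is carried per Event instead of as one
# blended site-wide score. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class VisitorEngagement:
    visitor_id: str
    # One likelihood-of-repeat score per Event (event names are hypothetical).
    event_engagement: dict[str, float] = field(default_factory=dict)

    def set_engagement(self, event: str, likelihood: float) -> None:
        """Record the estimated likelihood (0-1) that this Event will recur."""
        self.event_engagement[event] = max(0.0, min(1.0, likelihood))

# A committed brand visitor: few page views, but highly engaged with the
# lead-generation Event and barely engaged with content browsing.
v = VisitorEngagement("visitor_123")
v.set_engagement("lead_form", 0.85)
v.set_engagement("content_browse", 0.10)
print(v.event_engagement)
```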

> significant pieces of a site are not functionally a part of the sales/content cycle. If you remove a bad navigation page, you may reduce overall views on your site – and this can apparently reduce engagement

Yes, and this is why I don’t think an “additive” approach to Engagement is really viable. Grouping different actions together clouds the picture. Engagement should be a Prediction of future activity, not a report on the past. It should be about “expected value” in the future. If a person is Engaged, they will contribute further value.

Overall, the best predictor of future behavior is how long it has been since the original Event took place. Eric has Recency in his model, and that's a good thing.

But it’s a Prediction, and I think folks will find it much more useful as a unique vector, as in the Event-Engagement pair. Recency should not be combined with past action into a single number; it’s a fundamentally different idea.

There is the Event, and there is the likelihood of the Event occurring again. You can plot those on an x-y axis and map a visitor’s Engagement with different Events, or the relative Engagement of different visitors with the same Event.
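As a toy illustration only (the exponential decay, the 30-day half-life, and the event names are my own assumptions, not Eric's model or anyone else's), here is one way Recency could be turned into a per-Event likelihood and laid out as the x-y points described above:

```python
# Toy illustration: Recency becomes a per-Event likelihood of recurrence.
# The decay form and half-life are assumptions, purely for illustration.
from datetime import date, timedelta

def repeat_likelihood(last_event: date, today: date, half_life_days: float = 30.0) -> float:
    """Crude recency-based estimate: the older the Event, the lower the likelihood."""
    days_since = (today - last_event).days
    return 0.5 ** (days_since / half_life_days)

today = date.today()
last_seen = {  # hypothetical Events and last-occurrence dates for one visitor
    "lead_form": today - timedelta(days=7),
    "whitepaper_download": today - timedelta(days=60),
    "site_search": today - timedelta(days=2),
}

# x = Event, y = likelihood of recurrence: the Event-Engagement pairs to plot.
points = [(event, repeat_likelihood(when, today)) for event, when in last_seen.items()]
for event, likelihood in points:
    print(f"{event}: {likelihood:.2f}")
```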

If Engagement is looked at not as the Event itself, but as the likelihood of the Event occurring again, then I think the issues you raise above get resolved.

Except, of course, for that tool problem, though the high-end tools do provide this kind of Event-Likelihood "pair" capability.

Make any sense to you (or anybody else)?

Jim,

I know you've talked to me about this before, but maybe it's finally starting to sink in. I think this makes a lot of sense. The more I think about it, the more I'm coming around to what seems like your view: that prediction is really the essence of the practitioner's problem.

I like the description of event-engagement pairs. I certainly think this is right in terms of what usually turns out to BE predictive for any given set of behaviors - but I still think there is a role for an additive calculation of visitor engagement.

As I think about how we typically approach this problem, we start by looking at an over-time view of the visitor. Then we try to break that down into our ability to predict where in the spectrum of possible success a visitor will end up, preferably from the least amount of behavior possible. This prediction may very well turn out to be a set of sub-predictions based on specific events. And those sub-predictions can be added and weighted to produce a comprehensive prediction of visitor value for optimization.

This is still an additive framework but, if I'm understanding your argument, it isn't the use of an additive framework for optimization you're objecting to - it's the idea that the additive framework is the target for prediction. Summing and weighting the values of the predicted outcomes would seem to avoid this problem.
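For illustration, a minimal sketch of that end-state equation (the likelihoods and dollar values below are made up): per-Event sub-predictions weighted by value and summed into one expected visitor value.

```python
# A minimal sketch of the "after testing" equation: per-Event sub-predictions,
# weighted by (assumed) dollar values and summed into one expected visitor value.

def expected_visitor_value(predictions: dict[str, float], values: dict[str, float]) -> float:
    """Sum over Events of P(Event recurs) * value of that Event."""
    return sum(p * values.get(event, 0.0) for event, p in predictions.items())

# Hypothetical numbers, purely for illustration.
predictions = {"lead_form": 0.85, "newsletter_signup": 0.40}
values = {"lead_form": 120.0, "newsletter_signup": 5.0}
print(expected_visitor_value(predictions, values))  # 0.85*120 + 0.40*5 = 104.0
```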

Does that seem right to you?

I think we're together on the idea of event prediction as part of Measuring Engagement. On the "summing equation" issue, I just realized we might simply be coming at it from two different places while saying the same thing.

Haven't you and I done this before? ;)

Here's what I think the difference is:

I'm talking about a generalized model people would use to analyze behavior and plan a project: a platform for understanding what tests should be created and run against event-engagement pair segments, and the results of those tests. Past events and event likelihoods should not be combined into an equation for this purpose, because you don't really understand anything yet.

You're talking about the end result of the above project: an equation that describes what the testing reveals. *After* the relationships and weights are known, it is possible to create a unified equation describing the results.

Does that seem like an accurate description to you?

If that's what you're saying, then we agree!
