Part Five in a Series on Engagement
I originally wrote a four-part series on Engagement as a kind of mental primer for myself heading into eMetrics and my talk with Eric Peterson and Joseph Carrabis on engagement.
At eMetrics, I presented after Eric and Joseph, and I ended up talking about a somewhat different set of issues than I had written about. My posts had essentially been about the “semantics” of engagement – the distinctly different concepts we have somewhat arbitrarily attached to the single word.
I suggested in those posts that each usage of engagement (as a proxy for success, as a comparative measure for media buying, and as a measure of brand impact) should be given a distinct term-of-art in web analytics since the concepts were distinct in both intent and in practice.
In my eMetrics talk, I spent a little time on this and more on the practical problems we at Semphonic have encountered when attempting to implement measures of engagement.
I was neither totally pleased nor thoroughly unhappy with that session or with my part in it. I’ve always been in essential agreement with Eric Peterson’s basic approach – and it mirrors in many respects work we’ve done when using engagement as a conversion proxy or measure of lead value. And I didn’t see much to quibble with. The only thing I disagreed with (and it’s a fairly modest point having to do with the value of engagement when you CAN easily measure actual success) wasn’t in my slides at all and I forgot to mention it during my short talk.
I was a bit disappointed with Joseph’s presentation mainly because he didn’t spend any time explaining the conceptual aspects of his math and I still don’t understand what it all means. I know he’s been working on a generalization of Eric’s approach and I know that’s what he talked about at eMetrics. But I just don’t get it.
I think (and I should emphasize the uncertainty in that “think”) that his framework is essentially a matter of saying that by adjusting the different weights on engagement metrics (including setting them to zero or even, I suppose, to negative numbers), almost any framework for engagement that meets a few basic rules can be translated into any other measure of engagement. At some level, I suppose this must be true. But I’m struggling to decide if it's meaningful and I do wish I understood it better. Probably there's been more written about this since but I've been pretty swamped and haven't kept up...if I do read more and think it's interesting I might even revisit my redux!
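To make that guess concrete, here is a minimal sketch of what I take the idea to be. The function, the metrics, and the weights are entirely illustrative inventions of mine, not Joseph’s actual formulation: if an engagement score is a weighted sum of normalized component metrics, then adjusting the weights (including zeroing some out) collapses one “framework” into another.

```python
def engagement(metrics, weights):
    """Weighted-sum engagement score over normalized (0-1) component metrics.
    Metrics absent from the weight map contribute nothing."""
    return sum(weights.get(name, 0.0) * value for name, value in metrics.items())

# One visitor's normalized component metrics (illustrative values).
visit = {"page_views": 0.7, "time_on_site": 0.4, "recency": 0.9, "interactions": 0.2}

# A "balanced" framework weights every component equally...
balanced = engagement(visit, {"page_views": 0.25, "time_on_site": 0.25,
                              "recency": 0.25, "interactions": 0.25})

# ...while zeroing all but one weight reduces it to a single-metric measure,
# here plain time-on-site.
time_only = engagement(visit, {"time_on_site": 1.0})
```

If that reading is right, the generalization is true almost by construction – which is exactly why I’m unsure how much work it does in practice.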
But that isn’t what I’m going to focus on today. Instead, I’m going to elaborate quickly on each of the types of engagement I identified and then talk about the practical problems of measurement I discussed at eMetrics.
I’ve long believed that the core usage of engagement in web analytics is what Eric suggested calling “visitor engagement.” Visitor engagement compounds a wide range of visitor actions into a single metric of success. It’s useful for sites that don’t have a single point of success or an easy way to aggregate success values. We use “visitor engagement” all the time – because without it, it is either exceedingly difficult or impossible to actually report on or optimize the success of site initiatives and campaigns for sites that lack that single or additive success.
Here’s the example I used at eMetrics to illustrate this problem:
“If, for example, one search keyword generates slightly more page views, slightly less time on site, a moderately higher return frequency and a moderately lower number of site interactions than a second keyword, which keyword is better? This is the type of question that campaign managers WILL face over and over again. At some point, you must find a way to answer this question and answer it correctly. If you don’t, you can’t optimize.”
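One way to make that concrete is a toy composite score. This is a minimal sketch in which the metric values are normalized to 0–1 and the weights are invented purely for illustration; in practice they would have to be calibrated against your site’s actual success data:

```python
# Illustrative weights only -- NOT a recommendation. In a real engagement
# calculation these would be fit against measured success.
WEIGHTS = {"page_views": 0.3, "time_on_site": 0.2,
           "return_frequency": 0.3, "interactions": 0.2}

def score(visitor_metrics):
    """Compound normalized per-keyword visitor metrics into one number."""
    return sum(WEIGHTS[m] * v for m, v in visitor_metrics.items())

# The scenario from the quote: keyword A is slightly ahead on page views and
# return frequency; keyword B is ahead on time on site and interactions.
keyword_a = {"page_views": 0.62, "time_on_site": 0.48,
             "return_frequency": 0.70, "interactions": 0.35}
keyword_b = {"page_views": 0.58, "time_on_site": 0.55,
             "return_frequency": 0.52, "interactions": 0.50}

better = max(("keyword_a", keyword_a), ("keyword_b", keyword_b),
             key=lambda kv: score(kv[1]))[0]
```

The point isn’t these particular weights; it’s that without some agreed way of compounding the four metrics, the keyword comparison has no answer at all – and no answer means no optimization.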
But, and here’s where I quibble with Eric’s presentation on visitor engagement, if success is a single point (and there is enough of it to measure all the cuts you’d like with significance) or success is easily additive (and there is enough of it…) then you don’t really need a measure of engagement. In practice, ecommerce sites have been, rightly, much less interested in measures of engagement. And, as our more sophisticated media clients have gradually found ways to measure the monetization of every action (including each page view), they have also found engagement less useful.
In other words, if you think that “visitor engagement” is mostly used as a proxy for success and if you can directly and with significance measure your campaigns or site initiatives against success, you don’t need the proxy.
This does not alter my second usage of “visitor engagement” (as a tool for assessing lead value in ICM programs) and it does not change the fact that many sites (including many media, social, b-to-b, health and pharma, and public sector sites – all large verticals for us) simply cannot measure success directly.
The second type of engagement – “audience engagement” is Eric’s term – is used as a measure of comparability between sites for the purposes of media buying. When I first thought about this, I was highly skeptical of it (at least from the perspective of traditional ad banners as opposed to video). I’ve come around a bit. The core of my objection here is that internet advertising works differently than most traditional mass media – so the content actually competes with the message. This fact makes some sites very “engaging” but means they work very poorly for advertisers. In addition, the same measure of engagement used as a proxy for success would NOT work for audience engagement. And it bothered me that traditional measures of engagement might not properly show Google’s dominance in the space.
I think all of these concerns are pretty much correct. But I realized that there were areas where audience engagement might be useful. First, advertisers want to make sure that they are going to get sufficient run per visitor on impressions – and that means capturing enough views/time to make an impact. This seems a relatively straightforward measurement to me – and one I’d recommend making outside of the “engagement” calculation. But it is, nevertheless, an aspect of engagement that makes sense. I also realized that for brand integration (and I used the example of having a radio personality endorse a product), the level of engagement with the target brand is actually very important.
In both these cases, the higher the engagement the better for the advertiser – which is what I think would be demanded of a good audience engagement metric intended for this space.
So I’ve come to think that there is indeed a role for “audience engagement” in media buying (and, again, I’m concentrating on non-video here). But I’m not convinced that it is the primary metric for buying.
The third usage of engagement is as a method of quantifying “brand” impact. I was actually hoping that Eric and Joseph’s work would be aimed at this role, but I don’t think it is. The goal here would be to give web marketing managers a way to think about and quantify the branding value of their sites as part of their overall site optimization. This is an important usage for many corporate sites – it’s just hard as hell to do. I am not a brand skeptic. I think brand value is real and should be quantified and I’d like to see some high-end companies or researchers take a stab at it. Maybe someday we’ll get an opportunity to tackle this problem with some client who has enough money to invest in something that approaches “theoretic” research!
If these are the three terms-of-art, what were the practical issues concerning engagement I discussed at eMetrics?
For the most part, the practical concerns I expressed focused on “visitor engagement.”
The first, and most important practical issue with visitor engagement, is a big and very real problem not only in using engagement but in using actual success to optimize campaigns.
The essence of the problem is simple: you have to optimize campaigns in something like real-time. But the value of visitors to most sites (their engagement or success) can only be measured over fairly significant chunks of time.
This leads to a serious optimization dilemma. Using just visit-based behaviors to optimize campaigns and site initiatives can seriously understate campaign results AND can cause large misoptimizations. But if you wait around for three months to measure actual results, you may have wasted a lot of money. And even three months may not be long enough for some businesses to establish visitor value. What’s worse, things may have changed in the intervening time, so your final results aren’t even useful.
I’m finding that we are increasingly focused on the problem of predicting visitor value (success or engagement) from early visit behavior. This is a fundamentally different problem than measuring engagement – and it cannot use all of the same factors (like return frequency). As I mentioned, this prediction problem exists even in cases where you can completely measure site success.
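To show what I mean by a fundamentally different problem, here is a minimal sketch of the prediction side: fit a simple model on first-visit features against eventual visitor value. Everything here is invented for illustration – the features, the toy data, and the plain-Python logistic regression are stand-ins for whatever modeling approach you’d actually use – but notice that the inputs are early-visit behaviors only, not the long-horizon factors (like return frequency) that a full engagement score can use.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic gradient descent on log-loss; no libraries needed."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Synthetic first-visit features: [page_views, minutes_on_site, reached_key_page]
# paired with whether the visitor later proved high-value (1) or not (0).
X = [[8, 9.0, 1], [6, 7.0, 1], [7, 8.0, 1],
     [1, 1.0, 0], [2, 2.0, 0], [1, 1.5, 0]]
y = [1, 1, 1, 0, 0, 0]

w, b = fit_logistic(X, y)

def predict(features):
    """Estimated probability that a first visit leads to a high-value visitor."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)
```

With a model like this you can score a campaign’s traffic on day one instead of waiting three months – accepting, of course, that you’re optimizing against a prediction rather than a measurement.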
In my mind, this is a larger and a more difficult challenge than measuring engagement.
A second issue we’ve seen using “visitor engagement,” is that many of our lead-gen client sites show a distinct segment of committed brand visitors. These visitors show up and immediately generate an action or a lead. They have very few page views, short time-on-site and may have very few visits. However, actual quantification of lead-value suggests that this is usually a highly valuable segment. Including this group (which can be substantial for strong brands) in your engagement optimization results in very misleading numbers. We try to remove them entirely from most analysis and reporting that relies on engagement metrics.
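A sketch of that carve-out, with thresholds that are purely illustrative – every site would have to work out its own definition of the immediate-converter segment:

```python
# Hypothetical thresholds: a "committed brand" visitor converts almost
# immediately, with minimal page views and time on site.
def is_committed_brand(v):
    return v["converted"] and v["page_views"] <= 3 and v["seconds_on_site"] <= 120

visitors = [
    {"id": 1, "converted": True,  "page_views": 2,  "seconds_on_site": 45},
    {"id": 2, "converted": False, "page_views": 14, "seconds_on_site": 600},
    {"id": 3, "converted": True,  "page_views": 9,  "seconds_on_site": 420},
]

# Exclude the committed-brand segment before any engagement calculation;
# count and value it separately so it isn't simply lost.
engagement_population = [v for v in visitors if not is_committed_brand(v)]
committed_count = len(visitors) - len(engagement_population)
```

Visitor 1 is exactly the case described: a highly valuable lead whose behavioral footprint would drag down any engagement average it was included in.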
The third issue with engagement calculations has to do with the fact that significant pieces of a site are not functionally a part of the sales/content cycle. If you remove a bad navigation page, you may reduce overall views on your site – and this can apparently reduce engagement. We don’t want our metrics to mislead users about the impact of site improvements. How do we handle this? We use the Functional approach to carve out site elements that are navigational, informational, explanatory, and ancillary (like job-seeking). We remove all of these from the calculation. This greatly sharpens up the measurement and makes sure we aren’t crediting campaigns for bringing in lots of job seekers (unless that’s what we want).
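Here is a minimal sketch of that kind of Functional carve-out. The page classifications and URLs are invented for illustration; the idea is simply that only views of pages that are functionally part of the sales/content cycle count toward engagement:

```python
# Hypothetical functional classification of site pages.
PAGE_FUNCTION = {
    "/home": "navigational",
    "/products/widget": "content",
    "/careers/openings": "ancillary",
    "/buy/checkout": "sales",
    "/help/faq": "informational",
}

# Only these functional classes count toward the engagement calculation.
COUNTED = {"content", "sales"}

def engaged_views(view_log):
    """Count page views that are functionally part of the sales/content cycle.
    Unclassified pages default to 'content' (a conservative assumption)."""
    return sum(1 for url in view_log
               if PAGE_FUNCTION.get(url, "content") in COUNTED)

visit = ["/home", "/products/widget", "/help/faq", "/buy/checkout"]
```

Under this carve-out, removing a bad navigation page (or attracting a wave of job seekers) leaves the engagement number untouched – which is the behavior you want from the metric.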
Lastly, though I hardly got to talk about this at eMetrics, there is an old-fashioned tool limitation. Our current generation of web analytics tools makes generating visitor engagement scores very challenging. These tools do not provide even a hint of the predictive modeling capabilities necessary to solve the problem of predicting customer engagement or success value from early visit usage patterns. For the real-world practitioner, this can make life miserable.
It’s always painful to end a topic with an admission that our tools are so very unsatisfactory. But at least we do know from prior experience that time will ease this problem.
Tools will always get better while practice is much less predictable.
Handling these three practical issues (and finding a way to get your tool to do what you need) is essential if you’re going to use visitor engagement as a practitioner. Failure to do so may leave you worse off than when you began. For, as I’ve repeatedly said about Search Engine Marketing programs, there is nothing worse than optimizing to the wrong thing.