Part II - Engagement as a Media Metric
In the first post on this subject I proposed three distinct uses of the word engagement in web analytics and I covered the usage (as a success or lead-value proxy) that is most familiar to us at Semphonic. After that post, Eric Peterson sent me an excellent and quite detailed comment that I’m going to strongly suggest you read. Why? Well, a good chunk of it was stuff I was going to say but now feel stupid repeating. Thanks Eric….
Eric didn’t entirely replicate my thinking, however, and I’m going to take up my second usage of Engagement (providing a measure that can be used by media buyers to compare sites) and just tailor my thoughts a bit so as not to repeat his points. Here is what Eric wrote about this type of engagement:
“The second measure of engagement, within the realm of media measurement, is something I have started referring to as "Audience Engagement." Audience Engagement has to be measured **not** using a census-based system as it requires cross-site visibility. This measure of engagement almost certainly includes some of the measures of engagement you'd include in visitor engagement (time, click-depth, recency) but would likely **not** include more site specific actions. Audience Engagement gives media planners and buyers a different ruler against which to judge the audience visiting competing properties. It is still not completely clear to me how media planners will use a measure of Audience Engagement; it is only clear that they are actively looking for such a measure.”
I like the term “audience engagement” – and Eric’s comment that “it is still not completely clear to me how media planners will use a measure of Audience Engagement” echoes my sentiments exactly. In fact, I have deep doubts about this project.
My sense is that this usage of engagement has been shaped by historical and offline concepts in ways that make it quite suspect. At its core, I believe it is an attempt to approximate reach on the web - something like a GRP.
Media measurement started out by focusing on page views as a comparative measure. This approach had a certain built-in logic, since it matched the unit at which media was sold. Page-view-based media comparison had its problems, however. It was increasingly broken for the very sites the traditional media measurement companies were most interested in – large media publishers. As video, Flash, and rich media became common on those sites, the meaning of a page view was greatly diminished as a measure of reach.
To counter this problem, traditional media measurement has moved toward time on site as a better approximation for media buyers. It is, in fact, a better proxy for a certain type of site – most notably those same large media companies. But as has been noted in many places and by many people, it is hardly a universal measure. When I was speaking at SMX last month and this question came up, Brett Crosby from Google made the point that one of Google’s success factors is reducing time on site. The less time someone spends before clicking out the better.
What’s curious about this is that if you accept – as I think is correct – that this use of engagement is meant to facilitate the buying function, then we should reasonably expect it to show Google as a huge winner. After all, that’s where all the dollars are and I don’t think it’s because buyers are stupid.
The Google example illustrates an important point about media measurement in general; it only seems applicable within a certain class of sites (I’m indebted to a discussion with Joe Shantz of PHD for clarifying some of my thinking here).
We might think time on site is interesting as a comparison of two news sites, but find it meaningless as a comparison of a news site and a social networking site. Unfortunately, sites don’t always fit into neat classifications. Large properties tend to cross many boundaries, so any form of media measurement that demands site commensurability is problematic from the get-go.
Nor is this the only issue at stake. At a deeper level, the issue is a fundamental difference between web display advertising and traditional mass media. In most traditional media, the experience of delivering eyeballs/ears is essentially synchronous. What I mean by that is that the programming doesn’t compete with the advertising – it funnels attention to it. On the web, this is almost never the case. In nearly every experience on the web, advertising and content are delivered asynchronously and are constantly competing for attention.
It is the fundamentally asynchronous nature of the web that accounts, in my view, for the dramatic difference in metric performance by site type. The site experience of advertising on a social networking site is fundamentally different from the site experience of advertising on a media site.
However, the differences are hardly limited to site type. Even sites within the same paradigm are dramatically different in their “friendliness” to advertising. Known techniques like ad scheduling (showing different versions of ads to reduce tune-out), ad collapsing (removing ads or changing layouts also to reduce tune-out) and behavioral targeting can make a dramatic difference in the actual performance of advertising - not to mention simple things like ad placement. If these techniques aren’t measured by a media metric, then what good is the metric?
I’ve said before in other contexts that choosing the wrong metric for optimization is usually much worse than not having a metric at all. I think this applies in spades to the situation in media measurement.
Metrics like time-on-site and click-depth completely miss the asynchronous nature of the web, and by focusing on the wrong metric they create a dissonance between the interests of an advertiser and the interests of a publisher. You can maximize time-on-site by reducing advertiser effectiveness. In effect, the metric becomes a self-defeating prophecy: publishers can make more by making things worse for advertisers!
This problem simply does not exist in Radio and TV and it’s why any attempt to shift the mass media measurement paradigm to the web (as content is currently structured) is doomed to failure.
It’s also easy to see – within this paradigm – one of the reasons why Google works so well. It doesn’t compete for eyeballs at all. There is no there, there.
Is there a metric that would actually be useful to assist in media buying?
I believe that any worthwhile media buying metric should do all the following:
1. Show Google as a hugely dominant player on the internet.
2. Reward sites for behaviors that improve advertising friendliness, like:
a. Behavioral targeting
b. Scheduling
c. Collapsing
3. Not require an artificial categorization of large properties into some pre-defined group.
4. Create no dissonance between the optimization of the publisher and the optimization of the advertiser.
I can’t think of any traditional metric that will meet these criteria. However, there is a behavioral metric that might help (at least for traditional display – video is a different animal).
What I have in mind is a measurement of quality click-throughs. A click-through is just a click to an advertiser from a site. Quality might be defined as something like “a click-through to an advertising or sponsoring site that is judged non-fraudulent by the target site, consumes more than one content unit and does not return to the originating site directly.”
You’ll note that the measure of Quality Click-Throughs (QCT) should meet the four criteria above. Google contributes a vast number of QCTs to the internet. It also delivers a fantastic rate of QCTs per impression. Techniques to improve advertising effectiveness will result in more QCTs and more QCTs per impression. QCTs are insensitive to site type because they reflect advertising friendliness. And finally, QCTs foster a shared interest between publisher and advertiser. QCTs even provide a nice metric for pricing and evaluating placements inside a site.
It seems to me that using QCTs, you can get a good sense of site reach (total QCTs), site quality (QCTs/CTs), and efficiency or advertising friendliness (QCTs/impression, QCTs/minute, or QCTs/dollar).
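To make the definition concrete, here is a minimal sketch of how the QCT classification and the derived comparables might be computed. Everything here is hypothetical – the field names (`fraudulent`, `content_units_viewed`, `bounced_back`) simply stand in for whatever signals a target site and ad server could actually report:

```python
from dataclasses import dataclass

@dataclass
class ClickThrough:
    """One click from a publisher's page to an advertiser's site.
    Field names are illustrative, not a real ad-server schema."""
    fraudulent: bool           # judged fraudulent by the target site
    content_units_viewed: int  # content units consumed on the target site
    bounced_back: bool         # returned directly to the originating site

def is_quality(ct: ClickThrough) -> bool:
    """Apply the three-part quality definition: non-fraudulent,
    more than one content unit consumed, no direct bounce back."""
    return (not ct.fraudulent
            and ct.content_units_viewed > 1
            and not ct.bounced_back)

def qct_metrics(clicks: list, impressions: int) -> dict:
    """Roll a raw click log up into the three comparables:
    reach (total QCTs), quality (QCT/CT), efficiency (QCT/impression)."""
    qcts = sum(1 for c in clicks if is_quality(c))
    return {
        "total_qct": qcts,
        "qct_per_ct": qcts / len(clicks) if clicks else 0.0,
        "qct_per_impression": qcts / impressions if impressions else 0.0,
    }

# A toy click log for one placement:
clicks = [
    ClickThrough(False, 3, False),  # quality
    ClickThrough(False, 1, False),  # only one content unit -> not quality
    ClickThrough(True, 5, False),   # fraudulent -> not quality
    ClickThrough(False, 4, True),   # bounced straight back -> not quality
]
metrics = qct_metrics(clicks, impressions=1000)
# metrics -> {"total_qct": 1, "qct_per_ct": 0.25, "qct_per_impression": 0.001}
```

A media buyer comparing two placements would then look at total QCTs for reach and QCTs per impression for advertising friendliness, exactly as described above.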
If I were a media buyer, these metrics would seem to me vastly more interesting as comparables than total page views, total time on site or any other similar metric. Of course, it’s not clear that these metrics are anything like a measure of engagement – except possibly with advertising.
One implication of all this is that the measure of engagement appropriate to media buying is not only different from but in many ways opposed to the measure of engagement appropriate to the site owner. Based on actual practice, where the costs and benefits of losing site visitors to networks like Google’s AdSense are routinely discussed, this seems reasonable to me.
There is nothing particularly revolutionary about the idea of measuring click-throughs. People do this all the time to evaluate advertisers and placements. However, adding the quality dimension sharpens the measure and makes it much more useful for advertisers as a comparable. That it doesn't necessarily measure how engaging a site is may be taken as a flaw - but given the asynchronous nature of the web I think any advertiser should be careful about using a measure that actually did capture that elusive quality.
In my next (and last) post on this topic, I’m going to cover a third important use of engagement – to measure the brand impact of a site. It's the one use of engagement where capturing that elusive state of "engagement" really is the goal.