
Comments

As always, Gary, a thought-provoking post! While these are techniques that are potentially *repeatable*, to what extent would it make sense for a company to actually repeat them on a recurring basis? As an agency, certainly, it seems like repeatability is a desired goal. As a single company/brand, though, how often would it make sense to repeat these techniques? Are all three applicable to run at the cadence of site updates (do the analysis, update the site/content, wait for collection of new data, repeat...)?

I've shied away from the phrase "recurring analysis" in the past, as that tends to muck up the distinction between performance measurement and analysis. And it leads to a situation where I'm pointing out that, "If nothing was changed, the results are likely to be the same."

Good luck with the experiment!

Tim,

It's a great point - there's no doubt that recurrence is more important to me than it is to enterprise practitioners, and that's true for each of the techniques I've laid out (as it would also be for Market Basket analysis in retail analytics). Like Market Basket analysis, most of these techniques should be repeated, but they aren't necessarily something you do on a constant basis. They are probably, as you suggest, the types of analysis you'd want to match to the cadence of site change - repeatable in the context of significant site changes, new content, or simply enough elapsed time to make a difference plausible. In truth, very few analytic techniques are consistently employable in any other fashion - though direct response modeling is probably an example of one that is.

As you suggest, if nothing much has changed in the business, how can any analysis be expected to add incremental insight every time it's run? Even something like attribution analysis, while likely to be constant in execution, will probably yield only sporadic conclusions of interest unless a company is unusually dynamic in its campaign strategies.

On the other hand, I think that having a quiverful of techniques that are nearly guaranteed to yield value is almost as critical for enterprise-side practitioners as it is for consultants. Those techniques tend to carry a lot of water in the organization - making it far easier to justify the exploratory analysis projects and the team necessary to conduct them. No analysis team is going to be successful all the time (or probably even a majority of the time) - so having some high-probability winners is very important.
