[Kelly Wortham and I have been talking optimization at X Change for years. Now that she’s part of the team here, we’ve been working on a piece together around building an Optimization Center of Excellence in the enterprise. Kelly also leads a regular group of testing folks who meet every month or so to discuss enterprise optimization programs.]
Kelly,
We’ve been working on a piece that captures a key theme for our practice – that analytics should drive testing. Naturally, we both believe that’s true, and it’s a theme I want to explore in some detail. But I wanted to start this series with the admission that it’s by no means 100% true (at least I don’t think so). I’ll start with the simple fact that the vast majority of testing programs I see are not driven by the analytics team; indeed, they are often completely independent of it and do very little pre-test analysis of their own. It isn’t that testing organizations are jealously hoarding their own analytics; it’s that the only analytics they feel the need for is evaluating whether, and with whom, a test succeeded. I’ve argued for a long time that this approach is backwards, particularly when it comes to segmentation and targeting. The idea that you run a population-wide test and then study it to see which groups performed best so that you can target them still seems ludicrous to me. Surely the only way to find interesting creative approaches for unique segments of the population is to START with the segmentation and build the creative around the population you’re targeting. Agree?
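To make concrete what I mean by that backwards approach, here’s a rough sketch of the post-test segment readout most shops rely on – run the test site-wide, then slice the results by segment afterwards to see who responded. The segment names and numbers are invented purely for illustration:

```python
# Hypothetical post-hoc segment readout: the test ran against the whole
# population; only afterwards do we break results out by segment.
# Segments and counts below are made up for illustration.
from collections import defaultdict

# (segment, variant, visitors, conversions)
results = [
    ("new_visitor", "control", 5000, 150),
    ("new_visitor", "variant", 5000, 210),
    ("returning",   "control", 3000, 240),
    ("returning",   "variant", 3000, 246),
]

# Conversion rate per segment and variant
rates = defaultdict(dict)
for segment, variant, visitors, conversions in results:
    rates[segment][variant] = conversions / visitors

# Relative lift of the variant over control, per segment
for segment, by_variant in rates.items():
    lift = (by_variant["variant"] - by_variant["control"]) / by_variant["control"]
    print(f"{segment}: control {by_variant['control']:.1%}, "
          f"variant {by_variant['variant']:.1%}, lift {lift:+.1%}")
```

The readout tells you which segments happened to respond to creative that was never designed for them – which is exactly the order of operations I’m questioning.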
But having said that, it seems to me that there are two types of testing strategy that enjoy widespread usage, make sense, and don’t require analytics. The first is what I’d call best-practices testing. People who do a lot of testing are going to notice significant patterns in what works. Aggressive, benefit-directed calls-to-action consistently outperform “Submit” as button text. Stronger color treatments or larger fonts routinely draw the eye to key places. Taking these best practices and layering them into tests on your site surely makes sense and is almost guaranteed to work. At least for a little while.
And I suppose that’s the rub. You could probably make most of those changes without even bothering to test. They really are just best practices – and once you’ve done a round of them, where do you go? I don’t suppose best-practice gurus hoard their practices and give them to companies in little dribs and drabs? Assuming that’s not the case, after you’ve done one round of best-practice improvements it seems like the well is going to run pretty dry. What’s your experience with this type of testing improvement?
The second type of strategy is one I’d describe as pure creative testing. Back when I was doing political direct mail, we’d have several writers take a crack at what amounted to the same letter. Naturally, though, they’d each write it a different way. Same themes, same call to action, but very different styles. When we’d test those letters, they’d perform very differently – and we would often see that a particular style or writer resonated with specific segments. That’s almost exactly the methodology I routinely see in testing – although most testing shops don’t seem to test as many full-on creative alternatives as I’d really expect. I’m curious whether you agree.
Anyway, I can’t see analytics ever replacing this pure-creative approach. It might supplement it and provide better creative briefs and a more segmented starting point, but much of this kind of testing just seems inherently bound up with the writing process.
You’ve obviously worked on lots of different kinds of tests and testing programs. In addition to what I’ve already asked, I’ll add two big summary questions. First, do you agree that, in addition to analytics-driven testing, the best-practices and pure-creative testing strategies I’ve described here make sense? Second, if so, what’s the right balance between them, do they form a progression, and how do you know when to tackle each?
Looking forward to your thoughts!
Gary,
Thought-provoking as always. The problem with best practice is that the context, customers, site, and value proposition can be so radically different that your 'sure-fire winner' on site A will tank when tested on site B.
However, there is a huge amount of digital cruft, bugs, browser and device compatibility issues, screen resolution problems, no-brainer UX issues and general improvements that most sites can make. I call this JFDI work (just effing do it), and the successful companies I work with are not only optimising products but also running split tests and fixing sub-optimal stuff (often many small things) on a continual basis.
I've run a fair few tests (over 40M visitors), so the best-practice question has always interested me. The reality is that all the tests people show off to others can lead people to think a winner can simply be replayed for the same impact and success on their own site.
I would say that best practice provides great input on the themes, styles, and approaches you can use, but it will never replace finding the optimal result for your own site, visitors, personas and so on.
When you talk about pure creative testing – this almost feels like Shakespeare's monkeys. Without directionality, inputs, or customer insight, these are just guesses, and not very well-informed ones. When you do get a good result, it's probably more luck than effort.
Let Kelly know that I'm happy to share some experiences here on this stuff, if you think she'd find it useful. I've been advising some other large organisations on how they build and run optimisation programmes (for themselves or for clients) and can give you an unvarnished, unbiased view here.
Posted by: Craig Sullivan | July 23, 2014 at 04:11 AM