Thought-provoking as always. The problem with best practice is that the context (customers, site, value proposition) can be so radically different that your 'sure-fire winner' on site A will tank when tested on site B.

However, there is a huge amount of digital cruft: bugs, browser and device compatibility issues, screen-resolution problems, no-brainer UX issues and general improvements that most sites can make. I call this JFDI work (just effing do it), and the successful companies I work with are not only optimising products but also running split tests and fixing sub-optimal stuff (often many small things) on a continual basis.

I've run a fair few tests myself (over 40M visitors), so the best-practice question has always interested me. The reality is that all these tests people show to others can lead some to think the same change can simply be replayed for the same impact and success on their own site.

I would say that best practice provides great input on the themes, styles and approaches you can use, but it will never replace finding the optimal result for your own site, visitors, personas and so on.

When you talk about pure/creative testing, it almost feels like Shakespeare's monkeys: without directionality, inputs or customer insight, these are just guesses, and not very informed ones. A good result is probably more luck than effort.

Let Kelly know that I'm happy to share some experiences on this stuff if you think she'd find it useful. I've been advising some other large setups on how they build and run optimisation programmes (for themselves or for clients) and can give you an unvarnished, unbiased view.
