Gary,
I’m going to do my best to “bring it home” as you requested in your last post, but that’s a lot to cover in a single post. Hopefully I won’t lose folks along the way! Stick with me, everyone – I promise to make it worth your time.
Measuring the success or ROI of any program or team can be challenging. Analytics teams have struggled with this for far longer than optimization teams have even existed, so you know very well how challenging it can be to devise a way to measure both operational efficiency and impact. You are absolutely correct that it is nearly impossible to do both with a single KPI. I’ve often used an analogy to explain my viewpoint here. In my experience, most marketing teams are working against each other to claim a larger slice of the overall “pie”. Unfortunately, the PPC team is working toward a pecan pie while the email team is increasing the size of their cherry pie and the onsite search team loves growing their apple pie. At the end of the day, the site itself is one crazy, nasty-tasting pie. If, instead of focusing on their slice of the pie, all groups (marketing, merchandising, analytics and optimization, et al.) were to focus on the size and taste (performance) of the overall pie, the resulting pie would likely win blue ribbons at the fair!
So what would this look like in real life? I’m of the opinion that all teams should have a single shared “lag” goal. This goal represents the ultimate goal of the website, and all teams should be bonused against this lag goal to ensure everyone is driving toward it. Then, “lead” goals should be developed for each team that – when improved – are predictors of later improvements to the lag goal (assuming no other negative drags on the lag goal). Those lead goals then become the second bonusable measure for each team, and they become the KPIs of the program. “Win Rate” and even “Learning Rate” are not likely to be in any way predictive of your overall site goal, whether that is revenue per visitor, new customer registrations, or engagement. The reason is that neither is connected to actions – and if you do not act upon the wins or insights, there is no way for those results to impact customer experience or site performance.
Therefore – with optimization programs, I feel the best lead measure is Actionable Insights: what percentage of tests run lead not just to insights, but to insights you were able to act on? It’s awesome that we learned customers prefer a single-page sign-up or checkout – but if we cannot reasonably create such an experience due to technical limitations, that insight is just that: interesting. Ironically, having actionable insights as a test program’s KPI can dramatically reduce the testing you do and help prioritize your roadmap, ensuring your limited resources are directed only to areas where you will be able to make a decision based on what you learn. In the analytics world, we have all been preaching the push-back response of “What decision will you make with that data?” for a long time – we must learn to do the same with the shiny new thing that is optimization. “Can we test it?” should be changed to “Should we test it?” and “What action will be taken based on each potential result?” If we go into a test idea knowing it will be nearly impossible to implement the resulting winner, we need to ask ourselves whether the learning is worth the drag on limited resources.
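To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely hypothetical field and metric names) of how a program might track an Actionable Insights rate alongside a Learning Rate and an implementation rate from a simple test log. It is an illustration of the idea, not a prescribed implementation.

```python
# Illustrative sketch only: hypothetical record fields and metric names,
# showing how "Actionable Insights" could be tracked as a lead KPI.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TestRecord:
    name: str
    produced_insight: bool     # did the test teach us something?
    insight_actionable: bool   # could we act on it with current infrastructure?
    winner_implemented: bool   # did a change actually ship as a result?

def program_metrics(tests: List[TestRecord]) -> Dict[str, float]:
    total = len(tests)
    if total == 0:
        return {"learning_rate": 0.0, "actionable_insight_rate": 0.0, "implementation_rate": 0.0}
    learned = sum(t.produced_insight for t in tests)
    actionable = sum(t.produced_insight and t.insight_actionable for t in tests)
    implemented = sum(t.winner_implemented for t in tests)
    return {
        "learning_rate": learned / total,               # share of tests that taught us something
        "actionable_insight_rate": actionable / total,  # the proposed lead KPI
        "implementation_rate": implemented / total,     # how often learning became change
    }

# Made-up example data, purely for illustration:
log = [
    TestRecord("single-page checkout concept", True, False, False),  # interesting, not buildable today
    TestRecord("simplified sign-up copy", True, True, True),
    TestRecord("navigation reorder", False, False, False),
]
print(program_metrics(log))
```

The only point of the sketch is that the actionable-insight rate can rise only when tests are chosen with implementation in mind, which is exactly the prioritization pressure you want the KPI to create.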
Important note: I’ve seen some companies combat this issue by having a separate bucket of money just for this type of testing. They consider this testing pipeline the “innovations” channel, and test results that come out of it are used only to feed the future-state innovations that everyone knows are 1-3 years down the line at the earliest. This type of testing roadmap would clearly be measured only on the Learning Rate, and I have no issue with that, though I’d like to see some measure of “time to action” or something similar to ensure the program does not stop at insights but also pushes for future optimizations. But for all other types of testing – programmatic, best practices, or analytics-driven – if you know going in that you cannot implement a winner…maybe you shouldn’t be testing it.
The really challenging part of this comes with the deeply held belief most digital analysts share that test ideas, like analyses, should consider the audience segment and tailor the test design to the segment and the intent of the visit. Unfortunately, many companies do not have the back-end capability to actually deliver a different experience to different segments, so testing in this way leads to results that are fascinating…but not actionable. Therefore, companies must determine up front what can and cannot be done with their existing infrastructure before the test roadmap is developed. Ideally, a plan can be put in place to use the testing program (perhaps the future-state pipeline mentioned above) to gather data supporting an ROI argument for updating infrastructure to support personalization in the future.
Your second question was around creative agencies and the complexities often caused by having one design team working with the optimization team and another working on big future-state site redesigns. This frequently leads to beautiful yet non-optimized new designs that the optimization team is asked to help “fix” post-launch. If instead the optimization team were to feed their insights into the design team responsible for the big site redesigns, much of this issue could be eliminated entirely. I recommend frequent – perhaps quarterly – sessions with the entire design team (in-house or agency) to share the insights gained from analytics and optimization efforts about how customers are using, and would prefer to use, the site. It must be driven home that these preferences are in no way static: as the web in general adds new functionality, customers begin to expect the same on all sites. There must be some recognition and acceptance that what was true just six months ago may no longer be true today, and the analytics and optimization teams can help identify those changes before customer satisfaction is too negatively impacted.
The model I’d love to see is one where every new site element or major site change is funneled through a “library” of sorts, managed and updated by the optimization analysts, to learn whether any existing insights might be relevant to the design of that new element or site update. The optimization analyst then reviews the existing information and determines what we already know and what additional information (if any) would be helpful to know up front. From this, the analyst can provide a reference document supplying the creative team with what we already know, along with a recommended roadmap of new tests designed to flesh out what we don’t yet know. In this way, the designers and the optimization team work hand-in-hand to proactively create new content most likely to perform well right out of the gate, ideally leading to lag site metric performance that looks more like stairs than the mountains and valleys most commonly seen in the waterfall approach to site design.
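If it helps to picture the “library,” here is a small, purely illustrative sketch (again in Python, with made-up names, fields, and entries) of the kind of tagged insight store an analyst could query when a new element is proposed. Nothing about the structure is prescriptive.

```python
# Purely illustrative sketch of an insight "library" an optimization analyst
# might query before a new site element is designed. All names, fields, and
# example entries are made up for illustration.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Insight:
    summary: str
    tags: Set[str]              # e.g. {"checkout", "registration"}
    still_current: bool = True  # customer preferences shift; insights can expire

@dataclass
class InsightLibrary:
    insights: List[Insight] = field(default_factory=list)

    def add(self, insight: Insight) -> None:
        self.insights.append(insight)

    def relevant_to(self, element_tags: Set[str]) -> List[Insight]:
        """Return current insights whose tags overlap the proposed element's tags."""
        return [i for i in self.insights if i.still_current and i.tags & element_tags]

# Hypothetical usage: an analyst checks what is already known before a checkout redesign.
library = InsightLibrary()
library.add(Insight("Example entry: guest checkout preferred over forced registration",
                    {"checkout", "registration"}))
library.add(Insight("Example entry: persistent mini-cart aids navigation back to cart",
                    {"cart", "navigation"}, still_current=False))

for insight in library.relevant_to({"checkout", "cart"}):
    print(insight.summary)
```

The “still_current” flag is there to echo the earlier point that what was true six months ago may no longer be true today; expired insights prompt a retest rather than a blind reuse.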
I’ve written so much already that Gary and I agreed we’d call this Part I...in Part II, I’ll lay out the five things I think companies should be doing right now to have a better optimization program!