Gary,
Sorry for the delay in my reply! It’s not due to lack of interest, I assure you. In fact, your last post really hit the nail on the head of one of my biggest pet peeves around optimization programs – namely that they so often fail to recognize and/or adequately communicate their incredibly important place within the analytics and business intelligence organizations.
Advanced analytics are so important in understanding the past and present and in helping to guide what should be the focus of the future. But as you so rightly point out, you can only analyze data you have. That’s where testing can step in and help take you to the next level.
Good testing programs start with analysis to determine where tests should focus and to generate ideas for what to test. Great testing programs build out entire roadmaps based on real business questions like the one you gave as an example (“What is the value of short-form video on all our sites and what is the best strategy for integrating that video?”). No amount of analysis can answer that question. But analysis can and should guide a testing roadmap for how to get those answers. And you’re right, the “pay-to-play” testing budget just won’t cut it for the kind of continuous, constantly learning, iterative testing required to get those answers.
Instead, the testing budget should be separate and distinct from any specific business unit so that the testing team can fully answer any and all business questions that come their way with the best possible test roadmap strategy. When the testing budget proves too small to meet all the needs of each business unit, the business units can help the testing team make the case for an increased budget, or they can speak up to influence prioritization efforts. Any robust program will have bottlenecks due to limited resources and/or available testing real estate and traffic. This is where prioritization begins to truly prove its importance and its value to the organization by helping to ensure every test roadmap that is developed is of the highest quality with the highest potential ROI.
It’s always been odd to me how analytics teams rarely require business units to “pay” to get their business questions answered but testing teams often do. Why are we different? Why don’t we have a budget structured similarly to analytics? I have seen one very interesting example of an organization getting around the pay-to-play issue by moving the testing org under the analytics team. In that situation, roadmaps were decided and designed by the analytics team based on business questions that required further validation through testing. The testing cost was then covered by the analytics organization rather than the business unit – allowing the testing team to really focus on building out the right strategy to get the best answers quickly. I’d love to see more organizations move to this type of funding strategy or give their testing teams a fully independent budget.
You mention almost in passing that ‘wins and losses aren’t what’s important because every test yields analytic knowledge that then feeds back into broader strategies.’ Each year at X Change, I have asked attendees what their primary KPI is for measuring the success of their program. Five years ago, it was win rate nearly across the board. That has been steadily shifting since then, and though win rate still leads, teams are beginning to redefine “wins” or even move almost entirely to more of a “success” or “learning” rate, where the success of a program is determined not by how many tests led to a measurable increase in revenue, conversions or leads, but by how well each test was designed to help us find those all-important answers to real business questions. How well did we learn? How actionable were the insights gained? Are there clear findings we can apply immediately to our site or to future site designs? How scalable are the results? Do they apply to more than one area of the site or even to other sites owned by the brand? When we change the conversation from “did it win?” to “what did we learn and what decisions can we make based on those insights?”, we change everything. And for the better.
You ask near the end of your last post if I think testing departments and consultancies are doing a good job with this kind of testing. Sadly – no. It is incredibly rare to see any organization doing this type of testing. And the consultancies I see are also failing to offer this level of analytics-driven strategic roadmap design. Instead, I see very function-based roadmaps – “we’ll help you optimize your checkout funnel”; “we’ll help ensure your category pages are driving more purchases”; “we can help you increase sign-ups by x%”… Ok. Sure. I can do that, too. So can you. But what about helping me figure out what questions I should be asking in the first place? Who is my audience? How are they trying to use my site? Where are they struggling and why are they struggling? Got a site redesign planned for the coming year? Let us build you a test & learn roadmap to help with planning that new design! Want to add some completely new technology to your site but would like to understand the potential impact and ensure you can get ROI? There’s a roadmap for that.
Roadmaps designed to result in a ‘best-practice’ document that can be used as guidelines site-wide are far more valuable to the organization than a roadmap that results in a new design for a single page, page family or funnel. The latter simply redesigns the user experience and can give a boost today. The former can help drive a continuous-improvement learning culture in which every change on the site is looked at through the lens of what you’ve learned about your customers.
There is honestly a place for both. The quick-win, low-hanging-fruit, fix-this-page-path-or-tool type of roadmap can be run quite efficiently and effectively without extensive, broader planning and support. Most organizations stop there – but to be a truly world-class testing organization, I believe at least one half to three quarters of the testing budget should go to support the comprehensive, best-practice-development test & learn roadmap.
Ideally, the more comprehensive roadmap development happens on a quarterly basis in conjunction with product or marketing teams who have line of sight to coming changes, new products, or big promotional plans. The creative team or agency working on the comprehensive test & learn roadmap can then work with those teams and the analytics group to determine which insights can be gained through analysis and which require testing. Working hand-in-hand with analytics, the testing team can then get the necessary best-practice plans into the hands of the relevant parties so the important decisions can be made before the product, technology, or promotional launch ever hits the site – giving companies the best chance at proactive success planning and (hopefully) avoiding the reactive scramble to “fix” what we should have known would not work post-launch. Once the new design, promotion, or product is fully live to all traffic, the other part of the testing team can pick up the ball and focus on continuous optimization efforts, helping to drive it to the next level. Working together, these two types of roadmap can help deliver a success graph that looks more like stairs than the hills and valleys of so many programs that skip that comprehensive roadmap prior to launch.
So – how’s that for “Kellyvision”?