Guest Post by Kelly Wortham
I’ve been involved in several conversations lately regarding the pros and cons of MVT vs A/B for isolating and understanding results of optimization testing. The conversations go something like this:
- “A/B gets you an answer so much faster with easier to interpret and explain results!”
- “MVT is the only way to understand each element’s interaction with the other elements!”
- “MVT can create nonsense combinations that simply waste traffic and unnecessarily lengthen test time while also risking customer experience! At least with A/B you can be confident you’re avoiding that!”
- “If you’re not running MVT, you don’t have an advanced testing program!”
And so on. The thing is, these concerns may all be valid, but they miss the point. It's not about one vs. the other. It's about designing the test and test program to achieve your goal. And if your goal ISN'T insights that help you take action, it should be: without insights, you're still making decisions blindly, and without the ability to act, you just have some really knowledgeable analysts wasting resources.
But if your ultimate goal is to achieve actionable insights, you have more options than just MVT vs. A/B.
- You can manually create an A/B…N test with recipes designed to cover all relevant combinations of elements, isolating element and interaction impacts while avoiding the nonsense-recipe risk and wasted traffic inherent in tool-automated MVT. Runtime will be longer than a traditional A/B test simply because there are more recipes, but shorter than a full-factorial MVT, with less customer-experience risk and lower content-creation demand because the nonsense recipes are removed. Results should be as easy to interpret and communicate as A/B results.
- If you need to run multiple tests concurrently (such that the same customer can qualify for more than one test in the same visit), you can either combine the tests into an MVT to understand the impacts of the different combinations, or conduct a back-end analysis looking at each group separately (e.g., Test 1 control + Test 2 control = "real control," Test 1 control + Test 2 variant = true Test 2 variant performance, Test 1 variant + Test 2 variant = variant interaction impact, and so on). Both options require the same amount of traffic, and the results should be about equally easy to understand and communicate. However, setup may be more complicated if you combine into an MVT, depending on your testing tool's capabilities. Therefore, unless you are confident the two tests are likely to interact, it's usually best to simply split traffic and keep the results separate. If both variants win, you can always roll out a follow-up test combining them to measure the interaction impact.
- Best of all: don't focus on the interaction of a few elements or tests. Instead, design a continuous optimization roadmap with pre-planned A/B…N iterations and concurrent tests where reasonable, using traffic splits to ensure only the planned interaction tests run on the same traffic. Remember, the concurrent-testing analysis described in the second option above grows dramatically more complex with each additional test and recipe, and can quickly become unmanageable in both analysis effort and required traffic.
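The "manual A/B…N" approach in the first bullet can be sketched in a few lines: enumerate every combination of element variants, then drop the nonsense recipes before assigning traffic. This is a hypothetical sketch only; the element names, variant labels, and the "nonsense" rule below are illustrative assumptions, not from the post.

```python
from itertools import product

# Hypothetical page elements and their variants (assumed for illustration)
elements = {
    "headline": ["control", "benefit_led"],
    "hero_image": ["control", "lifestyle"],
    "cta": ["control", "urgency"],
}

def is_nonsense(recipe):
    # Assumed business rule for illustration: the urgency CTA
    # clashes with the lifestyle hero image, so skip that pairing.
    return recipe["hero_image"] == "lifestyle" and recipe["cta"] == "urgency"

# Full factorial = every combination; keep only the sensible recipes
all_recipes = [dict(zip(elements, combo)) for combo in product(*elements.values())]
recipes = [r for r in all_recipes if not is_nonsense(r)]

print(len(all_recipes), len(recipes))  # 8 full-factorial combos, 6 after filtering
```

The payoff is visible in the counts: a full-factorial MVT would spend traffic on all 8 combinations, while the hand-built recipe list runs only the 6 that make sense.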
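The back-end analysis for two concurrent tests in the second bullet can be sketched like this, assuming you have a visitor-level log of each person's assignment in both tests. The column names and tiny sample data are hypothetical, purely to show the group-by shape of the analysis.

```python
import pandas as pd

# Hypothetical visitor-level log: assignment in each test plus a conversion flag
df = pd.DataFrame({
    "test1": ["control", "control", "variant", "variant", "control", "variant"],
    "test2": ["control", "variant", "control", "variant", "variant", "control"],
    "converted": [0, 1, 1, 1, 0, 0],
})

# Conversion rate per assignment combination:
# (control, control) = "real control"; (control, variant) = true Test 2 variant
# performance; (variant, variant) = interaction of both variants; etc.
rates = df.groupby(["test1", "test2"])["converted"].agg(["mean", "count"])
print(rates)
```

Each row of `rates` is one of the groups described in the bullet, so comparing (control, variant) against (control, control) isolates Test 2's effect, and (variant, variant) shows how the two winners behave together.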
The optimal test type is much less about the tool and much more about your program strategy and overall goals.
Do you have a brand-new radical redesign to test? MVT isn't necessary (nor would it be helpful). Use A/B, primarily to measure the impact of your new design.
Are there several elements you want to test and you have data to help you design the optimal combination of each? Then you don’t need MVT to do it for you. You can just create it yourself!
Are you lacking data to tell you where to focus for a specific page redesign? Are there several elements in the template, and you're largely OK with the current template but would like to understand the best combination (and/or location) of elements? MVT might make the most sense here, to more quickly focus your future A/B testing efforts on the elements with the greatest potential impact.
Do you have the data to help you design smart A/B tests but just want to do "advanced testing" and think MVT is the next logical step? You might want to rethink that goal. The most advanced programs focus much less on the type of test and much more on whether the test program is delivering actionable insights, something MVT is not particularly well known for.
So next time someone asks you which is better, MVT or A/B, you can answer none of the above! Or is it all of the above? MVT might help you focus your efforts when data is limited or theories abound, but A/B drives learning and action.
So good luck and get testing! And as always – would love to hear your thoughts on this topic and all things testing.