Hey Gary –
Whew! Back from my week-long family vacation to Seattle and the Pacific Northwest beaches and ready to dig back in (after digging OUT from the work waiting upon my return). I’ve read through your most recent post a few times to get back on track, and I think I’ve answered all the lingering questions in this post – though don’t hesitate to call me out if I miss something. I am still at least partly daydreaming of the windy, cloudy, beautiful shores of Washington.
Let’s start with your assertion that a good creative brief should include, at minimum, the two-tiered segmentation information – who the intended audience is and what they might be trying to accomplish – but should also include demographic profiles, related content interests, pre-experience and choice drivers. In an ideal / utopian world, I 100% agree with you. Some companies may have all the tools, data and process in place to make this happen at the pace necessary to feed a robust testing program – but my experience tells a different story for the majority. This level of information can generally be analyzed and provided monthly at best – and more often only quarterly or even annually. That doesn’t mean it shouldn’t be done – nor does it mean it shouldn’t be included in individual test creative briefs where applicable. However, I see it more like this:
Step 1: Divide the site into rational segments based on use cases. For example – navigation elements and main landing pages might have the goal of helping the customer find the right category of product, information or service; product detail, search, comparison or other filter pages might have the goal of helping the customer find a specific product, information or service; and cart, registration or account sign-up pages might have a “conversion” or completion metric as the primary goal.
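To make Step 1 concrete, here is a minimal sketch of such a use-case mapping. The section names, goals and metrics are hypothetical stand-ins for whatever your own site taxonomy looks like:

```python
# Hypothetical Step 1 output: each site area mapped to its primary
# use case and the success metric that use case implies.
SITE_SECTIONS = {
    "navigation":     {"goal": "find right category", "metric": "category_click_through"},
    "landing_pages":  {"goal": "find right category", "metric": "category_click_through"},
    "product_detail": {"goal": "find specific product", "metric": "add_to_cart_rate"},
    "search":         {"goal": "find specific product", "metric": "result_click_rate"},
    "cart":           {"goal": "complete conversion", "metric": "checkout_completion"},
    "registration":   {"goal": "complete conversion", "metric": "signup_completion"},
}
```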
Step 2: Run a two-tier segmentation analysis on these main site sections (or purposes). You can even break each main site section down into its component parts if you have the resources and bandwidth to do so (for example – instead of “category identification” you might have “category identification – header” and “category identification – landing pages”).
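And a sketch of what the Step 2 analysis itself might look like, assuming a hypothetical visit-level export (the file name and columns are invented for illustration) where the two tiers – visitor_segment for the “who” and visit_intent for the “what” – have already been assigned upstream:

```python
import pandas as pd

# Hypothetical export: one row per visit, with both segmentation
# tiers already assigned by your segmentation model.
visits = pd.read_csv("visits.csv")  # columns: site_section, visitor_segment, visit_intent

# Two-tier breakdown for each site section: what share of the
# section's traffic belongs to each (who, what) pairing?
two_tier = (
    visits.groupby(["site_section", "visitor_segment", "visit_intent"])
          .size()
          .rename("visits")
          .reset_index()
)
two_tier["share_of_section"] = (
    two_tier["visits"] / two_tier.groupby("site_section")["visits"].transform("sum")
)

print(two_tier.sort_values(["site_section", "share_of_section"], ascending=[True, False]))
```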
Step 3: Run a satisfaction impact analysis by two-tier segment and site section. (Dare I suggest a third tier for testing purposes?) Looking at the two-tier segmentation analysis by site segment will give you a great place to start ideating test ideas and a very clear roadmap for each site segment. Who is using this area of the site, and for what purposes? Layering in survey data or another source of satisfaction measurement will tell you who is dissatisfied with which area – and, thanks to the two-tiered analysis, with which purposes they are struggling. One step further tells you not just who is least happy with what site area and attempted function, but also which segment-goal-area grouping is most valuable to the company and how much an improvement in satisfaction for each of these groupings will mean for your bottom line.
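One way to rough out the Step 3 impact math – the inputs and the weighting below are assumptions, a sketch rather than a prescribed formula – is to rank each segment-goal-area grouping by the value sitting behind its satisfaction gap:

```python
import pandas as pd

# Hypothetical inputs per segment-goal-area grouping: survey-based
# satisfaction, visit volume, and an assumed value per visit.
groupings = pd.DataFrame({
    "site_section":     ["category_id", "category_id", "product_detail", "cart"],
    "segment":          ["ready_to_buy", "researcher", "researcher", "ready_to_buy"],
    "monthly_visits":   [120_000, 45_000, 80_000, 30_000],
    "value_per_visit":  [1.10, 0.40, 2.25, 6.80],   # assumed revenue attribution
    "avg_satisfaction": [7.9, 5.2, 6.1, 8.4],       # 0-10 survey scale
})

TARGET_SAT = 9.0  # aspirational ceiling; use whatever your program benchmarks against

# Crude impact score: volume x value x how far satisfaction falls short.
groupings["sat_gap"] = (TARGET_SAT - groupings["avg_satisfaction"]).clip(lower=0)
groupings["impact_score"] = (
    groupings["monthly_visits"] * groupings["value_per_visit"] * groupings["sat_gap"]
)

print(groupings.sort_values("impact_score", ascending=False))
```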
Step 4: Present the overall creative brief and use it for brainstorming specific test ideas. So here we get to the kind of creative brief you describe in your post – again, something I think is highly recommended and sorely missing in almost every program I’ve been involved in. However, rather than producing this level of creative brief for each test, my recommendation would be to do it on some regular cadence (monthly, quarterly or annually as resources can support) and present a site-segment creative brief showing how the different two-tiered segments are using the site areas and how satisfied (or dissatisfied) they are. Attendees for this presentation should include representatives from design, IT, product and site owners (where relevant). The goal of the presentation should be to come out the other side with a list of potential test focus areas. (Note – analysts should always come with some ideas of their own based on their analysis, but should also capture new ideas that come from the other attendees.)
Step 5: Prioritize! Once you have a list of test ideas focused on each segment-goal-area grouping, you can use your understanding of which groupings represent the greatest opportunity for improvement to prioritize your roadmaps – for each site area and overall – to ensure your resources are best aligned to achieve optimal impact. (Note – further analysis may be required here to determine which test ideas within each segment-goal-area grouping offer the greatest potential.)
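One common framing for this kind of prioritization – not the only one, and the scores below are made up – is an ICE-style score combining impact, confidence and ease:

```python
import pandas as pd

# Hypothetical backlog of test ideas, scored 1-10 by the team.
ideas = pd.DataFrame({
    "idea":            ["bullets vs paragraphs", "guided finder", "sticky compare bar"],
    "grouping_impact": [8, 9, 6],   # from the Step 3 impact analysis
    "confidence":      [7, 4, 6],   # how sure are we the idea addresses the gap?
    "ease":            [9, 3, 7],   # 10 = trivial to build and launch
})

# Geometric mean of the three scores, so one very weak dimension
# drags the priority down instead of being averaged away.
ideas["priority"] = (
    ideas["grouping_impact"] * ideas["confidence"] * ideas["ease"]
) ** (1 / 3)

print(ideas.sort_values("priority", ascending=False))
```

Any monotonic combination works here; the point is a consistent, defensible ordering of the roadmap rather than the specific formula.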
So – should creative briefs include all the information you outline (pasted below)?
It should start with a 2-tiered segmentation telling your creative team who the target audience is and what they are trying to accomplish. Then it should describe that target audience in terms we all understand – their key demographics. It should outline what that target audience tends to consume, and, even more important, exactly what they’ve likely seen when they arrive at the test. Finally, it should drill-down into the key factors that are going to drive choice around whatever decision the test is trying to influence.
Yes for the “overall creative brief” presentation…but not necessarily for each specific test. Once you have your prioritized list of ideas for each site area roadmap, you will need a more streamlined and much simpler creative brief. This single-test creative brief should start with your last element – a drill-down into the key factors that are going to drive choice around the decision the test is trying to influence (likely the only “unique” or “new” information created for the specific test brief). All the other information relevant to the specific test can simply be copied from the broader overall area creative brief already shared and pasted into a backup or appendix section – the focus of the kickoff stays on the detailed use case and goals of each specific recipe.
One more thing on creative briefs – is it possible to run tests without any creative brief? Yes…sort of. For example – if you want to increase overall site conversion and have no means to differentiate experience by segment or intent, you will go after the elusive (and likely illusory) “average customer” and focus on increasing conversion for the “whole” rather than the parts. This is where many companies sit today, and why they get frustrated so quickly with their test programs. Test A clearly showed that bullets worked better than paragraph copy on page type X, but Test B says that deep content in paragraph form works better on page type Y. What they’re missing is that two different segments with two different goals dominate those two page types. The pages in Test A are most often visited by customers who fall into the “ready to buy – want to compare” two-tiered segment, while the pages in Test B are most often visited by the “researcher – need information about a specific product before purchasing” segment. So is it possible to avoid the creative brief? Yes. Is it recommended? Generally the answer is still no – even if you do not have the capabilities to customize experiences. Why?
Because even the simplest creative brief outlining the possible segments and use cases for the area being tested can radically reduce the backend effort required of the analytics team to derive and share meaningful, actionable results.
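To put toy numbers on the Test A / Test B story above – these figures are invented purely for illustration – a blended read can crown one recipe while the per-segment read tells a very different story:

```python
import pandas as pd

# Toy result set: the same bullets-vs-paragraphs test read two ways.
results = pd.DataFrame({
    "segment": ["ready_to_buy"] * 2 + ["researcher"] * 2,
    "recipe":  ["bullets", "paragraphs"] * 2,
    "visits":  [40_000, 40_000, 10_000, 10_000],
    "orders":  [2_400, 2_000, 300, 520],
})
results["cvr"] = results["orders"] / results["visits"]

# Blended read: bullets "win" because ready-to-buy traffic dominates...
blended = results.groupby("recipe")[["visits", "orders"]].sum()
blended["cvr"] = blended["orders"] / blended["visits"]
print(blended)

# ...but the per-segment read shows researchers convert better on paragraphs.
print(results.pivot(index="segment", columns="recipe", values="cvr"))
```

When the brief names the likely segments and use cases up front, the analyst knows to run that second read before declaring a winner – which is exactly the backend effort a simple brief saves.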
Whew! That was a lot. Hopefully not too much to consume.
Just one thought on your second goal for this discussion – I would challenge analytics teams to have testing on their minds as they conduct each and every analysis for their companies. Social media analysis? Segmentation analysis? Satisfaction analysis? Even a page weight and load time analysis – all of these can and should lead to test ideas to feed the test program. My goal for any analytics team I ran would be to ensure every single analysis includes concrete test ideas tied to each finding, along with potential impact, ease of implementation and recommended priority against the other ideas in that specific analysis. Wouldn’t that be a beautiful world?
So – that’s it for this time. What do you think, Gary? Is my dream reasonable? Can we ask analytics teams to provide more than just the “what”, “why” and “so what”? Can we ask them to go that next step and provide the “now what” as well? And – in an even more ideal world – maybe even include the “how?!?”
Kelly