(Part V of a Series on Methods in Web Analysis) Since there was a bit of a hiatus in this series on Internal Search while I wrote up some eMetrics comments, it's probably worth a quick review of the series. This group of posts is designed to help an analyst understand how to measure internal search using a web analytics tool. I don't think you can learn how to do web analytics in the abstract – the methods and techniques are specific to particular site problems. So in helping someone understand how to do web analytics for real, it's useless to start with things like Most Requested Page reports or Referring Site reports. Instead, you have to start with a business problem (like optimizing Internal Search) and then walk through the steps (and the reports) to measure and optimize against that problem.

With Internal Search, I began with some high-level notes to help the analyst understand how Search is actually tracked in web measurement solutions, along with some very common problems around measuring Advanced Search and Additional Page Results. That post also covered a critical idea about internal search analytics – the analyst has to understand whether internal search is a preferred navigation option on the target site or a fallback for when direct navigation fails. This makes a big difference in every subsequent analytic step. The first post also covered how to get a feel, from site usage numbers, for which of these two possibilities really fits your current site. The second part of the series covered the analysis of Search sourcing – finding pages that source either too little or too much traffic to Search. The next post covered the functional analysis of Search – treating Search as a class of Router Page and outlining the special nature of Search next steps. In the last post, that analysis was extended to specific categories of search terms – and the steps necessary to evaluate the performance of a topic-specific subset of Search.

In this post, I'm going to tackle the Failed Search Terms report. This report is available in most WA solutions (often requiring a bit of extra tagging) and shows you which Search Terms didn't return any results. You should be aware that many Search Tools will also provide this report. Of course, you can't find out what the effect of a failed Search was from the Search Tool – only your analytics tool can provide that information. But for most sites, the really important piece of information is just the fact of failure.

Intuitively, a failed search is always a bad thing for a site. There aren't many site experiences as frustrating as a failed search. Nor is the problem usually (or even often) the quality of your Internal Search Engine. Even a great Search tool can't manufacture content for you, and the most common cause of failed searches is an absence of any relevant content. Looking at failed searches is a great way to identify these content holes. Any term that shows up frequently in this report indicates something visitors expected your site to cover – and were unable to find. You can also spot common typos. That's important because typos are probably the second most common cause of failed searches. What's the third most common cause? Usually content that isn't indexable by your engine. Some forms of content (such as Flash) aren't really handled by any engine. Others, such as PDF or PowerPoint, are handled by most but not all Search tools.
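If your tool just hands you the failed terms as a flat export, a simple tally will surface the content holes, and a fuzzy match against terms your site does cover will help separate probable typos from genuine gaps. Here's a minimal Python sketch; the export format and the list of known site terms are assumptions for illustration, not features of any particular tool.

```python
import csv
from collections import Counter
from difflib import get_close_matches

# Terms the site actually covers -- in practice you'd pull these from page
# titles, a product catalog, or the search index (an assumption in this sketch).
KNOWN_TERMS = {"pricing", "download", "support", "documentation"}

def load_failed_terms(path):
    # Hypothetical export: one failed search per row, term in the first column.
    with open(path, newline="") as f:
        return [row[0].strip().lower() for row in csv.reader(f) if row]

def summarize(failed_terms, top_n=25):
    counts = Counter(failed_terms)
    for term, count in counts.most_common(top_n):
        # A close match to a known term suggests a typo rather than a content hole.
        guess = get_close_matches(term, KNOWN_TERMS, n=1, cutoff=0.8)
        label = f"possible typo of '{guess[0]}'" if guess else "possible content hole"
        print(f"{count:6d}  {term:<30} {label}")

if __name__ == "__main__":
    summarize(load_failed_terms("failed_searches.csv"))
```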
Of course, the absence of content on a site can be intentional. We worked with a software company that decided to stop providing free trials of its software and removed all information about the free trial from its site. The number one failed Search Term quickly became "free trial." But even though the company pulled the free trial for pretty good reasons, it's doubtful that serving an empty Search Results page was the best user experience. It makes a lot more sense to create content that explicitly deals with the issue – lets the visitor know that the search for a free trial is fruitless – and sells the program's proven qualities or the company's reliable brand.

For the most part, don't expect to find smoking guns with this report. More often than not, you'll see an odd collection of terms that you don't care about. But when you do find real problems, the fix is both straightforward and valuable.

I mentioned earlier that one thing your web analytics tool can provide that the Search Tool won't is the impact of a failed search. If you're trying to make the case for a new Search Tool, or for fixes and tuning of your existing setup, then documenting the impact of failed search is important. To do this, you need to isolate Search Results where the result was a failure. For some sites, this is easy: they code the Search Results page differently when it's a failure. That's a good method and makes this analysis a snap – just compare Next Pages and Exits when the Search Results page is a failure to when it isn't. For many sites, however, the Failed Search Results page is folded into the universe of all Search Results. So you need to find a way to segment on searches that were failures. Many tools don't make this easy. If a tool has an explicit "Failed Search" event you can reference in segmentation or analysis, then you're fine. But if not, you may have to hack the problem. Here are two methods.

Method one is simple: find a few of the top failed searches and build a segment based on including those Search Terms. Then compare segment Search performance to overall Search performance (in terms of routing efficiency). This analysis is a little muddy, since you will also have successful searches in your segment, but it usually gets the job done.

Method two is sneakier. Create a visitor segment where Search Results is the Exit page – then check the rate of Failed Keywords to Searched Keywords for that segment. This comparison will give you a pretty good idea of how often Failed Searches lead to Exits.
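If your tool doesn't expose a "Failed Search" event, a rough approximation of this second method can be run off a raw data feed. Here's a minimal sketch, assuming you can export session-level records with the keywords searched, which of them failed, and the exit page; the field names and structure are illustrative, not any particular vendor's feed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Session:
    searched_keywords: List[str]   # every keyword searched in the session
    failed_keywords: List[str]     # the subset that returned zero results
    exit_page: str                 # last page viewed in the session

def failed_rate(sessions):
    searched = sum(len(s.searched_keywords) for s in sessions)
    failed = sum(len(s.failed_keywords) for s in sessions)
    return failed / searched if searched else 0.0

def compare_exit_segment(sessions, results_page="Search Results"):
    # Segment: sessions that searched and then exited from the results page.
    exited_on_results = [s for s in sessions
                         if s.searched_keywords and s.exit_page == results_page]
    all_searchers = [s for s in sessions if s.searched_keywords]
    print(f"Failed/Searched keywords, exit-on-results segment: {failed_rate(exited_on_results):.1%}")
    print(f"Failed/Searched keywords, all searching sessions:  {failed_rate(all_searchers):.1%}")

if __name__ == "__main__":
    demo = [
        Session(["free trial"], ["free trial"], "Search Results"),
        Session(["pricing"], [], "Pricing"),
        Session(["widgets", "blue widgets"], ["blue widgets"], "Search Results"),
    ]
    compare_exit_segment(demo)
```

A much higher failed-keyword rate in the exit segment than in the overall searching population is the kind of evidence that helps justify new tooling or tuning.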
There's an additional analysis we do that's related to, but not quite the same as, failed searches. It's based on the principle that the number of searches on any given term should be roughly proportional to the obscurity of the term (this only applies where Search is a fallback navigational option). In other words, most of your visitors' searches ought to be for products, features, or concepts too obscure or niche to ever merit prominent navigational support. You don't want lots of visitors searching for the same thing. If they are, then a clear navigational link would streamline their experience, enhance user satisfaction, and drive visitors to where you want them to go.

One way of measuring the "Obscurity Curve" of your keywords is to use a keyword report to plot search volume against searches per keyword – how many searches fall on keywords with 1-99 searches a month, 100-199 searches a month, 200-299, 300-399, and so on. Here's what one such plot looked like:

The Y-axis represents the sum of all the searches on all keywords that fall into each container (each container is a range of searches per month); the X-axis represents the raw number of searches per keyword. The clustering on the left part of the graph indicates that the more obscure terms account for the greater share of search volume – this is good. It means there are lots of terms with very little volume (Search Count) per term. The spike on the far right of the chart indicates that a keyword, or a few keywords, takes up a disproportionately large number of searches. In this real-world example, that spike came from a single keyword. This site should probably add a clear navigational link for that particular keyword to the global navigation.
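If your keyword report exports to a flat file, building this kind of obscurity curve takes only a few lines of work. Here's a minimal sketch, assuming a hypothetical CSV export with one row per keyword and its monthly search count; the bin width is arbitrary.

```python
import csv
from collections import defaultdict

BIN_SIZE = 100  # width of each "searches per month" container

def obscurity_curve(path):
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue  # skip blank or malformed rows
            count = int(row[1])
            bin_start = (count // BIN_SIZE) * BIN_SIZE
            # Y-axis value: total searches contributed by keywords in this container
            totals[bin_start] += count
    for bin_start in sorted(totals):
        print(f"{bin_start:>5}-{bin_start + BIN_SIZE - 1:<5} {totals[bin_start]:>10,}")

if __name__ == "__main__":
    obscurity_curve("keyword_report.csv")  # hypothetical export: keyword,monthly_searches
```

A healthy curve piles most of the total volume into the leftmost containers; a heavy bin on the far right is the signal that a popular term deserves a navigational link instead of a search.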
In the next post I’m going to tackle using Search to find some UI cues – both for possible navigational problems and for optimizing page templates.