Getting from There to Here: Part I of a Series on Functionalism and Web Analytics
(You may want to visit http://www.semphonic.com/resources/whitepapers.asp and download the White Paper on Functionalism as a detailed technical companion piece to this series).
I’m going to start this series on Functional web analytics with a kind of intellectual history lesson. We’ve been doing web analytics consulting for about as long as anyone, and over the past (nearly ten now) years our practice has embraced, built, used and abandoned a whole range of tools, techniques and even comprehensive methods for doing web site analysis. Which is also to say that, at least until now, we never found anything we could hang our hats on and say, with some confidence, this is "the" – or even "a" – right way to do web measurement.
By tracing through this slightly revisionist history, I hope to show why we think Functionalism addresses specific failures in the other techniques we’ve tried and how it emerged from our real-world attempts to do useful web measurement.
When we first started doing web measurement, we came from a world (data-based credit card marketing) where customer segmentation was king. And our first thought was to apply this same customer-based philosophy to the web. Unfortunately, our initial customers knew absolutely nothing about their visitors and had no way of linking visitors to customer or prospect information. Because of this, all of the bellwether demographics that are so useful in card marketing were just out the window. With nothing to work with, this approach came to a short and untimely end – but it was far from the last time we thought about or tried customer segmentation methods!
So we decided that good web analytics needed to be behavioral – and we started looking for tools that would illuminate web behaviors. Needless to say, the tools at our disposal were virtually worthless. So we began a two-year project to build our own toolkit. And the centerpiece of that effort was a detailed pathing tool. It seemed to us, as it has to so many others, that if you could just see and understand the paths visitors take on your site then you’d be able to powerfully tune your marketing.
We ended up with a pretty darn good pathing tool – one that was well ahead of its time and would still stack up reasonably, if not favorably, against the tools in HBX or SiteCatalyst today. But what we found was that path analysis was rarely useful. I remember discovering that one of our clients had something like 16 million unique paths in one month! And when we tried to make sense of paths, we found that top paths were generally uninteresting and obscure paths were too numerous to consolidate and understand in any reasonable manner. Web sites simply allow too much open-ended navigation for visual path analysis to be useful.
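(For the technically inclined, here’s a minimal sketch of why the numbers explode: just count the distinct page sequences per session in a clickstream log. The field names are mine, not from any particular tool.)

    # Count distinct session paths in a raw clickstream.
    # Record layout (session_id, timestamp, page) is an assumption.
    from collections import Counter

    def count_unique_paths(clickstream):
        """clickstream: iterable of (session_id, timestamp, page) tuples."""
        sessions = {}
        for session_id, ts, page in sorted(clickstream, key=lambda r: (r[0], r[1])):
            sessions.setdefault(session_id, []).append(page)
        return Counter(tuple(pages) for pages in sessions.values())

Run this over a month of traffic on a big site and the Counter ends up with millions of keys, almost all with a count of one – which is exactly why "top paths" reports tell you so little.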
However, one of the things we learned doing path analysis was that a lot of behaviors were more interesting when you thought about them as occurring within a group of pages. Our path tool eventually encompassed pathing at the content level. But as we de-emphasized path analysis we began to think more and more about hierarchies – grouping related pages on a site together and then analyzing users’ movement from group to group. Even better, we began to see that groups of pages often provided interesting statistics in their own right.
So our internal tools began to emphasize content hierarchies – and we spent a good chunk of time in our consulting engagements grouping pages into different logical structures and then looking at basic KPIs like visitors, visits and page views at the content-group level. At this point, we re-examined our previous thinking about visitor segments and came up with an idea that I’m still convinced was really good. We borrowed a bunch of our old neural network models for segmenting card customers and modified them so that instead of taking demographic inputs they took behavioral cues – specifically, data about how often visitors viewed and visited site areas. From this, we built profiles of visitors that were far more interesting than our original attempts (enterprise software vendors take note – this is still a good idea!).
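(Here’s roughly what that roll-up looks like in code – a sketch only, with a hypothetical page-to-group mapping and made-up field names.)

    # Roll raw page views up to content groups and compute basic KPIs.
    CONTENT_GROUPS = {
        "/index.html": "Home",
        "/products/widget-a.html": "Product Pages",
        "/support/faq.html": "Support",
    }

    def group_kpis(page_views):
        """page_views: iterable of (visitor_id, visit_id, page) tuples."""
        stats = {}
        for visitor, visit, page in page_views:
            group = CONTENT_GROUPS.get(page, "Other")
            g = stats.setdefault(group, {"page_views": 0, "visits": set(), "visitors": set()})
            g["page_views"] += 1
            g["visits"].add(visit)
            g["visitors"].add(visitor)
        # De-duplicate visits and visitors per group.
        return {grp: {"page_views": g["page_views"],
                      "visits": len(g["visits"]),
                      "visitors": len(g["visitors"])}
                for grp, g in stats.items()}

The same counts, computed per visitor rather than per group, were exactly the sort of behavioral inputs we fed to the segmentation models.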
Alas, this approach foundered on three big rocks. First, though our statistics about content groups were interesting, they were rarely particularly actionable – and while our behavior segments looked really interesting, similar objections applied. Cues from visitor behavior are often extraordinarily difficult to find in web actions, even with powerful neural network approaches. Second, we tried to get sites to try dynamic content serving based on the segments, but this was asking a lot; the segments themselves didn’t often suggest any particular personalization strategy, and we often found ourselves using the segmentations to justify personalization decisions we’d arrived at subjectively. Finally, and most importantly, our segments didn’t really incorporate any outcome data – so we began seriously investigating tracking to conversion.
It was about this time (perhaps three years ago) that we scrapped our internal tools (mostly) and began using tools from Enterprise vendors. As powerful as some of our tools were, they weren’t going to compete on ease of use, speed, ease of implementation, flexibility or features with the Enterprise packages that had begun to emerge.
Over time, as we studied conversion data across a number of web sites, we began to realize that a vast amount of conversion was ultimately multi-session. In one classic case, a client asked us what visitors did right before buying a (very expensive) product. It turned out that by far the most common behavior was Land on Home Page and Click Buy. The reason? The average buyer had visited the site 11 times previously over nearly three months and consumed 150 pages on the site.
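(The look-back analysis itself is simple enough to sketch; the record layout here is an assumption for illustration.)

    # For each buyer, summarize history *before* the converting visit.
    def buyer_lookback(visits):
        """visits: dicts with visitor_id, start_time, pages_viewed, converted."""
        by_visitor = {}
        for v in visits:
            by_visitor.setdefault(v["visitor_id"], []).append(v)
        results = []
        for visitor, vs in by_visitor.items():
            vs.sort(key=lambda v: v["start_time"])
            for i, v in enumerate(vs):
                if v["converted"]:
                    prior = vs[:i]
                    results.append({"visitor": visitor,
                                    "prior_visits": len(prior),
                                    "prior_pages": sum(p["pages_viewed"] for p in prior)})
                    break
        return results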
In short, everything was interesting except the behavior in the buying session! So we began to focus on a sophisticated approach: tracking the correlation between content-group consumption and over-time conversion, broken out by visitor segment. This, in my opinion, is the general state of the art for skilled practitioners in our profession. And a year ago, it was pretty much our standard method.
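(In code, the core of that method might look something like this – a sketch assuming per-visitor rollups and segment labels that I’ve invented for illustration.)

    # Correlate content-group exposure with eventual conversion, by segment.
    from statistics import correlation  # Python 3.10+

    def exposure_conversion_corr(visitors, group, segment):
        """visitors: dicts like {"segment": ..., "group_views": {...}, "converted": 0/1}"""
        rows = [v for v in visitors if v["segment"] == segment]
        exposure = [v["group_views"].get(group, 0) for v in rows]
        outcome = [v["converted"] for v in rows]
        # Pearson on a 0/1 outcome is a point-biserial correlation.
        return correlation(exposure, outcome)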
So what’s wrong with this method? Several problems became increasingly apparent the more we refined it. First, there were many, many changes which, it turned out, couldn’t be measured this way. Over-time conversion on a large web site is necessarily affected by many exogenous factors, and it isn’t realistic to expect a wording change on a page to have a statistically measurable impact on final site outcomes. But we were often supposed to be telling site designers whether such changes were good or bad. And in cases like SEO optimizations, the consistent answer that the change wasn’t measurably negative began to seem like a foolish cop-out.
Second, we found that the method itself, while powerful for judging good vs. bad for a whole web site, provided precious little direction to designers and marketers about how to improve. The mere fact that thing x outperforms thing y doesn’t tell you why, or whether more of thing x would also be good (or necessary). This lack of buy-in and direction for site designers and marketers increasingly seemed like a fundamental problem with our approach. Too much web analytics was going to waste because its natural consumers didn’t know how to use or understand the results.
Third, the method was inappropriate for pages that weren’t directed toward conversion. On large web sites these make up a significant share of all pages – and a method that says nothing about their usefulness seems wrong. At first, our tendency was to believe that if pages didn’t drive conversion they should just be eliminated. But it’s clear that this view is quite unrealistic for any large company.
For all of these reasons, we began to feel that this whole method, while reasonable and powerful in some respects, was too lofty to be useful in many analytic situations. At the same time, we had begun to find that we could address each of these issues in specific ways by treating web pages (and content areas) as having a specific functional purpose. And we’d begun to build up a library of measurements (KPIs) specific to those purposes.
When we explained these to channel marketers and site designers, they got it immediately. We could see the lights go on. "Ah – I get it. This page is supposed to move people here and it isn’t doing the job and I can even understand how you proved it isn’t doing the job."
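(To make that concrete, here’s a hedged sketch of one function-specific KPI. Suppose a page’s job is to route visitors into a target section; its KPI is the share of its views whose next click lands there. The page roles and names are hypothetical.)

    # KPI for a "router" page: fraction of views followed by a click
    # into the target section.
    def router_effectiveness(page_views, router_page, target_prefix):
        """page_views: iterable of (visit_id, seq, page); seq orders clicks."""
        by_visit = {}
        for visit, seq, page in sorted(page_views):
            by_visit.setdefault(visit, []).append(page)
        views = moves = 0
        for pages in by_visit.values():
            for cur, nxt in zip(pages, pages[1:]):
                if cur == router_page:
                    views += 1
                    if nxt.startswith(target_prefix):
                        moves += 1
        return moves / views if views else 0.0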
And the more we codified page types and KPIs, the better the approach began to look to us – and the more different types of sites and pages we could fruitfully analyze. Hence, the creation of the Functional methodology.
That’s about it – the whole sad, wandering history of how Functionalism was born, like many babies, of countless bad decisions, mindless passions and time ill-spent! I’ve left out many blind alleys and streamlined history to make maximum sense of our experience, but I have the feeling that most web analysts have been down similar paths and made similar discoveries, even if they’ve only been at it for a year. With the tools available today, what took us years to build and discover can now be found out in weeks or months.
So here’s how I’m thinking this series will lay out:
- Part II will lay out Functionalism as a method;
- Part III will consider whether anything is really wrong with web analytics right now and consider some commonly heard complaints and suggested fixes that are alternatives to Functionalism (this one will be dangerous!);
- Parts IV and on will delve into the guts of Functionalist analysis and KPIs.
See you soon!