Reflections from the X Change Berlin
In my swing back through Europe, I spent a few days in Paris with the family. Naturally, we tackled the Louvre and, naturally, that means the Mona Lisa. You approach the Mona Lisa like a traveler on a small stream, joining ever larger flows until you are swirled, at last, into a great river of humanity. The crowd is forty deep before the great painting and you must edge your way through the crush until your very petite daughters have a front row view. There, before you, is the most famous painting in the world. Now see how long you can stand there before you yawn.
This makes the Mona Lisa the world's most famous example of a phenomenon I call reporting fatigue.
In my wrap-up on the X Change Berlin, I mentioned that while every Huddle is unique, themes often persist across Huddles and even across many years. At last year's X Change in San Diego, one of the main topics to surface around the distribution of analytics was "reporting fatigue." If you've ever built reports for an organization, I'd bet that you're familiar with the problem if not the term - it's the ever-diminishing interest that a report set generates in an organization over time.
When you first deliver a shiny new report set, carefully constructed from business requirements and embodying all the latest and greatest elements of UI design, there's a pretty good chance that everyone in the organization is going to be happy. Like a shiny, tinsel-wrapped package under the tree, everyone loves opening a new report set. But with each iteration of the report set, the interest fades. With daily report sets, fatigue often sets in after a week or two. With weekly report sets, you might get traction for a month or more. With monthly reports, you can expect at least a solid three-month honeymoon. But sooner or later, the reports go unopened or are but cursorily browsed. They are, after all, the same old stuff.
Is report fatigue inevitable?
At the last U.S. X Change, Bob Page discussed a startlingly advanced strategy for combating report fatigue: eliminate all regular reporting and replace it with an on-demand cube-based analysis system. We've had other clients adopt similar strategies, and some who have simply decided to replace those static report sets with small, business-priority analytics projects that produce a short stream of reports meant to span only a few iterations.
In my Berlin Huddle, one of the participants puts a six-month expiry on every report - enforcing a short life-cycle and constant report updates. That's not a bad idea at all.
Some organizations have tried to combat report fatigue by pruning reports down to a tiny set of key performance indicators (KPIs) that are so simple and so important that they can be consumed at a glance and are unlikely to ever change. This approach, however, is deeply misguided on just about every level. As I've written many times, these high-level site-wide KPIs are neither meaningful nor interpretable. They eliminate everything you need to know to actually make decisions.
Of course, such reports can be interpreted as a very basic alerting system. All they are good for is triggering alarms (which may or may not be justified).
Which brings me to one of the more interesting parts of the discussion - the use of alerts instead of reports. If you've narrowed down your reports to the point where all they are good for is triggering alarms, then why send them even when no alarm would be triggered? It makes no sense, and it actually makes it much less likely that anyone will notice a problem. When you receive a report that contains no useful information except a potential alert, and you receive it regardless of whether or not there's a problem, you are far less likely to actually look at it.
Most of us in the Huddle agreed that alerting strategies were significantly underutilized and, for some of the most common tasks, were an excellent way to combat reporting fatigue.
Of course, building good alerting systems isn't trivial either. But one of the things I really like about alerting strategies, as opposed to most enterprise KPI reporting, is that the work that goes into them is data-focused, not format-focused. It's usually trivial to format an alert. But deciding when to generate an alert is generally a pretty interesting analytics exercise - an exercise that's actually quite akin to predictive modeling. To build a good alerting system, you need to understand the natural level of variation in the system, the key factors (such as day-parting, seasonality, and campaigns) that drive additional variation, and the places in the system that are most prone to breakage. In other words, you need a pretty good model of the system to generate intelligent alerts. Putting intelligence into the system is particularly important because alerts have an even shorter fatigue cycle than reports. Cry "wolf" even a few times without substantial reason and your alerts can easily start to stack up, unread, in the inbox. That intelligence isn't wasted, however. By creating a model of the system, you're in far better shape to answer questions about the actual performance of the system and to steer discussion around performance changes in a healthy and useful direction.
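To make the idea concrete, here's a minimal sketch of the kind of check such a system might run. It assumes you compare today's value against the band of normal variation implied by history for the same weekday (controlling for weekly seasonality by construction); the function names are illustrative, not from any particular analytics tool.

```python
import statistics

def expected_range(history, z=3.0):
    """Band of normal variation implied by past observations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return mean - z * stdev, mean + z * stdev

def should_alert(history, today):
    """Trigger only when today's value falls outside normal variation."""
    low, high = expected_range(history)
    return today < low or today > high

# Usage: visits for the same weekday over recent weeks, so weekly
# seasonality is already controlled for.
monday_visits = [10200, 9800, 10500, 9900, 10100, 10300]
print(should_alert(monday_visits, 10250))  # within the band: False
print(should_alert(monday_visits, 4200))   # far below the band: True
```

A real system would, of course, model campaigns and longer seasonal cycles too - but even this crude version only speaks up when something genuinely moves, which is exactly what keeps alert fatigue at bay.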
An intelligent alerts-based system built from a good model of the Website is superior to a KPI-based dashboard in pretty much every respect.
Another method we've used at Semphonic to fight report fatigue is to build those alerting models directly into reports - so that the reports surface or focus only on the things that have changed. The theory is simple. After the first few deliveries of a report, most key variables simply don't move very much. No matter how good a job you've done highlighting the right data, if it rarely moves, it's not very interesting. This also makes annotation a challenge. Believe me, it's frustrating to annotate a report that's pretty much the same as last month's!
The fundamental challenge with enterprise reporting is simply stated. The more reports and information you provide, the more overloaded report consumers feel and the less likely they are to find the key pieces of data. But the more you refine the report down to a small subset of easily consumable data, the more likely you are to eliminate both the most interesting aspects of the data as well as the actual drivers of change.
By building a model into the reports, you create a new balance. You can load lots of data into the report to find the interesting pieces, but you only expose small bits of it - the stuff that has changed in a significant fashion. The goal is to provide the best of both worlds - crisp, compact presentation and rich access to the underlying data.
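A hedged sketch of how that balance might work in code: load the full metric history, but flag for the report body only those metrics whose latest value moved significantly against their own history. The function and metric names here are hypothetical, for illustration only.

```python
import statistics

def significant_changes(metric_history, z=2.0):
    """Return {metric: (historical_mean, latest)} for metrics whose
    latest value sits more than z standard deviations from their
    historical mean."""
    flagged = {}
    for name, values in metric_history.items():
        history, latest = values[:-1], values[-1]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(latest - mean) > z * stdev:
            flagged[name] = (round(mean, 1), latest)
    return flagged

# The full data set stays available underneath; the report body
# exposes only the movers.
metrics = {
    "visits":      [10200, 9800, 10100, 10050, 9950, 10020],
    "conversion":  [2.1, 2.2, 2.0, 2.1, 2.2, 3.4],
    "bounce_rate": [41.0, 42.5, 40.8, 41.7, 42.0, 41.5],
}
print(significant_changes(metrics))  # only "conversion" is flagged
```

The design choice is the point: the filter lives in the report, so consumers get a compact view while the rich underlying data remains one click away.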
We've been doing this type of reporting for years, but what I've written thus far raises the larger question - what kind of model can you build into a Web analytics report? That's really the heart of the matter. For a long time, we built model-reports that focused on either traffic or campaigns. Recently, however, we've revamped that strategy to create model-driven reports that are segmentation-focused. The goal is to create models that not only address a broader range of site issues than traffic but that also provide a rich foundation for broader enterprise reporting.
When I've finished this short series on X Change Berlin, I mean to circle back to this topic in some depth and show how it works. It's rapidly become our standard approach to delivering powerful enterprise reporting and I'm pretty happy with it.
Not all the interesting strategies for avoiding report fatigue reside in the reports. Another intriguing part of the discussion focused on the socialization of reports. I've seen first-hand that some of our most successful clients socialize their reports in person. Instead of sending out reports, the analytics team meets with report consumers on a monthly basis to walk through the data. I think that's terrific. Not only does it deliver better understanding of the data to report consumers, it creates a true two-way communication of business issues and concerns. And if an analyst has nothing to talk about, it forces people to update the report set to keep it interesting. It's an extremely healthy process for everyone involved.
But in Berlin I also heard a really novel approach that doesn't have quite the benefits in terms of back-and-forth communication but offers more scalability and an interesting way to tackle report fatigue. One of the analysts in the Huddle creates short webinar videos for each reporting cycle (usually 2-3 minutes) that cover the highlights of the data. The video accompanies the reports.
That's a great way to do annotation - almost like a water-cooler conversation to pass on what's really important. It's quick, easily absorbed, probably more interesting and attention-capturing than ANY on-report annotation or presentation strategy, and it's fairly efficient to do. I've never seen this technique before, but I think it's a wonderful idea.
Reporting fatigue is one of those issues that people never seem to worry about until they've been down the aisle a few times. The promise of fancier tools, better KPIs, nicer infographics, and the like always seems so alluring. But the reality is that none of these approaches will solve, or even come close to solving, the recurrent problem of reporting fatigue. In fact, they often make the problem progressively worse.
Reporting fatigue is so fundamental to the exercise that it may, in one sense, be impossible to banish. You can only read even the greatest of books so many times. You can only stand so long before even a very great painting. But there are a variety of strategies for taming the beast or keeping it longer at bay. From shorter report life-cycles to alerting systems to model-based reporting to superior socialization and annotation techniques, there are ways to make reporting better and more interesting. Ignore them at your peril. Even for da Vinci, yawns are never far away.