Reporting often seems like the simplest part of digital measurement. Building reports is the first job we give to new analysts here at (my fingers were half-way through typing Semphonic before my mind caught up) E&Y. But my experience is that it’s actually one of the hardest jobs to get right. So hard, in fact, that I’ve never been entirely satisfied with any of our efforts. That’s despite re-inventing our approach repeatedly over the last four to five years.
During that time, the state-of-the-art in enterprise dashboarding has swung from walls of data to reports so pretty and information-sparse that they feel more like exercises in abstraction than analysis. None of it has ever felt like the definitive approach I was looking for. So, not surprisingly, I’m at it again.
I started my eMetrics presentation with a brief introduction to what I find to be the biggest problem with nearly all the reports being built for enterprise digital measurement. I call this the problem of the “Current State”, and I like to illustrate it using weather reporting.
When we watch a typical weather report, we get three different types of information. First, we get information about what’s happening right now – as in the temperature map in the upper-left corner. Next, we hear the forecast: what weather is actually expected tomorrow or for the weekend. Finally, we may also get a model that tells us why we can expect a certain type of weather (a low-pressure system is in place). Because we have no control over the weather, by far the most valuable part of a weather report is the forecast. We don’t really need to be told the current temperature. We can just open a window. Seeing the model gives us a deeper kind of knowledge, but it’s not something we can usually act on.
The vast majority of digital reporting in today’s enterprise is about what’s happening now (or this past week or past month). It’s all about the current state. This isn’t completely useless. Unlike the weather, we can’t simply open the window on digital and see what’s happening. So reporting on the current state has a real function.
But if that’s all we do with reporting, how valuable is it? I think this is an especially trenchant question when we have non-Web-analytics means of knowing the current state. For an eCommerce site, we know sales, revenue and profitability without recourse to Web analytics. So if you’re primarily interested in taking the temperature, what else do you need?
Here’s another example from my eMetrics presentation borrowed from real-life.
We’ve all seen and built exactly this kind of report. But what good is it really? If you’re on the business end of this report, it’s telling you that you’re short of plan and the shortfall is growing. But isn’t that more of an alert than a report?
Wouldn’t something like this work just about as well?
In fact, it might work rather better, since you aren’t sending the same report out every week, and people are more likely to pay attention to an alert than to the umpteenth copy of a report that, heretofore, has never contained any useful information.
The simple truth is that the Be Worried Alert and the Video Consumption report are pretty much the same thing. They tell you whether you have a problem, but nothing more. Here’s one simple question to ask yourself about your reporting – if the news isn’t bad, does it have any function? If the answer is no, then perhaps you should be generating alerts not reports.
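The alert-only approach can be sketched in a few lines of code. This is purely illustrative – the metric, the plan figure, and the 5% tolerance are all invented for the example, not drawn from any real reporting system:

```python
# Illustrative sketch: emit an alert only when a metric misses plan,
# instead of mailing out the same report every week. All names,
# thresholds, and figures here are hypothetical.

def check_against_plan(actual, plan, tolerance=0.05):
    """Return an alert message if `actual` falls short of `plan` by more
    than the tolerance fraction; otherwise return None (no report at all)."""
    shortfall = (plan - actual) / plan
    if shortfall > tolerance:
        return f"ALERT: metric is {shortfall:.0%} below plan -- be worried"
    return None  # nothing to say: no news is good news

# A week within tolerance generates no output; a bad week generates an alert.
print(check_against_plan(actual=950, plan=1000))  # -> None
print(check_against_plan(actual=800, plan=1000))  # -> ALERT: metric is 20% below plan -- be worried
```

The point of the sketch is simply that silence carries information: if the check never fires, nobody has to read (and learn to ignore) yet another report.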
But my aim here isn’t to argue for the superiority of alerts over traditional reporting; it’s to point out that good reporting should do more than simply describe the current state.
For a decision-maker, the ideal would be to have a tool that described the current state, provided a real forecast of what to expect, and provided a means of testing different strategies to improve future outcomes. Such a tool would be useful no matter what the current state of the system was. It would provide a means to optimize and act no matter whether the current state was bad, good or about what we expected.
Even better, such a tool would create understanding. That’s something our current generation of reports simply doesn’t do. Temperature and precipitation may be the key metrics when it comes to weather, but neither gives us any basis for understanding why the current state is the way it is or how we might predict the future more accurately. KPIs are, almost by definition, NOT predictive and NOT explanatory.
To create an accurate forecast, you have to build your understanding of the underlying system. You can use a barometer without understanding why it works. But you wouldn’t create a barometer unless you understood that atmospheric pressure is an integral part of understanding weather systems.
In building a forecast, we’re forced to test our understanding of what matters in a system, and of how those factors are related, against the real-world history of that system. That’s a powerful discipline. If we then embody those models in our reporting (as weather forecasters often do), we’ve helped our consumers understand not only the current and likely future state but also what levers of change actually exist.
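As a toy illustration of that discipline – testing a model against the system’s history and investigating deviations – here is a minimal sketch that fits a straight-line trend to past weekly values, forecasts the next week, and flags a miss large enough to warrant analysis. Every figure is invented, and a real forecast would of course use something far richer than a linear fit:

```python
# Illustrative sketch, not a production forecasting method: fit a
# least-squares linear trend y = a + b*t to weekly history, forecast
# one week ahead, and flag large deviations. All data are hypothetical.

def linear_forecast(history):
    """Fit y = a + b*t by ordinary least squares and forecast the next period."""
    n = len(history)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(history) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, history)) / \
        sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a + b * n  # one step ahead

history = [100, 110, 121, 130, 142]   # hypothetical weekly video views
forecast = linear_forecast(history)   # trend says ~152 next week
actual = 128                          # what really happened
if abs(actual - forecast) / forecast > 0.10:
    print(f"Deviation from forecast ({forecast:.0f}) -- time to analyze why")
```

The interesting output isn’t the number itself; it’s the moment the actual diverges from the model, because that divergence is exactly what should trigger the ongoing analysis described above.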
Because unlike weather, in digital, we usually have significant control over many aspects of the systems we’re reporting on. Nevertheless, I can count on two hands the reports I’ve seen that actually help decision-makers understand the levers of change and how they inter-relate.
So the first half of my eMetrics presentation described why so much current reporting suffers from rapid fatigue, with users quickly tiring of and ignoring even beautiful, KPI-filled reports. These reports do nothing more than describe the current state and, all too often, tell us nothing we didn’t already know. It demonstrated how that focus on the current state leads us to reports that would be better cast as simple alerts and causes us to ignore the things that really matter to decision-makers. It introduced the idea of reporting as a tool that would embody current state, forecasting, and predictive modeling. And finally, it argued that this concept of reporting would drive a virtuous cycle of analytics improvement: deviations from forecast would necessarily drive ongoing analysis to understand why things didn’t work out as expected, and that analysis, in turn, would deepen decision-makers’ understanding of the levers of change and the options available to them to optimize their business.
Which brings me to the half-way house of my presentation and the end of this blog; here, we jump from the abstract presentation of the problem to the more concrete discussion of how to build the solution. There are many, many different ways to create predictive models that can be embedded in this type of reporting tool. Indeed, it would be a considerable mistake to think that one approach is always correct. However, one of the most interesting avenues we’ve been trying uses simulation techniques to create models of moderately complex digital situations (like video consumption, product launch, or digital marketing).
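To give a flavor of what a simulation approach can look like, here is a deliberately toy Monte Carlo sketch of weekly video consumption. The model structure (visitor count, per-visitor viewing probability, views per viewer) and every parameter are my own invented assumptions for illustration; the real simulations discussed in the presentation are considerably more involved:

```python
# Illustrative Monte Carlo sketch: a toy model of weekly video
# consumption driven by a visitor arrival count and a per-visitor
# viewing probability. All structure and parameters are hypothetical,
# chosen only to show how a simulation lets you test a "lever"
# (e.g., a campaign) before pulling it in the real world.

import random

def simulate_weekly_views(visitors, p_view, views_per_viewer, runs=2000, seed=42):
    """Average simulated weekly views across Monte Carlo runs."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    total = 0.0
    for _ in range(runs):
        viewers = sum(1 for _ in range(visitors) if rng.random() < p_view)
        total += viewers * views_per_viewer
    return total / runs

# Test a lever: what if a campaign lifts the viewing rate from 20% to 25%?
baseline = simulate_weekly_views(visitors=1000, p_view=0.20, views_per_viewer=3)
scenario = simulate_weekly_views(visitors=1000, p_view=0.25, views_per_viewer=3)
print(f"Simulated lift from the campaign lever: {scenario - baseline:.0f} views/week")
```

Even at this toy scale, the design choice matters: the simulation embodies an explicit model of the system, so when the real numbers diverge from it, you know which assumption to go investigate.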
In my next post, I’ll show examples from my presentation of simulations that we’ve explored and I’ll also try to lay out some tentative guidelines for when simulation is an appropriate (or even necessary) technique and when different kinds of modeling techniques might be better.
After that I’ll walk through how Social Media can play a significant role in tuning both simulation and non-simulation models in digital marketing and lay out some new techniques we’ve been using to help explore Social Media segmentation (along with the role of segmentation in modeling) to analyze demand signals. This was, after all, a presentation in the Social Media track…though I must admit that the connection felt, particularly in the early going, a bit tenuous.
So that’s how you get from Reporting to Forecasting to Simulation to Social Media in forty minutes or, in this case, a mere 3-4 blog postings.