We’ve all seen how a change in senior leadership nearly always results in a major re-org. Nobody ever seems to like someone else’s structure. I used to see this all the time in programming. Take any given computer program, no matter how successful, and give the code to another senior programmer to support and all you’ll hear is how poorly organized the code is and how it needs to be fundamentally re-written. I’ve never met a programmer who didn’t think everyone else’s code needed to be fundamentally re-written!
Our goal in building this process was to remove that sort of bias and force ourselves to do something we always tell our clients is essential – pre-commit to your measurement of success.
How can you pre-commit to a measurement of success when evaluating a digital measurement program?
We do it by creating a formal assessment structure with very clear guidelines for how a program should be rated or scored, and backing up those guidelines with industry opinion research. The opinion research helps establish the usefulness of the guidelines and enables us to place an enterprise within a scale of actual industry practice.
Here’s how the assessment process works.
Creating an Objective Assessment of a Digital Measurement Program
We start with five major dimensions for the assessment:
- Infrastructure: How robust, consistent and scalable is the current infrastructure for digital measurement?
- Technology Stack: Are all the right pieces in place to measure and understand the digital ecosystem?
- People and Process: Is the enterprise efficiently organized to transform, distribute and use data for decision-making?
- Data Democratization: Can people who need direct access to data and information get it in a timely and usable fashion?
- Analytics: Are the analytics techniques necessary to drive intelligent use of the data understood and acted on?
In the next step, we build out each of these dimensions. The Infrastructure dimension, for example, is expanded into five areas of interest:
But how do you assess something like “Consistency of Data”? It’s not uncommon for our first view of a client’s data to be within the context of an analysis problem. And it’s all too common for the first outcome of an analysis to be nothing more interesting than “the data is flawed and inconsistent.” There’s no better test for data quality than a deep attempt to actually use the data. Unfortunately, that’s generally not an option when we’re building a strategic plan. So we’ve developed proxies for data consistency and robustness that can be objectively assessed within a strategic planning effort.
Here, for example, are the sub-dimensions we use to assess consistency of digital data:
We’ve found that organizations without standards, or without processes for finding and alerting on tagging problems, nearly always suffer from data consistency issues. So in evaluating data consistency, we’ve picked these elements as scoring proxies.
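To make “proxy scoring” a little more concrete, here’s a minimal sketch in Python. The proxy names, weights and 0-5 scale are entirely hypothetical placeholders (our actual rubric is richer than this); the point is simply how the presence or absence of tagging standards and tag-monitoring processes could roll up into a data consistency score:

```python
# Hypothetical sketch only: the proxy names, weights and 0-5 scale below are
# illustrative stand-ins, not the actual scoring rubric used in an assessment.

DATA_CONSISTENCY_PROXIES = {
    "documented_tagging_standard": 0.40,  # is there a written tagging spec?
    "automated_tag_auditing":      0.35,  # are tags scanned/validated regularly?
    "tag_failure_alerting":        0.25,  # does anyone get alerted when tags break?
}

def score_data_consistency(findings: dict[str, int]) -> float:
    """Roll hypothetical per-proxy scores (0-5 each) into one weighted 0-5 score."""
    return sum(weight * findings.get(proxy, 0)
               for proxy, weight in DATA_CONSISTENCY_PROXIES.items())

# Example: strong standards, partial auditing, no alerting.
print(round(score_data_consistency({
    "documented_tagging_standard": 4,
    "automated_tag_auditing": 2,
    "tag_failure_alerting": 0,
}), 2))  # -> 2.3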
Formalizing the evaluative dimensions isn’t quite enough to create an objective assessment. You also need formal scoring guidelines for each dimension. We provide those too:
At this point, we have a complete framework for scoring an enterprise along every evaluative dimension in the assessment. We also have a way of comparing an enterprise to a broader industry or enterprise set using opinion research.
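As a rough illustration of what that framework looks like as a structure, here’s a minimal sketch. The five top-level dimensions come from the list above, but the sub-dimensions and scores shown are hypothetical examples rather than our actual guidelines, and a real rollup would typically weight sub-dimensions rather than simply average them:

```python
# Illustrative dimension -> sub-dimension score tree; sub-dimension names and
# 0-5 scores are hypothetical examples, not client data or the real rubric.

assessment = {
    "Infrastructure": {"Consistency of Data": 2.3, "Scalability": 3.0},
    "Technology Stack": {"Coverage of the Digital Ecosystem": 3.5},
    "People and Process": {"Organization for Decision-Making": 2.0},
    "Data Democratization": {"Timely Access to Data": 4.0},
    "Analytics": {"Use of Appropriate Techniques": 2.5},
}

def dimension_scores(tree: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average sub-dimension scores up to a single score per dimension."""
    return {dim: sum(subs.values()) / len(subs) for dim, subs in tree.items()}

for dimension, score in dimension_scores(assessment).items():
    print(f"{dimension}: {score:.2f}")
```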
This allows us, first, to score an enterprise in considerable detail for each factor in the framework to highlight particular problem areas (and success stories):
It also allows us to objectively compare a complete enterprise program to the broader industry. In the diagram below, the bar is our assessment of the target industry in terms of the lowest to highest scoring enterprises. The triangle places the client’s score along that dimension:
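The placement itself is simple arithmetic. Here’s a minimal sketch, assuming the opinion research yields a lowest and highest observed score for each dimension (the numbers in the example are made up), of how a client’s score can be positioned along that bar:

```python
# Sketch only: the industry range below is a made-up stand-in for what the
# opinion research would actually provide for a target industry.

def position_on_bar(client_score: float, industry_low: float, industry_high: float) -> float:
    """Return the client's placement as a 0-1 fraction of the industry range."""
    span = industry_high - industry_low
    if span == 0:
        return 0.5  # degenerate case: every enterprise scored the same
    return min(max((client_score - industry_low) / span, 0.0), 1.0)

# Example: data consistency scored 2.3 against an industry range of 1.0 to 4.5.
print(f"{position_on_bar(2.3, 1.0, 4.5):.0%} of the way from lowest to highest")  # -> 37%
```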
Advantages to Using a Formal Assessment Framework
There are a number of advantages to this approach to program assessment. First, as I’ve emphasized, it makes an assessment far more objective. It would be foolish to deny that there is still much that is subjective here. But it should also be obvious that we’ve dramatically narrowed the scope of that subjectivity. Real problems (or real successes) in a program will be hard to hide or gloss over and the transparency of the assessment process makes it much easier for stakeholders to decide for themselves whether something has been appropriately scored.
Equally important, in my opinion, is that the assessment forces a comprehensive view of the program. There’s a tendency to focus on narrow aspects of a program that are perceived as either problematic or especially important. That focus can hide real problems elsewhere that may end up crippling a digital measurement program.
In addition, I find the comparative elements of the assessment very compelling. Executives often bandy about words like world-class but are unwilling to commit the resources necessary to make world-class possible. Part of the point of a good strategy is to cut through empty words to understand what is truly necessary to achieve a given result. By comparing a program to the broader industry, you can lay the groundwork for what follows in a good strategy – a plan to direct resources and a direct tie between the resources committed and the outcomes desired.
It also happens to be faster and cheaper to start with a robust framework like this than to build an assessment without one. It seems almost paradoxical that it can be faster, cheaper AND better. But it’s essentially the difference between building a house with bricks versus building it out of sand. Having pre-constructed the necessary building blocks, everything else (including the final product) is both easier and better.
The Role of the Assessment in the Broader Strategy
Creating an objective assessment of a digital measurement program isn’t the same thing as creating a strategy. It’s just a first step. In my last post, I described a process for creating a digital measurement strategy that is truly strategic. By strategic, I mean a plan that focuses your effort on the key problems to solve and the main directions for solving them.
Within that plan will be many questions of execution, logistics and organization – the people, process and technology questions (and not just those, since there are other equally important aspects of tactical execution that must be illuminated) that are necessary to execute the strategy. These elements aren’t the strategy, of course; they proceed from it and are determined by it. They are, however, the “rubber-meets-the-road” part of a strategy, since they embody the changes you are asking of the organization in light of the strategy. The objective assessment described here is the “current state” from which you’ll be working. As such, it’s an essential ingredient in the strategy.
It's really this simple.
To get from “here to there”, you have to know where “there” is. That’s what the strategy is for.
But, guess what? You have to know where “here” is as well.
Knowing where you are now is the function of the objective assessment piece in a comprehensive digital measurement strategy and it’s truly a great place from which to start.
Webinarmaggedon Wrap-up
Here's a handy little list of our recent webinar "storm" with links to everything…including the webinar that covers Building a Digital Measurement Strategy!
- Advanced Analytics for Site Optimization & Testing Webinar with iJento & Dell - Past - Listen to Recording
Whitepaper(s) to follow...
- Effective Measurement of Multi-Channel Marketing with Anametrix - Past - Listen to Recording
Download the Accompanying Whitepaper
- Choosing a Big Data Technology Stack for Digital Measurement with IBM - Past - Listen to the Recording
Download the Accompanying Whitepaper
- How to Build a COMPREHENSIVE Digital Measurement Strategy with Cognizant - Past - Listen to the Recording
And while that webinar may have just passed when I created this list, you can still read the very recent Whitepaper on Digital Merchandising for Multi-Product (List and Aisle) Pages with Cloudmeter.