I had a great time doing the Social Media ROI webinar with Scott (except now he keeps calling me Elwood)! We’re talking about doing more webinars together – and I hope that works out. A lot of great questions came through, and while I’m sending out these responses personally, I thought almost all of them were of wide interest. So I’ve extracted the questions I’m answering here (Scott is tackling some, and I’ll post his answers as soon as I have them).
If you missed the webinar and would like the PowerPoint, drop me a line. We’ll have the full webinar on the web site (http://www.semphonic.com) sometime this coming week.
Q: What if you don't have a way to find out who is visiting your site...what if you can't track them?
Gary: This is a huge problem for tracking success – certainly on all .gov properties. When you can’t track individuals, you have no way to measure the “halo” effect of visits. In such situations, you’ll still be able to measure the direct (visit-sourcing) impact of social efforts. To understand the halo effect, I’d suggest incorporating specific questions into your survey vehicle to come to a research-based understanding of how often visitors to your site have social history, and of the differences between first-time visitors with social history and subsequent visitors. Note that this technique (online opinion research) is equally applicable to .gov and .com type sites.
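To make that concrete, here’s a rough sketch of the comparison I have in mind, assuming you’ve exported survey responses to a CSV – the field names (first_time_visit, has_social_history, completed_task) are hypothetical stand-ins for whatever your survey vehicle actually captures:

```python
# Sketch: compare survey-reported task completion for visitors with
# and without prior social exposure. Field names are hypothetical --
# adapt them to whatever your survey export actually contains.
import csv
from collections import defaultdict

def completion_by_segment(path):
    totals = defaultdict(lambda: {"n": 0, "completed": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["first_time_visit"], row["has_social_history"])
            totals[key]["n"] += 1
            if row["completed_task"] == "yes":
                totals[key]["completed"] += 1
    for key, t in sorted(totals.items()):
        rate = t["completed"] / t["n"]
        print(f"first_time={key[0]} social={key[1]} "
              f"n={t['n']} completion={rate:.1%}")
```

A meaningful gap in completion rates between the social-history and no-social-history segments is at least directional evidence of a halo effect, even with no individual tracking in place.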
Q: What are good strategies to measure sentiment; are there good automated international natural language analysis tools, or do you recommend people classifying posts/tweets?
Gary: I think Scott and I both agree that automated sentiment measurement is not very good right now – especially when it comes to international content. This question came from an EU-based company, and they may have a better perspective on this than we do, but seeing how well (I mean poorly) the English-language tools work, I’d be really surprised if they were better elsewhere. There are tools for doing it, of course, but their miss rate is so high that I think their use is risky. If you have to check their classifications, they don’t save you much time; and if you don’t check, I doubt you’ll feel sufficient confidence to use the results. So I do recommend this as a manual – probably occasional – task. I think it’s better suited to periodic research programs than ongoing reporting.
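If you do treat this as a periodic manual task, simple random sampling keeps the coding workload bounded while still giving you a defensible estimate. A minimal sketch (the labels and sample size are just illustrative; the margin of error uses the standard normal approximation for a proportion):

```python
# Sketch: draw a reproducible random sample of posts for manual
# sentiment coding, then estimate the positive share with a 95%
# margin of error (normal approximation for a proportion).
import math
import random

def sample_for_coding(posts, k=200, seed=42):
    rng = random.Random(seed)            # reproducible sample
    return rng.sample(posts, min(k, len(posts)))

def positive_share(labels):
    n = len(labels)
    p = labels.count("positive") / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, moe

# Example: if 62 of 200 hand-coded posts are positive,
# positive_share(...) -> (0.31, ~0.064), i.e. 31% +/- 6.4 points.
```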
Q: How do you use the community and not worry about biases in the social net – i.e., hand-raisers? It might not be your core audience that is commenting or making recommendations.
Gary: This is right, of course – your core audience may not be the community leaders or hand-raisers. For Product Development issues, that’s why I think it’s wise to supplement this research with more direct research. By including voting and flagging mechanisms for features, however, you can broaden the reach of this considerably. Many community members will vote or flag a feature who wouldn’t have suggested it. In general, community measurement just isn’t a perfect research vehicle for anything. A well-run community may engender strong positives even while the broader company is destroying the brand (and vice versa, of course). But it is a great resource for identifying potential issues, problems and opportunities for more research. And, of course, if you consistently find that the community research closely matches and is supported by your subsequent efforts, you might find yourself simply relying on the community research.
Q: How do you recommend monitoring the buzz? What are the most important things to pay attention to when you're listening?
Gary: Scott will probably have thoughts about this as well. But my thinking is that there are essentially two different goals to buzz monitoring. The first, and the one I see most commonly, is brand protection.
Many of our clients monitor social comments simply to be able to respond and support or protect the brand. This isn’t really a measurement or research function – it’s a branding/customer-relationship effort. It may well have measurable effects (communities will notice if you are doing this well), but it isn’t really buzz measurement. Which brings us to the second function – tracking comments to understand whether your marketing efforts – viral or otherwise – are having an impact. I’ll say first that I think companies under-utilize the potential of careful site baselining to track this on their branded sites. You can often get better tracking of the impact of TV or radio campaigns using your own sites than using social monitoring. Given the tools we have, if you are going to do social monitoring, I think it’s best to keep it simple. Mentions, broken out by brand and concept, are probably the most important and practical thing to track.
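For the mention tracking itself, even a very simple tally gets you most of the practical value. Here’s a sketch, assuming you’ve already collected post text from your monitoring tool – the brand and concept keyword lists are placeholders you’d replace with your own:

```python
# Sketch: tally brand/concept mention pairs from collected post text.
# The keyword lists are placeholders for your own brands and campaign
# concepts; real matching would want stemming, synonyms, etc.
from collections import Counter

BRANDS = ["acme", "rivalco"]                  # hypothetical brands
CONCEPTS = ["pricing", "support", "new ad"]   # hypothetical concepts

def count_mentions(posts):
    counts = Counter()
    for post in posts:
        text = post.lower()
        for brand in (b for b in BRANDS if b in text):
            for concept in (c for c in CONCEPTS if c in text):
                counts[(brand, concept)] += 1
    return counts

# Example:
# count_mentions(["Acme support was great!", "RivalCo pricing is up"])
# -> Counter({("acme", "support"): 1, ("rivalco", "pricing"): 1})
```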
Q: How do you measure ROI from hard conversions from social media? For example, leads generated from blog, Twitter, or Facebook visits?
Gary: Typically, this needs to be done with a web analytics solution like Omniture, WebTrends, Google Analytics or Unica. When you set up these systems properly, you can track visits sourced by specific URLs. If you release URLs into the public (including Tiny URLs), you can improve this tracking by adding a specific campaign code to the URL that the measurement solution uses to track success. Most of these tools also let you track visits over time using their cookie. This allows you to measure the down-stream impact of visits to social or community spaces even when there is no direct tie to the visit. As I highlighted in the presentation, use of a Global Report Suite is one of the key techniques in web analytics solutions for accomplishing this.
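For Google Analytics, for example, the campaign code takes the form of utm_* query parameters appended to the released URL (Omniture and WebTrends use their own parameter conventions). A quick sketch of tagging a URL before you release or shorten it:

```python
# Sketch: append campaign-tracking parameters to a URL before
# releasing (or shortening) it. The utm_* names are Google
# Analytics's convention; other tools use their own parameters.
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_url(url, source, medium, campaign):
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,      # e.g. "twitter"
        "utm_medium": medium,      # e.g. "social"
        "utm_campaign": campaign,  # e.g. "spring_launch"
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Example:
# tag_url("http://www.example.com/page", "twitter", "social", "launch")
# -> ".../page?utm_source=twitter&utm_medium=social&utm_campaign=launch"
```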
Q: Have you looked at potential financial-type metrics (ROI)?
Gary: We’ve done all sorts of different measurements to try and help validate the success of community (and non-community) marketing efforts. Effective measurement of ROI is invariably tricky. In the reporting example I showed, one of the components I liked best was that we actually collected site operating and marketing costs to calculate the net value of the site. But I’ll fess up and say that not only was this difficult to accomplish, it was also one of the least robust aspects of the final product. Another complicating factor in ROI is the fact that many efforts have long-term benefits (SEO is a classic example of this, but so are community and viral efforts). You have to measure the lifetime value of your efforts to be accurate – something that’s extraordinarily difficult in most situations. I find that most real-world work we do is a set of compromises that, at their best, bring in elements of a full ROI calculation.
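Just to show the shape of the arithmetic, here’s a minimal net-value/ROI sketch using lifetime value per acquired member in place of single-visit revenue – all the inputs are hypothetical:

```python
# Sketch: net value and ROI for a community effort, using lifetime
# value per acquired member rather than single-visit revenue.
def community_roi(members_acquired, ltv_per_member,
                  operating_cost, marketing_cost):
    value = members_acquired * ltv_per_member
    cost = operating_cost + marketing_cost
    net_value = value - cost
    roi = net_value / cost          # e.g. 0.25 means a 25% return
    return net_value, roi

# Example: 1,000 members at $40 LTV against $25k ops + $10k marketing
# -> net value $5,000, ROI ~14.3%
```

The hard parts, of course, are everything the function takes for granted: collecting real operating and marketing costs, and estimating a defensible LTV.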
Q: Do you think the 5% rule applies to news organization sites? I sort of think news orgs like newspapers could/should expect more. I don't want to set the bar too low for them.
Gary: I’ll be interested in seeing Scott’s thoughts. On the whole, I think the 5% rule isn’t unreasonable in a news setting. Part of the reason for that is that while news organizations have high-interest subjects to deal with, they also get a lot more “walk-bys” than sites like an Intuit Community. Obviously, rules-of-thumb are just that and very strong brands or sites that begin with highly-engaged niches (the more focused the more engaged) shouldn’t take this rule too much to heart. I think Scott means it more as a warning – don’t expect too much if you are thinking about starting a community because most of us are just natural lurkers.
Q: Can you talk a little more about how to do a control group for communities you are not targeting? How do you find another community that is similar?
Gary: This is a great question. The quality of my answer depends on what you’re hoping for. What I find is that in most special-interest communities, there are very good control groups available. For instance, if your community targets small business owners – say, Scott’s Intuit Small Business Community – you might use StartupNation or Bank of America’s Small Business Forum as a control. The vast majority of special-interest niches have more than one community serving them. Where the issue gets much trickier is when you start dealing with the huge mega-networks like Facebook and Twitter. LinkedIn has plausible alternatives (maybe), but it’s probably right to say that Twitter and Facebook really don’t. So if you are tackling these, there really isn’t going to be a good control. You can try to build a control inside these communities: Facebook is so huge and sprawling that following particular interest groups may give you a reasonable control mechanism. In such cases, that’s about the best I can suggest!
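Once you have a plausible control community, the comparison itself is straightforward: measure the same metric in both before and after your effort, and look at the excess change in the target – a difference-in-differences style calculation. A sketch with placeholder numbers:

```python
# Sketch: difference-in-differences style lift calculation comparing
# a targeted community against a control community.
def lift_vs_control(target_before, target_after,
                    control_before, control_after):
    target_change = (target_after - target_before) / target_before
    control_change = (control_after - control_before) / control_before
    return target_change - control_change  # excess growth vs control

# Example: target posts/week grew 120 -> 180 (+50%) while the control
# grew 100 -> 110 (+10%): lift_vs_control(120, 180, 100, 110) -> 0.40
```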
Q: Best suggestions for developing success metrics for informational (non-transactional) sites such as gov't sites?
Gary: This is a much bigger and trickier question than I can tackle in this context. But there are a few general principles I believe apply to nearly ALL non-transactional sites. First, success is audience specific. The most important step in developing a good success model is building a set of use-cases that help you understand AND identify the key audience segments on your site.
Next, your success metrics need to be use-case specific. In addition, success metrics will need to be tiered, and should be tiered consistently. What I mean by this is that you aren’t going to have a Boolean success measure (succeeded or failed). You’ll probably need several tiers of success. I often categorize these as “Attracted,” “Engaged,” “Impressed,” and “Converted/Satisfied.” But what goes into these levels is always site-specific. By tiered consistently, I mean that what you call engagement should occur for about the same percentage of visitors across every success metric. This makes it much easier and more practical to baseline success across different use-cases and different properties. Finally, I’d say that success often needs to take account of what content a visitor viewed, not just how much content was consumed.
For customer-support type applications, for example, you need to know whether a visitor consumed a real support page and then exited – or just churned around in navigational pages. One interesting measure I frequently look at on non-ecommerce sites is the ratio of “success” pages to “navigational” pages.
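Both ideas come down to simple page-level classification once your content is tagged. Here’s a sketch – the page-type tags and the tier thresholds are illustrative assumptions, since every site will define these differently:

```python
# Sketch: assign a visit to a success tier and compute the ratio of
# "success" pages to "navigational" pages. The page "type" tags and
# the thresholds below are illustrative -- every site sets its own.
def classify_visit(pages):
    success = sum(1 for p in pages if p["type"] == "success")
    nav = sum(1 for p in pages if p["type"] == "navigation")
    if success >= 3:
        return "Converted/Satisfied"
    if success == 2:
        return "Impressed"
    if success == 1:
        return "Engaged"
    return "Attracted" if nav else "Bounced"

def success_to_nav_ratio(pages):
    success = sum(1 for p in pages if p["type"] == "success")
    nav = sum(1 for p in pages if p["type"] == "navigation")
    return success / nav if nav else None

# Example visit: two support answers reached through three nav pages
# pages = [{"type": "navigation"}] * 3 + [{"type": "success"}] * 2
# classify_visit(pages) -> "Impressed"; success_to_nav_ratio -> ~0.67
```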
Hope this is helpful – and thanks to everyone who attended and for these great questions!