The Case for Transparency in SE Rankings
We all know how big a business Search Engine Optimization (SEO) has become. But it still feels to me like a house of cards built on technological vagaries that could (and maybe should) disappear almost overnight.
Even as money spent on SEO has skyrocketed, solid understanding of how and when it works (and what the rules are) has lagged. Search Engines are terrifically cagey about how to optimize a web site for their engine – meaning that experts must try to reverse-engineer the rules with limited testing – and companies must take SEO advice pretty much on faith.
That’s a dilemma that, as an Analytics firm, we’re frequently called upon to referee – and most of the time it isn’t easy to make the best call.
Just how shadowy and difficult these trade-offs can be was made especially clear in February when Google de-listed BMW Germany for unacceptable SEO practices – meaning that for several days, searchers in Germany couldn’t reach the BMW Germany site. True, there were aspects of this affair that felt more like a PR stunt than anything else. BMW Germany was only de-listed for a few days – and, frankly, if I were an aggressive large company looking at the punishment, I’d be sorely tempted to read exactly the “wrong” lesson from it; namely, that the penalty for SEO “cheating” is so rare and so small that you might as well go ahead and do it.
But if you work with large companies (as we mostly do), you have to be a bit surprised. Companies the size of BMW are usually so risk averse with their brand that the prospect of being publicly labeled a cheater – and just their general conservatism – make them very reluctant to engage in questionable practices of any kind.
However, SEO has become very important. Search Engines are a major source of new prospects – replacing much of the traffic that used to come from linkage networks. And the SEO for most companies is not done by their own marketing folks but by outside experts – specialists whose sole expertise is advising sites on the best ways to improve their position in the Search Engine listings. Those experts aren’t getting paid to be conservative, and the fact that so few companies ever get punished by the Search Engines must make it awfully tempting to “cheat.” I put “cheat” in quotes because there really isn’t any ethical dimension here – I don’t see why a company is bound to obey SE rules for any reason except self-interest.
But why do large companies even need outside expertise to manage their ability to achieve good listing positions on the various Search Engines? The reason is simple – the Search Engines do their absolute best to keep secret the algorithms which determine where a company’s web page will appear in response to a user entering any specific keyword. As the SE’s have grown in sophistication, they’ve created a complex blend of factors – from the actual words on the page to the popularity of the site – to try to match user requests to relevant, high-quality pages.
Not only are the formulas for rankings complex, but they are constantly changing. So a company’s internet marketing director might wake one morning to find that, for reasons unknown, their search engine traffic has suddenly dropped by 10% or 20%. Naturally, this isn’t a comfortable position to be in. Nor do companies – large or small – relish embracing the weird science that is SEO. Like a complex tax code, these complex rating systems spawn a class of “experts” who benefit hugely from the system; a class of cheaters who “game” the system to everybody’s detriment (including the SE’s); a class of large companies who – feeling initially disadvantaged – hire that self-same army of experts until they get the premium positions they feel they deserve; and the class of everyone else who simply get…well…screwed.
Are complex and secret ranking formulas really necessary? No doubt, these formulas were originally the “secret sauce” which allowed a Search Engine to publish rankings that were more relevant or of higher quality than others. But the war for improved relevancy rankings has about run its course. As large companies spend more and more on hired guns to help them “optimize” their placement – and as the SE’s have gotten better at moving high-popularity sites to the top of their list – the organic listings returned by various SE’s are increasingly similar and ever more attuned to the amount of money a player can afford to spend.
This isn’t all bad. The quality of listings is often improved by a simple rank ordering of how much a company is willing to pay – one of the reasons why PPC placements are often more interesting than organic listings.
Let’s go back to our tax-code example. Every one of us knows how frustrating it is to do taxes. The code is too complex and arcane for any but the most dedicated expert to really understand. The rules change frequently. We all know that there is a large segment of cheaters and evaders taking advantage of every loophole to pay less than their fair share. We all know that only the wealthiest clients can hire the kind of expertise that not only relieves the frustration but takes full advantage of the system. Sound familiar? Now imagine that the government didn’t bother to publish the tax code. Instead, they had a team of experts create an arcane formula for determining how much you owe – and at the end of each year they just sent you the bill. Does anyone imagine this would be fairer? Or better? Or more likely to reduce cheating? Most importantly – does anyone think it would produce a more productive economy?
Because optimization rules frequently conflict with the UI recommendations of design firms, companies have a strong incentive to present one site to their visitors and another to the Search Engines – essentially the practice for which BMW was cited. And by hiding rankings behind a veil of mystery, the Search Engines encourage companies and experts to game the system. True, Google and others publish a list of “unacceptable” practices (which BMW violated) – but really – when you turn something into a game you have to expect that everyone is going to do their best to win.
And is transparency really impossible? What if the SE’s just fessed up and published a set of guidelines about how they ranked a site? Just as there is no tax code that doesn’t encourage some types of behavior over others, there is no set of SE ranking rules that doesn’t encourage some types of sites over others. But a good tax code works because it minimizes friction and bad behaviors and encourages socially productive behaviors. Is it impossible to imagine that SE’s could construct rules of ranking that can withstand transparency?
How might a transparent Search Engine work? Let’s start with the way current engines actually work. When you type in your search request, they match the words to billions of documents on the web – seeing which ones contain the search terms. The set of matching documents is the initial search set. But let’s suppose you typed in “DVD Player” – out of the millions of sites that sell, talk about or mention DVD players, the SE has to decide on the 10 you are most likely to want to see.
In the early days of rankings, they might have given you the page that had DVD Player on it the most – or perhaps had DVD Player as the highest percentage of words. Alas, this led clever SEO practitioners to build pages containing DVD Player a thousand times or pages with only 2 words – DVD Player. Naturally, this didn’t produce optimal results. So the SE’s began weighting listings by factors like site popularity, linkages, and click-through. All of these factors are much harder to trivially game – and they all have the very conservative effect of favoring large and influential sites. This works well – if not fairly – because users are generally happier with large and influential sites.
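To see just how trivially a pure word-count or density scheme falls apart, here is a toy sketch in Python. It is purely illustrative – no engine ever published this exact formula – but it shows why a page that is nothing but “DVD Player” repeated wins under density scoring:

```python
# A toy illustration (not any engine's real algorithm) of why naive
# keyword-density ranking is trivially gameable.

def density_score(page_words, query_terms):
    """Fraction of a page's words that match the query terms."""
    if not page_words:
        return 0.0
    matches = sum(1 for w in page_words if w.lower() in query_terms)
    return matches / len(page_words)

query = {"dvd", "player"}

honest_review = ("this dvd player offers excellent picture quality "
                 "and supports every common disc format we tested").split()
spam_page = ("dvd player " * 50).split()  # nothing but the keywords

# The spam page scores a perfect 1.0 and outranks the useful review.
assert density_score(spam_page, query) > density_score(honest_review, query)
```

Under this scoring, the stuffed page earns a perfect score while the genuinely useful review earns a small one – exactly the outcome the SE’s had to engineer away.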
The bottom line is this – no word relevancy system – however clever, concept based, advanced or arcane can ever figure out the 10 best sites to show a user who enters “DVD Player” because the task is – on its face – impossible. Even absurd.
So at this point, most SE’s have settled on a set of criteria for ordering results that are – in their effect – extremely conservative.
Let’s suppose, therefore, that SE’s published a specification for evaluating site and page relevancy for inclusion in a match set. This might include rules for minimum and maximum word density, specific ways to request inclusion in a topic, and limits on the number of topics a site or a page can request.
This specification would focus only on the matching of sites to keywords – it would not use any measures of popularity, links or anything else. Sites could optimize to this specification to their hearts’ content.
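A published specification of this kind could be checked mechanically – by the engine or by the site itself. Here is a minimal sketch; the density limits and topic cap are invented placeholders, not anyone’s real rules:

```python
# Hypothetical check of a page against a *published* relevancy
# specification. The numeric limits below are invented placeholders.

SPEC = {
    "min_density": 0.01,          # query terms must be at least 1% of page words
    "max_density": 0.10,          # ...but no more than 10% (blocks keyword stuffing)
    "max_topics_per_page": 3,     # a page may request inclusion in at most 3 topics
}

def qualifies(page_words, topic_terms, requested_topics):
    """Return True if the page meets the published inclusion rules."""
    if len(requested_topics) > SPEC["max_topics_per_page"]:
        return False
    if not page_words:
        return False
    matches = sum(1 for w in page_words if w.lower() in topic_terms)
    density = matches / len(page_words)
    return SPEC["min_density"] <= density <= SPEC["max_density"]

# A normal page passes; a keyword-stuffed page now *fails* instead of winning.
normal_page = ("dvd player " + "filler " * 30).split()
stuffed_page = ("dvd player " * 50).split()
```

The point of the sketch is the inversion: under published rules, keyword stuffing disqualifies a page rather than promoting it, and every site can verify its own compliance.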
Second, the Search Engine would provide a set of additional, equally transparent measures that would be used to sort the result-set (either by the engine or – possibly – selectable by the user):
These could include things like:
- Total Site Traffic
- Site Linkage Score
- Del.icio.us Tags (or equivalent)
- User Rating
- Editor Ratings
- Site Category (Sales/Informational/etc.)
- Site Company Owner Size (Revenue)
- Site SIC Category
- Newness of Content
- Breadth/Focus of Content
- Independent 3rd Party Rating Services
- Registered Topic Experts
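To make the idea of a transparent, user-selectable sort concrete, here is a minimal sketch over a few of the criteria above. The sites, field names, and weights are all invented for illustration – no real engine exposes its ranking this way:

```python
# Sketch of a transparent, user-selectable sort over a match set.
# All sites, factors, and weights are illustrative inventions.

sites = [
    {"url": "megastore.example", "traffic": 9.0, "user_rating": 3.1, "age_days": 400},
    {"url": "dvd-blog.example",  "traffic": 2.5, "user_rating": 4.8, "age_days": 12},
]

def score(site, weights):
    """Weighted sum of published, inspectable ranking factors."""
    freshness = 1.0 / (1.0 + site["age_days"] / 30.0)  # decays month by month
    return (weights["traffic"] * site["traffic"]
            + weights["user_rating"] * site["user_rating"]
            + weights["freshness"] * freshness)

# A searcher who values ratings and fresh content over raw traffic:
prefs = {"traffic": 0.1, "user_rating": 1.0, "freshness": 2.0}
ranked = sorted(sites, key=lambda s: score(s, prefs), reverse=True)
# dvd-blog.example ranks first under these preferences; shift the weight
# to traffic and megastore.example comes out on top instead.
```

Because every factor and weight is published, a lister knows exactly why it ranks where it does – and a searcher can decide which factors actually matter to them.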
Many of these criteria are, of course, at least as inherently conservative as the rules they would replace – some more so. Some are equally gameable. But they all have the virtue that they can survive transparency – both for the lister and the user. And there may, of course, be much better approaches to achieving transparency.
However that may be, transparency of sort order would benefit all listers (except the gamers). Sure, big companies would lose their expertise advantage – but they gain inherently from any reasonably conservative set of rules – and no one who works for or with large corporate clients thinks they really want to be spending their time and money gaming search engines. The mid-range listers would no longer be squeezed from above and below – and would gain assurance about why and where they can get traction. In the end, transparency should benefit searchers too – not only in the quality of the listings they obtain, but in their ability to decide what kinds of factors actually matter to them when doing a search.
Transparency could (and should) be the next big thing for SE’s. Certainly, any engine that committed to a plausible scheme of transparency would get the vast majority of sites to optimize to its specification – presumably producing significantly better listings. That has to be a significant competitive advantage vis-à-vis the other engines.
But until the Search Engines commit to transparency, companies will always have a tough business decision to make: “game” the system to the greatest extent you can – or stand by and watch the other guys do it. It’s a decision that, in reality, they shouldn’t have to make – one that is forced on them by bad thinking at the very center of the Search universe.