Think Info

Exploring the information space

Double standards in the Google Empire

Google is big. Google can do pretty much anything it likes; with a single code change – justified by its vision of what the web should be – it can change the fortunes of companies of all sizes. As such, it sets the rules everyone else must operate by, accountable to no one but outdated laws. And Google hates contextualisation of the internet, a practice it refers to by the shady term “cloaking.”

What, though, are we to make of Google employing double standards?

While this post is about cloaking, the thought process was triggered by Google’s announcement that, in the name of security, search query data will no longer be included in referrer strings for logged-in users – information that is critical contextualisation (as well as SEO) data for site owners.
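To make concrete what is being taken away, here is a minimal sketch – hypothetical function name, standard library only, not any particular site’s code – of the kind of referrer parsing a site might use to recover a visitor’s search terms:

    from urllib.parse import urlparse, parse_qs

    def search_terms_from_referrer(referrer):
        """Recover the visitor's search query from a Google referrer.

        Before the change, a referrer such as
        http://www.google.com/search?q=context+aware+spider
        exposed the query in its 'q' parameter; for logged-in users
        it now arrives without one, so this returns an empty list.
        """
        parsed = urlparse(referrer)
        if "google." not in parsed.netloc:
            return []
        query = parse_qs(parsed.query).get("q", [""])[0]
        return query.split()

    # A site might use the terms to highlight matching content:
    terms = search_terms_from_referrer(
        "http://www.google.com/search?q=context+aware+spider")
    # -> ['context', 'aware', 'spider']; now simply [] for logged-in users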

Google’s rule on cloaking

The following is an excerpt from Google’s Webmaster Guidelines:

Make pages primarily for users, not for search engines. Don’t deceive your users or present different content to search engines than you display to users, which is commonly referred to as “cloaking.”

The real gist of Google’s rules on cloaking is that a single URL has a single, canonical view. Whoever – or whatever – requests that URL will get exactly the same content back. If this is not the case, a search result could point to a page containing completely unrelated information, depending on the context of the user.
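For illustration, here is a deliberately crude sketch of what a cloaking handler does – the content strings are invented, but the shape is exactly what the guideline forbids:

    def render_page(user_agent):
        """One URL, two answers: the essence of cloaking.

        The crawler is shown keyword-rich copy while human visitors
        get something else entirely, so a search result built from
        the crawler's view points at a page the user never sees.
        """
        if "Googlebot" in user_agent:
            return "<h1>Vintage camera repair guides, manuals, parts</h1>"
        return "<h1>Buy our unrelated gadget newsletter!</h1>"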

Google’s self-cloaking

Let us take a Google URL and see if it will return the same content for any two users, or even for a user versus Google’s own spider. Yes, I am talking here about the spider looking up its own back side; not a pretty picture, but at the heart of the issue.

A search query directed at Google will give specific results that do not match what its own spiders would find at that URL – especially for logged-in users, for whom Google has a profile on which to contextualise; less so for those not logged in. This was widely reported earlier this year, including in Eli Pariser’s TED talk on filter bubbles.

Google’s own search results are – for want of a better word – cloaked.

Double standards

Google is being two-faced on the cloaking issue.

On the one hand, it wants – rightly – to provide the end user with relevance. Google wants to help you find the information you want to see. This is all well and good; contextualisation based on who you are is important (privacy issues notwithstanding).

On the other hand, however, Google does not want anyone else to contextualise; it wants everything it looks at to be absolute: canonical.

The message here is that Google wants to be the only party to contextualise the internet; as holders of all your meta-data, Google is the hub you have to come back to for every next step.

The real problem?

Now, I don’t believe Google is trying to play Big Brother to the point of disrupting your web experience, reeling you back in to take every next step of your online journey (there is enough analytics tracking out there for it to watch you hop from site to site).

Withdrawing the search query from referrer data, however, has two sides:

  1. Google is aware that many sites do contextualise based on the search query, which upsets the canonical view that big-G can’t cope with; withdrawing the query removes part of that problem
  2. Google realises that it is giving away value for nothing by allowing this user context to be (ab)used by sites that benefit from a search link without paying for advertising space

I can understand that, with the dramatic increase in searches being from a logged-in state (as a result of the uptake of Google+), Google wants to stop giving away the contextual value of the user to those not paying for it. A business decision, though with backfire potential.

My issue with it is that the cloaking argument was wheeled out as a part of the justification (as documented by Search Engine Land).

But the entire cloaking argument is a cover-up for Google’s incompetence: its ability to operate only in a one-dimensional information world. The real problem with cloaking, I believe, is that Google can’t figure out how to read a contextual internet. There is a lot of information out there. If a resource is presented differently depending on contextual knowledge of the user, Google is not made aware of this; it doesn’t see the meta-data driving the decision. Without that information, it cannot understand the context of the different views, so it does not know how to (re)use the information.
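To illustrate the problem (not the withheld solution), consider a page that legitimately varies on a hypothetical “expertise” cookie; a spider that never sends that cookie sees only the default view and none of the meta-data behind the choice:

    def render_article(cookies):
        """Legitimate contextualisation that a spider cannot see.

        The decision is driven by meta-data (a hypothetical
        'expertise' cookie) that a crawler never sends, so it only
        ever indexes the default view and has no way to know other
        views exist, let alone what distinguishes them.
        """
        if cookies.get("expertise") == "expert":
            return "<article>Full specification, edge cases, internals</article>"
        return "<article>A gentle introduction with worked examples</article>"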

The solution is actually very simple (though I am not going to give it away free here… if Google wants it, it can pay). My opinion on the hiding of query details in the referrer is irrelevant here; I just wish Google – or someone who wants to take the search crown – would implement a context-aware spider.

Reference links

Google’s SSL search announcement
Search Engine Land’s article on the announcement, with comments from Google software engineer Matt Cutts
Google’s Webmaster Guidelines
Eli Pariser: Beware online “filter bubbles” (TED talk; Google reference at 2:00)
