Let me tell you a little story. I was going to pick up a friend at the airport in Montreal, and he called me to say his flight was going to be delayed. “No problem,” I said, “I’ll check your updated arrival time and come get you when you land!”
So off I went to Google to look for the official Montreal airport website, which has real-time updates on flight departures and arrivals. I tried 10-15 different searches; here are some examples:
Pierre Elliot Trudeau Airport Montreal
Montreal Airport Flight Arrivals
The first 100 results for all my searches did not include the official site, www.admtl.com! I did, however, find hundreds of hotels, limousine services, Google Maps results for businesses around the area, YouTube videos about the airport, airline ticket purchase services, and hey… who would have thought: Wikipedia’s entry about the airport.
Now you can sit there and tell me that it is the airport’s fault for not having “Montreal Airport” in the title tag, for not optimizing, blah, blah, blah. But the reality is that Google’s job is to crawl, index and present relevant results. They have the responsibility of ordering all of the chaotic content that appears on the internet each day so users can sift through it in an organized manner. That is their service. If they fail at it, Google becomes useless, and people will stop wasting their time on it.
Yahoo! gave me the same horrible results. What to do?! Surprisingly, Live.com showed the site on the first page. Not as the first result, since that spot is always reserved for Wikipedia, but at least it was listed on the first page.
Why did Live.com return an appropriate result when the word “airport” does not appear as text anywhere on the page? I looked at Live.com’s result snippet:
Sounds a lot like a DMOZ entry, huh? Well, that’s because it is. When there is no Meta Description tag, Live.com does what Google used to do: grab a DMOZ description.
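The fallback described above can be sketched in a few lines. This is a hypothetical illustration, not Live.com's actual code: parse the page for a Meta Description tag, and if none exists, fall back to a directory (DMOZ-style) description. The function and variable names are my own.

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Extracts the content of <meta name="description" ...>, if present."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content")

def choose_snippet(html, directory_description=None):
    """Prefer the page's own meta description; otherwise use the
    directory entry (the DMOZ-style fallback described above)."""
    parser = MetaDescriptionParser()
    parser.feed(html)
    if parser.description:
        return parser.description
    return directory_description
```

The point is that the directory entry acts as a human-written summary the engine can trust when the page itself provides none, which is exactly why Live.com could rank admtl.com for “airport” despite the word never appearing on the page.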
You can hate DMOZ all you want, but it sure helped Live.com surface the relevant site. Strange how such an old-school, primitive, manual method produced an appropriate result even though the site contains no mention of “airport” whatsoever.
The reality is this: white-hat, government, and official sites are not always optimized. In fact, many companies and organizations don’t even know SEO exists. Black-hat spammers do, however, and this gives them a considerable advantage over “white hat, relevant content” websites:
1. Black Hats know what to do to rank. They have tested so many sites, gotten so many sites de-indexed and penalized that they are able to tell just how far they can push Google.
2. Black Hats develop automated methods of creating thousands of websites in a short period of time. These websites are created by hundreds or thousands of different black hats, and carry millions of varying footprints, if any at all.
So how do you algorithmically remove the millions of such results appearing every day? How do you do it manually? Even combining the two, as Google does, has limitations.
Imagine a librarian trying to organize millions of books by category, author, title, and so on, while each day hundreds of trucks come in and dump more books on top of the few that were already organized. I don’t envy Google or Yahoo! for what they have to accomplish to stay relevant as an industry, but no matter how many PhDs you hire, massive volume always prevails.
Black Hat Spammers will not be stopped from producing new content anytime soon, and Google does not seem to have figured out a permanent way to remove these sites from their index. It seems that the Google index no longer belongs to Google.
The ‘progress’ Google is making in its algorithms doesn’t seem to be reflected in the actual SERPs. A paradigm shift seems inevitable, but my question to you is this: does it have to be a shift to a new paradigm, or would a shift back to an old one be more effective?