As I mentioned in my previous blog entries, search engine spiders are not perfect. Like Agent Smith from the movie The Matrix, they try to index the entire Internet, but as of now they cannot handle the job. That is why even the large leading search engines cover only a small portion of the Web. I read somewhere that this portion amounts to only sixteen percent.
So when web crawlers visit your site, as a rule they will not go deeper than the fifth level. This makes the location of files on your web site very important for proper optimization. Files located in the root directory, at the first level, usually get better rankings. Deeply buried files will have to wait longer.
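If you want a quick way to audit your own pages against that depth rule of thumb, a short script can count the path segments in each URL. This is only a sketch: the `path_depth` helper, the sample URLs, and the five-level threshold (taken from the observation above) are my own illustrative assumptions, not anything a search engine publishes.

```python
from urllib.parse import urlparse

def path_depth(url):
    """Count path segments in a URL: the root page has depth 0."""
    path = urlparse(url).path
    return len([seg for seg in path.split("/") if seg])

# Hypothetical sample pages to check against the five-level rule of thumb.
urls = [
    "https://example.com/",
    "https://example.com/products.html",
    "https://example.com/a/b/c/d/e/page.html",
]

for url in urls:
    depth = path_depth(url)
    flag = "OK" if depth <= 5 else "too deep"
    print(depth, flag, url)
```

Running it over a real sitemap instead of the hard-coded list would quickly show which pages sit beyond the depth where, according to the rule above, crawlers tend to lose interest.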
You have probably observed the same thing I did. Sometimes a rather dumb and ugly web site with boring, poorly optimized content has a quite decent PageRank. The owner created the site in the mid-nineties and never really paid much attention to it. So what created this paradox?
The answer is simple. For some reason, leading search engines consider old sites more trustworthy. (This only proves my point about the current imperfection of web crawlers and spiders.) This naive trust is based on the age of the site alone, because search engines believe that an established web site will not vanish unexpectedly one day. Naturally, their approach to new web sites is much more cautious, which probably gave rise to all the legends and myths about the mysterious Sandbox.
This still does not mean that you should pay for your domain five years in advance. There is nothing wrong with renewing annually, because in the end that is not the decisive factor in the maturity of a web site.
Wednesday, January 13, 2010