How Early SEO Corrupted the Search Engines

The practice of search engine optimization first emerged in the mid-1990s, when the first search engines began cataloguing the contents of the Internet. Initially the entire procedure was fairly honest and a fair reflection of the content that was on the web. Webmasters submitted their sites to the search engines, a spider crawled the content, and the collected data was stored in a database that individuals could query when performing a search.

When a search engine spider detects new content on the Internet, it downloads the page and stores it on the engine's own server. Once the page is on the server, a second program, known as an indexer, extracts information about the page as well as all of the links it contains. Those extracted links are then placed into a repository of pages to be crawled at a later date.
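The download, index, and queue steps described above can be sketched in a few lines of Python. This is a minimal illustration, not a real crawler: the `PAGES` dictionary stands in for actual HTTP fetches, and all names (`crawl`, `LinkExtractor`, the example URLs) are invented for the example.

```python
from collections import deque
from html.parser import HTMLParser

# Toy in-memory "web": page contents keyed by URL (stands in for HTTP fetches).
PAGES = {
    "http://example.com/": '<html><head><title>Home</title></head>'
                           '<body><a href="http://example.com/about">About</a></body></html>',
    "http://example.com/about": '<html><head><title>About</title></head>'
                                '<body>About us.</body></html>',
}

class LinkExtractor(HTMLParser):
    """Indexer step: pull the outgoing links out of a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed):
    frontier = deque([seed])  # repository of pages awaiting a later crawl
    index = {}                # url -> raw page stored on "the engine's server"
    while frontier:
        url = frontier.popleft()
        if url in index or url not in PAGES:
            continue
        page = PAGES[url]     # download step
        index[url] = page     # store the page on the engine's own server
        extractor = LinkExtractor()
        extractor.feed(page)
        frontier.extend(extractor.links)  # queue discovered links
    return index

index = crawl("http://example.com/")
```

Running `crawl` on the seed URL stores both pages, because the link found on the home page is queued and crawled in a later iteration of the same loop.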

At first, the information that ended up in these early search engines came from webmasters, who were trusted to be honest. Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content. However, the corruption of this system began when webmasters abused meta tags, stuffing them with keywords that had nothing to do with the content of their pages in order to artificially increase page impressions for their websites. This, of course, also increased their advertising revenue from pay-per-click and cost-per-impression schemes. Greed soon led to inaccurate, incomplete cataloguing of web pages, and too many people were led to pages with little or misleading content when they conducted a web search. Search engines responded by developing more complex ranking algorithms, with the result that understanding SEO has become far more complicated than it ever was before, and it will only continue to grow more complicated.
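The abuse works because a naive engine simply trusts whatever the keywords meta tag claims. The sketch below shows a keyword-stuffed page and a trusting parser of the kind described above; the page content, class name, and keyword list are all invented for illustration.

```python
from html.parser import HTMLParser

# A keyword-stuffed page: the meta keywords have nothing to do with the body.
STUFFED_PAGE = """<html><head>
<meta name="keywords" content="free money, celebrities, cheap flights, lottery">
<title>My Stamp Collection</title>
</head><body>Photos of my stamp collection.</body></html>"""

class KeywordReader(HTMLParser):
    """Naive early-engine behaviour: index whatever the keywords tag claims."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and dict(attrs).get("name") == "keywords":
            content = dict(attrs).get("content", "")
            self.keywords = [k.strip() for k in content.split(",")]

reader = KeywordReader()
reader.feed(STUFFED_PAGE)
# The page now matches queries its body never mentions.
```

Because the parser never compares the declared keywords against the page body, the stamp-collection page would surface for searches like "lottery", which is exactly the mismatch that pushed engines toward content-based ranking.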
