Early search engines

SEO began in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a site to the various engines, which would then run spiders, programs that "crawl" a site and store the collected data. The search engines sorted that information by topic and served results based on the pages they had spidered. As the number of documents online grew and more webmasters realized the value of organic search listings, popular search engines began to order their listings so that the most relevant pages appeared first. This was the start of a game between search engines and webmasters that continues to this day.

At first, search engines were guided by the webmasters themselves. Early versions of search algorithms relied on webmaster-provided information such as category assignments and keyword meta tags, which offered a guide to each page's content. When some webmasters began to abuse meta tags, causing their pages to rank for irrelevant searches, search engines stopped considering them and instead developed more complex ranking algorithms that took a more diverse set of factors into account, including:

  • Text within the title tag
  • Domain name
  • URL directories and file names
  • HTML tags: headings, bold and emphasized text
  • Keyword density
  • Keyword proximity
  • Alt attributes for images
  • Text within NOFRAMES tags
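To make the factors above concrete, here is a minimal, illustrative sketch of how such on-page signals could be extracted from a page. It is not any engine's actual algorithm; the parser class, the sample page, and the keyword are all invented for the example:

```python
from html.parser import HTMLParser
import re

# Tags with no closing tag, so they are never pushed on the stack.
VOID_TAGS = {"meta", "link", "img", "br", "hr", "input"}

class OnPageSignals(HTMLParser):
    """Toy extractor for on-page factors early engines considered:
    title text, headings, meta keywords, and body words for density."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.headings = []       # text of h1-h3 tags
        self.meta_keywords = []  # webmaster-supplied keywords
        self.words = []          # all visible words, for keyword density
        self._stack = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "keywords":
                self.meta_keywords = [k.strip().lower()
                                      for k in a.get("content", "").split(",")]
        if tag not in VOID_TAGS:
            self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        if not self._stack:
            return
        tag = self._stack[-1]
        if tag == "title":
            self.title += data.strip()
        elif tag in ("h1", "h2", "h3"):
            self.headings.append(data.strip())
        self.words.extend(re.findall(r"[a-z0-9]+", data.lower()))

def keyword_density(words, keyword):
    """Occurrences of `keyword` as a fraction of all words on the page."""
    return words.count(keyword.lower()) / len(words) if words else 0.0

# Hypothetical sample page, invented for this sketch.
page = """<html><head><title>Cheap Widgets</title>
<meta name="keywords" content="widgets, cheap widgets"></head>
<body><h1>Cheap Widgets</h1>
<p>Buy widgets here. Widgets ship free.</p></body></html>"""

signals = OnPageSignals()
signals.feed(page)
```

A page like this one, where the keyword appears in the title, the heading, the meta tags, and repeatedly in the body, is exactly the kind of page that keyword stuffing produced, which is why signals fully under webmaster control proved so easy to game.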

By relying so heavily on factors within the webmasters' exclusive control, search engines continued to suffer from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their search engine results pages (SERPs) showed the most relevant results, rather than useless pages stuffed with keywords by unscrupulous webmasters. This led to the rise of a new kind of search engine.

This guide is licensed under the GNU Free Documentation License. It uses material from Wikipedia.
