How Search Engines Work

Search engines collect information about Web content using programs called "Web robots" (also known as "spiders" or "Web crawlers"). These programs are essentially automated browsers that methodically roam the Web, collecting information such as the text, title, and subheadings of each document they encounter. This information is stored in the engine's own database, where it is "cleaned up" and then indexed into smaller, word-sized chunks.
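
As a rough illustration, here is a minimal Python sketch of such a robot; the seed URL https://example.com and the PageScraper and crawl names are invented for this example. It fetches a page, records its title and text, and queues the links it finds for later visits.

```python
# A minimal sketch of a Web robot ("spider"), using only the Python
# standard library. The seed URL below is a placeholder, not a real
# crawl target.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageScraper(HTMLParser):
    """Collects the title, visible text, and outbound links of one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.title = ""
        self.text_parts = []
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif data.strip():
            self.text_parts.append(data.strip())

def crawl(seed_url, max_pages=5):
    """Breadth-first crawl: visit pages, store their content, follow links."""
    queue, seen, pages = [seed_url], set(), {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable pages
        scraper = PageScraper(url)
        scraper.feed(html)
        pages[url] = {"title": scraper.title,
                      "text": " ".join(scraper.text_parts)}
        queue.extend(scraper.links)
    return pages

if __name__ == "__main__":
    for url, page in crawl("https://example.com").items():
        print(url, "->", page["title"])
```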

When a user submits a query, the keywords in the query are matched against the search engine's database, and the matching results are sent back to the user as pages of links.
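
The "word-sized chunks" are typically organized as an inverted index: a table mapping each word to the documents that contain it. Below is a minimal sketch of building such an index and matching a query against it; the sample pages and the build_index and search names are illustrative, not any particular engine's design.

```python
# A minimal sketch of indexing page text into word-sized chunks and
# matching a keyword query against the resulting table.
import re
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

def search(index, query):
    """Return the URLs that contain every keyword in the query."""
    keywords = query.lower().split()
    if not keywords:
        return set()
    results = index.get(keywords[0], set()).copy()
    for word in keywords[1:]:
        results &= index.get(word, set())  # require all keywords
    return results

pages = {
    "https://example.com/a": "Search engines index the text of Web pages",
    "https://example.com/b": "Robots crawl the Web and collect page text",
}
index = build_index(pages)
print(search(index, "web text"))   # both pages match
print(search(index, "robots"))     # only page b matches
```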

Not all search engines are the same. Some examine more pages, analyze each page in greater depth, or crawl the Web more frequently than others. Engines also evaluate your query differently when deciding which links to send back, as the sketch below shows.
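
To see how different evaluation rules yield different results, this sketch scores the same two pages with two invented rules: one counts keyword occurrences in the body text, the other additionally rewards keywords that appear in the title. Neither is a real engine's ranking formula; they exist only to show that the same query can come back in a different order.

```python
# Two invented scoring rules applied to the same pages, showing how
# engines that evaluate queries differently return different orderings.
def score_by_frequency(page, keywords):
    """Rank by how often the keywords appear in the body text."""
    words = page["text"].lower().split()
    return sum(words.count(k) for k in keywords)

def score_with_title_boost(page, keywords):
    """Same count, but a keyword in the title is worth five body hits."""
    base = score_by_frequency(page, keywords)
    title = page["title"].lower()
    return base + sum(5 for k in keywords if k in title)

pages = [
    {"title": "Gardening tips", "text": "soil soil soil water seeds"},
    {"title": "All about soil", "text": "water and seeds need good soil"},
]
keywords = ["soil"]
for scorer in (score_by_frequency, score_with_title_boost):
    ranked = sorted(pages, key=lambda p: scorer(p, keywords), reverse=True)
    print(scorer.__name__, "->", [p["title"] for p in ranked])
```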

Therefore, when you submit a query to a Web search engine, the engine searches only its own database rather than the whole Web. This is why search engines can return the links you are looking for almost instantaneously and with a reasonable degree of accuracy.
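
The speed comes from the data structure: once the index is built, answering a query is essentially a table lookup, whose cost barely depends on how many pages were crawled. The synthetic timing sketch below, with invented word lists, illustrates the point.

```python
# Looking up a word in a prebuilt index (here a Python dict) takes
# roughly constant time regardless of how many entries the index holds.
import timeit

small = {f"word{i}": f"page{i}" for i in range(1_000)}
large = {f"word{i}": f"page{i}" for i in range(100_000)}

for name, index in (("1k-word index", small), ("100k-word index", large)):
    t = timeit.timeit(lambda: index.get("word500"), number=100_000)
    print(f"{name}: {t:.3f}s for 100,000 lookups")
```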
