Monday, December 12, 2011

SEO Definition - Google Bot Way

SEO is one of the most important techniques for ranking a page highly. When done properly, SEO helps a blog or website rank higher than other sites in response to a search query. The SEO process relies on crawlers, also called spiders, which crawl through a blog or website and collect the relevant information the search engine later uses to answer queries.

The information gathered from your blog is stored, and whenever someone searches for something specific and your blog is among the closest matches, it appears in the search results. SEO involves the following steps: crawling, indexing, processing, calculating relevancy, and retrieving.

The first step for a search engine is to crawl the web and gather information about each website. This is performed by a crawler, also called a spider, or Googlebot in Google's case. These crawlers follow links from one page to another and index everything they find along the way.
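
To make this step concrete, here is a minimal crawler sketch in Python. It fetches a page, extracts the links, and follows them, which is essentially what a spider does; the example.com URL and the page limit are illustrative assumptions, not anything a real search engine publishes.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Follow links from page to page, recording every URL visited."""
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable pages are simply skipped
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links and queue them for a later visit.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("http://www.example.com/"))
```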

Spiders cannot crawl every page on the web at once, which makes it impossible for a spider to visit a site daily just to check whether new pages have been added or existing pages modified. Sometimes crawlers may not visit your blog for a long time, and that is why it is important to submit your sitemap regularly.
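
One minimal sketch of that submission, assuming a simple one-page sitemap: write a sitemap.xml and notify Google through its sitemap ping URL. The example.com addresses are placeholders for your own site.

```python
import urllib.request
import urllib.parse

# A minimal sitemap: one <url> entry per page you want crawled.
# (Hypothetical URLs; replace with your own pages.)
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2011-12-12</lastmod>
  </url>
</urlset>"""

with open("sitemap.xml", "w") as f:
    f.write(SITEMAP)

# Tell Google the sitemap has changed via its ping endpoint.
sitemap_url = urllib.parse.quote("http://www.example.com/sitemap.xml", safe="")
urllib.request.urlopen("http://www.google.com/ping?sitemap=" + sitemap_url)
```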
 
One way to help the spider crawl your pages is to check regularly, through Webmaster Tools, what the crawler finds on your site, and to run a spider simulator against content such as Flash movies, JavaScript, frames, and password-protected pages and directories to see whether the crawler can reach it. If that content is invisible to the simulator, it has not been indexed; as far as the search engines are concerned, it simply does not exist.
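
For illustration, here is a rough spider-simulator sketch in Python: it fetches a page and keeps only the text a simple crawler would see, skipping script, style, and frame content. Anything that does not appear in its output is a candidate for being invisible to the spider (the URL is a placeholder).

```python
import urllib.request
from html.parser import HTMLParser

class SpiderSimulator(HTMLParser):
    """Collects only the text a simple crawler would see:
    script, style, and frame content is skipped entirely."""
    SKIP = {"script", "style", "iframe", "frame"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # how many skipped tags we are inside
        self.visible = []   # text fragments a crawler can read

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.visible.append(data.strip())

html = urllib.request.urlopen("http://www.example.com/").read().decode("utf-8", "replace")
sim = SpiderSimulator()
sim.feed(html)
print("\n".join(sim.visible))  # anything missing here is invisible to the spider
```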

The next step is indexing. The data gathered during the crawling stage is stored where it can be retrieved later. At this stage the spider identifies the words and expressions that best describe the page and assigns the page to particular keywords.
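
A toy version of such an index is an inverted index: a map from each word to the pages that contain it. The sketch below uses made-up page contents purely for illustration.

```python
from collections import defaultdict

# Hypothetical page contents, for illustration only.
pages = {
    "example.com/seo": "seo helps rank a blog higher in search results",
    "example.com/crawling": "a spider follows links and indexes every page",
}

# Inverted index: each keyword points to the pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(index["spider"])  # {'example.com/crawling'}
print(index["seo"])     # {'example.com/seo'}
```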

Sometimes the spider has a difficult time identifying the meaning of a page, but if you help it by optimizing the page, the crawlers can identify and classify your pages correctly, which is very important for a high ranking.
 
When a search request arrives, the search engine processes it by comparing the search string against the pages already indexed in its database. It then determines the relevancy of each indexed page to that search string. Various algorithms are used to calculate relevancy, each assigning different weights to factors such as keywords, links, and many more.
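
As a sketch of the idea, and nothing like a real engine's algorithm, the toy score below weighs exactly the two factors just mentioned: how often the query keywords appear on a page, and how many links point to it. The weights and page data are invented for illustration.

```python
def relevancy(page, query_terms, keyword_weight=1.0, link_weight=0.5):
    """Toy relevancy: keyword frequency plus a bonus for inbound links."""
    words = page["text"].split()
    keyword_score = sum(words.count(t) for t in query_terms)
    return keyword_weight * keyword_score + link_weight * page["inbound_links"]

pages = [
    {"url": "example.com/a", "text": "seo seo ranking tips", "inbound_links": 3},
    {"url": "example.com/b", "text": "seo basics", "inbound_links": 10},
]
for p in pages:
    print(p["url"], relevancy(p, ["seo"]))
```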
 
The last step performed by search engines is retrieving the results. These are sorted from most relevant to least relevant, and that is what is displayed in the browser. If your content is relevant enough, you won't have a problem getting high traffic.
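
Continuing the toy example, retrieval then amounts to sorting the scored pages and returning the top of the list (the scores here are hypothetical, standing in for the output of the relevancy step above).

```python
def retrieve(scored_pages, top_n=10):
    """Sort from most to least relevant and return the top results."""
    ranked = sorted(scored_pages, key=lambda p: p["score"], reverse=True)
    return [p["url"] for p in ranked[:top_n]]

scored = [
    {"url": "example.com/a", "score": 3.5},  # hypothetical scores from
    {"url": "example.com/b", "score": 6.0},  # the relevancy step above
]
print(retrieve(scored))  # ['example.com/b', 'example.com/a']
```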

Targeting a major search engine always has advantages in terms of relevancy and results, even though all search engines operate in much the same way. For example, Yahoo and Bing put more weight on page keywords, whereas Google puts more weight on links and favors older sites; the others have no particular preference when it comes to how long a site has been in existence.

This simply means you may have to wait longer for your website to rank well in Google than in Yahoo. Understanding this will help you make a concrete decision about which search engine to focus on, and can help you increase traffic to your site.
 
