Machine Working of SEO

Chapter 2: What is a search engine and how does it work?

In Chapter 1 we already discussed what SEO is. Before starting the technical process, we need to understand how the machine works, so in this chapter we are going to learn about the working of search engines.

How does a search engine work?

A search engine is an AI-based machine that works to provide the best and most relevant results to users. To ensure this, it follows a three-step process:

  1. Crawling web pages: searching the web for content and fetching the code/content of each URL the search engine discovers.
  2. Indexing the content: storing the fetched content in a database.
  3. Ranking the websites: ordering the indexed sites by their relevance to the user's search query.
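
To make these three steps concrete, here is a minimal sketch of the pipeline in Python. The sample pages, the word-overlap scoring, and the function names are simplified assumptions for illustration only, not how a real search engine is implemented:

    # Toy crawl -> index -> rank pipeline (illustrative only).

    def crawl(seed_pages):
        """Pretend to fetch each URL and return its raw text content."""
        return dict(seed_pages)

    def index(fetched):
        """Map each word to the set of URLs it appears on."""
        idx = {}
        for url, text in fetched.items():
            for word in text.lower().split():
                idx.setdefault(word, set()).add(url)
        return idx

    def rank(idx, query):
        """Order URLs by how many query words they contain (toy relevance)."""
        scores = {}
        for word in query.lower().split():
            for url in idx.get(word, set()):
                scores[url] = scores.get(url, 0) + 1
        return sorted(scores, key=scores.get, reverse=True)

    pages = {
        "https://example.com/seo": "seo basics and search engine crawling",
        "https://example.com/ads": "paid ads and online marketing",
    }
    results = rank(index(crawl(pages)), "search engine crawling")
    print(results)  # the /seo page ranks first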

What is search engine crawling?

Crawling is the process carried out by the crawlers or bots of search engines. It fetches the data of a particular web page or a whole website and extracts its links in order to discover additional pages. Crawling also checks whether any updates have been made to a page and, if so, fetches the data again after the update. The fetched data is then stored in an index; in Google's case this index is called Caffeine (the Google search index).

Here are some of the search engine bots and their user-agent strings:

  • Googlebot User Agent
    Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
  • Bingbot User Agent
    Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
  • Baidu User Agent
    Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
  • Yandex User Agent
    Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)
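
Below is a minimal sketch of a single crawl step using only the Python standard library. It fetches a page while identifying itself with a user-agent string (a made-up one here, not one of the real bots above) and extracts the links on the page so that additional pages can be discovered. Real crawlers also respect robots.txt, handle errors, and schedule re-crawls, all of which this sketch omits:

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import Request, urlopen

    class LinkCollector(HTMLParser):
        """Collect the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl_page(url, user_agent="ExampleBot/0.1 (+https://example.com/bot)"):
        """Fetch one page; return its HTML and the absolute URLs it links to."""
        request = Request(url, headers={"User-Agent": user_agent})
        with urlopen(request) as response:
            html = response.read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        return html, [urljoin(url, link) for link in collector.links]

    html, discovered = crawl_page("https://example.com/")
    print(len(html), "characters fetched,", len(discovered), "links discovered")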

What is a search engine index?

The content discovered by the search engine is stored in a database; in the case of Google, this database is named Caffeine. It indexes all the gathered data that is interpreted as valuable for answering search queries. After the crawling process, the database indexes the data it finds valuable, compatible, and competitive. Indexing is done on the basis of update time and date, category, and page quality.
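
As a rough illustration of the idea, the sketch below stores each fetched page in an in-memory index together with the kind of metadata this paragraph mentions (update time, category, and a quality score). The field names and the quality score are invented for the example; the internals of Google's Caffeine index are far more sophisticated and are not public:

    from datetime import datetime, timezone

    # Toy document store keyed by URL, recording indexing metadata.
    search_index = {}

    def add_to_index(url, content, category, quality_score):
        """Record a crawled page along with when it was indexed."""
        search_index[url] = {
            "content": content,
            "category": category,
            "quality_score": quality_score,
            "indexed_at": datetime.now(timezone.utc),
        }

    add_to_index("https://example.com/seo", "seo basics and crawling", "marketing", 0.8)
    print(search_index["https://example.com/seo"]["indexed_at"])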

If, whenever a user searched a query, the search engine had to check individual pages for that particular keyword or topic, presenting the results to users would be slow. Instead, search engines prefer inverted indexing.

What is Inverted indexing?

In general terms, an inverted index is an indexing method used to store a mapping from content, such as keywords, numbers, or words, to its locations within a particular document or a set of documents.
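
Here is a minimal sketch of an inverted index in Python. It maps each word to the documents, and the positions within them, where that word occurs, so a query can jump straight to the matching documents instead of scanning every page. The document names and the simple whitespace tokenization are assumptions made for the example:

    from collections import defaultdict

    documents = {
        "doc1": "search engine crawling and indexing",
        "doc2": "indexing makes search fast",
    }

    # word -> {document id -> list of positions where the word occurs}
    inverted_index = defaultdict(dict)
    for doc_id, text in documents.items():
        for position, word in enumerate(text.lower().split()):
            inverted_index[word].setdefault(doc_id, []).append(position)

    def lookup(word):
        """Return the documents (and positions) containing the word."""
        return inverted_index.get(word.lower(), {})

    print(lookup("indexing"))   # {'doc1': [4], 'doc2': [0]}
    print(lookup("crawling"))   # {'doc1': [2]}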

In this chapter, we have covered the working of a search engine: crawling and indexing. That is it for this chapter; in the next one we will answer the remaining question: what is search engine ranking? See you there…

Cheers!
