How Do Search Engines Work?
Original Article: https://www.deepcrawl.com/knowledge/technical-seo-library/how-do-search-engines-work/
The Search Engine Index
Webpages that have been discovered by the search engine are added to a data structure called an index.
The index includes all the discovered URLs, along with a number of key signals about the content of each URL (a simplified sketch follows this list), such as:
- The keywords discovered within the page’s content – what topics does the page cover?
- The type of content that is being crawled (identified from structured data markup such as Schema.org) – what is included on the page?
- The freshness of the page – how recently was it updated?
- The previous user engagement of the page and/or domain – how do people interact with the page?
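To make this concrete, here is a minimal sketch in Python of what a single entry in such an index might hold. The IndexEntry class, its field names, and the example URL are assumptions invented for this illustration; they do not reflect any real search engine's internal schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IndexEntry:
    """One record in a (greatly simplified) search engine index."""
    url: str
    keywords: list[str] = field(default_factory=list)  # topics the page covers
    content_type: str = ""                              # e.g. derived from Schema.org markup
    last_updated: datetime | None = None                # freshness signal
    engagement_score: float = 0.0                       # prior engagement with the page/domain

# The index can be pictured as a mapping from each discovered URL to its signals.
index: dict[str, IndexEntry] = {
    "https://example.com/espresso-guide": IndexEntry(
        url="https://example.com/espresso-guide",
        keywords=["coffee", "espresso", "brewing"],
        content_type="Article",
        last_updated=datetime(2024, 5, 1),
        engagement_score=0.7,
    ),
}
```

Real indexes are vastly larger and are typically inverted (keyed by term rather than by URL), but the idea of storing signals alongside each discovered URL is the same.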
What Is the Aim of a Search Engine Algorithm?
The aim of the search engine algorithm is to present a relevant set of high quality search results that will fulfil the user’s query/question as quickly as possible.
The user then selects an option from the list of search results, and this action, along with any subsequent activity, feeds into future learning that can affect search engine rankings going forward.
What Happens When a Search Is Performed?
When a user enters a search query into a search engine, all of the pages deemed relevant to that query are identified from the index, and an algorithm is used to rank them hierarchically into a set of results.
The algorithms used to rank the most relevant results differ for each search engine. For example, a page that ranks highly for a search query in Google may not rank highly for the same query in Bing.
In addition to the search query, search engines use other relevant data to return results (combined in the toy ranking sketch after this list), including:
- Location – Some search queries are location-dependent e.g. ‘cafes near me’ or ‘movie times’.
- Language detected – Search engines will return results in the language of the user, if it can be detected.
- Previous search history – Search engines will return different results for a query depending on what the user has previously searched for.
- Device – A different set of results may be returned based on the device from which the query was made.
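As a rough illustration of the ranking step, the toy sketch below combines keyword relevance, freshness, and prior engagement into a single score and sorts the index by it. The weights, the signal names, and the score/rank helpers are invented for this example; real ranking algorithms are far more complex and also weigh the contextual data listed above.

```python
from datetime import datetime, timedelta

# Toy index: URL -> signals (see the IndexEntry sketch above for a richer version).
index = {
    "https://example.com/espresso-guide": {
        "keywords": {"coffee", "espresso", "brewing"},
        "last_updated": datetime(2024, 5, 1),
        "engagement": 0.7,
    },
    "https://example.com/tea-guide": {
        "keywords": {"tea", "brewing"},
        "last_updated": datetime(2021, 3, 10),
        "engagement": 0.4,
    },
}

def score(signals: dict, query_terms: set[str]) -> float:
    """Toy weighting: keyword overlap + freshness boost + prior engagement."""
    relevance = len(query_terms & signals["keywords"]) / max(len(query_terms), 1)
    fresh = 0.1 if datetime.now() - signals["last_updated"] < timedelta(days=365) else 0.0
    # A real engine would also fold in location, language, search history and device here.
    return relevance + 0.2 * signals["engagement"] + fresh

def rank(query: str) -> list[str]:
    """Return URLs that match at least one query term, most relevant first."""
    terms = set(query.lower().split())
    matching = [url for url in index if terms & index[url]["keywords"]]
    return sorted(matching, key=lambda url: score(index[url], terms), reverse=True)

print(rank("espresso brewing"))
```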
Why Might a Page Not Be Indexed?
There are a number of circumstances in which a URL will not be indexed by a search engine (a simplified check for some of these is sketched after the list below). This may be due to:
- Robots.txt file exclusions – a file that tells search engines which parts of your site they should not crawl.
- Directives on the webpage telling search engines not to index that page (noindex tag) or to index another similar page (canonical tag).
- Search engine algorithms judging the page to be of low quality, to have thin content, or to contain duplicate content.
- The URL returning an error page (e.g. a 404 Not Found HTTP response code).
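As a rough sketch of how a crawler might test the robots.txt, noindex, and error-page conditions, here is an example using only Python's standard library. The can_index helper, the NoindexDetector class, and the example URL are hypothetical; this is a simplified illustration, not how any particular search engine implements these checks, and it does not cover canonical tags or quality judgements.

```python
from html.parser import HTMLParser
from urllib import robotparser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class NoindexDetector(HTMLParser):
    """Looks for <meta name="robots" content="...noindex..."> in a page's HTML."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            if "noindex" in (attrs.get("content") or "").lower():
                self.noindex = True

def can_index(url: str) -> bool:
    """Return True only if no obvious directive or error blocks indexing of the URL."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))

    # 1. Robots.txt exclusions: is crawling of this URL allowed at all?
    robots = robotparser.RobotFileParser()
    robots.set_url(urljoin(root, "/robots.txt"))
    robots.read()
    if not robots.can_fetch("*", url):
        return False

    # 2. Fetch the page; an error response (e.g. 404 Not Found) rules out indexing.
    try:
        with urlopen(url) as response:
            html = response.read().decode("utf-8", errors="ignore")
    except (HTTPError, URLError):
        return False

    # 3. On-page noindex directive.
    detector = NoindexDetector()
    detector.feed(html)
    return not detector.noindex

print(can_index("https://example.com/some-page"))  # hypothetical URL
```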