June 2, 2021
3 mins read

Search engines: add your content to Google

Photo by Pixabay on Pexels.com

Search engines are websites that take user-defined search criteria and return a sorted list of results drawn from an index. Browsing the World Wide Web (WWW) happens through hyperlinks: text or images that you click on to go somewhere else.

Any web author can link to any other online content. Through this practice of linking, all Internet users help organize online information into a web of interconnected resources.

The index of contents

Importantly, the Web does not provide a centralized index that tracks what is available on the network. Search engines are therefore the most important services for helping Internet users satisfy their information needs effectively.

There are different types of search engines. The most important kind is the crawler-based one. It uses software (called “crawlers” or “spiders”) to discover what is available online and systematically index that content. The sophistication and effectiveness of the crawler determine the size and freshness of the index, which are both important measures of a quality search engine.

Search engine crawlers

In simple terms, the spider/crawler follows every link on a page, indexes the pages it finds, then follows the links on those pages, and so on.
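The crawling process described above can be sketched as a breadth-first traversal. This is a minimal illustration over an in-memory “web” (the pages and links are hypothetical), not a real network crawler:

```python
from collections import deque

# A toy "web": page -> list of pages it links to (hypothetical data).
WEB = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": ["/blog", "/about"],
}

def crawl(start):
    """Breadth-first crawl: index a page, then follow its links, and so on."""
    index, queue, seen = [], deque([start]), {start}
    while queue:
        page = queue.popleft()
        index.append(page)           # "index" the page
        for link in WEB.get(page, []):
            if link not in seen:     # avoid crawling the same page twice
                seen.add(link)
                queue.append(link)
    return index

print(crawl("/"))  # every page reachable from the home page
```

A real crawler works the same way, except each lookup in `WEB` is an HTTP request and each link list is extracted from the fetched HTML.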

The most important operation a search engine performs is matching what a user is looking for against the information in its index. Typically, the output of this matching process is a ranked list of results. These hits usually consist of a title, a snippet of information, and a hyperlink to a page the search engine has determined to be potentially relevant.
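At the core of that matching step is an inverted index, a mapping from each term to the documents that contain it. A minimal sketch, with hypothetical documents and simple AND semantics:

```python
# Minimal inverted index: term -> set of document ids (hypothetical documents).
DOCS = {
    1: "search engines index the web",
    2: "crawlers follow links on the web",
    3: "engines rank pages by relevance",
}

index = {}
for doc_id, text in DOCS.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def search(query):
    """Return ids of documents containing every query term (AND semantics)."""
    results = None
    for term in query.split():
        hits = index.get(term, set())
        results = hits if results is None else results & hits
    return sorted(results or [])

print(search("engines web"))  # only document 1 contains both terms
```

Real engines add tokenization, stemming, and a relevance score on top, but the term-to-documents lookup is the same idea.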

Along with “organic results” (i.e. pages found by the engine), search engines place sponsored results, determined by a keyword bidding process among marketers. The matching process for organic results is complex and commercially sensitive.

Search engines protect their exact ranking algorithms as trade secrets. PageRank, used by Google’s algorithm, is one of the most famous search ranking algorithms on the Web. PageRank estimates the relevance of the websites in the index by analyzing the link structure of the Web (i.e., which pages link to a given page).
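The published form of PageRank can be sketched with power iteration: each page repeatedly splits its current rank among the pages it links to. The link graph below is hypothetical, and real implementations handle dangling pages and far larger graphs:

```python
# Power-iteration sketch of PageRank over a tiny link graph (hypothetical data).
LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}       # start with uniform rank
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)        # split rank among outlinks
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(LINKS)
print(max(ranks, key=ranks.get))  # "c": it is linked from both "a" and "b"
```

The intuition matches the text: a page linked from many (well-ranked) pages accumulates more rank, and the damping factor models a user who occasionally jumps to a random page.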

I previously wrote an article about search that I recommend you read -> Useful Google tools available on the Internet.

Make the crawler’s job easier

A crawler’s job needs to be made as easy as possible. That is why it is important to provide resources on your website such as a robots.txt file and a sitemap.


It is necessary to have a robots.txt file in the root directory of the website, which facilitates the work of the crawler by indicating what to crawl. For example, this website has this file:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Allow: /*.css$
Allow: /*.js$

Allow: /*.jpg$
Allow: /*.jpeg$
Allow: /*.png$

Sitemap: https://avertigoland.com/sitemap.xml

This file tells robots not to crawl the wp-admin folder, since we do not want it crawled or added to any index; it is the administration folder of the site. It still allows the admin-ajax.php file and the site’s .css and .js resources, and it lets .jpg, .jpeg and .png images be added to the image search index. Finally, it tells robots where the sitemap is: sitemap.xml.
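You can check how a crawler would read these rules with Python’s standard-library parser. Note that `urllib.robotparser` follows the original robots.txt convention and does not support the wildcard (`*`) and end-of-path (`$`) extensions used in the `Allow` lines above, so this sketch only tests the plain `Disallow` rule:

```python
from urllib.robotparser import RobotFileParser

# The core rule from the robots.txt shown above (wildcard lines omitted,
# since the standard-library parser does not understand them).
rules = """\
User-agent: *
Disallow: /wp-admin/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "/wp-admin/"))  # False: admin area is off-limits
print(parser.can_fetch("*", "/a-post/"))    # True: everything else is allowed
```

A well-behaved crawler performs exactly this check before requesting each URL.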


The sitemap file is also crucial. A sitemap is like the branches of a tree: in a way that is simple (simple for the crawler), it lets the crawler find all the pages of the site listed in the file. Whether they are then indexed or not is another matter.

Sitemaps are files in XML format that provide information about your site’s pages, images, videos, and other files, as well as the relationships between them. Search engines such as Google read these files to crawl sites more efficiently. A sitemap tells Google which files on a site the webmaster considers important and also includes useful data about them.
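A minimal sitemap.xml looks like this. The structure follows the sitemaps.org protocol; the post URL and dates here are illustrative, not taken from the real file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://avertigoland.com/</loc>
    <lastmod>2021-06-02</lastmod>
  </url>
  <url>
    <!-- hypothetical post URL, for illustration only -->
    <loc>https://avertigoland.com/example-post/</loc>
    <lastmod>2021-05-28</lastmod>
  </url>
</urlset>
```

Each `<loc>` entry is a page you want the crawler to find, and `<lastmod>` hints at when it last changed so the crawler can prioritize fresh content.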

Cookies and their importance in ranking pages

Matching users’ information needs to the index involves analyzing both website content and user data. Search engines use cookies to store each user’s search queries, link clicks and more in their databases for long periods of time.

A “vertical” or specialized search engine focuses on a specific type of content, such as travel, shopping, academic papers, news, or music. A “metasearch engine” does not produce its own index or results, but instead aggregates the results of one or more other engines. A “directory”, finally, is a repository of links classified into categories. The Yahoo! directory (now defunct) and the Open Directory Project are famous examples.

How to add your page

Search engines have pages for you to actively submit your website:

Submit your site to Bing -> https://www.bing.com/webmasters

Submit your site to Google -> https://search.google.com/search-console


Avelino Dominguez

👨🏻‍🔬 Biologist 👨🏻‍🎓 Teacher 👨🏻‍💻 Technologist 📊 Statistician 🕸 #SEO #SocialNetwork #Web #Data ♟Chess 🐙 Galician

