
Indexing a site in search engines: how site indexing works in Yandex and Google

What is site indexing? How does it happen? You can find answers to these and other questions in this article. Web indexing (indexing in search engines) is the process by which a search engine robot adds information about a site to its database, which is then used to find information on the web projects that have undergone this procedure.

Data about web resources usually consist of keywords, articles, links, and documents. Audio, images, and so on can also be indexed. The algorithm for identifying keywords depends on the search engine.

There are certain limitations on the types of information that can be indexed (Flash files, JavaScript).

Managing indexing

Indexing a site is a complex process. To manage it (for example, to prohibit the inclusion of a particular page), you need to use a robots.txt file and directives such as User-agent, Disallow, Allow, and Crawl-delay.
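As an illustrative sketch (the paths here are hypothetical), these directives combine in a robots.txt file like this:

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/
Crawl-delay: 2
```

Here Disallow blocks the /admin/ section, Allow re-opens one subfolder inside it, and Crawl-delay asks robots that honor it (Yandex does; Google ignores this directive) to wait two seconds between requests.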

Tags and attributes that hide parts of a resource's content from the Google and Yandex robots are also used to manage indexing (Yahoo applies a similar tag).

In the Google search engine, new sites are indexed in anywhere from a couple of days to one week; in Yandex, from one week to four.

Do you want your site to appear in search engine results? Then it should be processed by Rambler, Yandex, Google, Yahoo, and so on. You must tell the search engines (their spiders) about the existence of your website, and then they will crawl it in whole or in part.

Many sites go unindexed for years. The information on them is visible to no one except their owners.

Ways of processing

Indexing a site can be performed in several ways:

  1. The first option is manual addition. You enter your site's data through the special forms offered by the search engines.
  2. In the second case, the search robot finds your website on its own by following links from other resources that lead to your project, and indexes it. This method is the most effective: if a search engine finds a site this way, it considers the site significant.

Timing

Indexing a site is not a quick process; the time required varies, starting from one to two weeks. Links from authoritative resources (with excellent PageRank and Yandex TCI) significantly speed up a site's placement in search engine databases. Today Google is considered the slowest, although until 2012 it could do this work within a week. Unfortunately, everything changes very quickly. Mail.ru is known to take about six months to process websites in this respect.

Not every specialist can handle indexing a site in search engines. The time it takes to add new pages of an already-processed site to the database depends on how often its content is updated. If a resource is updated constantly, the system considers it fresh and useful to people, and processing is accelerated.

You can follow the progress of site indexing in the special webmaster sections of the search engines.

Changes

So, we have already figured out how the site is indexed. It should be noted that databases of search engines are often updated. Therefore, the number of pages of your project added to them can vary (both decrease and increase) for the following reasons:

  • Search engine sanctions against the site;
  • Errors on the site;
  • Changes in search engine algorithms;
  • Poor hosting (the server hosting the project being inaccessible), and so on.

Yandex's answers to common questions

Yandex is a search engine used by many people. It ranks fifth among the world's search systems by the number of search queries processed. If you have added a site to it, inclusion in the database may take a long time.

Adding a URL does not guarantee indexing. It is just one way of informing the system's robot that a new resource has appeared. If a site has few or no links from other websites, adding it manually will help the robot find it faster.

If indexing does not occur, check whether there were any failures on the server when the Yandex robot made its request. If the server reports an error, the robot stops and tries again later during its regular crawl rounds. Yandex employees cannot speed up the addition of pages to the search engine's database.

Indexing a site in "Yandex" is quite a difficult task. Don't know how to add a resource to the search engine? If it has links from other websites, you do not need to add the site manually: the robot will find and index it automatically. If you have no such links, you can use the "Add URL" form to tell the search engine that the website exists.

Remember that adding a URL guarantees neither the indexing of your creation nor its speed.

Many people wonder how long it takes to index a site in Yandex. The company's employees give no guarantees and predict no terms. As a rule, once the robot has learned about a site, its pages appear in search within two days, sometimes within a couple of weeks.

Processing process

Yandex is a search engine that demands accuracy and attention. Indexing a site consists of three stages:

  1. The crawler crawls the resource's pages.
  2. The site's content is recorded in the database (index) of the search system.
  3. After 2-4 weeks, once the database is updated, you can see the results: your site will appear (or not appear) in the SERP.

Checking Indexing

How to check the indexing of a site? You can do this in three ways:

  1. Enter the name of your company in the search bar (for example, in "Yandex") and check every link on the first and second pages. If you find your site's URL there, the robot has done its job.
  2. Enter your site's URL in the search bar (for example, with the site: operator). You will see how many of its pages are displayed, that is, indexed.
  3. Register in the webmaster sections of Mail.ru, Google, and Yandex. After passing site verification, you will be able to see the indexing results as well as other search engine services created to improve your resource's performance.

Why does Yandex refuse?

Indexing in Google works as follows: the robot records all of a site's pages in the database, low-quality and high-quality alike, without discrimination, but only useful documents take part in the ranking. Yandex, by contrast, excludes web junk immediately: it may index any page, but the search engine eventually eliminates all the garbage.

Both systems have a supplementary index, and in both, poor-quality pages affect the ranking of the website as a whole. A simple philosophy is at work here: the resources a particular user favors will occupy higher positions in their results, while the same person will struggle to find a site they disliked last time.

That is why you should first block duplicate web documents from indexing, check for blank pages, and avoid publishing poor-quality content.

Acceleration of the work of Yandex

How can I speed up the indexing of the site in Yandex? It is necessary to perform the following steps:

  • Install the Yandex browser on your computer and browse the site's pages with it.
  • Confirm your rights to manage the resource in Yandex.Webmaster.
  • Publish a link to the article on Twitter. Yandex has been known to cooperate with this company since 2012.
  • Add Yandex search to the site. In the "Indexing" section, you can specify your own URLs.
  • Install the Yandex.Metrica code without ticking the box "Prohibit sending pages for indexing".
  • Create a Sitemap, which exists only for the robot and is not visible to the audience; crawling will begin with it. The Sitemap address is entered in robots.txt or in the appropriate form in "Webmaster" - "Indexing Setup" - "Sitemaps".
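A minimal Sitemap file follows the standard sitemaps.org format; the URL and date below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://site.ru/articles/example.html</loc>
    <lastmod>2018-01-15</lastmod>
  </url>
</urlset>
```

Each `<url>` entry lists one page; `<lastmod>` helps the robot decide which pages to revisit first.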

Intermediate actions

What should you do before a web page is indexed by Yandex? This domestic search engine should consider your site the primary source. That is why, even before publishing an article, you should add its content to the "Original Texts" form. Otherwise, plagiarists will copy the material to their own resources, end up first in the database, and be recognized as the authors.

Google Database

For Google, the same recommendations as described above will apply, only the services will be different:

  • Google+ (instead of Twitter);
  • Google Chrome;
  • Google's webmaster tools - "Crawl" - "Fetch as Google" - the "Fetch" option - the "Submit to index" option;
  • Search within the resource from Google;
  • Google Analytics (instead of "Yandex.Metrica").

Prohibition

What is a ban on site indexing? You can apply it to a whole page or to a separate part of it (a link or a piece of text); in other words, there are global and local indexing bans. How is this implemented?

Consider banning indexing via robots.txt. Using the robots.txt file, you can exclude a single page or an entire section of a resource from indexing like this:

  1. User-agent: *
  2. Disallow: /kolobok.html
  3. Disallow: /foto/

The first item indicates that the instructions apply to all search engines, the second prohibits indexing of the file kolobok.html, and the third prevents the foto folder from being added to the database. If you want to exclude several pages or folders, list them all in robots.txt.
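You can check such rules locally with Python's standard library, which applies the same matching logic a well-behaved robot uses; this is a sketch based on the example rules above:

```python
from urllib.robotparser import RobotFileParser

# The rules from the robots.txt example above
rules = """User-agent: *
Disallow: /kolobok.html
Disallow: /foto/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# kolobok.html and everything under /foto/ are blocked for all robots
print(parser.can_fetch("*", "http://site.ru/kolobok.html"))  # False
print(parser.can_fetch("*", "http://site.ru/foto/cat.jpg"))  # False
print(parser.can_fetch("*", "http://site.ru/index.html"))    # True
```

This is handy for verifying that a new Disallow line does not accidentally block pages you want indexed.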

To prevent indexing of an individual page, you can use the robots meta tag. It differs from robots.txt in that it gives instructions to all search engines at once. This meta tag follows the general principles of the HTML format and should be placed in the page header, between the <head> and </head> tags. A ban entry, for example, can be written like this: <meta name="robots" content="noindex, nofollow">.
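A sketch of where such a robots meta tag sits in a page (the title text is a placeholder); with noindex, nofollow the page is not indexed and its links are not followed:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Forbid indexing of this page and following its links -->
  <meta name="robots" content="noindex, nofollow">
  <title>Page hidden from search engines</title>
</head>
<body>
  ...
</body>
</html>
```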

Ajax

And how does Yandex index Ajax sites? Today, Ajax technology is used by many website developers, and it certainly offers great possibilities: with it you can create fast, responsive interactive web pages.

However, a search robot "sees" a web page differently from a user and a browser. For example, a person sees a comfortable interface with dynamically loaded pages, while for the crawler the content of the same page may be empty, or consist only of the static HTML that remains when scripts are never executed.

You can use URLs with # when building Ajax sites, but a search robot does not use them: the part of the URL after # is usually discarded. This must be taken into account. Instead of a URL of the form http://site.ru/#example, the robot makes a request to the resource's main page at http://site.ru. As a result, the page's content may never reach the database and will not appear in search results.

To improve the indexing of Ajax sites, Yandex introduced changes to its search robot and to the rules for processing the URLs of such websites. Today, webmasters can indicate to Yandex that a page needs indexing by creating the appropriate scheme in the resource's structure. To do this:

  1. Replace the # symbol in page URLs with #!. The robot will then understand that it can request an HTML version of that page's content.
  2. Serve the HTML version of the page's content at the URL where #! is replaced with ?_escaped_fragment_=.
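The URL transformation in step 2 can be sketched in Python (a simplified illustration; the real scheme also URL-encodes the fragment):

```python
def escaped_fragment_url(url: str) -> str:
    """Convert an Ajax '#!' URL into the form a crawler requests."""
    if "#!" not in url:
        return url  # no hash-bang fragment: the URL is requested as-is
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={fragment}"

print(escaped_fragment_url("http://site.ru/#!example"))
# -> http://site.ru/?_escaped_fragment_=example
```

The server must respond to the ?_escaped_fragment_= address with the fully rendered HTML so the robot can index the same content a user sees.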
