Google's search results are drawn from the Google Index, a comprehensive collection of the pages on the web. Google gathers information about every page that is publicly accessible, and a search query returns the URLs from this index that match it. Strictly speaking, there are two indices: the organic index and the paid index. The latter matters only for search engine advertising; it is a collection of data about websites that are to be advertised in search engines. In the SEO field, "Google Index" refers to the organic index.
The Google Index contains copies of all pages that are accessible to the Googlebot. This robot periodically crawls pages on the web and collects information about them; it archives the pages and their associated keywords in the Google Index. From this collection, the search engine selects exactly those pages that are likely to be the most useful responses to the search query and displays them in the results. Because the Google Index is the database from which Google makes its selection, it is important that all documents, that is, all subpages of a website, are properly recorded in it. One important factor is accessibility for the Google crawler that builds the index, and several technical factors play into this.

The first barrier is robots.txt, which makes it possible to exclude specific directories of a site from search engine crawlers. Files in these directories are not included in the Google Index. The robots HTML meta tag fulfills a similar function; among other things, it allows a page to be excluded from the Google Index.

Furthermore, it is important to keep in mind that search engine crawlers can only evaluate text. A page that consists largely of images or Flash is not particularly accessible to them; even on a very graphics- or multimedia-oriented website, sufficient text is a must if the site is to be captured well in the Google Index.

A third factor is internal linking. The crawler follows links from page to page, so a page that can be reached through many links is captured better in the Google Index than a page that is barely linked at all. However, the crawler only follows roughly the first 100 links per page; all further links are ignored, so too much of a good thing also hurts indexing. Caution is also advised when using the nofollow attribute on links: marking internal links with it is useful only in very few exceptional cases.
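How a robots.txt exclusion behaves can be checked programmatically. The sketch below uses Python's standard-library `urllib.robotparser` on a made-up robots.txt for a hypothetical site (`example.com` and the `/internal/` directory are illustrative assumptions, not part of the article):

```python
import urllib.robotparser

# A sample robots.txt for a hypothetical site that wants to keep
# its /internal/ directory out of search engine indices.
robots_txt = """\
User-agent: *
Disallow: /internal/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# The homepage may be fetched, but files under /internal/ may not:
print(parser.can_fetch("Googlebot", "https://example.com/"))            # True
print(parser.can_fetch("Googlebot", "https://example.com/internal/x"))  # False
```

The meta tag route works per page rather than per directory: placing `<meta name="robots" content="noindex">` in a page's `<head>` asks crawlers to keep that single page out of the index.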
A simple way to test whether a site is properly indexed is to check how many of its documents are included in the Google Index. The seotoolsearch.com check determines the number of pages in the Google Index: just enter the URL in the text field, type the displayed character combination into the CAPTCHA field and click "submit". You will quickly find out how many documents of a domain are recorded in the Google Index.
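Google itself also offers a quick, built-in way to get a rough figure: searching with the `site:` operator lists the indexed pages of a domain (the domain below is a placeholder):

```
site:example.com
```

The number of results gives an approximate count of the documents Google has recorded for that domain.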