Search Engine Spider Simulator

About Search Engine Spider Simulator

This is how search engine robots see your page

Search engines read the information on websites automatically, using programs called robots (also known as crawlers or spiders). The best known of these is Googlebot. These robots do not read websites the way people do - they can only process text, not videos, images or audio files. Search engines are also still developing their analysis of dynamically generated pages and of Flash, JavaScript and AJAX code. Static HTML code therefore remains the "bread and butter" for content assessment by the bots of Google, Bing, Yahoo and other search engines.
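For illustration, a simplified, hypothetical page (all names and content are invented): a robot can evaluate the title, the meta description, the headings and the visible text, but of the image it only reads the alt attribute, never the picture itself.

<!-- Hypothetical page: everything a robot can evaluate is plain text in the HTML -->
<html>
<head>
  <title>Hand-made leather shoes - Example Shop</title>
  <meta name="description" content="Hand-made leather shoes, free shipping within the EU.">
</head>
<body>
  <h1>Hand-made leather shoes</h1>
  <p>Our shoes are cut and sewn by hand in our own workshop.</p>
  <img src="shoes.jpg" alt="Brown leather shoe, side view"> <!-- only the alt text is readable -->
  <h2>Materials and care</h2>
  <p>We use vegetable-tanned leather and natural waxes.</p>
</body>
</html>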

Offering robots the right information

For the robots, the meta tags - above all Title and Description, and for some bots also the meta keywords - are particularly important. The information displayed in the browser window matters as well: robots evaluate the headings H1 to H4 when weighting results in the search engines, and the number of indexable links is also relevant. Last but not least, robots need enough material to judge which search queries a page is likely to match. Providing the robots with enough readable text is therefore essential for every page that should be found in search engines.

The robots view also helps you check other SEO recommendations, such as the rule that search engines should generally encounter the page content first and the navigation last. Whether and how well this rule is implemented can be seen in the source view. The robots view likewise shows whether essential information is accessible to search engines at all, or whether it is hidden in images or Flash; besides search engine friendliness, fixing this also improves the accessibility of the website. The readable-text display reveals whether the page offers enough legible text for search engine robots, and the heading display shows whether the levels H1 to H4 are used to structure the information sensibly - important not only for robots, but also as guidance for human visitors.

If robots are not supposed to index a page for search engines, you can achieve this in three ways (sketched after the following list):

With a corresponding entry in the robots.txt file, which can also allow or prohibit reading and indexing for specific robots,
With the HTML meta tag "robots",
By IP blocking in a .htaccess file.
Excluding search engine robots should, however, be done with care - otherwise parts of the site that should actually be findable in the search engines may unexpectedly no longer be captured by the robots.
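The following sketches illustrate the three options. The directory name, robot name and IP address are hypothetical, and the .htaccess example uses Apache 2.4 syntax.

# robots.txt - hypothetical example: block one directory for Googlebot, allow everything else
User-agent: Googlebot
Disallow: /internal/

User-agent: *
Disallow:

<!-- HTML meta tag "robots": do not index this page and do not follow its links -->
<meta name="robots" content="noindex, nofollow">

# .htaccess - block a single (hypothetical) IP address, Apache 2.4 syntax
<RequireAll>
  Require all granted
  Require not ip 192.0.2.10
</RequireAll>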

What search engines evaluate

The seotoolsearch.com search engine robots test shows you the information on each given website as search engines robots would see them: meta information, levels 1 to 4, indexable links, readable text, and the source code of the HTML document. This allows you to check how Robots see them for any website, and make sure that search engine robots on your own site also find the information that you want to find on Google.
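As a rough illustration of what such a check does - this is a minimal sketch, not the seotoolsearch.com implementation - the following Python example fetches a page and lists the elements described above, using only the standard library. The URL and user-agent string are placeholders.

from html.parser import HTMLParser
from urllib.request import Request, urlopen

VOID_TAGS = {"meta", "img", "br", "link", "input", "hr", "base", "source"}

class SpiderView(HTMLParser):
    """Collects roughly what a search engine robot can read from a page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}        # meta name -> content, e.g. "description"
        self.headings = []    # (tag, text) for h1..h4
        self.links = []       # href values of <a> tags
        self.text = []        # visible text outside script/style
        self._stack = []      # currently open tags

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name"):
            self.meta[attrs["name"].lower()] = attrs.get("content", "")
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        if tag not in VOID_TAGS:
            self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        data = data.strip()
        if not data or not self._stack:
            return
        tag = self._stack[-1]
        if tag == "title":
            self.title += data
        elif tag in ("h1", "h2", "h3", "h4"):
            self.headings.append((tag, data))
        elif tag not in ("script", "style"):
            self.text.append(data)

# Hypothetical target URL and user agent - replace with a real page to test.
req = Request("https://example.com/", headers={"User-Agent": "spider-view-sketch"})
html = urlopen(req).read().decode("utf-8", errors="replace")

view = SpiderView()
view.feed(html)
print("Title:", view.title)
print("Description:", view.meta.get("description", ""))
print("Headings:", view.headings)
print("Indexable links:", len(view.links))
print("Readable text:", " ".join(view.text)[:300])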