The SEO Starter Guide to Google Search Console: Chapter Four

Using the Fetch and Render Tool

What is Fetch and Render?

Search Console’s Fetch and Render will test how Google crawls and renders a page on your site. This can help you understand how Google sees a page, tell you about elements that might be hidden within the page, safely check hacked pages, or help debug crawl issues.

Fetch will return the page’s code (how Google sees the page), and Fetch and Render will return the page’s code along with two side-by-side images — one version that users see and one version that Google “sees”.

How to Fetch and Render


Step 1: Log in to Search Console and navigate to Crawl, then select Fetch as Google

Step 2: Add the relative path of the URL

The relative path of the URL is the portion that does not include the protocol (http, https) or domain name. For example, for a page at /fetch-this-url, the relative path would be “fetch-this-url”.

The domain name is already provided in the Fetch and Render tool, so you only need what comes after the trailing slash of the domain name. Fetch and Render is case-sensitive.
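As a rough illustration, the relative path can be derived from a full URL with Python's standard library (the example.com URL below is hypothetical):

```python
from urllib.parse import urlparse

def relative_path(url):
    """Return the part of a URL after the domain's trailing slash.

    The path is returned unchanged apart from the leading slash,
    since the Fetch and Render tool is case-sensitive.
    """
    return urlparse(url).path.lstrip("/")

# Hypothetical URL for illustration:
print(relative_path("https://www.example.com/fetch-this-url"))  # fetch-this-url
```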

[Screenshot: relative path of the URL in the Fetch as Google tool]

Step 3: (Optional) Select what type of Googlebot to perform the fetch as — Desktop or Mobile: Smartphone

[Screenshot: choosing Desktop or Mobile: Smartphone in the Fetch as Google tool]

Step 4: Select Fetch or Fetch and Render

  • Fetch: fetches the requested URL and displays the HTTP status
  • Fetch and Render: fetches the requested URL, displays the HTTP status, and renders the page for the specified platform (desktop or smartphone). This can help you understand any visual differences between how a user would see your page and how Google would see it.

Fetch Statuses

After you’ve made your fetch request, Google will use several different statuses to indicate the HTTP response, along with suggested actions to take based on each response.

  • Complete: If your status is Complete, then Google was able to contact your site and fetch the page and its referenced resources.
  • Partial: If your status is partial it means that Google was able to contact and fetch the site but the robots.txt file blocked a resource referenced by the page. Clicking on the status will show you exactly which resource was blocked and the severity of the blocked resource.
  • Redirected: Search Console will only check the exact URL requested – any redirects to another page will receive a Redirected status. Clicking on the status will show you where the page redirects to.
  • Not Found: Usually this status is from a 404 error, but can also occur when a site is hacked and Google does not want to index the page.
  • Not Authorized: This could be caused by a 403 error or other type of restricted access.
  • DNS Not Found: If your status is DNS not found, the domain name may be mistyped or the site may be experiencing downtime.
  • Unreachable robots.txt: Google cannot reach the host of the resource for robots.txt — the robots.txt file may need to be tested and updated.
  • Unreachable: This status is often due to a timeout error.
  • Temporarily Unreachable: This status is also often due to a timeout error or too many fetch requests.
  • Error: This status is uncommon and results from an unknown or unspecified error. If this status occurs multiple times, Google suggests posting to their help forum.
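The relationship between HTTP response codes and the statuses above can be sketched roughly (an illustrative approximation, not Google's exact logic; DNS and robots.txt failures happen before an HTTP code comes back, so they are omitted here):

```python
def fetch_status(http_code):
    """Map an HTTP response code to an approximate Fetch as Google
    status label. This is a simplified sketch for illustration,
    not Google's actual classification logic."""
    if 200 <= http_code < 300:
        return "Complete"          # page and resources reachable
    if 300 <= http_code < 400:
        return "Redirected"        # only the exact URL is checked
    if http_code == 404:
        return "Not Found"
    if http_code == 403:
        return "Not Authorized"    # restricted access
    return "Error"                 # unknown or unspecified error
```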

Submitting a URL to Index

If you’ve added new content or made any changes to your site, you can ask Google to recrawl the page by using the Fetch and Render tool.

Step 1: Log in to Search Console and Request a Fetch or Fetch and Render for the page

Step 2: If you have received a Complete, Partial, or Redirected status, a Request Indexing button will appear to the right of the status.

If the Request Indexing button does not appear, the fetch did not meet the status requirements above, meaning your Fetch status needs to be resolved first. Your fetch must also be less than 4 hours old in order to Request Indexing.
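The two eligibility rules above (an allowed status and a fetch less than 4 hours old) can be sketched as a simple check (a hypothetical helper for illustration, not part of any API):

```python
from datetime import datetime, timedelta

# Statuses that make the Request Indexing button appear,
# per the guide above.
ELIGIBLE_STATUSES = {"Complete", "Partial", "Redirected"}

def can_request_indexing(status, fetched_at, now=None):
    """Return True if a fetch is still eligible for Request Indexing:
    an eligible status and an age under 4 hours."""
    now = now or datetime.utcnow()
    return (status in ELIGIBLE_STATUSES
            and now - fetched_at < timedelta(hours=4))
```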

[Screenshot: Request Indexing button in Search Console]

Step 3: Click Request Indexing

Step 4: Select whether to Crawl only this URL or Crawl this URL and its direct links

  • Crawl only this URL submits only the selected URL to Google for re-crawling. You can submit up to 10 individual URLs per day.
  • Crawl this URL and its direct links submits the URL as well as all the other pages that URL links to directly for re-crawling. You can submit up to 2 of these site recrawl requests per day.
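A minimal sketch of tracking the daily quotas described above (the class and names are hypothetical, for illustration only):

```python
# Daily recrawl quotas per the guide: 10 single-URL requests,
# 2 URL-plus-direct-links requests.
QUOTAS = {"url_only": 10, "url_and_links": 2}

class RecrawlQuota:
    """Track how many recrawl requests of each type remain today
    (illustrative sketch, not a Search Console API)."""

    def __init__(self):
        self.used = {"url_only": 0, "url_and_links": 0}

    def request(self, kind):
        """Record a request; return False if the daily quota is spent."""
        if self.used[kind] >= QUOTAS[kind]:
            return False
        self.used[kind] += 1
        return True
```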

[Screenshot: Crawl only this URL vs. Crawl this URL and its direct links options]