In general, Googlebot can follow links provided that they are contained within an <a> anchor tag, including in the following ways:
- Functions triggered on the click of an <a> element, but not in the href attribute
The important factor here is the <a> tag itself: it acts as a signpost telling Googlebot that a link exists, so the bot can determine how to follow it and crawl the content it points to.
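As a rough sketch (the URLs are placeholders), the patterns below show the difference between links Googlebot can reliably follow and ones it may not:

```html
<!-- Crawlable: a normal <a> element with a resolvable href -->
<a href="/products/widgets">Widgets</a>

<!-- Also crawlable: an <a> with an href plus a JavaScript click handler -->
<a href="/products/widgets" onclick="trackClick(event)">Widgets</a>

<!-- Not reliably crawlable: no <a> tag, or no usable href to act as the signpost -->
<span onclick="window.location='/products/widgets'">Widgets</span>
<a href="javascript:void(0)" onclick="loadWidgets()">Widgets</a>
```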
Once the link is followed, the target content should render quickly – ideally within 3 seconds – to make sure Googlebot sees it, crawls it correctly and indexes it with the appropriate SEO value.
Server-side rendering gives you good control over what Googlebot sees, as the rendering takes place on your side, rather than at the whim of the bot’s client-side capabilities.
This has benefits for SEO. By serving pre-rendered content to Googlebot, you can essentially optimise that content for SEO just as you would any static web page.
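To illustrate (a simplified sketch; the file names and content are placeholders), compare the HTML Googlebot receives in each case:

```html
<!-- Client-side rendered shell: Googlebot must run the JavaScript
     bundle before any real content exists on the page -->
<body>
  <div id="app"></div>
  <script src="/bundle.js"></script>
</body>

<!-- Server-side rendered: the same content arrives fully formed in
     the initial HTML, ready to be crawled like a static page -->
<body>
  <div id="app">
    <h1>Example Product</h1>
    <p>A description Googlebot can read without executing any JavaScript.</p>
  </div>
  <script src="/bundle.js"></script>
</body>
```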
If your page has multiple media elements, for example in a dynamically updated image gallery or slideshow, make sure the search robots can see the location of all the media files, and not just the first one that appears by default on the pre-rendered page.
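For example (a sketch with placeholder file paths), a gallery that keeps every image’s real URL in the markup is safer than one that hides later slides behind JavaScript:

```html
<!-- Safer: every slide has a resolvable src in the pre-rendered HTML -->
<div class="gallery">
  <img src="/images/slide-1.jpg" alt="First slide">
  <img src="/images/slide-2.jpg" alt="Second slide" loading="lazy">
  <img src="/images/slide-3.jpg" alt="Third slide" loading="lazy">
</div>

<!-- Risky: only the first image has a real src; the rest are swapped
     in from data-src attributes by JavaScript the bot may not run -->
<div class="gallery">
  <img src="/images/slide-1.jpg" alt="First slide">
  <img data-src="/images/slide-2.jpg" alt="Second slide">
  <img data-src="/images/slide-3.jpg" alt="Third slide">
</div>
```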
Paginated content can be problematic, but the best-practice approach here is relatively simple: make sure each page in the series has its own URL, rather than being loaded dynamically into the original index page.
By creating true pages with distinct URLs, you give Googlebot addresses that it can resolve, and you increase the number of pages on your website that can be crawled and indexed. This applies across all navigation on your site and has natural benefits for accessibility too.
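A minimal sketch of what that looks like in practice (the URL structure is a placeholder):

```html
<!-- Each paginated index page lives at its own distinct URL, linked
     with plain <a> elements that Googlebot can follow -->
<nav aria-label="Pagination">
  <a href="/blog/">1</a>
  <a href="/blog/page/2/">2</a>
  <a href="/blog/page/3/">3</a>
  <a href="/blog/page/2/">Next</a>
</nav>
```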
Be consistent about how you update metadata as the search bots move between pages when crawling your site. It’s good practice to ensure every page has at least the basic metadata, such as the following (combined into a single example after this list):
- <title> tags for the page title that appears in the browser tab or title bar
- <meta name="description"> tags to provide search bots with a summary
- <link rel="canonical"> tags for pages that may be duplicated elsewhere
- <link rel="alternate" hreflang="en-gb"> tags for pages in multiple languages
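Brought together in one <head>, and assuming placeholder example.com URLs, that basic set might look like this:

```html
<head>
  <title>Example Product | Example Store</title>
  <meta name="description" content="A concise summary of the page for search result snippets.">
  <link rel="canonical" href="https://www.example.com/products/example-product/">
  <link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/products/example-product/">
  <link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/products/example-product/">
</head>
```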
The end justifies the means here. Choose a solution that you are happy with, and comfortable using, as long as it leads to complete and correct metadata on every page.
Some examples include:
- Screaming Frog
- Fetch as Google and URL Inspection Tool
In the past, Google’s own ‘Fetch as Google’ tool allowed you to see your web pages exactly as Googlebot saw them. It has since been replaced by the URL Inspection Tool, accessible via Google Search Console. To test a page:
- Paste the URL you want to test into the search box at the top of Google Search Console
- Click ‘Test Live URL’ at the top-right of the result page
- Click ‘View Tested Page’ on the main ‘URL Inspection’ panel to see your page code and a screenshot of how Googlebot sees your page
Even if you use the third-party tools mentioned above, it’s good practice to run pages through the URL Inspection Tool for a quick and easy impression of how Google sees your site, and any content it cannot see at all.
Reload the page and see what’s changed – you might identify updates you can make to improve your SEO, or you might just find a resource that is not needed for the page to function correctly, which can then be removed to improve your page loading speed.
All SEO, from good old-fashioned meta tags to microformats and microdata markup, is about ensuring content is not only visible but optimised to be crawled and indexed by the search robots.