JS is a scripting language that enables web developers to embed code within a website.
It also allows tasks that are not possible with the traditional HTML markup language alone.
These are mostly interactive behaviours and dynamic elements, such as displaying hamburger menus, zooming in or out on an image, and animations that would be difficult to achieve using HTML and CSS.
JS is easy to learn, allows rapid prototyping and development, and offers a better user experience.
How does JS work?
There are two main ways to use JS to display content to end-users: server-side rendering and client-side rendering.
In either case, you should keep in mind that your content needs to be visible, crawlable and indexable by the search robots, which have not always been highly capable of executing JS code and understanding its output.
What does Googlebot see?
Search engines are good at crawling content served via server-side rendering. You’ll find it used on websites built with frameworks like Gatsby. You might even have used it without realising if you’ve had a website built in a content management system (CMS) like Magento or WordPress.
One drawback of server-side rendering is that components arrive as static HTML and are not interactive until their JS loads. This can be resolved through forms of rehydration, such as progressive rehydration or partial rehydration, which boot components up over time to become interactive.
Client-side rendering is more resource-intensive, so Googlebot, working flat out to index as much of the web as it can reach, may take several days to crawl a JS-heavy website. It may also miss important SEO content, such as page titles and meta tags, if they are not rendered server-side.
It’s not impossible to create crawlable websites in this way, and plenty of tools emulate what the search robot would see. However, server-side rendering is generally the ‘safer’ option.
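To make the contrast concrete, here is a hedged sketch of what the initial HTML response might look like under each approach (the file names and element ids are hypothetical):

```html
<!-- Server-side rendering: the content arrives in the initial HTML,
     so crawlers can read it without executing any JS. -->
<div id="product">
  <h1>Blue Widget</h1>
  <p>In stock and ships tomorrow.</p>
</div>

<!-- Client-side rendering: the initial HTML is an empty shell;
     the content only appears after app.js runs in the browser. -->
<div id="root"></div>
<script src="/app.js"></script>
```

A crawler that does not execute JS sees the full product details in the first case, and an empty `<div>` in the second.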
JS best practices for SEO
Don’t block JS from robots.txt
As we established above, when Googlebot encounters a page written using JS, it crawls, renders and then indexes the content (subject to any meta tags or robots.txt rules that prevent it from accessing the page).
Historically, search engines were unable to crawl JS files, and webmasters often stored them in directories blocked via robots.txt, a text file created by website owners to instruct search engines on what content to crawl.
Now that search engines can render JS, there is no longer any need to block it, and it’s important you don’t if you want the page to be indexed. You can check this by logging into Google Search Console and inspecting a URL.
If you have a lot of JS files across your site, Google may send a message informing you that Googlebot cannot access them. Conduct a site crawl and remove the blocking rules from robots.txt to fix the issue.
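For example, a robots.txt rule like the following (the /js/ directory is a hypothetical example) would prevent Googlebot from fetching your scripts, and should be removed or replaced with an explicit Allow:

```txt
# Blocks crawlers from all script files in /js/ – avoid this:
User-agent: *
Disallow: /js/

# Better: explicitly allow JS and CSS assets
User-agent: Googlebot
Allow: /*.js
Allow: /*.css
```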
Preserve site speed by deferring JS
For a browser to display a web page, it must render its content, whether HTML, CSS or JS. Rendering JS takes a little more time, as the browser must first request the script file and wait for it to download from the server before it can be executed and rendered.
Low site speed degrades the user experience and slows page crawling. If a page takes longer than five seconds to load, it might not be indexed by search engines.
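One common way to keep scripts from blocking the first render is the defer attribute, which downloads the script in parallel and executes it only after the HTML has been parsed (async is similar but executes as soon as the download finishes). A minimal sketch, with a hypothetical file name:

```html
<!-- Blocking: HTML parsing stops while main.js downloads and runs -->
<script src="/main.js"></script>

<!-- Deferred: downloads in parallel, runs after parsing completes -->
<script src="/main.js" defer></script>
```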
Only inline small JS in the <head>
Continuing the previous point, only small JS files should be inlined in the <head> of the web page. This is especially important when the <head> also includes other essential SEO elements such as canonical tags, hreflang tags and robots directives (index/follow).
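As a hedged sketch, a <head> along these lines keeps any inline JS tiny so the essential SEO tags around it are parsed quickly (the URLs are placeholders):

```html
<head>
  <link rel="canonical" href="https://example.com/page" />
  <link rel="alternate" hreflang="en-gb" href="https://example.com/page" />
  <meta name="robots" content="index, follow" />
  <!-- Keep inline scripts here tiny; larger files belong in
       deferred external scripts or at the end of <body>. -->
  <script>window.dataLayer = window.dataLayer || [];</script>
</head>
```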
Rethink JS impact on content rendering
To clearly explain the impact of JS on content rendering, we must first understand the difference between static and dynamic content, and between a crawled and an indexed page.
Static vs. dynamic content
In the context of a web page, static content includes any text, images and other objects that remain unchanged once they are loaded. Most parts of ordinary pages are ‘static’ in that, once the HTML code has been rendered and displayed, the page remains the same until the user navigates elsewhere via a hyperlink or their browser’s ‘back’ button. Dynamic content, by contrast, changes after the page has loaded, typically in response to user actions or fresh data, and JS is the usual mechanism for delivering it.
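As an illustration, the sketch below contrasts a static paragraph with one that JS rewrites after load (the element ids are hypothetical); a crawler only sees the rewritten text if it executes the script:

```html
<p id="static">This text is in the HTML and never changes.</p>
<p id="dynamic">Loading…</p>
<script>
  // Dynamic content: the visible text exists only after this runs.
  document.getElementById('dynamic').textContent =
    'Price updated at ' + new Date().toLocaleTimeString();
</script>
```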
Crawled vs. indexed
The difference between a page being crawled vs. being indexed is quite subtle:
- Crawled means Googlebot has ‘seen’ the page and taken a snapshot of its content.
- Indexed means that content has been analysed and included in the search results.
For the best chance of appearing high in the search results, content should be easy to crawl and easy to index. JS makes content more opaque to crawlers, so if you decide to use it, test thoroughly to ensure it is as lightweight and fast to load as possible.
Googlebot’s renderer runs on the Chromium engine, the open-source software that powers web browsers including Google Chrome and (since 2020) Microsoft Edge.
Viewing your website in a Chromium-based browser is a good way to get an instant preview of how your content might be seen by Googlebot — there are also plenty of tools that can give you a more exact impression of how Googlebot will render your page.
DOM stands for Document Object Model: a platform- and language-neutral interface that represents the page in the browser as a tree of objects which scripts can read and modify.
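As a brief sketch of what that tree looks like in practice (the id and menu items are hypothetical), scripts can both read the structure the HTML produced and change it after load:

```html
<ul id="menu">
  <li>Home</li>
  <li>About</li>
</ul>
<script>
  // The DOM exposes the markup above as a tree of objects:
  const menu = document.getElementById('menu');
  console.log(menu.children.length); // 2

  // …which scripts can modify after the page has loaded:
  const item = document.createElement('li');
  item.textContent = 'Contact';
  menu.appendChild(item);
</script>
```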
Also, because crawling JS is a two-step process, there can be issues when the content Googlebot first crawls does not match the final output, for example when JS rewrites the original HTML. You can disable JS to see which content depends on it, or check it using Google’s Mobile-Friendly Test.
If important elements exist only in the JS-generated DOM, switch to creating them in static HTML to improve crawlability; if crawlability suffers, fewer pages will be discovered.
Ensure content is accessible
As well as being discoverable, content should be accessible. Make sure it is rendered correctly by using one of the many ‘view as Googlebot’ tools available online.
Good, accessible content is a strong basis for SEO efforts in general, so adhere to SEO best practices: quality content, structured well, and not duplicated elsewhere online (you can and should use rel="canonical" tags in the page header to indicate that a page is the ‘master copy’ and any others found online are duplicates).
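For example, a canonical tag in the page <head> (the URL is a placeholder) tells search engines which version of a duplicated page to treat as the master copy:

```html
<link rel="canonical" href="https://example.com/master-copy" />
```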
Avoid JS redirects at all costs
Another element that can slow down site speed and degrade the user experience is the use of JS redirects. Developers commonly use them because Googlebot can understand them and treat them as standard redirects.
The issue is that, as previously mentioned, JS is processed in the second round of crawling, meaning JS redirects may take days or weeks to be crawled and indexed, and can sometimes fail entirely, which may negatively impact the site’s indexing.
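Because search engines see a server-side 301 on the first fetch but a window.location redirect only after rendering, it is usually safer to express redirects as server rules. A minimal sketch in Node-style JS, with a hypothetical URL map that a server handler could consult instead of shipping a client-side redirect:

```javascript
// Hypothetical redirect map: old path -> new path
const redirects = {
  '/old-page': '/new-page',
  '/sale-2023': '/sale',
};

// Resolve a request path to a response shape: a 301 with a
// Location value when the path has moved, a plain 200 otherwise.
function redirectFor(path) {
  const target = redirects[path];
  return target
    ? { status: 301, location: target }
    : { status: 200 };
}
```

A server framework would consume this by sending the 301 and Location header directly, so the browser (and Googlebot) never has to execute `window.location.href = '...'` in the page.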
Beware of unclear internal links
Search robots discover new pages by following hyperlinks from pages they have previously found, so it’s essential to make your navigation visible by using <a> anchor tags for your hyperlinks, with recognisable URL values in the href attribute.
This is one of the most important and basic things to implement on a JS website. Without clearly visible internal links, you risk publishing pages that are impossible for the search bots to find, which in turn means they will never be properly crawled or indexed, or appear in search results.
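For instance, crawlers reliably follow a plain anchor with a real URL, but navigation handled purely in JS click handlers may be invisible to them (the paths and router call are hypothetical):

```html
<!-- Crawlable: a real URL in an <a> tag -->
<a href="/products/blue-widget">Blue Widget</a>

<!-- Risky: no href, navigation only happens if JS runs -->
<span onclick="router.go('/products/blue-widget')">Blue Widget</span>
```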
Test pages independently
Testing content in the browser
A quick way to test page content is to right-click on any page element and choose ‘Inspect’. This will bring up the Developer Tools panel and should go directly to that element in the HTML code (you can also launch Developer Tools by pressing F12 in most modern browsers).
If you can’t see the information, it’s possible the element is being displayed using client-side rendering, and that might mean the bots cannot see any associated SEO content such as ‘alt’ and ‘title’ attributes.
Google Search Console
Google Search Console provides a more powerful way to interpret your page content as Googlebot would see it — the URL Inspection Tool. You can start this process by just pasting the page URL into the search box at the top of Search Console.
This is a good way to get a snapshot of your page as it appears on mobile devices, which should be a priority in any present-day SEO campaign.
If content that renders correctly on desktop devices is not visible to mobile users, consider making the necessary changes to make it accessible across all platforms, operating systems, browsers and screen sizes using responsive web design techniques.
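Responsive design typically starts with the viewport meta tag plus CSS media queries; a minimal sketch (the class name and breakpoint are placeholders):

```html
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
  /* Hide the secondary column on narrow screens */
  @media (max-width: 600px) {
    .sidebar { display: none; }
  }
</style>
```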
Other testing tools
Finally, here’s a list of some more Google testing tools and third-party search robot emulators to help you render your content as the bots would see it, and diagnose any problems that need to be put right for improved SEO.
Google Mobile-Friendly Test
Google’s Mobile-Friendly Test tool will give you quick results and score your page for accessibility on mobile devices — a fast way to identify any fundamental design flaws.
Google PageSpeed Insights
PageSpeed Insights is another tool that can help you accelerate your page loading times, which is crucial for any pages that take more than 3-4 seconds to load.
There are a huge number of third-party tools to emulate search robot rendering, compare raw code against the rendered output, and test small tweaks directly in the browser.
- BuiltWith: A free tool to identify what framework a website is built on.
- DeepCrawl: Crawls an entire website, ideal for mass testing of sitewide rendering.
- Diffchecker: Looks for differences between original page code and rendered output.
Ultimately, JS websites are not a problem for SEO in the modern era — they just need a little intelligent design and forward planning to keep your content fast, transparent, and easy to navigate.