JavaScript (JS), initially named LiveScript, has been around for decades. It was created by Brendan Eich in 1995, reportedly in just ten days.

JS is a scripting language that enables web developers to insert code within a website.

It was originally a method by which HTML pages could interact with web applets written using the programming language Java. Despite the names, Java and JavaScript are not the same thing, and when working on web pages, you’ll use JavaScript.

The script also allows performing tasks that are not possible in the traditional HTML markup language.

These are mostly interactive behaviours and dynamic elements, such as displaying hamburger menus, zooming in or out on an image, and animations that would be difficult to achieve using HTML and CSS.

JS is easy to learn, allows rapid prototyping and development, and offers a better user experience.

In this JavaScript SEO guide, we’ll look at the scripting language, its best practices, common errors, and implications in SEO.

How does JS work?

There are two ways to use JS to display content to end-users:

  • Server-side rendering, where pages built on JavaScript frameworks are rendered on the server before the resulting DOM is sent to the end-user.
  • Client-side scripting, which runs JavaScript on the user’s device to dynamically affect the content they see.

In either case, you should keep in mind that your content needs to be visible, crawlable and indexable by the search robots, which have not always been highly capable of executing JS code and understanding its output.

What does Googlebot see?

When Googlebot encounters a page written using JS, it crawls the HTML document, renders the JavaScript through its Web Rendering Service (WRS), and then indexes the rendered DOM.

Some websites use JavaScript frameworks to render the content, like React, Vue.js, Next.js, and others. These can often cause problems for search engine crawlers if the rendered content is not made accessible to them.

Server-side rendering

Server-side rendering is a better option for SEO. It gives you direct control over how your JavaScript is interpreted by search robots. Content is pre-rendered by the server so that the client receives complete HTML code, plus any external information like CSS stylesheets.
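
To make this concrete, here is a minimal sketch of server-side rendering, assuming a Node.js server using the Express framework (neither of which this guide prescribes); the fetchProductFromDatabase helper is hypothetical. The point is simply that the response already contains the finished HTML, so crawlers don’t need to execute any JavaScript to see the content.

    // Minimal server-side rendering sketch (Express assumed for illustration).
    const express = require('express');
    const app = express();

    app.get('/product/:id', async (req, res) => {
      // fetchProductFromDatabase is a hypothetical data-access helper.
      const product = await fetchProductFromDatabase(req.params.id);

      // The complete HTML is built on the server and sent as one response,
      // so the crawler receives the content without running any JS.
      res.send(`<!DOCTYPE html>
        <html>
          <head><title>${product.name}</title></head>
          <body>
            <h1>${product.name}</h1>
            <p>${product.description}</p>
          </body>
        </html>`);
    });

    app.listen(3000);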

Search engines are good at crawling content served in this way. You’ll find server-side rendering used on websites using frameworks like Gatsby. You might even have used it without realising if you’ve had a website built in a content management system (CMS) like Magento or WordPress.

Hydration

Some components on your JavaScript website may not lend themselves to server-side rendering. Depending on what content is in these components, this may damage organic rankings if Google cannot see them.

This can be resolved through forms of rehydration, such as progressive or partial rehydration, which boot these components up over time so they become interactive.
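
As a rough sketch of what hydration looks like in practice (React 18 is used purely as an example; the guide doesn’t tie you to any framework, and App is a hypothetical component), the server-rendered markup already sits inside #root, and the client-side script attaches interactivity to it rather than rebuilding the page:

    // Hydration sketch (React 18 shown as an assumed example).
    import { createElement } from 'react';
    import { hydrateRoot } from 'react-dom/client';
    import App from './App'; // hypothetical application component

    // The HTML inside #root was rendered on the server; hydrateRoot attaches
    // event listeners to that existing markup instead of re-rendering it.
    hydrateRoot(document.getElementById('root'), createElement(App));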

Client-side rendering

Client-side rendering is more resource-intensive, so Googlebot, working flat out to index as much of the web as it can reach, may take several days to crawl a JS-heavy website. It may also miss important SEO content, such as page titles and meta tags, if they are not rendered server-side.
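
For contrast, here is a rough sketch of client-side rendering: the server sends a nearly empty HTML shell, and this script fills in the title and content after load. The /api/page endpoint and its response fields are hypothetical; until the script runs, there is nothing meaningful for a crawler to index.

    // Client-side rendering sketch: content only exists after this runs.
    document.addEventListener('DOMContentLoaded', async () => {
      // Hypothetical endpoint returning the page's title, heading and body text.
      const response = await fetch('/api/page?slug=about-us');
      const data = await response.json();

      // The title and on-page content are created in the browser, so a crawler
      // that doesn't execute JS (or gives up early) will miss them.
      document.title = data.title;
      document.querySelector('#app').innerHTML = `
        <h1>${data.heading}</h1>
        <p>${data.body}</p>
      `;
    });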

It’s not impossible to create crawlable websites in this way, and plenty of tools emulate what the search robot would see. However, server-side rendering is generally the ‘safer’ option.

JS best practices for SEO

When deploying a website using JavaScript, it’s important to keep aspects of SEO firmly in mind. These range from technical considerations, such as ensuring the page renders within 3-4 seconds without errors, to content-based SEO issues like meta tags and navigation links. Here are some JS best practices for SEO.

Don’t block JS from robots.txt

As we established above, when Googlebot encounters a page written using JS, it crawls, renders and then indexes the content (subject to any meta tags or robots.txt rules that prevent it from accessing the page).

Historically, search engines were unable to crawl JS files. They were often stored in directories that were blocked in robots.txt, a text file created by website owners to instruct search engines on which content to crawl.

Search engines have since evolved, and a few years ago, Google announced that it could crawl JavaScript and CSS.

This meant webmasters no longer needed to block these files in robots.txt. It’s important you don’t block JS if you want the page to be indexed by search engines. You can check this by logging into Google Search Console and inspecting a URL.

If you have a lot of blocked JS files across your site, Google may send a message informing you that Googlebot cannot access them. Conduct a site crawl and update your robots.txt to unblock access and fix the issue.
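
As a simple illustration (the directory names are made up), a robots.txt rule like the first block below would stop Googlebot from fetching the scripts it needs to render your pages; removing the Disallow rules, or explicitly allowing the directories, restores access:

    # Problematic: blocks crawlers from the JavaScript needed to render the page
    User-agent: *
    Disallow: /js/
    Disallow: /assets/scripts/

    # Better: remove the Disallow rules (crawling is allowed by default),
    # or explicitly allow the script directories if broader rules block them
    User-agent: *
    Allow: /js/
    Allow: /assets/scripts/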

Preserve site speed by deferring JS

For a browser to be able to display a web page, it must render its content, whether HTML, CSS, or JS. This means that rendering JS does take a little more time, as the browser must first request the script file and wait until it is downloaded from the server before it is executed and rendered.

JavaScript loads in order of appearance in the document. To increase your site speed, place it lower in the HTML document so browsers can load all the content from the HTML first before downloading and executing the JavaScript.
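
In practice there are two common patterns, sketched below with a placeholder file name: place the script tag at the end of the body, or keep it in the <head> with the defer attribute, which downloads the file in parallel but only executes it once the HTML has been parsed.

    <!-- Option 1: script placed at the end of the body, after the content -->
    <body>
      <h1>Page content is parsed and displayed first</h1>
      <script src="/js/main.js"></script>
    </body>

    <!-- Option 2: script kept in the <head> but deferred until parsing finishes -->
    <head>
      <script src="/js/main.js" defer></script>
    </head>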

Low site speed degrades the user experience and slows page crawling. If the page takes longer than five seconds to load, it might not be indexed by search engines.

Assess whether your JavaScript files are essential to the webpage and whether they are worth making your users wait longer for the page to load.

Only inline small JS in the <head>

Continuing from the previous point, only small JS files should be inlined in the <head> of the web page. This is especially important when the <head> also contains essential SEO elements such as canonical tags, hreflang tags, and robots directives (index, follow).

To avoid issues, place the most crucial SEO attributes at the top and before any JavaScript files. To check whether you have any inlined <script> in the <head>, click “view page source”. Place <link> tags near the top of the <head> element, so they are rendered before the JavaScript files. For optimal performance, only inline small scripts.
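
A sketch of a sensibly ordered <head> (all URLs are placeholders): the SEO-critical tags come first, stylesheet <link> tags follow, and only a small inline script sits at the end.

    <head>
      <!-- SEO-critical tags first -->
      <title>Example product page</title>
      <meta name="robots" content="index, follow">
      <link rel="canonical" href="https://www.example.com/product">
      <link rel="alternate" hreflang="fr" href="https://www.example.com/fr/product">

      <!-- Stylesheets next, so they render before any scripts -->
      <link rel="stylesheet" href="/css/main.css">

      <!-- Only a small inline script, and only if it is genuinely needed this early -->
      <script>window.dataLayer = window.dataLayer || [];</script>
    </head>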

Rethink JS impact on content rendering

To clearly explain the impact of JS on content rendering, we must first understand the difference between static and dynamic content and between a crawled and indexed page.

Static vs. dynamic content

In the context of a web page, static content includes any text, images and other objects that remain unchanged once they are loaded. Most parts of ordinary pages are ‘static’ in that, once the HTML code has been rendered and displayed, the page remains the same until the user navigates elsewhere via a hyperlink or their browser’s ‘back’ button.

Dynamic content is any element that can be created or changed after the page has already loaded. That could be an image slideshow, video or animation. JavaScript is a common way to achieve this. Search robots, including Googlebot, will normally wait three to four seconds for any JS to execute on page load before taking a snapshot of the resulting HTML to index.
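
As a trivial sketch of dynamic content, the slideshow below changes the page after it has loaded; the “current” slide only exists once the script has run, which is exactly the kind of output a robot has to wait for.

    // Dynamic content sketch: the visible image changes after page load.
    const slides = ['/img/slide-1.jpg', '/img/slide-2.jpg', '/img/slide-3.jpg'];
    let index = 0;

    setInterval(() => {
      index = (index + 1) % slides.length;
      document.querySelector('#slideshow img').src = slides[index];
    }, 3000);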

Crawled vs. indexed

The difference between a page being crawled vs. being indexed is quite subtle:

  • Crawled means Googlebot has ‘seen’ the page and taken a snapshot of its content.
  • Indexed means that content has been analysed and included in the search results.

For the best chance to appear high in the search results, content should be easy to crawl and easy to index. JS makes content more opaque, so if you decide to use it, you should test thoroughly to ensure it is as lightweight and fast to load as possible.

What version of JavaScript does Googlebot use?

Googlebot is regularly updated to ensure it uses a recent version of JavaScript to render content, so you can be confident that it will have the basic capabilities needed to load your page.

It runs on the Chromium engine, the open-source software that powers web browsers including Google Chrome and (since 2020) Microsoft Edge.

Viewing your website in a Chromium-based browser is a good way to get an instant preview of how your content might be seen by Googlebot. There are also plenty of tools that can give you a more exact impression of how Googlebot will render your page.

If your website relies on JavaScript for content rendering, Google must render it using its Web Rendering Service (WRS) before anything meaningful can be indexed.

DOM stands for Document Object Model: a platform- and language-neutral interface that represents the structure and current state of the page in the browser.

Googlebot initially crawls HTML content, and only then does it crawl JavaScript. Depending on the crawl budget that Google has allocated to your website, the process can take several days or even weeks.

Also, because it is a two-step crawling process, issues can arise when the crawled content does not match the final page because JS rewrites the traditional HTML content. You can disable JS to see whether any of the content is affected by it, or check it using Google’s Mobile-Friendly Test.

If you find that important elements only exist in the DOM after JS has run, switch to creating them in HTML to improve crawlability; if crawlability suffers, fewer pages will be discovered.
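
As a small sketch of that switch, the first snippet below creates a heading only after JavaScript executes, while the second puts the same heading directly into the HTML so it is present on the very first crawl pass:

    <!-- Before: the heading only exists once the script has run -->
    <div id="hero"></div>
    <script>
      const heading = document.createElement('h1');
      heading.textContent = 'Award-winning widgets';
      document.getElementById('hero').appendChild(heading);
    </script>

    <!-- After: the same heading is part of the initial HTML response -->
    <div id="hero">
      <h1>Award-winning widgets</h1>
    </div>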

Ensure content is accessible

As well as being discoverable, content should be accessible. Make sure it is rendered correctly by using one of the many ‘view as Googlebot’ tools available online.

Good accessible content is a strong basis for SEO efforts in general, so adhere to SEO best practices: quality content, structured well, and not duplicated elsewhere online (you can and should use rel="canonical" tags in the page header to indicate that a page is the ‘master copy’ and any others found online are duplicates).
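
For reference, a canonical tag is a single line in the page <head> (the URL is a placeholder):

    <link rel="canonical" href="https://www.example.com/blog/javascript-seo-guide">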

Avoid JS redirects at all costs

Another element that can slow down site speed and disrupt the user experience is the use of JS redirects. They are commonly used by developers because Googlebot can understand them and treat them as standard redirects.

The issue is that, as previously mentioned, JS is crawled in the second round of website crawling, meaning JS redirects may take days or weeks to be crawled and indexed and can sometimes even fail, which may negatively impact the site’s indexing.

Google suggests not using JavaScript redirects but instead using server-side or meta-refresh redirects where possible. 
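
To make the difference concrete, here is a sketch of both approaches (the Express handler is just one illustrative way to issue a server-side redirect, and the URLs are placeholders): the JS redirect only happens after the script is downloaded and executed, while the server-side redirect returns an HTTP 301 before any content is sent.

    // JavaScript redirect: only fires once the script has been downloaded,
    // parsed and executed in the second rendering pass.
    window.location.replace('https://www.example.com/new-page');

    // Server-side redirect (Express shown as an assumed example): the 301
    // status is returned immediately, so crawlers see it on the first pass.
    app.get('/old-page', (req, res) => {
      res.redirect(301, '/new-page');
    });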

Beware of unclear internal links

Search robots discover new pages by following hyperlinks from pages they have previously found, so it’s essential to make your navigation visible using <a> anchor tags for your hyperlinks, and recognisable URL values.

It’s OK to use JavaScript to generate the URL, for example, by executing a script once the page loads using a JS onload event. Make sure this is contained within an <a> element on the page, so it’s apparent to the robots that it is a hyperlink.
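
A quick sketch of the difference (URLs are placeholders): the first two links are crawlable because they live in <a> elements with resolvable href values, while the third relies purely on a click handler and may never be discovered.

    <!-- Crawlable: a plain anchor with a recognisable URL -->
    <a href="/pricing">Pricing</a>

    <!-- Also fine: the href is adjusted by JS on load, but it still lives in an <a> element -->
    <a id="account-link" href="/account">My account</a>
    <script>
      window.addEventListener('load', () => {
        document.getElementById('account-link').href = '/account?ref=nav';
      });
    </script>

    <!-- Risky: no <a> element and no href, so robots may not treat it as a link -->
    <span onclick="window.location = '/pricing'">Pricing</span>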

This is one of the most important and basic things to implement on a JS website. Without clearly visible internal links, you risk publishing pages that are impossible for the search bots to find, which in turn means they will never be properly crawled or indexed, or appear in search results.

Test pages independently

JavaScript can work with server-side session data to store information throughout a user’s visit until they close the browser tab or navigate away from the website. However, search robots typically load each page completely afresh, with no session data stored.

It’s worth remembering this and testing each page independently to ensure there are no JavaScript errors if the page is loaded without a session identifier, cached data, or browser cookies.
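
As a defensive sketch of the kind of behaviour worth testing for (the two greeting helpers are hypothetical), the script below falls back gracefully when no session data is present, which is how a crawler will always see the page:

    // Crawlers load the page with no cookies or stored session, so the
    // script must not fail or hide content in that case.
    const sessionId = sessionStorage.getItem('sessionId');

    if (sessionId) {
      showPersonalisedGreeting(sessionId); // hypothetical helper
    } else {
      showDefaultGreeting(); // hypothetical helper: the state a crawler sees
    }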

Testing content in the browser

A quick way to test page content is to right-click on any page element and choose ‘Inspect’. This will bring up the Developer Tools panel and should go directly to that element in the HTML code (you can also launch Developer Tools by pressing F12 in most modern browsers).

If you can’t see the information, it’s possible the element is being displayed using client-side rendering, and that might mean the bots cannot see any associated SEO content such as ‘alt’ and ‘title’ attributes.

You can also navigate to the Sources tab in Developer Tools, which will list all the resources required to load the page. Here you can identify any external JavaScripts that were executed, including any you were not already aware were being used.

Google Search Console

Google Search Console provides a more powerful way to interpret your page content as Googlebot would see it: the URL Inspection Tool. You can start this process by simply pasting the page URL into the search box at the top of Search Console.

This is a good way to get a snapshot of your page as it appears on mobile devices, which should be a priority in any present-day SEO campaign.

If content that renders correctly on desktop devices is not visible to mobile users, consider making the necessary changes to make it accessible across all platforms, operating systems, browsers and screen sizes using responsive web design techniques.

Other testing tools

Finally, here’s a list of some more Google testing tools and third-party search robot emulators to help you render your content as the bots would see it, and diagnose any problems that need to be put right for improved SEO.

Google Mobile-Friendly Test

Google’s Mobile-Friendly Test tool will give you quick results and score your page for accessibility on mobile devices, a fast way to identify any fundamental design flaws.

Google PageSpeed Insights

PageSpeed Insights is another tool that can help you to accelerate your page loading times, which is crucial for any pages that take longer than 3-4 seconds to load.

Third-party tools

There are a huge number of third-party tools to emulate search robot rendering, compare raw code against the rendered output, and test small tweaks directly in the browser.

Examples include:

  • BuiltWith: A free tool to identify what framework a website is built on.
  • DeepCrawl: Crawls an entire website, ideal for mass testing of sitewide rendering.
  • Diffchecker: Looks for differences between original page code and rendered output.

By adopting best practices for SEO and JavaScript websites from the outset, you can reduce your reliance on testing tools and should find fewer issues to resolve when you use them.

Ultimately, JS websites are not a problem for SEO in the modern era; they just need a little intelligent design and forward planning to keep your content fast, transparent, and easy to navigate.