Excel & SEO – Removing Folders in URL Paths (Just leaves page URN)

The easiest thing to do is use wildcards with Find and Replace (Excel’s Find and Replace supports the * and ? wildcards rather than full regex)

Remove text before a forward slash in Excel:

To remove text or URL paths after a slash, use Find and Replace: search for /* and replace with nothing

To remove the domain and the slash after it, use Find and Replace – search for example.com/ and replace with “nothing”.

You can then concatenate the domain back in place at the end, if you need to.
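If you would rather do it in a single formula instead of Find and Replace, this trick pulls out everything after the last slash – a sketch, assuming the URL is in cell A2:

=TRIM(RIGHT(SUBSTITUTE(A2,"/",REPT(" ",100)),100))

It swaps every slash for 100 spaces, takes the last 100 characters and trims the spaces off – leaving just the final part of the path (the product-name / URN).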

Also had this reply from Reddit

*I’ve never really got my head around URLs, URIs and URNs. I think the product-name that I’m after, is a URN.

https://myshop.com/category-folder/product-name


Extracting URLs from Page Code in Excel

Removing Code from the End of the URLs

If the links don’t have .html at the end of them, then first concatenate the URLs to add * at the end of them all.

If they do have .HTML, then find and replace .HTML with .html*

Next – use Text to Columns, use * as the “delimiter”

Removing code from the Start of the URLs

If the links are relative:

Use text to columns again and use / as the delimiter

If the links contain the full domain name, then concatenate * at the start of the domain name and use Text to Columns again to remove the code from the start of the URLs

Alternative Method

Here is another method for doing it – https://businessdaduk.com/data/data-analysis-with-excel/getting-domains-from-a-list-of-urls/

Ensure all the domains have http:// prefix

Use the formula:

=IF(ISERROR(FIND("//www.",A2)), MID(A2,FIND(":",A2,4)+3,FIND("/",A2,9)-FIND(":",A2,4)-3), MID(A2,FIND(":",A2,4)+7,FIND("/",A2,9)-FIND(":",A2,4)-7))
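For example (assuming the URL is in A2 and has a path or trailing slash after the domain):

A2: http://www.example.com/category/product-name
Result: example.com

Note that the formula relies on finding a “/” after the domain, so a bare URL like http://www.example.com with nothing after the domain will return an error.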

Javascript Fundamentals for Beginners [2023]

Notes taken from The Complete JavaScript Course 2023 on Udemy and from across the web.

Variables in Javascript

Variables are used to store values.

Variables can be thought of like a box in the ‘real world’.

A box can hold items and can be labelled so we know what is in it.

If we create a variable and label it “firstName”

let firstName = "Jonas"

Now if we “call” or reference “firstName”, we will get the value “Jonas”

This can be extremely useful. For example, if you were to use “firstName” in hundreds of places on a website, you can just change the variable value, and in theory, all the ‘firstNames’ will update with one change in the code.

Naming Conventions for Variables

Camel case

Camel case involves having no spaces between words, leaving the first letter lowercase and capitalising the first letter of each subsequent word.

For example – firstNamePerson is written in camel case.

I guess the capital letter looks a bit like a camel’s hump.

This isn’t a hard rule, but general practice.

Hard rules for Variable Names

Variable names can’t start with a number – e.g. let 3years = 3; would result in an error because the name starts with the number “3”

Symbols are generally a bad idea in variable names. You can only use letters, numbers, underscores or the dollar sign.

You can’t use reserved JS keywords – e.g. “new” and “function” are reserved keywords. (“name” isn’t technically reserved, but it’s best avoided because it clashes with an existing global property in browsers.)

Don’t use all uppercase letters for variable names either, unless it is a “constant” that never changes, such as the value of PI

Let and Const

You declare variables with “let”; you can then update (reassign) the variable later without using “let” again.

“const” is used to declare variables that don’t change, e.g. a date of birth
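A quick sketch:

let age = 30;
age = 31; // fine – variables declared with let can be reassigned (no "let" needed)

const birthYear = 1991;
// birthYear = 1990; // would throw a TypeError – constants can't be reassigned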

Strings and Template Literals

Good tutorial here.

Template literals make it easier to build strings.

Template Literals allow you to create:

  • Multiline strings – strings (text) that spans several lines
  • String Formatting – you can substitute the values of variables into parts of the string – this is called “String Interpolation” (example below)
  • HTML Escaping – making strings safe to include in the HTML of a webpage

let str = `Template literal here`;
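Template literals also handle string interpolation – a quick sketch, using a firstName variable like the one from earlier:

const firstName = "Jonas";
// ${...} drops the value of a variable (or any expression) straight into the string
const greeting = `Hello ${firstName}, you have ${2 + 3} new messages`;
console.log(greeting); // "Hello Jonas, you have 5 new messages"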

Multiline strings

In older versions of Javascript, to create a new line, or a multi-line string, you had to include the newline code

\n

Template literals allow you to define multiline strings more easily – you just add a new line in the string wherever you want:

let p =
`This text
can
span multiple lines`;

Type Conversion and Coercion

Type coercion, type conversion, typecasting, and type juggling: all different names that refer to the process of converting one data type into another. This process is present in almost every programming language and is an important concept in computer science.

source

What is Type Conversion?

Type conversion can be:

implicit – done automatically by JavaScript

explicit – done manually by the developer in the code

What is Type Coercion?

Explicit coercion happens when we want to coerce the value type to a specific type. Most of the time, explicit coercion in JavaScript happens using built-in functions such as String(), Number(), and Boolean().

When we try to create operations in JavaScript using different value types, JavaScript coerces the value types for us implicitly.

This is one of the reasons why developers tend to avoid implicit coercion in JavaScript. Most of the time we get unexpected results from the operation if we don’t know exactly how JavaScript coerces the value types.

When coercion is done automatically, it can cause some weird outcomes and issues
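A quick sketch of the kind of implicit coercion that catches people out:

// + concatenates when either operand is a string
console.log('5' + 3);   // "53" – the number 3 is coerced to "3"
// most other operators coerce strings to numbers instead
console.log('5' - 3);   // 2
console.log('5' * '2'); // 10
// loose comparison also coerces
console.log(1 == '1');  // true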


Truthy and Falsy Values in Javascript (Boolean thing)

Values are considered either truthy (evaluate to true) or falsy (evaluate to false) depending on how they are evaluated in a Boolean context.

In JS there are 6 values that are considered “falsy”:

  • The primitive value undefined
  • The primitive value null
  • The empty string ('' or "")
  • The global property NaN
  • A number or BigInt representing zero (0, -0, 0.0, -0.0, 0n)
  • The keyword false

All other values are considered “truthy”

When a value is truthy in JavaScript, it does not mean that the value is equal to true – it means that the value coerces to true when evaluated in a boolean context.

https://www.hackinbits.com/articles/js/truthy-and-falsy-values-in-javascript
// Minimal definition of the truthyOrFalsy helper used below (assumed – it simply
// logs whether a value evaluates as truthy or falsy in a boolean context)
function truthyOrFalsy(value) {
  console.log(value ? "Truthy Value" : "Falsy Value");
}

truthyOrFalsy(undefined); // Falsy Value
truthyOrFalsy(NaN);       // Falsy Value
truthyOrFalsy(null);      // Falsy Value
truthyOrFalsy("");        // Falsy Value
truthyOrFalsy(false);     // Falsy Value
truthyOrFalsy(0);         // Falsy Value
truthyOrFalsy(-0);        // Falsy Value
truthyOrFalsy(0n);        // Falsy Value


Equality Operators == and ===

JavaScript ‘==’ operator: In Javascript, the ‘==’ operator is also known as the loose equality operator which is mainly used to compare two values on both sides and then return true or false. This operator checks equality only after converting both the values to a common type i.e type coercion.

The operator using “two equals signs”, “==” is a “loose equality operator”. It will try and convert (using “coercion”) the values and then compare them.

https://www.geeksforgeeks.org/javascript-vs-comparison-operator/

Generally, or loosely speaking, “==” just checks the values.

The ‘===’ operator is the “strict equality operator” and checks that both the value and the data type are the same.

JavaScript ‘===’ operator: Also known as strict equality operator, it compares both the value and the type which is why the name “strict equality”.
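A quick sketch of the difference:

console.log(5 == "5");    // true  – the string "5" is coerced to the number 5
console.log(5 === "5");   // false – same value, but different types
console.log(0 == false);  // true
console.log(0 === false); // false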

Boolean Logic

Boolean Logic is a form of algebra that is centered around three simple words known as Boolean Operators: “Or,” “And,” and “Not.” These Boolean operators are the logical conjunctions between your keywords in a search to help broaden or narrow its scope.

https://www.lotame.com/what-is-boolean-logic/

Boolean logic dictates that all values are either true or false.

Boolean is one of the “primitive data types” in JavaScript. You can think of a boolean value a bit like a switch that turns on or off.

Boolean uses operators such as “AND” and “OR”.

With the “AND” operator, both A and B need to be true for the result to be true.

With the “OR” operator, we just need A or B to be true, for the result to be “TRUE”.

The NOT operator inverts a boolean value.

Returns false if its single operand can be converted to true; otherwise, returns true.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_NOT#:~:text=The%20logical%20NOT%20(%20!%20),true%20%3B%20otherwise%2C%20returns%20true%20.

Logical Operators

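A quick sketch of the three operators in JavaScript (&& is AND, || is OR, ! is NOT):

const a = true;
const b = false;

console.log(a && b); // false – AND needs both operands to be true
console.log(a || b); // true  – OR needs at least one operand to be true
console.log(!a);     // false – NOT inverts the value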

The Switch Statement

A quicker, easier way of writing a complicated “if else” chain.

Screenshots from Udemy Course – The Complete JavaScript Course 2023: From Zero to Expert!

The switch statement can be used so that you can perform different actions, when one of a range of conditions is met.

If a condition – e.g. “Monday” in the course example – is met, then the code associated with Monday will be executed.

If there is no match at all, then the default code at the end of the switch block will be executed – as in the sketch below.
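A minimal sketch along the lines of the course example (the exact messages are just placeholders):

const day = "monday";

switch (day) {
  case "monday": // runs if day === "monday"
    console.log("Plan the week");
    break; // without break, execution would fall through to the next case
  case "tuesday":
    console.log("Prepare theory videos");
    break;
  default: // runs if nothing above matched
    console.log("Not a valid day!");
}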

You can also use the if/else statement as an alternative to the switch statement

If Else alternative to Switch Statement
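The same logic written as an if/else chain (again, just a sketch):

if (day === "monday") {
  console.log("Plan the week");
} else if (day === "tuesday") {
  console.log("Prepare theory videos");
} else {
  console.log("Not a valid day!");
}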

Statements & Expressions

An expression is a piece of code that produces a value. e.g. 3 + 4 is an expression, because it creates a value.

Numbers themselves are expressions.

A boolean, which produces true or false, is also an expression.

A statement performs an action but does not produce a value itself.
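A quick sketch:

// Expressions – each of these produces a value
3 + 4;
1991;
true && false;

// Statement – performs an action but doesn't produce a value itself
if (23 > 10) {
  const str = "23 is bigger"; // the string "23 is bigger" is an expression inside the statement
}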

The Conditional (Ternary) Operator

The conditional operator allows us to write something similar to an “If Else Statement”, but all in one line.

Write a question mark ? after the condition, then the code we want executed if the condition is true.

The “else” block, or the equivalent of an else block, goes after a colon :
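A one-line sketch, assuming an age variable:

const age = 21;
const drink = age >= 18 ? "wine" : "water"; // condition ? value-if-true : value-if-false
console.log(drink); // "wine"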

Strict Mode in Javascript

To activate “strict mode”, right at the top of your script, you have to type:

"use strict";

As you would expect,

"use strict"; Defines that JavaScript code should be executed in “strict mode”.

Strict mode, helps developers identify mistakes and bugs in the code.

Strict mode doesn’t allow you to do certain (incorrect) things and in other instances, it will visibly show you that you’ve created an error.

For example, if you spell a variable name incorrectly, strict mode will throw a visible error (a ReferenceError) instead of failing silently.
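A sketch of the kind of bug it catches (the variable names here are just for illustration):

"use strict";

let hasDriversLicense = false;
const passTest = true;

// Typo in the variable name – without strict mode this silently creates a new
// global variable; with strict mode it throws a ReferenceError instead
if (passTest) hasDriverLicense = true;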

SEO and Pagination for E-commerce. Best Practices [2024]

I have made a load of notes about SEO and Pagination from around the web, I’ve summarised them here…

Summary 

  • Don’t point canonical tags to the first page – give paginated pages their own self-referencing canonical URL
  • Block filtered and sorted pages – e.g. pages sorted by price, such as “?order=price”
  • It is still best to use rel=next and rel=prev
  • Have unique meta tags, including title tags and meta descriptions – e.g.

    Socks for Sale – Page 3 | Sockstore

    (that would be an example for the 3rd page of a sock store, called sockstore)
  • Check log files to see if paginated pages are being crawled
  • Consider using preload, preconnect, or prefetch to optimize the performance for a user moving to the next page.
  • Consider adding a UX friendly amount of unique copy to each page
  • Don’t place the link attributes in the <body> content. 
  • They’re only supported by search engines within the <head> section of your HTML.
  • Don’t Include Paginated Pages in XML Sitemaps
  • If possible give paginated pages their own “sub-optimal” meta titles & descriptions
  • It’s not best practice to use JS to inject/create canonical URLs
  • Don’t have multiple canonical URLs (see reddit thread here)

John Mueller commented, “We don’t treat pagination differently. We treat them as normal pages.”

Meaning paginated pages are not recognized by Google as a series of pages consolidated into one piece of content as they previously advised. Every paginated page is eligible to compete against the root page for ranking.

To encourage Google to return the root page in the SERPs and prevent “Duplicate meta descriptions” or “Duplicate title tags” warnings in Google Search Console, make an easy modification to your code.

Pagination Checklist

  • Uses a href HTML links?
  • The rel link tags are in the <head>?
  • Works with JS disabled?
  • Different page numbers have self-referencing canonical URLs?
  • Paginated pages are not included in XML sitemaps?
  • Meta tags optimised, if possible – a different title on each page
  • Ensure the robots meta tag doesn’t contain noindex
  • The link to the first page should be the root page, not a ?p=1 parameter URL
  • The “previous” button < on page 2 should link to the root page, not ?p=1
  • No “View All” page

Further Notes on Pagination

If the root page has the title:

Root page SERP

The successive paginated pages could have titles like:

pagination page SERP

These paginated URL page titles and meta descriptions are purposefully suboptimal to dissuade Google from displaying these results, rather than the root page.

If, even with such modifications, paginated pages are ranking in the SERPs, try other traditional on-page SEO tactics, such as de-optimising the paginated pages’ H1 tags and adding unique copy and an optimised category image to the root page only (see the list further down the page).

eCommerce Pagination Best Practices (Notes from Google.com)

https://developers.google.com/search/docs/advanced/ecommerce/pagination-and-incremental-page-loading

You can improve the experience of users on your site by displaying a subset of results to improve page performance (page experience is a Google Search ranking signal), but you may need to take action to ensure the Google crawler can find all your site content.

For example, you may display a subset of available products in response to a user using the search box on your ecommerce site – the full set of matches may be too large to display on a single web page, or take too long to retrieve.

Beyond search results, you may load partial results on your ecommerce site for:

  • Category pages where all products in a category are displayed
  • Blog posts or newsletter titles that a site has published over time
  • User reviews on a product page
  • Comments on a blog post

Having your site incrementally load content, in response to user actions, can benefit your users by:

  • Improving user experience as the initial page load is faster than loading all results at once.
  • Reducing network traffic, which is particularly important for mobile devices.
  • Improving backend performance by reducing the volume of content retrieved from databases or similar.
  • Improving reliability by avoiding excessively long lists that may hit resource limits leading to errors in the browser and backend systems.

Selecting the best UX pattern for your site 

To display a subset of a larger list, you can choose between different UX patterns:

  • Pagination: Where a user can use links such as “next”, “previous”, and page numbers to navigate between pages that display one page of results at a time.
  • Load more: Buttons that a user can click to extend an initial set of displayed results.
  • Infinite scroll: Where a user can scroll to the end of the page to cause more content to be loaded. (Learn more about infinite scroll search-friendly recommendations.)
UX pattern comparison:

Pagination
Pros: Gives users insight into result size and current position.
Cons: More complex controls for users to navigate through results; content is split across multiple pages rather than being a single continuous list; viewing more requires new page loads.

Load more
Pros: Uses a single page for all content; can inform the user of total result size (on or near the button).
Cons: Can’t handle very large numbers of results, as all of the results are included on a single web page.

Infinite scroll
Pros: Uses a single page for all content; intuitive – the user just keeps scrolling to view more content.
Cons: Can lead to “scrolling fatigue” because of unclear result size; can’t handle very large numbers of results.

How Google indexes the different strategies 

Once you’ve selected the most appropriate UX strategy for your site and SEO, make sure the Google crawler can find all of your content.

For example, you can implement pagination using links to new pages on your ecommerce site, or using JavaScript to update the current page. Load more and infinite scroll are generally implemented using JavaScript. When crawling a site to find pages to index, Google only follows page links marked up in HTML with <a href> tags. The Google crawler doesn’t follow buttons (unless marked up with <a href>) and doesn’t trigger JavaScript to update the current page contents.

If your site uses JavaScript, follow these JavaScript SEO best practices. In addition to best practices, such as making sure links on your site are crawlable, consider using a sitemap file or a Google Merchant Center feed to help Google find all of the products on your site.

Best practices when implementing pagination 

To make sure Google can crawl and index your paginated content, follow these best practices:

Link pages sequentially 

To make sure search engines understand the relationship between pages of paginated content, include links from each page to the following page using <a href> tags. This can help Googlebot (the Google web crawler) find subsequent pages.

In addition, consider linking from all individual pages in a collection back to the first page of the collection to emphasize the start of the collection to Google. This can give Google a hint that the first page of a collection might be a better landing page than other pages in the collection.

Normally, we recommend that you give web pages distinct titles to help differentiate them. However, pages in a paginated sequence don’t need to follow this recommendation. You can use the same titles and descriptions for all pages in the sequence. Google tries to recognize pages in a sequence and index them accordingly.

Use URLs correctly 

  • Give each page a unique URL. For example, include a ?page=n query parameter, as URLs in a paginated sequence are treated as separate pages by Google.
  • Don’t use the first page of a paginated sequence as the canonical page. Instead, give each page its own canonical URL.
  • Don’t use URL fragment identifiers (the text after a # in a URL) for page numbers in a collection. Google ignores fragment identifiers. If Googlebot sees a URL to the next page that only differs by the text after the #, it may not follow the link, thinking it has already retrieved the page.
  • Consider using preload, preconnect, or prefetch to optimize the performance for a user moving to the next page.

In the past, Google used <link rel=”next” href=”…”> and <link rel=”prev” href=”…”> to identify next page and previous page relationships. Google no longer uses these tags, although these links may still be used by other search engines.

Avoid indexing URLs with filters or alternative sort orders 

You may choose to support filters or different sort orders for long lists of results on your site. For example, you may support ?order=price on URLs to return the same list of results ordered by price.

To avoid indexing variations of the same list of results, block unwanted URLs from being indexed with the noindex robots meta tag or discourage crawling of particular URL patterns with a robots.txt file.
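For example, a robots.txt sketch for the ?order= case above (the parameter name is just the example used here – adjust it to whatever your site actually uses):

User-agent: *
# Discourage crawling of sorted/filtered variations of listing pages
Disallow: /*?order=
Disallow: /*&order=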

Pros of infinite scroll

  • It’s more usable on mobile. One of the biggest advantages of infinite scrolling is that it’s incredibly usable on mobile devices. Simply scrolling down to view more content is far easier than asking users to click on a tiny “next” button or number every time they want to go to the next page.
  • Infinite scroll is better for user engagement. There’s a reason why Aussies spend hours on end scrolling through social media. Having content continuously load means that users can browse and engage with your site without being interrupted. This can be beneficial for content marketing and SEO, particularly given that Google is now using user behaviour as a ranking signal.

Cons of infinite scroll

  • Difficulties with crawling. Like pagination, the infinite scroll can also create problems when it comes to having your site crawled by Google (or other search engines). Search bots only have a limited time to crawl a page. If your content is too lengthy or takes too long to load, it loses the opportunity to be crawled — meaning entire chunks of your content might go unindexed.
  • It can be hard to find information. Depending on the length of your page, an infinite scroll can make it difficult for users to go back and revisit previous sections or products that they’re interested in. You might end up losing valuable leads or conversions simply because users found it too difficult to find what they were looking for, and chose to look elsewhere.
  • Limited access to the footer. Website footers contain valuable information for site visitors, such as social media network buttons, shipping policies, FAQs and contact information. However, with infinite scroll, it’s tough for users to access this section on your site.

At the end of the day, while users might appreciate infinite scrolling, this option isn’t as beneficial for SEO as website pagination. Pagination is the ideal option for search engines, provided you handle paginated pages in line with SEO best practices.

Best practices to consider for SEO pagination

1. Include canonical tags on paginated pages

Duplicate content is likely to be one of the biggest challenges you’ll come across when implementing pagination on your website.

To overcome these issues, you’ll need to use a self-referencing rel = “canonical” attribute on all of your paginated pages that directs back to the “View All” version of your page. This tag tells Google to crawl and index the “View All” version only and ignore any duplicated content in your paginated pages.

(This applies if you choose to use a “View All” page.)

In the HTML, it looks like this:

Image source: SEO Clarity

Last but not least, make sure you use internal linking to different paginated URLs using the rel=”next” and rel=”prev” tags along with your canonical tag. These can be incorporated into your HTML like so:

<link rel="next" href="https://www.example.com/category?page=2&order=newest" />

<link rel="canonical" href="https://www.example.com/category?page=2" />

Even though these aren’t a ranking factor, they still help Google (and Bing) understand the order of paginated content on your website.

2. Make sure to use crawlable anchor links

The first step to getting Google to crawl and index pages that are paginated? Make sure that the search engine can access them. Throughout your website, you should link to your paginated category pages using crawlable anchor site links with href attributes.

Let’s say you’re linking to page 4 of your product catalogue. A crawlable paginated link would look like this:

<a href="https://www.mystorehere.com/catalog/products?page=4">Page 4</a>

On the flipside, any link without the “a href” attribute won’t be crawlable by Google, such as this link:

<span href="https://www.mystorehere.com/catalog/products?page=4">Page 4</span>

3. Don’t include paginated pages in your sitemap

Even though your paginated pages are indexable, paginated URLs shouldn’t be included on your XML sitemap. Adding them in will only use up your ‘crawl budget’ with Google and could even lead to Google picking a random page to rank (such as page 3 in your product catalogue).

The only exception to this is when you choose to have important pages consolidated into a “View All” page, which absolutely needs to be included in your XML sitemap.

A final word on this one: don’t noindex paginated pages. While the noindex tag tells Google not to index paginated pages, it could lead to Google eventually nofollowing the internal links on those pages. In turn, this might cause other pages that are linked from your paginated pages to be removed from Google’s index.

4. Ensure you optimise your on-page SEO

Even if your paginated pages use self-referencing canonical URL tags, feature crawlable anchor links and are excluded from your XML sitemap, you should still follow best practices for on-page SEO.

As we touched on earlier, paginated pages are treated as unique pages in Google’s search index. This means that each page needs to follow on-page SEO guidelines if you want to rank in search results.

In case you needed more proof, here are John Mueller’s recommendations on this topic:

“I’d also recommend making sure the pagination pages can kind of stand on their own. So similar to two category pages where if users were to go to those pages directly, there would be something useful for the user to see there. So it’s not just like a list of text items that go from zero to 100 and links to different products. It’s actually something useful kind of like a category page where someone is looking for a specific type of a product they can go there, and they get that information.” – John Mueller, Google Webmaster English Hangouts

This means that every paginated page should:

  • Have unique meta tags, including title tags and meta descriptions
  • Feature mobile-friendly design that’s optimised for smaller screens
  • Load quickly on desktop and mobile devices
  • Include filters to help narrow down products (if you’re running an online store)
  • Deliver value for visitors

Tip: If you’re running an online store with eCommerce category pages, Google’s UX Playbook for Retail contains all the best practices you need to know to turn clicks into customers.

https://www.searchenginejournal.com/technical-seo/pagination/

SEO-Friendly Pagination: A Complete Best Practices Guide

Summary

  • Canonical tags to the same page (not to the view all or first page)
  • Use rel=next and rel=prev
  • If possible give paginated pages their own “sub-optimal” meta titles & descriptions
  • Not sure if this is an issue:

Pagination Causes Duplicate Content

Correct if pagination has been improperly implemented, such as having both a “View All” page and paginated pages without a correct rel=canonical or if you have created a page=1 in addition to your root page.

Incorrect when you have SEO friendly pagination. Even if your H1 and meta tags are the same, the actual page content differs. So it’s not duplication.

Pagination Creates Thin Content

Correct if you have split an article or photo gallery across multiple pages (in order to drive ad revenue by increasing pageviews), leaving too little content on each page.

Incorrect when you put the desires of the user to easily consume your content above that of banner ad revenues or artificially inflated pageviews. Put a UX-friendly amount of content on each page.

Pagination Uses Crawl Budget

Correct if you’re allowing Google to crawl paginated pages. And there are some instances where you would want to use that budget.

For example, for Googlebot to travel through paginated URLs to reach deeper content pages.

Often incorrect when you set Google Search Console pagination parameter handling to “Do not crawl” or set a robots.txt disallow, in cases where you wish to conserve your crawl budget for more important pages (use robots.txt for this, as parameter handling is no longer available in Search Console).

Managing Pagination According to SEO Best Practices

Use Crawlable Anchor Links

For search engines to efficiently crawl paginated pages, the site must have anchor links with href attributes to these paginated URLs.

Be sure your site uses <a href=”your-paginated-url-here”> for internal linking to paginated pages. Don’t load paginated anchor links or href attribute via JavaScript.

Additionally, you should indicate the relationship between component URLs in a paginated series with rel=”next” and rel=”prev” attributes.

Yes, even after Google’s infamous Tweet that they no longer use these link attributes at all.

Google is not the only search engine in town. Here is Bing’s take on the issue.

Complement the rel=”next” / “prev” with a self-referencing rel=”canonical” link. 

So /category?page=4 should rel=”canonical” to /category?page=4.

This is appropriate because pagination changes the page content, so each paginated page is the master copy of itself.

If the URL has additional parameters, include these in the rel=”prev” / “next” links, but don’t include them in the rel=”canonical”.

For example:

<link rel="next" href="https://www.example.com/category?page=2&order=newest" />

<link rel="canonical" href="https://www.example.com/category?page=2" />

Doing so will indicate a clear relationship between the pages and prevent the potential of duplicate content.

Common errors to avoid:

  • Placing the link attributes in the <body> content. They’re only supported by search engines within the <head> section of your HTML.
  • Adding a rel=”prev” link to the first page (a.k.a. the root page) in the series or a rel=”next” link to the last. For all other pages in the chain, both link attributes should be present.
  • Beware of your root page canonical URL. Chances are on ?page=2, rel=prev should link to the canonical, not a ?page=1.

The <head> code of a four-page series will look something like this:
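Something like the following – a sketch for a four-page series at /category, applying the rules above (self-referencing canonicals, no rel="prev" on page 1, no rel="next" on page 4, and page 2’s rel="prev" pointing at the root URL):

Page 1 – /category
<link rel="canonical" href="https://www.example.com/category" />
<link rel="next" href="https://www.example.com/category?page=2" />

Page 2 – /category?page=2
<link rel="canonical" href="https://www.example.com/category?page=2" />
<link rel="prev" href="https://www.example.com/category" />
<link rel="next" href="https://www.example.com/category?page=3" />

Page 3 – /category?page=3
<link rel="canonical" href="https://www.example.com/category?page=3" />
<link rel="prev" href="https://www.example.com/category?page=2" />
<link rel="next" href="https://www.example.com/category?page=4" />

Page 4 – /category?page=4
<link rel="canonical" href="https://www.example.com/category?page=4" />
<link rel="prev" href="https://www.example.com/category?page=3" />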

Modify Paginated Pages On-Page Elements

John Mueller commented, “We don’t treat pagination differently. We treat them as normal pages.”

Meaning paginated pages are not recognized by Google as a series of pages consolidated into one piece of content as they previously advised. Every paginated page is eligible to compete against the root page for ranking.

To encourage Google to return the root page in the SERPs and prevent “Duplicate meta descriptions” or “Duplicate title tags” warnings in Google Search Console, make an easy modification to your code.

If the root page has the formula:

Root page SERP

The successive paginated pages could have the formula:

pagination page SERP

These paginated URL page titles and meta description are purposefully suboptimal to dissuade Google from displaying these results, rather than the root page.

If even with such modifications, paginated pages are ranking in the SERPs, try other traditional on-page SEO tactics such as:

  • De-optimize paginated page H1 tags.
  • Add useful on-page text to the root page, but not paginated pages.
  • Add a category image with an optimized file name and alt tag to the root page, but not paginated pages.

Don’t Include Paginated Pages in XML Sitemaps

While paginated URLs are technically indexable, they aren’t an SEO priority to spend crawl budget on.

As such, they don’t belong in your XML sitemap.

Handle Pagination Parameters in Google Search Console

If you have a choice, run pagination via a parameter rather than a static URL. 

For example:

example.com/category?page=2

 over 

example.com/category/page-2

While there is no advantage in using one over the other for ranking or crawling purposes, research has shown that Googlebot seems to guess URL patterns based on dynamic URLs, thus increasing the likelihood of swift discovery.

On the downside, it can potentially cause crawling traps if the site renders empty pages for guesses that aren’t part of the current paginated series.

For example, say a series contains four pages.

URLs with content stop at http://www.example.com/category?page=4

If Google guesses http://www.example.com/category?page=7 and a live, but empty, page is loaded, the bot wastes crawl budget and potentially gets lost in an infinite number of pages.

Make sure a 404 HTTP status code is sent for any paginated pages which are not part of the current series.

Another advantage of the parameter approach is the ability to configure the parameter in Google Search Console to “Paginates” and at any time change the signal to Google to crawl “Every URL” or “No URLs”, based on how you wish to use your crawl budget. No developer needed!

Don’t ever map paginated page content to fragment identifiers (#) as it is not crawlable or indexable, and as such not search engine friendly.

Sources for KPIs can include:

  • Server log files for the number of paginated page crawls.
  • Site: search operator (for example site:example.com inurl:page) to understand how many paginated pages Google has indexed.
  • Google Search Console Search Analytics Report filtered by pages containing pagination to understand the number of impressions.
  • Google Analytics landing page report filtered by paginated URLs to understand on-site behavior.

Also – if you have more than 3 pages in a sequence (and the pagination control only shows 3 pages), then consider keeping the first page visible and linked-to.

Just found this on Reddit from John M:

Javascript and SEO – Notes from Across the Web [2023]

SEO and Vue.JS Notes – I recently did some research on this for a new website we’re building. I thought it’s worth a post for future reference and for anyone interested!

Summary

  • 90% of what I’ve found suggests you need to use server-side rendering or pre-rendering of JavaScript
  • Make sure all links are available in the proper a href HTML markup in the “View Source” of page
  • Official Google Video

If a website requires JS to show links, then Googlebot has to add an extra element to crawl them – “rendering” is required

JS can be “costly” – it needs to be downloaded, parsed and executed

  • Official Google Video

“Make sure the content you care about the most, is part of the markup you see in the source of your website”

– All of the homepage meta and all the links in the body render without JS and can be found in the “view source” code

  • Official Google Video

Mobile Friendly Test –

Shows the page-rendered source code

If code is missing, or page doesn’t render as expected, check the recommendations

Search Console

https://search.google.com/test/rich-results

“Google can crawl JS….Google may not necessarily fetch all the JS resources”

“Google won’t click or scroll”

“The main content and links won’t be visible to Google”

“The problem is rendering counts towards crawl budget”

– Could be a problem for big eCommerce stores with lots of pages

“Don’t block JS files in robots.txt”

“Problem when websites don’t use traditional ahref links”

Tools

https://chrome.google.com/webstore/detail/view-rendered-source/ejgngohbdedoabanmclafpkoogegdpob?hl=en

https://developers.google.com/search/docs/crawling-indexing/links-crawlable

Make your links crawlable

Google can follow your links only if they use proper <a> tags with resolvable URLs.
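A quick sketch of the difference (based on Google’s guidance – only links with a resolvable href are reliably crawlable):

<!-- Crawlable: proper <a> tags with resolvable URLs -->
<a href="https://example.com/products">Products</a>
<a href="/products">Products</a>

<!-- Not reliably crawlable: no href, or navigation driven purely by JS -->
<a onclick="goTo('products')">Products</a>
<span class="nav-link" data-url="/products">Products</span>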

The browser then builds a render tree that determines the sizes, elements etc. to display.

Finally, JS might add, change or remove tree elements – especially when the user interacts with the page.

“View Source” – shows the Source HTML

Elements Tab in the Developer Tools shows current Dom content, including images etc added by JS

Search Console – Inspect URL – get rendered HTML that Google uses for indexing a page

https://itnext.io/yes-here-are-4-ways-to-handle-seo-with-vue-even-without-node-ssr-719f7d8b02bb

Not everyone can have a Node server for their project. And there may be a lot of reasons for that: shared webhost, no root access…

So here are 4 ways to handle SEO in 2021 with an SPA.

1. SEO on the client side with Google crawlers

React, Vue, Svelte… All these are frontend frameworks initially used to create SPAs, aka websites/webapps with CSR (Client Side Rendering).

What does this mean? It means the rendering is done in the browser. Therefore, the HTML sent to the browser & search engine crawlers is empty!

No HTML content = No SEO.

Remember, you need to handle SEO tags (title, meta…) on the client side! You can use vue-meta or vue-head for that (personally, I prefer vue-meta).

2. SEO with Node-based Server Side Rendering (SSR)

So SSR, aka Server Side Rendering, is a “new” concept that came with frontend frameworks. It’s based on isomorphic programming, which means the same app and code is executed in a backend context and a frontend context.

Because your app is executed on the backend, the server returns your component tree as an HTML string to the browser.

What does this mean? Since each initial request is done by a Node server that sends HTML, this even works for social media crawlers or any other crawler.

SSR with Vue can be done in 2 ways, DIY or with a framework on top of Vue:

Of course, SEO with Node-based SSR has its drawbacks:

You need… A Node server! Don’t worry, you only need it for the initial HTML rendering, not for your API.

3. SEO using “classic” Server Side Rendering (SSR)

So, based on what we learnt in 1 & 2, we can achieve something similar with any backend language.

To solve this, we need to do 4 actions with any type of backend:

  1. Use a backend router that mirrors the frontend router, so that the initial response can render content based on the url asked
  2. In the backend response, we will only generate title & meta tags since our backend can’t execute frontend code
  3. We will store some initial data in a variable on the window object so that the SPA can access it at runtime on the client
  4. On the client, you check if there’s data on the window object. If there is, you have nothing to do. If there isn’t, you do a request to the API server.

That’s pretty much it for the backend, nothing more. You only need a single “view” file that takes title, meta, initialData or whatever parameters you need for SEO/SMO and that’s it.

The “window.initialData = @ json($state)” is also very important here, but not mandatory for SEO. It’s for performance/UX purposes. It’s just for you to have initial data in the frontend, to avoid an initial AJAX request to your API server.
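On the client, step 4 might look something like this – a sketch, where initialData and the /api/page endpoint are assumptions, not part of the original article:

// Runs when the SPA boots in the browser
async function getInitialData(path) {
  // If the backend already embedded the data in the page, use it – no API call needed
  if (window.initialData) {
    return window.initialData;
  }
  // Otherwise fall back to fetching it from the API server
  const response = await fetch(`/api/page?path=${encodeURIComponent(path)}`);
  return response.json();
}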

Of course, SEO with classic SSR has its drawbacks:

  • You have to mirror each route where you need SEO on the backend
  • You have to pass “the same data” to the frontend and to APIs; sometimes it feels like duplicating stuff

4. JAMStack aka Static Site Generation aka Prerendering

This is the method I love the most, but it isn’t meant for all situations.

So what is JAMStack? Well it’s a fancy word for something that existed before that we called: static websites.

So what’s JAMStack then? JavaScript, APIs, Markup.

JAMStack is the concept of prerendering, but automated and modernized.

It’s an architecture solely based on the fact that you will prerender markup with initial data, that markup would use JavaScript to bring interaction and eventually more data from APIs (yours and/or others).

In a JAMStack architecture, you would usually use a frontend framework to prerender your static files that would then turn to an SPA.

It’s mostly based on the fact that you would rebuild pages on-the-fly anytime data changes in your APIs, through webhooks with CI/CD.

So it’s really nice, but not great for websites/webapps that have daily updates with a lot of pages.

Why? Because all pages are regenerated each time.

It’s the fastest, most SEO-friendly and “cheapest” method.

You only need your API server, a static host (Netlify, Vercel, S3, Firebase Hosting, etc…), and a CI/CD system for rebuilds which you most likely already have to handle tests or deployment.

Prerendering tools

Any other SSG (static site generator) would be good, but you won’t have hydration with those that aren’t Vue-driven.

APIs: You can create your own API, but usually when you do JAMStack it’s for content-driven websites/webapps. That’s why we often use what we call headless CMSs.

A headless CMS is a CMS that can serve its content as a response over HTTP APIs.

There are many of them: Strapi, Directus (Node), WordPress (yep it can), Cockpit CMS (PHP), Contentful, Dato, Prismic (hosted)…

You can find more here: https://jamstack.org/headless-cms

Conclusion: What’s the best SEO method then?

There isn’t a silver bullet. It depends on your stack, budget, team, type of app and some other parameters.

In a nutshell, I would say:

  • If you don’t care a lot about it: an optimized SPA with Vue meta is fine
  • If you can use Node: do Node-based SSR
  • If you can’t use Node: do classic SSR with initial data rendering
  • If you don’t have daily page updates or too many pages: JAMStack

That’s it. Remember: there’s never ONLY ONE WAY to do something.

https://www.smashingmagazine.com/2019/05/vue-js-seo-reactive-websites-search-engines-bots/

These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.

This seems to imply that we don’t have to worry about providing Google with server-side rendered HTML anymore. However, we see all sorts of tools for server-side rendering and pre-rendering made available for JavaScript frameworks, it seems this is not the case. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?

COMPETITIVE SEO #

Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.

From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPS. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.

WHAT ABOUT PRE-RENDERING? #

So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.

I think it is not compulsory. It is certainly a requirement if a lot of the content you want to get indexed by Google comes from an external web service and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If during Googlebot’s visits some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page exactly at a moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.

Furthermore, I have no proof of ranking differences between static content and dynamically-generated content. That might require another experiment. I think that it is very likely that, if content comes from an external web service and does not load immediately, it might impact Google’s perception of your site’s performance, which is a very important factor for ranking.

JAVASCRIPT ERRORS #

If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.

OTHER SEARCH ENGINES #

The other search engines do not work as well as Google with dynamic content. Bing does not seem to index dynamic content at all, nor do DuckDuckGo or Baidu. Probably those search engines lack the resources and computing power that Google has in spades.

Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.

Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but according to my experience, it is still valid.

OTHER BOTS #

Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.

SUBPAGES #

If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML, you will have no problem having that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those — even if you rely on your JavaScript Framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.

  • If you need your site to perform on search engines other than Google, you will definitely need pre-rendering of some sort.

Vue SEO Tutorial with Prerendering

“No search engines will be able to see the content, therefore it’s not going to rank…”

Solutions:

  1. Server side rendering
  2. Pre-rendering

https://www.youtube.com/watch?v=Op8Q8bUAKNc (Google video)

“We do not execute JS due to resource constraints” (in the first wave of indexing)

“Eventually we will do a second wave of indexing, where we execute JS and index your content again”

“…but if you have a large site or lots of frequently changing content, this might not be optimum”

A way around this is pre-rendering or server-side rendering.

https://davidkunnen.com/how-to-get-250k-pages-indexed-by-google/

When creating Devsnap I was pretty naive. I used create-react-app for my frontend and Go with GraphQL for my backend. A classic SPA with client side rendering.

I knew that for that kind of site I would need Google to index a lot of pages, but I wasn’t worried, since I knew Google Bot renders JavaScript by now and would index it just fine.

Oh boy, was I wrong.

At first, everything was fine. Google was indexing the pages bit by bit and I got the first organic traffic.

First indexing

1. Enter SSR

I started by implementing SSR, because I stumbled across some quote from a Googler, stating that client side rendered websites have to get indexed twice. The Google Bot first looks at the initial HTML and immediately follows all the links it can find. The second time is after it has sent everything to the renderer, which returns the final HTML. That is not only very costly for Google, but also slow. That’s why I decided I wanted Google Bot to have all the links in the initial HTML.

Google Bot indexing cycle

I was doing that, by following this fantastic guide. I thought it would take me days to implement SSR, but it actually only took a few hours and the result was very nice.

Indexing with SSR

Without SSR I was stuck at around 20k pages indexed, but now it was steadily growing to >100k.

But it was still not fast enough

Google was not indexing more pages, but it was still too slow. If I ever wanted to get those 250k pages indexed and new job postings discovered fast, I needed to do more.

2. Enter dynamic Sitemap

With a site of that size, I figured I’d have to guide Google somehow. I couldn’t just rely on Google to crawl everything bit by bit. That’s why I created a small service in Go that would create a new Sitemap two times a day and upload it to my CDN.

Since sitemaps are limited to 50k pages, I had to split it up and focused on only the pages that had relevant content.

Sitemap index

After submitting it, Google instantly started to crawl faster.

But it was still not fast enough

I noticed the Google Bot was hitting my site faster, but it was still only 5-10 times per minute. I don’t really have an indexing comparison to #1 here, since I started implementing #3 just a day later.

3. Enter removing JavaScript

I was thinking why it was still so slow. I mean, there are other websites out there with a lot of pages as well and they somehow managed too.

That’s when I thought about the statement of #1. It is reasonable that Google only allocates a specific amount of resources to each website for indexing and my website was still very costly, because even though Google was seeing all the links in the initial HTML, it still had to send it to the renderer to make sure there wasn’t anything to index left. It simply doesn’t know everything was already in the initial HTML when there is still JavaScript left.

So all I did was remove the JavaScript for bots.

// Strip the <script> tags out of the rendered HTML before serving it to bots
if (isBot(req)) {
    completeHtml = completeHtml.replace(/<script[^>]*>(?:(?!<\/script>)[^])*<\/script>/g, "");
}

Immediately after deploying that change the Google Bot went crazy. It was now crawling 5-10 pages – not per minute – per second.

Google Bot crawling speed

Conclusion

If you want to have Google index a big website, only feed it the final HTML and remove all the JavaScript (except for inline Schema-JS of course).


Scripts in the <head>

John Mueller said to move scripts below the <head> whenever possible (source)

SEO JS Checks

  • Screaming Frog – enable Configuration > Spider > Extraction – Store HTML, Store Rendered HTML
  • Screaming Frog – enable Configuration > Spider > Rendering – enable JS crawling etc.
  • https://www.reddit.com/r/TechSEO/comments/10l45od/how_to_view_actual_javascript_links_in/
  • Check page copy, H1s etc. are present in the HTML of the mobile friendly tool – https://search.google.com/test/mobile-friendly/
  • Check the drop-down filters in the JavaScript tab in Screaming Frog for any issues

Checking JS links in Screaming Frog:

Check javascript content vs “HTML” content

Excluding URLs in Screaming Frog Crawl [2024]

To exclude URLs just go to:

Configuration > Exclude (in the very top menu bar)

To exclude URLs within a specific folder, use the following regex:

^https://www.mydomain.com/customer/account/.*
^https://www.mydomain.com/checkout/cart/.*

The above regex will stop Screaming Frog from crawling the customer/account folder and the cart folder.

Or – this is easier for me, as I have to check and crawl lots of international domains with the same site structure and folders:

^https?://[^/]+/customer/account/.*
^https?://[^/]+/checkout/cart/.*

Excluding Images –

I’ve just been using the image extensions to block them in the crawl, e.g.

.*jpg

Although you can block them in the Configuration>Spider menu too.

Excluding Parameter URLs

this appears to do the job:

^.*\?.*

My typical “Excludes” looks like this:

^https?://[^/]+/customer/account/.*
^https?://[^/]+/checkout/cart/.*

^.*\?.*
jpg$

png$

.js$

.css$

Update – you can just use this to block any URLs containing “cart” or “account”

/account/|/cart/

Update:

Currently using this for my excludes config, as I actually want to crawl images:

^https?://[^/]+/customer/account/.*
^https?://[^/]+/checkout/cart/.*

^.*\?.*

.js$

.css$

  • You can just exclude crawling JS and CSS in the crawl > Configuration but I find it slightly quicker this way
  • If you are using JS rendering to crawl, you might want to crawl JS files too. Depending on if they’re required to follow any JS links etc (generally bad idea to have JS links, if you do, have a HTML backup or prerendering in place)

SiteNavigationElement Schema – How to Implement [2024]

Summary

To associate name and other elements with the URL, it appears best to use ItemList in the schema markup, below is an example of SiteNavigationElement schema:

Without “position” for ItemList:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "SiteNavigationElement",
      "name": "MMA Equipment",
      "url": "https://www.blackbeltwhitehat.com/mma"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Cricket Equipment",
      "url": "https://www.blackbeltwhitehat.com/cricket"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Tennis Equipment",
      "url": "https://www.blackbeltwhitehat.com/tennis"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Golf Equipment",
      "url": "https://www.blackbeltwhitehat.com/golf"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Rugby Equipment",
      "url": "https://www.blackbeltwhitehat.com/"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Gym Equipment",
      "url": "https://www.blackbeltwhitehat.com/gym-equipment"
    }
  ]
}
</script>
  • Put the schema in the <head> or <body> tags.
  • Just replace the name and the URL if you want to use the code above.

SiteNavigationSchema – seems like a good idea for most websites to use this schema.

SiteNavigationElement Schema Example 2

Whilst the above schema code validates in all the tests – you’ll ideally want the position “element” in the schema too:

<script type="application/ld+json">
{
 "@context":"https://schema.org",
 "@type":"ItemList",
 "itemListElement":[
 {
 "@type":"ListItem",
 "position":1,
 "url":"https://example.com/dave/mma-guy"
 },
 {
 "@type":"ListItem",
 "position":2,
 "url":"https://example.com/dave/cheeky-exec"
 },
 {
 "@type":"ListItem",
 "position":3,
 "url":"https://example.com/dave/ppc-seo"
 }
 ]
}
</script>

It is in schema format so directly informs Google of page locations and what they’re about.

You can put it separately from the main navigation markup, in either the <head> or the <body>, when using the recommended JSON format. This effectively gives Googlebot an additional set of links to crawl, or at least acknowledge, along with some additional data describing what the links are about.

There are some old posts saying Navigation Schema is not approved by Google, but it now appears to be on the list of approved schema – screenshotted below “SiteNavigationElement”:

https://schema.org/docs/full.html

SiteNavigationElement Schema Generator

Chat GPT can do it, but you have to give it the example above or it lays it all out weird.

This is the prompt that I used:

and the result:

<script type="application/ld+json">
{
"@context":"https://schema.org",
"@type":"ItemList",
"itemListElement":[
{
"@type":"ListItem",
"position":1,
"url":"https://www.bikes.co.uk/"
},
{
"@type":"ListItem",
"position":2,
"url":"https://www.bikes.co.uk/electric-mountain-bikes"
},
{
"@type":"ListItem",
"position":3,
"url":"https://www.bikes.co.uk/infusion-grey-electric-bike"
},
{
"@type":"ListItem",
"position":4,
"url":"https://www.bikes.co.uk/hybrid-electric-bikes"
},
{
"@type":"ListItem",
"position":5,
"url":"https://www.bikes.co.uk/electric-bikes-under-1000"
},
{
"@type":"ListItem",
"position":6,
"url":"https://www.bikes.co.uk/swift-electric-heritage-step-through-ebike-dark-grey"
},
{
"@type":"ListItem",
"position":7,
"url":"https://www.bikes.co.uk/electric-bike-batteries"
},
{
"@type":"ListItem",
"position":8,
"url":"https://www.bikes.co.uk/careers"
},
{
"@type":"ListItem",
"position":9,
"url":"https://www.bikes.co.uk/electric-bikes-under-400"
},
{
"@type":"ListItem",
"position":10,
"url":"https://www.bikes.co.uk/ebikes-patriot-grey-orange"
},
{
"@type":"ListItem",
"position":11,
"url":"https://www.bikes.co.uk/privacy-policy-cookie-restriction-mode"
},
{
"@type":"ListItem",
"position":12,
"url":"https://www.bikes.co.uk/hardtail-electric-mountain-bikes"
},
{
"@type":"ListItem",
"position":13,
"url":"https://www.bikes.co.uk/returns-policy"
},
{
"@type":"ListItem",
"position":14,
"url":"https://www.bikes.co.uk/contact"
},
{
"@type":"ListItem",
"position":15,
"url":"https://www.bikes.co.uk/warranty"
},
{
"@type":"ListItem",
"position":16,
"url":"https://www.bikes.co.uk/folding-electric-bikes"
},
{
"@type":"ListItem",
"position":17,
"url":"https://www.bikes.co.uk/electric-bike-accessories"
},
{
"@type":"ListItem",
"position":18,
"url":"https://www.bikes.co.uk/step-through-electric-bikes"
},
{
"@type":"ListItem",
"position":19,
"url":"https://www.bikes.co.uk/swifty-liberte-folding-electric-bike-pink"
},
{
"@type":"ListItem",
"position":20,
"url":"https://www.bikes.co.uk/electric-hybrid-bikes"
},
{
"@type":"ListItem",
"position":21,
"url":"https://www.bikes.co.uk/klarna-finance-info"
},
{
"@type":"ListItem",
"position":22,
"url":"https://www.bikes.co.uk/faqs"
},
{
"@type":"ListItem",
"position":23,
"url":"https://www.bikes.co.uk/customer-reviews"
},
{
"@type":"ListItem",
"position":24,
"url":"https://www.bikes.co.uk/womens-electric-bikes"
},
{
"@type":"ListItem",
"position":25,
"url":"https://www.bikes.co.uk/delivery"
}
]
}
</script>

From what I’ve read and from the example I’ve been sent during my ‘research’, it appears you can have the schema code completely separate from the main HTML navigation code – so it effectively adds an additional set of links in the HTML (which is good).

Implementing Navigation Schema

If using JSON – put the schema code in <head> or <body> of HTML page

The schema can be placed on all of the site’s pages.