The best report in Screaming Frog for seeing the source and destination of all 404s is to go to Bulk Export in the top menu:

And then Response Codes – Client Error Inlinks

Crawl the website with Screaming Frog
Download the “internal” report on the first tab
Filter the relevant columns to find all URLs that return a 200 status code (Status Code column) and are indexable (Indexability column)
Paste the filtered sheet into a new tab/sheet and delete irrelevant columns so you just have URL and Canonical columns
Use the =EXACT formula to find all URLs that exactly match the canonical URL (see the example formula just after this list)
Filter Canonical URL column to “true”
You should be left with all URLs that are indexable – i.e. 200 status codes and URLs that exactly match canonical URLs.
Copy all the URLs that remain
Use List Mode, paste the URLs into Screaming Frog and crawl.
Check the crawl and go to the hreflang tab – order by hreflang “Occurrences”
Export hreflang
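For the =EXACT step above, a rough sketch (assuming the URL is in column A and the canonical in column B of your filtered sheet):
=EXACT(A2,B2)
This returns TRUE when the URL and canonical match exactly (it is case-sensitive); drag it down the column and then filter for TRUE.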
Just discovered we have lots of non-200 hreflang links.
It’s definitely worth checking the filters on the hreflang tab
Official documentation from Screaming Frog here.
The easiest thing to do is use regex and the Search function
Remove text before a forward slash in Excel:
To remove text or URL paths after a slash, use Find and Replace, searching for /* and replacing with nothing (worked example below).
To remove the slash after the domain, use Find and Replace – search for Example.com/ and replace with “nothing”:
You can then concatenate the domain back in place at the end, if you need to
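A quick worked example (hypothetical URL; Excel’s Find and Replace treats * as a wildcard): with example.com/category/product-name in a cell, searching for /* and replacing with nothing leaves just example.com.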
Also had this reply from Reddit
*I’ve never really got my head around URLs, URIs and URNs. I think the product-name that I’m after is a URN.
https://myshop.com/category-folder/product-name
If the links don’t have .html at the end of them, then first concatenate the URLs to add * at the end of them all.
If they do have .HTML, then find and replace .HTML with .html*
Next – use Text to Columns, use * as the “delimiter”
If the links are relative:
Use text to columns again and use / as the delimiter
If the links contain the full domain name, then concatenate * at the start of the domain name and use Text to Columns again to remove the code from the start of the URLs
Here is another method for doing it – https://businessdaduk.com/data/data-analysis-with-excel/getting-domains-from-a-list-of-urls/
Ensure all the domains have http:// prefix
Use the formula:
=IF(ISERROR(FIND("//www.",A2)), MID(A2,FIND(":",A2,4)+3,FIND("/",A2,9)-FIND(":",A2,4)-3), MID(A2,FIND(":",A2,4)+7,FIND("/",A2,9)-FIND(":",A2,4)-7))
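For example, with https://www.example.com/blog/post in cell A2, the formula returns example.com – it extracts the text between the protocol (with or without www.) and the first slash after the domain.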
I have made a load of notes about SEO and Pagination from around the web – I’ve summarised them here…
https://developers.google.com/search/docs/advanced/ecommerce/pagination-and-incremental-page-loading
You can improve the experience of users on your site by displaying a subset of results to improve page performance (page experience is a Google Search ranking signal), but you may need to take action to ensure the Google crawler can find all your site content.
For example, you may display a subset of available products in response to a user using the search box on your ecommerce site – the full set of matches may be too large to display on a single web page, or take too long to retrieve.
Beyond search results, you may load partial results on your ecommerce site for:
Having your site incrementally load content, in response to user actions, can benefit your users by:
Selecting the best UX pattern for your site
To display a subset of a larger list, you can choose between different UX patterns:
UX Pattern | Pros | Cons
Pagination | Gives users insight into result size and current position | More complex controls for users to navigate through results; content is split across multiple pages rather than being a single continuous list; viewing more requires new page loads
Load more | Uses a single page for all content; can inform the user of the total result size (on or near the button) | Can’t handle very large numbers of results, as all of the results are included on a single web page
Infinite scroll | Uses a single page for all content; intuitive – the user just keeps scrolling to view more content | Can lead to “scrolling fatigue” because of unclear result size; can’t handle very large numbers of results
How Google indexes the different strategies
Once you’ve selected the most appropriate UX strategy for your site and SEO, make sure the Google crawler can find all of your content.
For example, you can implement pagination using links to new pages on your ecommerce site, or using JavaScript to update the current page. Load more and infinite scroll are generally implemented using JavaScript. When crawling a site to find pages to index, Google only follows page links marked up in HTML with <a href> tags. The Google crawler doesn’t follow buttons (unless marked up with <a href>) and doesn’t trigger JavaScript to update the current page contents.
If your site uses JavaScript, follow these JavaScript SEO best practices. In addition to best practices, such as making sure links on your site are crawlable, consider using a sitemap file or a Google Merchant Center feed to help Google find all of the products on your site.
Best practices when implementing pagination
To make sure Google can crawl and index your paginated content, follow these best practices:
Link pages sequentially
To make sure search engines understand the relationship between pages of paginated content, include links from each page to the following page using <a href> tags. This can help Googlebot (the Google web crawler) find subsequent pages.
In addition, consider linking from all individual pages in a collection back to the first page of the collection to emphasize the start of the collection to Google. This can give Google a hint that the first page of a collection might be a better landing page than other pages in the collection.
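A minimal sketch of what that could look like on page 2 of a hypothetical /category listing (example URLs, not from the documentation):
<a href="https://www.example.com/category?page=1">First page</a>
<a href="https://www.example.com/category?page=3">Next page</a>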
Normally, we recommend that you give web pages distinct titles to help differentiate them. However, pages in a paginated sequence don’t need to follow this recommendation. You can use the same titles and descriptions for all pages in the sequence. Google tries to recognize pages in a sequence and index them accordingly.
Use URLs correctly
In the past, Google used <link rel="next" href="…"> and <link rel="prev" href="…"> to identify next page and previous page relationships. Google no longer uses these tags, although these links may still be used by other search engines.
Avoid indexing URLs with filters or alternative sort orders
You may choose to support filters or different sort orders for long lists of results on your site. For example, you may support ?order=price on URLs to return the same list of results ordered by price.
To avoid indexing variations of the same list of results, block unwanted URLs from being indexed with the noindex robots meta tag or discourage crawling of particular URL patterns with a robots.txt file.
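For example (a hedged sketch – adjust to your own URL patterns), a noindex robots meta tag on a sorted variation, or a robots.txt rule discouraging crawling of the order parameter, could look like this:
<meta name="robots" content="noindex">
User-agent: *
Disallow: /*?order=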
SEO-Friendly Pagination: Your Best Practice Guide for 2022
Pros of infinite scroll
Cons of infinite scroll
At the end of the day, while users might appreciate infinite scrolling, this option isn’t as beneficial for SEO as website pagination. Pagination is the ideal option for search engines, provided you handle paginated pages in line with SEO best practices.
1. Include canonical tags on paginated pages
Duplicate content is likely to be one of the biggest challenges you’ll come across when implementing pagination on your website.
To overcome these issues, you’ll need to use a self-referencing rel = “canonical” attribute on all of your paginated pages that directs back to the “View All” version of your page. This tag tells Google to crawl and index the “View All” version only and ignore any duplicated content in your paginated pages.
In the HTML, it looks like this:
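For example, a paginated page such as /category?page=2 pointing at a hypothetical “View All” URL might carry (example URLs, not from the original guide):
<link rel="canonical" href="https://www.example.com/category/view-all" />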
Last but not least, make sure you use internal linking to different paginated URLs using the rel="next" and rel="prev" tags along with your canonical tag. These can be incorporated into your HTML like so:
<link rel="next" href="https://www.example.com/category?page=2&order=newest" />
<link rel="canonical" href="https://www.example.com/category?page=2" />
Even though these aren’t a ranking factor, they still help Google (and Bing) understand the order of paginated content on your website.
2. Make sure to use crawlable anchor links
The first step to getting Google to crawl and index pages that are paginated? Make sure that the search engine can access them. Throughout your website, you should link to your paginated category pages using crawlable anchor site links with href attributes.
Let’s say you’re linking to page 3 of your product catalogue. Crawlable paginated links would look like this:
<a href="https://www.mystorehere.com/catalog/products?page=3">
On the flipside, any link without the “a href” attribute won’t be crawlable by Google, such as this link:
<span href="https://www.mystorehere.com/catalog/products?page=3">
3. Don’t include paginated pages in your sitemap
Even though your paginated pages are indexable, paginated URLs shouldn’t be included on your XML sitemap. Adding them in will only use up your ‘crawl budget’ with Google and could even lead to Google picking a random page to rank (such as page 3 in your product catalogue).
The only exception to this is when you choose to have important pages consolidated into a “View All” page, which absolutely needs to be included in your XML sitemap.
A final word on this one: don’t noindex paginated pages. While the no-index tag tells Google not to index paginated pages, it could lead to Google eventually no-following internal links from that page. In turn, this might cause other pages that are linked from your paginated pages to be removed from Google’s index.
4. Ensure you optimise your on-page SEO
Even if your paginated pages use self-referencing canonical URL tags, feature crawlable anchor links and are excluded from your XML sitemap, you should still follow best practices for on-page SEO.
As we touched on earlier, paginated pages are treated as unique pages in Google’s search index. This means that each page needs to follow on-page SEO guidelines if you want to rank in search results.
In case you needed more proof, here are John Mueller’s recommendations on this topic:
“I’d also recommend making sure the pagination pages can kind of stand on their own. So similar to two category pages where if users were to go to those pages directly, there would be something useful for the user to see there. So it’s not just like a list of text items that go from zero to 100 and links to different products. It’s actually something useful kind of like a category page where someone is looking for a specific type of a product they can go there, and they get that information.” – John Mueller, Google Webmaster English Hangouts
This means that every paginated page should:
Tip: If you’re running an online store with eCommerce category pages, Google’s UX Playbook for Retail contains all the best practices you need to know to turn clicks into customers.
https://www.searchenginejournal.com/technical-seo/pagination/
SEO-Friendly Pagination: A Complete Best Practices Guide
Summary
Is pagination causing duplicate content?
Correct if pagination has been improperly implemented, such as having both a “View All” page and paginated pages without a correct rel=canonical, or if you have created a page=1 in addition to your root page.
Incorrect when you have SEO-friendly pagination. Even if your H1 and meta tags are the same, the actual page content differs, so it’s not duplication.
Is pagination creating thin content?
Correct if you have split an article or photo gallery across multiple pages (in order to drive ad revenue by increasing pageviews), leaving too little content on each page.
Incorrect when you put the desire of the user to easily consume your content above banner ad revenues or artificially inflated pageviews – put a UX-friendly amount of content on each page.
Is pagination eating into your crawl budget?
Correct if you’re allowing Google to crawl paginated pages, and there are some instances where you would want to use that budget – for example, for Googlebot to travel through paginated URLs to reach deeper content pages.
Often incorrect when you set Google Search Console pagination parameter handling to “Do not crawl” or set a robots.txt disallow, in the case where you wish to conserve your crawl budget for more important pages. (Use robots.txt for this, as parameter handling is no longer available in Search Console.)
Use Crawlable Anchor Links
For search engines to efficiently crawl paginated pages, the site must have anchor links with href attributes to these paginated URLs.
Be sure your site uses <a href="your-paginated-url-here"> for internal linking to paginated pages. Don’t load paginated anchor links or href attributes via JavaScript.
Additionally, you should indicate the relationship between component URLs in a paginated series with rel=”next” and rel=”prev” attributes.
Yes, even after Google’s infamous Tweet that they no longer use these link attributes at all.
Google is not the only search engine in town. Here is Bing’s take on the issue.
Complement the rel="next" / "prev" with a self-referencing rel="canonical" link.
So /category?page=4 should rel="canonical" to /category?page=4.
This is appropriate because pagination changes the page content, so each paginated URL is the master copy of that page.
If the URL has additional parameters, include these in the rel="prev" / "next" links, but don’t include them in the rel="canonical".
For example:
<link rel="next" href="https://www.example.com/category?page=2&order=newest" />
<link rel="canonical" href="https://www.example.com/category?page=2" />
Doing so will indicate a clear relationship between the pages and prevent the potential of duplicate content.
Common errors to avoid:
The <head> code of a four-page series will look something like this:
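As a rough sketch (example URLs, and bearing in mind Google itself no longer uses rel="next"/"prev"), page 2 of a four-page series might carry:
<link rel="prev" href="https://www.example.com/category?page=1" />
<link rel="next" href="https://www.example.com/category?page=3" />
<link rel="canonical" href="https://www.example.com/category?page=2" />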
Modify Paginated Pages On-Page Elements
John Mueller commented, “We don’t treat pagination differently. We treat them as normal pages.”
Meaning paginated pages are not recognized by Google as a series of pages consolidated into one piece of content as they previously advised. Every paginated page is eligible to compete against the root page for ranking.
To encourage Google to return the root page in the SERPs and prevent “Duplicate meta descriptions” or “Duplicate title tags” warnings in Google Search Console, make an easy modification to your code.
If the root page follows one title and meta description formula, the successive paginated pages could follow a deliberately weaker variation of that formula.
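For example, a hypothetical pattern (not from the original article) would be a root page title of “Category Name | Brand” with paginated page titles of “Category Name – Page 2 | Brand”, “Category Name – Page 3 | Brand”, and so on.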
These paginated URL page titles and meta descriptions are purposefully suboptimal to dissuade Google from displaying these results, rather than the root page.
If even with such modifications, paginated pages are ranking in the SERPs, try other traditional on-page SEO tactics such as:
Don’t Include Paginated Pages in XML Sitemaps
While paginated URLs are technically indexable, they aren’t an SEO priority to spend crawl budget on.
As such, they don’t belong in your XML sitemap.
Handle Pagination Parameters in Google Search Console
If you have a choice, run pagination via a parameter rather than a static URL.
For example:
example.com/category?page=2
over
example.com/category/page-2
While there is no advantage to using one over the other for ranking or crawling purposes, research has shown that Googlebot seems to guess URL patterns based on dynamic URLs, which increases the likelihood of swift discovery.
On the downside, it can potentially cause crawling traps if the site renders empty pages for guesses that aren’t part of the current paginated series.
For example, say a series contains four pages.
URLs with content stop at http://www.example.com/category?page=4.
If Google guesses http://www.example.com/category?page=7 and a live, but empty, page is loaded, the bot wastes crawl budget and potentially gets lost in an infinite number of pages.
Make sure a 404 HTTP status code is sent for any paginated pages which are not part of the current series.
Another advantage of the parameter approach is the ability to configure the parameter in Google Search Console to “Paginates” and at any time change the signal to Google to crawl “Every URL” or “No URLs”, based on how you wish to use your crawl budget. No developer needed! (Note: the URL Parameters tool has since been removed from Search Console, so use robots.txt instead, as noted above.)
Don’t ever map paginated page content to fragment identifiers (#) as it is not crawlable or indexable, and as such not search engine friendly.
Sources for KPIs can include:
SEO and Vue.JS Notes – I recently did some research on this for a new website we’re building. I thought it’s worth a post for future reference and for anyone interested!
If a website requires JS to show links, then Googlebot has to add an extra element to crawl them – “rendering” is required
JS can be “costly” it needs to be downloaded, parsed and executed
“Make sure the content you care about the most, is part of the markup you see in the source of your website”
– All of the homepage meta and all the links in the body render without JS and can be found in the “view source” code
Mobile Friendly Test –
Shows the page-rendered source code
If code is missing, or page doesn’t render as expected, check the recommendations
Search Console
https://search.google.com/test/rich-results
“Google can crawl JS….Google may not necessarily fetch all the JS resources”
“Google won’t click or scroll”
“The main content and links won’t be visible to Google”
“The problem is rendering counts towards crawl budget”
– Could be a problem for big eCommerce stores with lots of pages
“Don’t block JS files in robots.txt”
“Problem when websites don’t use traditional <a href> links”
Tools
https://developers.google.com/search/docs/crawling-indexing/links-crawlable
Make your links crawlable
Google can follow your links only if they use proper <a> tags with resolvable URLs:
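Paraphrasing the examples in Google’s documentation, links along these lines can be followed:
<a href="https://example.com/products">
<a href="/products/category/shoes">
…while links like these can’t:
<a onclick="goToProducts()">
<span href="https://example.com/products">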
The browser then builds a render tree that determines the sizes, elements etc. to display.
Finally, JS might add, change or remove tree elements – especially when the user interacts with the page.
“View Source” – shows the Source HTML
Elements Tab in the Developer Tools shows current Dom content, including images etc added by JS
Search Console – Inspect URL – get rendered HTML that Google uses for indexing a page
https://itnext.io/yes-here-are-4-ways-to-handle-seo-with-vue-even-without-node-ssr-719f7d8b02bb
Not everyone can have a Node server for their project. And there may be a lot of reasons for that: shared webhost, no root access…
So here are 4 ways to handle SEO in 2021 with an SPA.
1. SEO on the client side with Google crawlers
React, Vue, Svelte… All these are frontend frameworks initially used to create SPAs, aka websites/webapps with CSR (Client Side Rendering).
What does this mean? It means the rendering is done in the browser. Therefore, the HTML sent to the browser & search engine crawlers is empty!
No HTML content = No SEO.
Remember, you need to handle SEO tags (title, meta…) on the client side! You can use vue-meta or vue-head for that (personally, I prefer vue-meta).
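A minimal sketch of the vue-meta approach (assuming vue-meta 2.x and its standard metaInfo component option; the component name and values are made up):
// A Vue single-file component's script block
export default {
  name: 'ProductPage',
  metaInfo() {
    return {
      title: 'Product name – My Shop',
      meta: [
        { name: 'description', content: 'Short description of the product.' }
      ]
    }
  }
}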
2. SEO with Node-based Server Side Rendering (SSR)
So SSR, aka Server Side Rendering, is a “new” concept that came with frontend frameworks. It’s based on isomorphic programming, which means the same app and code are executed in both the backend context and the frontend context.
Because your app is executed on the backend, the server returns your component tree as an HTML string to the browser.
What does this mean? Since each initial request is done by a Node server that sends HTML, this even works for social media crawlers or any other crawler.
SSR with Vue can be done in 2 ways, DIY or with a framework on top of Vue:
Of course, SEO with Node-based SSR has its drawbacks:
You need… A Node server! Don’t worry, you only need it for the initial HTML rendering, not for your API.
3. SEO using “classic” Server Side Rendering (SSR)
So, based on what we learnt in 1 & 2, we can achieve something similar with any backend language.
To solve this, we need to do 4 actions with any type of backend:
That’s pretty much it for the backend, nothing more. You only need a single “view” file that takes title, meta, initialData or whatever parameters you need for SEO/SMO and that’s it.
The window.initialData = @json($state) part is also very important here, but not mandatory for SEO. It’s for performance/UX purposes: it gives you initial data in the frontend, avoiding an initial AJAX request to your API server.
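As a hedged sketch, that single “view” file could look something like this (Blade-style placeholders, since the example above uses @json($state); the variable names are invented):
<!DOCTYPE html>
<html>
<head>
  <title>{{ $title }}</title>
  <meta name="description" content="{{ $description }}">
</head>
<body>
  <div id="app"><!-- optionally pre-rendered markup --></div>
  <script>window.initialData = @json($state)</script>
  <script src="/js/app.js"></script>
</body>
</html>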
Of course, SEO with classic SSR has its drawbacks:
4. JAMStack aka Static Site Generation aka Prerendering
This is the method I love the most, but it isn’t meant for all situations.
So what is JAMStack? Well it’s a fancy word for something that existed before that we called: static websites.
So what’s JAMStack then? JavaScript, APIs, Markup.
JAMStack is the concept of prerendering, but automated and modernized.
It’s an architecture solely based on the fact that you will prerender markup with initial data, that markup would use JavaScript to bring interaction and eventually more data from APIs (yours and/or others).
In a JAMStack architecture, you would usually use a frontend framework to prerender your static files that would then turn to an SPA.
It’s mostly based on the fact that you would rebuild pages on-the-fly anytime data changes in your APIs, through webhooks with CI/CD.
So it’s really nice, but not great for websites/webapps that have daily updates with a lot of pages.
Why? Because all pages are regenerated each time.
It’s the fastest, most SEO-friendly and “cheapest” method.
You only need your API server, a static host (Netlify, Vercel, S3, Firebase Hosting, etc…), and a CI/CD system for rebuilds which you most likely already have to handle tests or deployment.
Prerendering tools
Any other SSG (static site generator) would be good, but you won’t get hydration from those that aren’t Vue-driven.
APIs: you can create your own API, but usually when you do JAMStack it’s for content-driven websites/webapps. That’s why we often use what we call headless CMSs.
A headless CMS is a CMS that serves its content as HTTP API responses.
There are many of them: Strapi, Directus (Node), WordPress (yep it can), Cockpit CMS (PHP), Contentful, Dato, Prismic (hosted)…
You can find more here: https://jamstack.org/headless-cms
Conclusion: What’s the best SEO method then?
There isn’t a silver bullet. It depends on your stack, budget, team, type of app and some other parameters.
In a nutshell, I would say:
That’s it. Remember: there’s never ONLY ONE WAY to do something.
https://www.smashingmagazine.com/2019/05/vue-js-seo-reactive-websites-search-engines-bots/
These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.
This seems to imply that we don’t have to worry about providing Google with server-side rendered HTML anymore. However, given that we see all sorts of tools for server-side rendering and pre-rendering made available for JavaScript frameworks, it seems this is not the case. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?
COMPETITIVE SEO #
Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.
From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPS. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.
WHAT ABOUT PRE-RENDERING? #
So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.
I think it is not compulsory. It is certainly a requirement if a lot of the content you want to get indexed by Google comes from external web service and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If during Googlebot’s visits some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page exactly at a moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.
Furthermore, I have no proof of ranking differences between static content and dynamically-generated content. That might require another experiment. I think that it is very likely that, if content comes from external web service and does not load immediately, it might impact on Google’s perception of your site’s performance, which is a very important factor for ranking.
JAVASCRIPT ERRORS #
If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.
OTHER SEARCH ENGINES #
The other search engines do not work as well as Google with dynamic content. Bing does not seem to index dynamic content at all, nor do DuckDuckGo or Baidu. Probably those search engines lack the resources and computing power that Google has in spades.
Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.
Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but according to my experience, it is still valid.
OTHER BOTS #
Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.
SUBPAGES #
If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML, you will have no problem having that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those — even if you rely on your JavaScript Framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.
Vue SEO Tutorial with Prerendering
“No search engines will be able to see the content, therefore it’s not going to rank…”
Solutions:
https://www.youtube.com/watch?v=Op8Q8bUAKNc (Google video)
“We do not execute JS due to resource constraints” (in the first wave of indexing)
“Eventually we will do a second wave of indexing, where we execute JS and index your content again”
“…but if you have a large site or lots of frequently changing content, this might not be optimum”
A way around this is pre-rendering or server-side rendering.
https://davidkunnen.com/how-to-get-250k-pages-indexed-by-google/
When creating Devsnap I was pretty naive. I used create-react-app for my frontend and Go with GraphQL for my backend. A classic SPA with client side rendering.
I knew that for that kind of site I would need Google to index a lot of pages, but I wasn’t worried, since I knew Google Bot renders JavaScript by now and would index it just fine.
Oh boy, was I wrong.
At first, everything was fine. Google was indexing the pages bit by bit and I got the first organic traffic.
I started by implementing SSR, because I stumbled across a quote from a Googler stating that client-side rendered websites have to get indexed twice. The Google Bot first looks at the initial HTML and immediately follows all the links it can find. The second pass comes after it has sent everything to the renderer, which returns the final HTML. That is not only very costly for Google, but also slow. That’s why I decided I wanted Google Bot to have all the links in the initial HTML.
I was doing that, by following this fantastic guide. I thought it would take me days to implement SSR, but it actually only took a few hours and the result was very nice.
Without SSR I was stuck at around 20k pages indexed, but now it was steadily growing to >100k.
But it was still not fast enough
Google was not indexing more pages, but it was still too slow. If I ever wanted to get those 250k pages indexed and new job postings discovered fast, I needed to do more.
With a site of that size, I figured I’d have to guide Google somehow. I couldn’t just rely on Google to crawl everything bit by bit. That’s why I created a small service in Go that would create a new Sitemap two times a day and upload it to my CDN.
Since sitemaps are limited to 50k pages, I had to split it up and focus only on the pages that had relevant content.
After submitting it, Google instantly started to crawl faster.
But it was still not fast enough
I noticed the Google Bot was hitting my site faster, but it was still only 5-10 times per minute. I don’t really have an indexing comparison to #1 here, since I started implementing #3 just a day later.
I was thinking why it was still so slow. I mean, there are other websites out there with a lot of pages as well and they somehow managed too.
That’s when I thought about the statement of #1. It is reasonable that Google only allocates a specific amount of resources to each website for indexing and my website was still very costly, because even though Google was seeing all the links in the initial HTML, it still had to send it to the renderer to make sure there wasn’t anything to index left. It simply doesn’t know everything was already in the initial HTML when there is still JavaScript left.
So all I did was remove the JavaScript for bots.
if(isBot(req)) {
  // Strip all <script> tags from the server-rendered HTML before sending it to bots
  completeHtml = completeHtml.replace(/<script[^>]*>(?:(?!<\/script>)[^])*<\/script>/g, "")
}
Immediately after deploying that change the Google Bot went crazy. It was now crawling 5-10 pages – not per minute – per second.
If you want to have Google index a big website, only feed it the final HTML and remove all the JavaScript (except for inline Schema-JS of course).
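For reference, a hypothetical isBot helper (not from the original post) could be a simple user-agent check, assuming an Express-style request object:
// Hypothetical example – detect common crawler user agents
function isBot(req) {
  const ua = (req.headers['user-agent'] || '').toLowerCase()
  return /googlebot|bingbot|yandexbot|baiduspider|duckduckbot|slurp/.test(ua)
}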
John Mueller said to move scripts below the <head> whenever possible (source)
Javascript
Screaming Frog – enable Configuration > Spider > Extraction – Store HTML, Store Rendered HTML
Screaming Frog – enable Configuration > Spider > Rendering – enable JS crawling etc.
https://www.reddit.com/r/TechSEO/comments/10l45od/how_to_view_actual_javascript_links_in/
Check that page copy, H1s etc. are present in the HTML shown by the mobile friendly test – https://search.google.com/test/mobile-friendly/
Check the drop-down filters in the JavaScript tab in Screaming Frog for any issues
Checking JS links in Screaming Frog:
Check javascript content vs “HTML” content
To exclude URLs just go to:
Configuration > Exclude (in the very top menu bar)
To exclude URLs within a specific folder, use the following regex:
^https://www.mydomain.com/customer/account/.*
^https://www.mydomain.com/checkout/cart/.*
The above regex will stop Screaming Frog from crawling the customer/account folder and the checkout/cart folder.
I’ve just been using the image extensions to block them in the crawl, e.g.
.*jpg
Although you can block them in the Configuration>Spider menu too.
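If you want to catch several image types in one go, a broader exclude pattern (an untested sketch) would be something like:
.*\.(jpg|jpeg|png|gif|webp)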
To exclude URLs containing parameters, this appears to do the job:
^.*\?.*
To associate the name and other elements with each URL, it appears best to use ItemList in the schema markup. Below is an example of SiteNavigationElement schema:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ItemList",
  "itemListElement":
  [
    {
      "@type": "SiteNavigationElement",
      "name": "MMA Equipment",
      "url": "https://www.blackbeltwhitehat.com/mma"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Cricket Equipment",
      "url": "https://www.blackbeltwhitehat.com/cricket"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Tennis Equipment",
      "url": "https://www.blackbeltwhitehat.com/tennis"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Golf Equipment",
      "url": "https://www.blackbeltwhitehat.com/golf"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Rugby Equipment",
      "url": "https://www.blackbeltwhitehat.com/rugby"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Gym Equipment",
      "url": "https://www.blackbeltwhitehat.com/gym-equipment"
    }
  ]
}
</script>
SiteNavigationSchema – seems like a good idea for most websites to use this schema.
It is in schema format so directly informs Google of page locations and what they’re about.
You can put it separately from the main navigation markup, in either the <head> or the <body>, when using the recommended JSON-LD format. This effectively gives Googlebot an additional set of links to crawl – or at least acknowledge – along with some additional data describing what the links are about.
There are some old posts saying navigation schema is not approved by Google, but “SiteNavigationElement” now appears on the full list of schema.org types:
https://schema.org/docs/full.html
From what I’ve read and from the example I’ve been sent during my ‘research’, it appears you can have the schema code completely separate from the main HTML navigation code – so it effectively adds an additional instance of the links (which is good).
If using JSON – put the schema code in <head> or <body> of HTML page
The schema can be placed on all of the site’s pages.
Google documentation about Schema generally:
This is handy if you have data in different cells, that you want to put into a single cell, separated by a comma.
For example:
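A quick made-up illustration: with apple in A2, banana in A3 and cherry in A4, =TEXTJOIN(", ",TRUE,A2:A4) returns apple, banana, cherry in a single cell.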
I needed a way of combining a load of ecommerce product identifier numbers into one cell, separated by commas.
You can download the spreadsheet with the formula here.
TEXTJOIN(",",TRUE,F6:F35)
The comma in speech marks, adds the comma between the numbers
Not sure what “TRUE” does to be honest!
The F6:F35 is just the cells that the original list, that’s aligned vertically in this case, was in.
Go to a YouTube video – on YouTube.com
Default Embed Code:
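At the time of writing, the default Share > Embed code looks roughly like this (VIDEO_ID stands in for the real video ID):
<iframe width="560" height="315" src="https://www.youtube.com/embed/VIDEO_ID" title="YouTube video player" frameborder="0" allowfullscreen></iframe>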
I changed the default width="560" at the start to width="100%"
Also added style="max-width: 600px;" - after allowfullscreen:
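So the modified embed would look something like this:
<iframe width="100%" height="315" src="https://www.youtube.com/embed/VIDEO_ID" title="YouTube video player" frameborder="0" allowfullscreen style="max-width: 600px;"></iframe>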
I can’t show a live, working example unfortunately, as my free WordPress.com blog doesn’t allow me to add custom embed code – the embed above isn’t fully responsive.