JavaScript Fundamentals for Beginners [2023]

Notes taken from The Complete JavaScript Course 2023 on Udemy and from across the web.

Variables in JavaScript

Variables are used to store values.

Variables can be thought of like a box in the ‘real world’.

A box can hold items and can be labelled so we know what is in it.

If we create a variable and label it “firstName”:

let firstName = "Jonas";

Now if we “call” or reference “firstName” we will get the value “Jonas”.

This can be extremely useful. For example, if you were to use “firstName” in hundreds of places on a website, you can just change the variable value, and in theory, all the ‘firstNames’ will update with one change in the code.
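A minimal sketch of this idea (the greeting strings are made up for illustration):

```javascript
// Store the value once in a variable.
let firstName = "Jonas";

// Reuse the variable wherever the name is needed.
let greeting = "Hello, " + firstName + "!";

// Changing the variable in one place updates every later use of it.
firstName = "Sarah";
let newGreeting = "Hello, " + firstName + "!";

console.log(greeting);    // Hello, Jonas!
console.log(newGreeting); // Hello, Sarah!
```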

Naming Conventions for Variables

Camel case

Camel case involves having no spaces between words and leaving the first letter lowercase.

For example – firstNamePerson is written in camel case.

I guess the capital letter looks a bit like a camel’s hump.

This isn’t a hard rule, but general practice.

Hard rules for Variable Names

Variable names can’t start with a number – e.g. let 3years = 3; would result in a SyntaxError because the name begins with “3”.

Symbols are generally a bad idea in variable names. You can only use letters, numbers, underscores, or the dollar sign ($).

You can’t use reserved JS keywords, e.g. “new” and “function” are reserved. A name like “name” isn’t actually reserved, but it is best avoided because it clashes with existing globals (e.g. window.name in browsers).

Don’t use all-uppercase letters for a variable name either, unless it is a “constant” that never changes, such as the value of PI.

Let

You declare variables with “let”; you can then update (reassign) the variable later without using “let” again.

const – used to declare variables that won’t be reassigned, e.g. a date of birth.
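A quick sketch of the difference:

```javascript
// "let" declares a variable that can be reassigned later.
let age = 30;
age = 31; // fine – no "let" keyword needed when updating

// "const" declares a variable that cannot be reassigned.
const birthYear = 1991;
// birthYear = 1992; // would throw: TypeError: Assignment to constant variable.
```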

Strings and Template Literals


Template literals make it easier to build strings.

Template Literals allow you to create:

  • Multiline strings – strings (text) that span several lines
  • String formatting – you can swap parts of the string for the values of variables – this is called “string interpolation”
  • HTML escaping – escaping characters so a string is safe to include in the HTML of a webpage

let str = `Template literal here`;

Multiline strings

In older versions of JavaScript, to create a new line in a string you had to include the newline escape sequence:

\n

Template literals let you define multiline strings more easily, because you can simply add a new line in the string wherever you want:

let p =
`This text
can
span multiple lines`;
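Template literals also handle the string interpolation mentioned above – the value of any variable or expression can be dropped into the string with ${...} (the name and job values here are illustrative):

```javascript
const firstName = "Jonas";
const job = "teacher";

// ${...} inserts the value of a variable or expression into the string.
const intro = `I'm ${firstName}, a ${2023 - 1991} year old ${job}.`;

console.log(intro); // I'm Jonas, a 32 year old teacher.
```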

Type Conversion and Coercion

Type coercion, type conversion, typecasting, and type juggling: all different names that refer to the process of converting one data type into another. This process is present in almost every programming language and is an important concept in computer science.


What is Type Conversion?

Type conversion can be:

implicit – done automatically by JavaScript behind the scenes

explicit – done manually by the developer

What is Type Coercion?

Explicit coercion happens when we want to coerce the value type to a specific type. Most of the time, explicit coercion in JavaScript happens using built-in functions such as String(), Number(), and Boolean().

When we try to create operations in JavaScript using different value types, JavaScript coerces the value types for us implicitly.

This is one of the reasons why developers tend to avoid implicit coercion in JavaScript. Most of the time we get unexpected results from the operation if we don’t know exactly how JavaScript coerces the value types.

When coercion is done automatically, it can cause some weird outcomes and issues
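A few classic examples of implicit coercion producing surprising results:

```javascript
// "+" with a string triggers string concatenation...
const a = "1" + 1; // "11"

// ...but "-" only works on numbers, so the string is coerced to a number.
const b = "1" - 1; // 0

// Explicit conversion with Number() makes the intent obvious instead.
const c = Number("1") + 1; // 2
```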


Truthy and Falsy Values in JavaScript (Boolean thing)

Values are considered either truthy (evaluate to true) or falsy (evaluate to false) depending on how they are evaluated in a Boolean context.

In JS there are six values that are considered “falsy”:

  • The primitive value undefined
  • The primitive value null
  • The empty string ('' or "")
  • The global property NaN
  • A number or BigInt representing zero (0, -0, 0.0, -0.0, 0n)
  • The keyword false

All other values are considered “truthy”.

When a value is truthy in JavaScript, it does not mean that the value is equal to true, but that the value coerces to true when evaluated in a boolean context.

https://www.hackinbits.com/articles/js/truthy-and-falsy-values-in-javascript

// Helper to show how a value behaves in a boolean context:
function truthyOrFalsy(value) {
  console.log(value ? "Truthy Value" : "Falsy Value");
}

truthyOrFalsy(undefined); // Falsy Value
truthyOrFalsy(NaN);       // Falsy Value
truthyOrFalsy(null);      // Falsy Value
truthyOrFalsy("");        // Falsy Value
truthyOrFalsy(false);     // Falsy Value
truthyOrFalsy(0);         // Falsy Value
truthyOrFalsy(-0);        // Falsy Value
truthyOrFalsy(0n);        // Falsy Value


Equality Operators == and ===

JavaScript ‘==’ operator: In Javascript, the ‘==’ operator is also known as the loose equality operator which is mainly used to compare two values on both sides and then return true or false. This operator checks equality only after converting both the values to a common type i.e type coercion.

The operator using “two equals signs”, “==” is a “loose equality operator”. It will try and convert (using “coercion”) the values and then compare them.


https://www.geeksforgeeks.org/javascript-vs-comparison-operator/

Generally, or loosely speaking, “==” just checks the values.

The ‘===’ operator is the “strict equality operator”: it checks that both the value and the data type are the same.

JavaScript ‘===’ operator: Also known as strict equality operator, it compares both the value and the type which is why the name “strict equality”.
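For example:

```javascript
// Loose equality (==) coerces the types before comparing.
const loose = 18 == "18"; // true – the string "18" is converted to the number 18

// Strict equality (===) compares value AND type, with no coercion.
const strict = 18 === "18"; // false – number vs string

console.log(loose, strict); // true false
```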

Boolean Logic

Boolean Logic is a form of algebra that is centered around three simple words known as Boolean Operators: “Or,” “And,” and “Not.” These Boolean operators are the logical conjunctions between your keywords in a search to help broaden or narrow its scope.

https://www.lotame.com/what-is-boolean-logic/

Boolean logic dictates that all values are either true or false.

Boolean is one of the “primitive data types” in JavaScript. You can think of a boolean value a bit like a switch that turns on or off.

Boolean uses operators such as “AND” and “OR”.

With the “AND” operator, both A and B need to be true for the result to be true.

With the “OR” operator, we just need A or B to be true for the result to be true.

The “NOT” operator inverts a boolean value.

Returns false if its single operand can be converted to true; otherwise, returns true.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_NOT
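In code (the variable names are illustrative):

```javascript
const hasDriversLicense = true;
const hasGoodVision = false;

// AND (&&): true only if BOTH operands are true.
const shouldDrive = hasDriversLicense && hasGoodVision; // false

// OR (||): true if AT LEAST ONE operand is true.
const canTry = hasDriversLicense || hasGoodVision; // true

// NOT (!): inverts a boolean value.
const isBadVision = !hasGoodVision; // true
```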

Logical Operators


The Switch Statement

A quicker, easier way of writing a complicated “if else” statement.

Screenshots from Udemy Course – The Complete JavaScript Course 2023: From Zero to Expert!

The switch statement can be used so that you can perform different actions, when one of a range of conditions is met.

If a case matches the condition – e.g. “Monday” – then the code associated with that case will be executed.

If there is no match at all, then the default code block will be executed.
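A sketch of the pattern (the day/activity values are illustrative, not the course’s exact code):

```javascript
const day = "monday";
let activity;

switch (day) {
  case "monday": // matched with strict comparison: day === "monday"
    activity = "Plan course structure";
    break; // without break, execution "falls through" into the next case
  case "tuesday":
    activity = "Prepare theory videos";
    break;
  default: // runs when no case matches
    activity = "Not a valid day!";
}

console.log(activity); // Plan course structure
```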

You can also use the if/else statement as an alternative to the switch statement

If Else alternative to Switch Statement
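The same kind of logic written as an if/else chain (again with illustrative values):

```javascript
const day = "monday";
let activity;

if (day === "monday") {
  activity = "Plan course structure";
} else if (day === "tuesday") {
  activity = "Prepare theory videos";
} else {
  activity = "Not a valid day!";
}

console.log(activity); // Plan course structure
```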

Statements & Expressions

An expression is a piece of code that produces a value. e.g. 3 + 4 is an expression, because it creates a value.

Numbers themselves are expressions.

A boolean, which produces true or false, is also an expression.

A statement does not produce a value; it performs an action.
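For example:

```javascript
// Expressions – each of these produces a value:
const sum = 3 + 4;        // 3 + 4 is an expression (value: 7)
const isAdult = 23 >= 18; // 23 >= 18 is an expression (value: true)

// A statement performs an action but does not itself produce a value:
let status;
if (isAdult) {
  status = "adult"; // the if/else statement as a whole has no value
} else {
  status = "minor";
}
```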

The Conditional (Ternary) Operator

The conditional operator allows us to write something similar to an “if else statement”, but all in one line.

Write a question mark ? after the condition, then the code we want executed if the condition is true.

The “else” block, or the equivalent of an else block, goes after a colon :
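For example:

```javascript
const age = 23;

// condition ? value-if-true : value-if-false
const drink = age >= 18 ? "wine" : "water";

console.log(drink); // wine
```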

Strict Mode in JavaScript

To activate “strict mode”, right at the top of your script, you have to type:

"use strict";

As you would expect, "use strict"; defines that the JavaScript code should be executed in “strict mode”.

Strict mode helps developers identify mistakes and bugs in the code.

Strict mode doesn’t allow you to do certain (incorrect) things and in other instances, it will visibly show you that you’ve created an error.

For example, if you spell a variable name incorrectly when assigning to it, strict mode will throw a visible error instead of silently creating a new variable.
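A sketch of that behaviour (the variable names are illustrative):

```javascript
"use strict"; // must be the very first statement in the script

let hasDriversLicense = false;
let errorName = "";

try {
  // Misspelled variable name (missing "s") – in sloppy mode this would
  // silently create a new global; in strict mode it throws.
  hasDriverLicense = true;
} catch (e) {
  errorName = e.name;
}

console.log(errorName); // ReferenceError
```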

SEO and Pagination for E-commerce. Best Practices [2023]

I have made a load of notes about SEO and Pagination from around the web, I’ve summarised them here…

Summary 

  • Don’t point canonical tags to the first page – give paginated pages their own canonical URL
  • Block filtered and sorted pages – e.g. pages sorted by price, such as URLs containing ?order=price
  • It is still best to use rel=next and rel=prev
  • Check log files to see if paginated pages are being crawled
  • Consider using preload, preconnect, or prefetch to optimize the performance for a user moving to the next page.
  • Consider adding a UX friendly amount of unique copy to each page
  • Don’t place the link attributes in the <body> content – they’re only supported by search engines within the <head> section of your HTML.
  • Don’t Include Paginated Pages in XML Sitemaps
  • If possible give paginated pages their own “sub-optimal” meta titles & descriptions


eCommerce Pagination Best Practices (Notes from Google.com)

https://developers.google.com/search/docs/advanced/ecommerce/pagination-and-incremental-page-loading

You can improve the experience of users on your site by displaying a subset of results to improve page performance (page experience is a Google Search ranking signal), but you may need to take action to ensure the Google crawler can find all your site content.

For example, you may display a subset of available products in response to a user using the search box on your ecommerce site – the full set of matches may be too large to display on a single web page, or take too long to retrieve.

Beyond search results, you may load partial results on your ecommerce site for:

  • Category pages where all products in a category are displayed
  • Blog posts or newsletter titles that a site has published over time
  • User reviews on a product page
  • Comments on a blog post

Having your site incrementally load content, in response to user actions, can benefit your users by:

  • Improving user experience as the initial page load is faster than loading all results at once.
  • Reducing network traffic, which is particularly important for mobile devices.
  • Improving backend performance by reducing the volume of content retrieved from databases or similar.
  • Improving reliability by avoiding excessively long lists that may hit resource limits leading to errors in the browser and backend systems.

Selecting the best UX pattern for your site 

To display a subset of a larger list, you can choose between different UX patterns:

  • Pagination: Where a user can use links such as “next”, “previous”, and page numbers to navigate between pages that display one page of results at a time.
  • Load more: Buttons that a user can click to extend an initial set of displayed results.
  • Infinite scroll: Where a user can scroll to the end of the page to cause more content to be loaded. (Learn more about infinite scroll search-friendly recommendations.)
Comparing the UX patterns:

  • Pagination
      Pros: gives users insight into result size and current position.
      Cons: more complex controls for users to navigate through results; content is split across multiple pages rather than being a single continuous list; viewing more requires new page loads.
  • Load more
      Pros: uses a single page for all content; can inform the user of the total result size (on or near the button).
      Cons: can’t handle very large numbers of results, as all of the results are included on a single web page.
  • Infinite scroll
      Pros: uses a single page for all content; intuitive – the user just keeps scrolling to view more content.
      Cons: can lead to “scrolling fatigue” because of unclear result size; can’t handle very large numbers of results.

How Google indexes the different strategies 

Once you’ve selected the most appropriate UX strategy for your site and SEO, make sure the Google crawler can find all of your content.

For example, you can implement pagination using links to new pages on your ecommerce site, or using JavaScript to update the current page. Load more and infinite scroll are generally implemented using JavaScript. When crawling a site to find pages to index, Google only follows page links marked up in HTML with <a href> tags. The Google crawler doesn’t follow buttons (unless marked up with <a href>) and doesn’t trigger JavaScript to update the current page contents.

If your site uses JavaScript, follow these JavaScript SEO best practices. In addition to best practices, such as making sure links on your site are crawlable, consider using a sitemap file or a Google Merchant Center feed to help Google find all of the products on your site.

Best practices when implementing pagination 

To make sure Google can crawl and index your paginated content, follow these best practices:

Link pages sequentially 

To make sure search engines understand the relationship between pages of paginated content, include links from each page to the following page using <a href> tags. This can help Googlebot (the Google web crawler) find subsequent pages.

In addition, consider linking from all individual pages in a collection back to the first page of the collection to emphasize the start of the collection to Google. This can give Google a hint that the first page of a collection might be a better landing page than other pages in the collection.

Normally, we recommend that you give web pages distinct titles to help differentiate them. However, pages in a paginated sequence don’t need to follow this recommendation. You can use the same titles and descriptions for all pages in the sequence. Google tries to recognize pages in a sequence and index them accordingly.

Use URLs correctly 

  • Give each page a unique URL. For example, include a ?page=n query parameter, as URLs in a paginated sequence are treated as separate pages by Google.
  • Don’t use the first page of a paginated sequence as the canonical page. Instead, give each page its own canonical URL.
  • Don’t use URL fragment identifiers (the text after a # in a URL) for page numbers in a collection. Google ignores fragment identifiers. If Googlebot sees a URL to the next page that only differs by the text after the #, it may not follow the link, thinking it has already retrieved the page.
  • Consider using preload, preconnect, or prefetch to optimize the performance for a user moving to the next page.

In the past, Google used <link rel="next" href="…"> and <link rel="prev" href="…"> to identify next page and previous page relationships. Google no longer uses these tags, although these links may still be used by other search engines.

Avoid indexing URLs with filters or alternative sort orders 

You may choose to support filters or different sort orders for long lists of results on your site. For example, you may support ?order=price on URLs to return the same list of results ordered by price.

To avoid indexing variations of the same list of results, block unwanted URLs from being indexed with the noindex robots meta tag or discourage crawling of particular URL patterns with a robots.txt file.
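For example, a sketch of a robots.txt rule that discourages crawling of sorted variations (the ?order= parameter name is illustrative – match it to whatever your site actually uses):

```text
User-agent: *
# Block crawling of any URL that carries a sort-order parameter
Disallow: /*?order=
Disallow: /*&order=
```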

SEO-Friendly Pagination: Your Best Practice Guide for 2022

Pros of infinite scroll

  • It’s more usable on mobile. One of the biggest advantages of infinite scrolling is that it’s incredibly usable on mobile devices. Simply scrolling down to view more content is far easier than asking users to click on a tiny “next” button or number every time they want to go to the next page.
  • Infinite scroll is better for user engagement. There’s a reason why Aussies spend hours on end scrolling through social media. Having content continuously load means that users can browse and engage with your site without being interrupted. This can be beneficial for content marketing and SEO, particularly given that Google is now using user behaviour as a ranking signal.

Cons of infinite scroll

  • Difficulties with crawling. Like pagination, the infinite scroll can also create problems when it comes to having your site crawled by Google (or other search engines). Search bots only have a limited time to crawl a page. If your content is too lengthy or takes too long to load, it loses the opportunity to be crawled — meaning entire chunks of your content might go unindexed.
  • It can be hard to find information. Depending on the length of your page, an infinite scroll can make it difficult for users to go back and revisit previous sections or products that they’re interested in. You might end up losing valuable leads or conversions simply because users found it too difficult to find what they were looking for, and chose to look elsewhere.
  • Limited access to the footer. Website footers contain valuable information for site visitors, such as social media network buttons, shipping policies, FAQs and contact information. However, with infinite scroll, it’s tough for users to access this section on your site.

At the end of the day, while users might appreciate infinite scrolling, this option isn’t as beneficial for SEO as website pagination. Pagination is the ideal option for search engines, provided you handle paginated pages in line with SEO best practices.

Best practices to consider for SEO pagination

1. Include canonical tags on paginated pages

Duplicate content is likely to be one of the biggest challenges you’ll come across when implementing pagination on your website.

To overcome these issues, you’ll need to use a rel="canonical" attribute on all of your paginated pages that points back to the “View All” version of your page. This tag tells Google to crawl and index the “View All” version only and ignore any duplicated content in your paginated pages.

In the HTML, it looks like this:

Image source: SEO Clarity

Last but not least, make sure you use internal linking to different paginated URLs using the rel="next" and rel="prev" tags along with your canonical tag. These can be incorporated into your HTML like so:

<link rel="next" href="https://www.example.com/category?page=2&order=newest" />

<link rel="canonical" href="https://www.example.com/category?page=2" />

Even though these aren’t a ranking factor, they still help Google (and Bing) understand the order of paginated content on your website.

2. Make sure to use crawlable anchor links

The first step to getting Google to crawl and index pages that are paginated? Make sure that the search engine can access them. Throughout your website, you should link to your paginated category pages using crawlable anchor site links with href attributes.

Let’s say you’re linking to page 4 of your product catalogue. A crawlable paginated link would look like this:

<a href="https://www.mystorehere.com/catalog/products?page=4">Page 4</a>

On the flipside, any link without the a href attribute won’t be crawlable by Google, such as this one:

<span href="https://www.mystorehere.com/catalog/products?page=4">Page 4</span>

3. Don’t include paginated pages in your sitemap

Even though your paginated pages are indexable, paginated URLs shouldn’t be included on your XML sitemap. Adding them in will only use up your ‘crawl budget’ with Google and could even lead to Google picking a random page to rank (such as page 3 in your product catalogue).

The only exception to this is when you choose to have important pages consolidated into a “View All” page, which absolutely needs to be included in your XML sitemap.

A final word on this one: don’t noindex paginated pages. While the no-index tag tells Google not to index paginated pages, it could lead to Google eventually no-following internal links from that page. In turn, this might cause other pages that are linked from your paginated pages to be removed from Google’s index.

4. Ensure you optimise your on-page SEO

Even if your paginated pages use self-referencing canonical URL tags, feature crawlable anchor links and are excluded from your XML sitemap, you should still follow best practices for on-page SEO.

As we touched on earlier, paginated pages are treated as unique pages in Google’s search index. This means that each page needs to follow on-page SEO guidelines if you want to rank in search results.

In case you needed more proof, here are John Mueller’s recommendations on this topic:

“I’d also recommend making sure the pagination pages can kind of stand on their own. So similar to two category pages where if users were to go to those pages directly, there would be something useful for the user to see there. So it’s not just like a list of text items that go from zero to 100 and links to different products. It’s actually something useful kind of like a category page where someone is looking for a specific type of a product they can go there, and they get that information.” – John Mueller, Google Webmaster English Hangouts

This means that every paginated page should:

  • Have unique meta tags, including title tags and meta descriptions
  • Feature mobile-friendly design that’s optimised for smaller screens
  • Load quickly on desktop and mobile devices
  • Include filters to help narrow down products (if you’re running an online store)
  • Deliver value for visitors

Tip: If you’re running an online store with eCommerce category pages, Google’s UX Playbook for Retail contains all the best practices you need to know to turn clicks into customers.

https://www.searchenginejournal.com/technical-seo/pagination/

SEO-Friendly Pagination: A Complete Best Practices Guide

Summary

  • Canonical tags to the same page (not to the view all or first page)
  • Use rel=next and rel=prev
  • If possible give paginated pages their own “sub-optimal” meta titles & descriptions
  • Not sure if this is an issue:

Pagination Causes Duplicate Content

Correct if pagination has been improperly implemented, such as having both a “View All” page and paginated pages without a correct rel=canonical or if you have created a page=1 in addition to your root page.

Incorrect when you have SEO friendly pagination. Even if your H1 and meta tags are the same, the actual page content differs. So it’s not duplication.

Pagination Creates Thin Content

Correct if you have split an article or photo gallery across multiple pages (in order to drive ad revenue by increasing pageviews), leaving too little content on each page.

Incorrect when you put the desires of the user to easily consume your content above that of banner ad revenues or artificially inflated pageviews. Put a UX-friendly amount of content on each page.

Pagination Uses Crawl Budget

Correct if you’re allowing Google to crawl paginated pages. And there are some instances where you would want to use that budget.

For example, for Googlebot to travel through paginated URLs to reach deeper content pages.

Often incorrect when you set Google Search Console pagination parameter handling to “Do not crawl” or set a robots.txt disallow, in the case where you wish to conserve your crawl budget for more important pages. (Use robots.txt for this, as parameter handling is no longer available in Search Console.)

Managing Pagination According to SEO Best Practices

Use Crawlable Anchor Links

For search engines to efficiently crawl paginated pages, the site must have anchor links with href attributes to these paginated URLs.

Be sure your site uses <a href="your-paginated-url-here"> for internal linking to paginated pages. Don’t load paginated anchor links or href attributes via JavaScript.

Additionally, you should indicate the relationship between component URLs in a paginated series with rel=”next” and rel=”prev” attributes.

Yes, even after Google’s infamous Tweet that they no longer use these link attributes at all.

Google is not the only search engine in town. Here is Bing’s take on the issue.

Complement the rel="next" / rel="prev" attributes with a self-referencing rel="canonical" link.

So /category?page=4 should rel="canonical" to /category?page=4.

This is appropriate because pagination changes the page content, so each paginated URL is the master copy of its own page.

If the URL has additional parameters, include these in the rel="prev" / rel="next" links, but don’t include them in the rel="canonical".

For example:

<link rel="next" href="https://www.example.com/category?page=2&order=newest" />

<link rel="canonical" href="https://www.example.com/category?page=2" />

Doing so will indicate a clear relationship between the pages and prevent the potential of duplicate content.

Common errors to avoid:

  • Placing the link attributes in the <body> content. They’re only supported by search engines within the <head> section of your HTML.
  • Adding a rel=”prev” link to the first page (a.k.a. the root page) in the series or a rel=”next” link to the last. For all other pages in the chain, both link attributes should be present.
  • Beware of your root page’s canonical URL. Chances are that on ?page=2, rel="prev" should link to the canonical root URL, not a ?page=1 variant.

The <head> code of a four-page series will look something like this:
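Based on the rules above, page 2 of a four-page series (/category?page=2) might carry something like this (example.com is a placeholder domain):

```html
<head>
  <!-- rel="prev" points at the root page's canonical URL, not ?page=1 -->
  <link rel="prev" href="https://www.example.com/category" />
  <link rel="next" href="https://www.example.com/category?page=3" />
  <!-- self-referencing canonical -->
  <link rel="canonical" href="https://www.example.com/category?page=2" />
</head>
```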

Modify Paginated Pages On-Page Elements

John Mueller commented, “We don’t treat pagination differently. We treat them as normal pages.”

Meaning paginated pages are not recognized by Google as a series of pages consolidated into one piece of content as they previously advised. Every paginated page is eligible to compete against the root page for ranking.

To encourage Google to return the root page in the SERPs and prevent “Duplicate meta descriptions” or “Duplicate title tags” warnings in Google Search Console, make an easy modification to your code.

If the root page has the formula:

Root page SERP

The successive paginated pages could have the formula:

pagination page SERP

These paginated URL page titles and meta descriptions are purposefully suboptimal to dissuade Google from displaying these results, rather than the root page.

If even with such modifications, paginated pages are ranking in the SERPs, try other traditional on-page SEO tactics such as:

  • De-optimize paginated page H1 tags.
  • Add useful on-page text to the root page, but not paginated pages.
  • Add a category image with an optimized file name and alt tag to the root page, but not paginated pages.

Don’t Include Paginated Pages in XML Sitemaps

While paginated URLs are technically indexable, they aren’t an SEO priority to spend crawl budget on.

As such, they don’t belong in your XML sitemap.

Handle Pagination Parameters in Google Search Console

If you have a choice, run pagination via a parameter rather than a static URL. 

For example:

example.com/category?page=2

 over 

example.com/category/page-2

While there is no advantage in using one over the other for ranking or crawling purposes, research has shown that Googlebot seems to guess URL patterns based on dynamic URLs, thus increasing the likelihood of swift discovery.

On the downside, it can potentially cause crawling traps if the site renders empty pages for guesses that aren’t part of the current paginated series.

For example, say a series contains four pages.

URLs with content stop at http://www.example.com/category?page=4

If Google guesses http://www.example.com/category?page=7 and a live but empty page is loaded, the bot wastes crawl budget and can potentially get lost in an infinite number of pages.

Make sure a 404 HTTP status code is sent for any paginated pages which are not part of the current series.

Another advantage of the parameter approach is the ability to configure the parameter in Google Search Console to “Paginates” and at any time change the signal to Google to crawl “Every URL” or “No URLs”, based on how you wish to use your crawl budget. No developer needed!

Don’t ever map paginated page content to fragment identifiers (#) as it is not crawlable or indexable, and as such not search engine friendly.

Sources for KPIs can include:

  • Server log files for the number of paginated page crawls.
  • Site: search operator (for example site:example.com inurl:page) to understand how many paginated pages Google has indexed.
  • Google Search Console Search Analytics Report filtered by pages containing pagination to understand the number of impressions.
  • Google Analytics landing page report filtered by paginated URLs to understand on-site behavior.

JavaScript and SEO – Notes from Across the Web [2023]

SEO and Vue.JS Notes – I recently did some research on this for a new website we’re building. I thought it’s worth a post for future reference and for anyone interested!

Summary

  • 90% of what I’ve found suggests you need to use server-side rendering or pre-rendering for JavaScript
  • Make sure all links are available in the proper a href HTML markup in the “View Source” of page
  • Official Google Video

If a website requires JS to show links, then Googlebot has to add an extra element to crawl them – “rendering” is required

JS can be “costly” – it needs to be downloaded, parsed, and executed.

  • Official Google Video

“Make sure the content you care about the most, is part of the markup you see in the source of your website”

– All of the homepage meta and all the links in the body render without JS and can be found in the “view source” code

  • Official Google Video

Mobile-Friendly Test –

Shows the page’s rendered source code.

If code is missing, or the page doesn’t render as expected, check the recommendations.

Search Console

https://search.google.com/test/rich-results

“Google can crawl JS….Google may not necessarily fetch all the JS resources”

“Google won’t click or scroll”

“The main content and links won’t be visible to Google”

“The problem is rendering counts towards crawl budget”

– Could be a problem for big eCommerce stores with lots of pages

“Don’t block JS files in robots.txt”

“Problem when websites don’t use traditional ahref links”

Tools

https://chrome.google.com/webstore/detail/view-rendered-source/ejgngohbdedoabanmclafpkoogegdpob?hl=en

https://developers.google.com/search/docs/crawling-indexing/links-crawlable

Make your links crawlable

Google can follow your links only if they use proper <a> tags with resolvable URLs.

The browser then builds a render tree that determines the sizes, elements, etc. to display.

Finally, JS might add, change, or remove tree elements – especially when the user interacts with the page.

“View Source” – shows the Source HTML

The Elements tab in the Developer Tools shows the current DOM content, including images etc. added by JS.

Search Console – Inspect URL – get rendered HTML that Google uses for indexing a page

https://itnext.io/yes-here-are-4-ways-to-handle-seo-with-vue-even-without-node-ssr-719f7d8b02bb

Not everyone can have a Node server for their project. And there may be a lot of reasons for that: shared webhost, no root access…

So here are 4 ways to handle SEO in 2021 with an SPA.

1. SEO on the client side with Google crawlers

React, Vue, Svelte… All these are frontend frameworks initially used to create SPAs, aka websites/webapps with CSR (Client Side Rendering).

What does this mean? It means the rendering is done in the browser. Therefore, the HTML sent to the browser & search engine crawlers is empty!

No HTML content = No SEO.

Remember, you need to handle SEO tags (title, meta…) on the client side! You can use vue-meta or vue-head for that (personally, I prefer vue-meta).

2. SEO with Node-based Server Side Rendering (SSR)

So SSR, aka Server Side Rendering, is a “new” concept that came with frontend frameworks. It’s based on isomorphic programming, which means the same app and code is executed in the backend context and the frontend context.

Because your app is executed on the backend, the server returns your component tree as an HTML string to the browser.

What does this mean? Since each initial request is done by a Node server that sends HTML, this even works for social media crawlers or any other crawler.

SSR with Vue can be done in 2 ways, DIY or with a framework on top of Vue:

Of course, SEO with Node-based SSR has its drawbacks:

You need… A Node server! Don’t worry, you only need it for the initial HTML rendering, not for your API.

3. SEO using “classic” Server Side Rendering (SSR)

So, based on what we learnt in 1 & 2, we can achieve something similar with any backend language.

To solve this, we need to do 4 actions with any type of backend:

  1. Use a backend router that mirrors the frontend router, so that the initial response can render content based on the url asked
  2. In the backend response, we will only generate title & meta tags since our backend can’t execute frontend code
  3. We will store some initial data in a variable on the window object so that the SPA can access it at runtime on the client
  4. On the client, you check if there’s data on the window object. If there is, you have nothing to do. If there isn’t, you do a request to the API server.
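A minimal sketch of steps 2 and 3 in JavaScript (`renderShell` is a hypothetical helper, not something from the article):

```javascript
// Render the HTML shell for a route: real SEO tags in the markup,
// plus the initial state embedded on window for the SPA to reuse.
function renderShell({ title, description, initialData }) {
  return [
    "<!DOCTYPE html>",
    "<html><head>",
    `<title>${title}</title>`,
    `<meta name="description" content="${description}">`,
    "</head><body>",
    '<div id="app"></div>',
    `<script>window.initialData = ${JSON.stringify(initialData)}</script>`,
    '<script src="/app.js"></script>',
    "</body></html>",
  ].join("\n");
}
```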

That’s pretty much it for the backend, nothing more. You only need a single “view” file that takes title, meta, initialData or whatever parameters you need for SEO/SMO and that’s it.

The “window.initialData = @json($state)” part is also very important here, but not mandatory for SEO; it’s for performance/UX purposes. It gives the frontend its initial data, avoiding an initial AJAX request to your API server.
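Step 4 on the client can be sketched like this (written as a plain function of a window-like object; the names are made up for illustration):

```javascript
// Prefer the data the backend embedded; fall back to the API otherwise.
function getInitialData(win, fetchFromApi) {
  return win.initialData ? win.initialData : fetchFromApi();
}

// e.g. getInitialData(window, () => fetch("/api/state").then(r => r.json()))
```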

Of course, SEO with classic SSR has its drawbacks:

  • You have to mirror each route where you need SEO on the backend
  • You have to pass “the same data” to the frontend and to the APIs; sometimes it feels like duplicating stuff

4. JAMStack aka Static Site Generation aka Prerendering

This is the method I love the most, but it isn’t meant for all situations.

So what is JAMStack? Well it’s a fancy word for something that existed before that we called: static websites.

So what’s JAMStack then? JavaScript, APIs, Markup.

JAMStack is the concept of prerendering, but automated and modernized.

It’s an architecture solely based on the fact that you will prerender markup with initial data, that markup would use JavaScript to bring interaction and eventually more data from APIs (yours and/or others).

In a JAMStack architecture, you would usually use a frontend framework to prerender your static files that would then turn to an SPA.

It’s mostly based on the fact that you would rebuild pages on-the-fly anytime data changes in your APIs, through webhooks with CI/CD.

So it’s really nice, but not great for websites/webapps that have daily updates with a lot of pages.

Why? Because all pages are regenerated each time.

It’s the fastest, most SEO-friendly and “cheapest” method.

You only need your API server, a static host (Netlify, Vercel, S3, Firebase Hosting, etc…), and a CI/CD system for rebuilds which you most likely already have to handle tests or deployment.

Prerendering tools

Any other SSG (static site generator) would be good too, but you won’t get hydration with those that aren’t Vue-driven.

APIs: You can create your own API, but usually when you do JAMStack it’s for content-driven websites/webapps. That’s why we often use what we call headless CMSs.

A headless CMS is a CMS that serves its content through an HTTP API.

There are many of them: Strapi, Directus (Node), WordPress (yep it can), Cockpit CMS (PHP), Contentful, Dato, Prismic (hosted)…

You can find more here: https://jamstack.org/headless-cms

Conclusion: What’s the best SEO method then?

There isn’t a silver bullet. It depends on your stack, budget, team, type of app and some other parameters.

In a nutshell, I would say:

  • If you don’t care a lot about it: an optimized SPA with Vue meta is fine
  • If you can use Node: do Node-based SSR
  • If you can’t use Node: do classic SSR with initial data rendering
  • If you don’t have daily page updates or too many pages: JAMStack

That’s it. Remember: there’s never ONLY ONE WAY to do something.

https://www.smashingmagazine.com/2019/05/vue-js-seo-reactive-websites-search-engines-bots/

These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.

This seems to imply that we don’t have to worry about providing Google with server-side rendered HTML anymore. However, since we see all sorts of tools for server-side rendering and pre-rendering made available for JavaScript frameworks, it seems this is not the case. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?

COMPETITIVE SEO #

Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.

From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPS. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.

WHAT ABOUT PRE-RENDERING? #

So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.

I think it is not compulsory. It is certainly a requirement if a lot of the content you want to get indexed by Google comes from an external web service and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If during Googlebot’s visits some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page exactly at a moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.

Furthermore, I have no proof of ranking differences between static content and dynamically-generated content. That might require another experiment. I think that it is very likely that, if content comes from external web service and does not load immediately, it might impact on Google’s perception of your site’s performance, which is a very important factor for ranking.

JAVASCRIPT ERRORS #

If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.

OTHER SEARCH ENGINES #

The other search engines do not work as well as Google with dynamic content. Bing does not seem to index dynamic content at all, nor do DuckDuckGo or Baidu. Probably those search engines lack the resources and computing power that Google has in spades.

Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.

Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but according to my experience, it is still valid.

OTHER BOTS #

Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.

SUBPAGES #

If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML, you will have no problem having that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those — even if you rely on your JavaScript Framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.

  1. If you need your site to perform on search engines other than Google, you will definitely need pre-rendering of some sort.

Vue SEO Tutorial with Prerendering

“No search engines will be able to see the content, therefore it’s not going to rank…”

Solutions:

  1. Server side rendering
  2. Pre-rendering

https://www.youtube.com/watch?v=Op8Q8bUAKNc (Google video)

“We do not execute JS due to resource constraints” (in the first wave of indexing)

“Eventually we will do a second wave of indexing, where we execute JS and index your content again”

“…but if you have a large site or lots of frequently changing content, this might not be optimum”

A way around this is pre-rendering or server-side rendering.

https://davidkunnen.com/how-to-get-250k-pages-indexed-by-google/

When creating Devsnap I was pretty naive. I used create-react-app for my frontend and Go with GraphQL for my backend. A classic SPA with client side rendering.

I knew for that kind of site I would need Google to index a lot of pages, but I wasn’t worried, since I knew Google Bot renders JavaScript by now and would index it just fine.

Oh boy, was I wrong.

At first, everything was fine. Google was indexing the pages bit by bit and I got the first organic traffic.

First indexing

1. Enter SSR

I started by implementing SSR, because I stumbled across a quote from a Googler stating that client-side rendered websites have to get indexed twice. Google Bot first looks at the initial HTML and immediately follows all the links it can find. The second time, after it has sent everything to the renderer, it looks at the final HTML that comes back. That is not only very costly for Google, but also slow. That’s why I decided I wanted Google Bot to have all the links in the initial HTML.

Google Bot indexing cycle

I was doing that, by following this fantastic guide. I thought it would take me days to implement SSR, but it actually only took a few hours and the result was very nice.

Indexing with SSR

Without SSR I was stuck at around 20k pages indexed, but now it was steadily growing to >100k.

But it was still not fast enough

Google was not indexing more pages, but it was still too slow. If I ever wanted to get those 250k pages indexed and new job postings discovered fast, I needed to do more.

2. Enter dynamic Sitemap

With a site of that size, I figured I’d have to guide Google somehow. I couldn’t just rely on Google to crawl everything bit by bit. That’s why I created a small service in Go that would create a new Sitemap two times a day and upload it to my CDN.

Since sitemaps are limited to 50k pages, I had to split it up and focused on only the pages that had relevant content.

Sitemap index

After submitting it, Google instantly started to crawl faster.

But it was still not fast enough

I noticed the Google Bot was hitting my site faster, but it was still only 5-10 times per minute. I don’t really have an indexing comparison to #1 here, since I started implementing #3 just a day later.

3. Enter removing JavaScript

I was thinking why it was still so slow. I mean, there are other websites out there with a lot of pages as well and they somehow managed too.

That’s when I thought about the statement in #1. It is reasonable that Google only allocates a specific amount of resources to each website for indexing, and my website was still very costly, because even though Google was seeing all the links in the initial HTML, it still had to send the page to the renderer to make sure there wasn’t anything left to index. It simply doesn’t know that everything was already in the initial HTML when there is still JavaScript left.

So all I did was remove the JavaScript for bots.

if (isBot(req)) {

    completeHtml = completeHtml.replace(/<script[^>]*>(?:(?!<\/script>)[^])*<\/script>/g, "")

}
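The article never shows `isBot` itself; a common (hypothetical) implementation is user-agent sniffing against known crawler names:

```javascript
// Matches the big crawlers by user-agent substring (case-insensitive).
const BOT_UA = /googlebot|bingbot|yandex|baiduspider|duckduckbot|slurp/i;

function isBot(req) {
  return BOT_UA.test(req.headers["user-agent"] || "");
}
```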

Immediately after deploying that change the Google Bot went crazy. It was now crawling 5-10 pages – not per minute – per second.

Google Bot crawling speed

Conclusion

If you want to have Google index a big website, only feed it the final HTML and remove all the JavaScript (except for inline Schema-JS of course).


Excluding URLs in Screaming Frog Crawl [2023]

To exclude URLs just go to:

Configuration > Exclude (in the very top menu bar)

To exclude URLs within a specific folder, use the following regex:

^https://www.mydomain.com/customer/account/.*
^https://www.mydomain.com/checkout/cart/.*

The above regex will stop Screaming Frog from crawling the customer/account folder and the cart folder.
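If you want to sanity-check the patterns outside Screaming Frog, the matching behaviour is roughly this:

```javascript
// The exclude patterns from above, as JavaScript regexes.
const excludes = [
  /^https:\/\/www\.mydomain\.com\/customer\/account\/.*/,
  /^https:\/\/www\.mydomain\.com\/checkout\/cart\/.*/,
];

// A URL is skipped by the crawl if any exclude pattern matches it.
function isExcluded(url) {
  return excludes.some((re) => re.test(url));
}
```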

Excluding Images –

I’ve just been using the image extensions to block them in the crawl, e.g.

.*jpg

Although you can block them in the Configuration>Spider menu too.

SiteNavigationElement Schema – How to Implement [2023]


To associate name and other elements with the URL, it appears best to use ItemList in the schema markup, below is an example of SiteNavigationElement schema:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "SiteNavigationElement",
      "name": "MMA Equipment",
      "url": "https://www.blackbeltwhitehat.com/mma"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Cricket Equipment",
      "url": "https://www.blackbeltwhitehat.com/cricket"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Tennis Equipment",
      "url": "https://www.blackbeltwhitehat.com/tennis"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Golf Equipment",
      "url": "https://www.blackbeltwhitehat.com/golf"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Rugby Equipment",
      "url": "https://www.blackbeltwhitehat.com/"
    },
    {
      "@type": "SiteNavigationElement",
      "name": "Gym Equipment",
      "url": "https://www.blackbeltwhitehat.com/gym-equipment"
    }
  ]
}
</script>
  • Put the schema in the <head> or <body> tags.
  • Just replace the name and the URL if you want to use the code above.

SiteNavigationElement schema seems like a good idea for most websites to use.

It is in schema format so directly informs Google of page locations and what they’re about.

You can put it separately from the main navigation markup, in either the <head> or the <body>, when using the recommended JSON-LD format. This effectively gives Googlebot an additional set of links to crawl, or at least acknowledge, with some additional data describing what the links are about.

There are some old posts saying Navigation Schema is not approved by Google, but it now appears to be on the list of approved schema – screenshotted below “SiteNavigationElement”:

https://schema.org/docs/full.html

From what I’ve read and from the example I was sent during my ‘research’, it appears you can keep the schema code completely separate from the main HTML navigation code, so it effectively adds an additional instance of your HTML links (which is good).

Implementing Navigation Schema

If using JSON – put the schema code in <head> or <body> of HTML page

The schema can be placed on all of the site’s pages.

Google documentation about Schema generally:

SiteNavigation Schema

This schema helps search engines get a quicker, better understanding of the site structure and navigation elements, and may improve the website’s click-through rate.

SiteNavigation Schema Template:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "SiteNavigationElement",
      "position": 1,
      "name": "Navigation Element 1",
      "description": "Navigation Element 1 Description",
      "url": "https://example.com"
    },
    {
      "@type": "SiteNavigationElement",
      "position": 2,
      "name": "Navigation Element 2 (About us)",
      "description": "Navigation Element 2 Description",
      "url": "https://example.com/xyz"
    },
    {
      "@type": "SiteNavigationElement",
      "position": 3,
      "name": "Navigation Element 3 (Products)",
      "description": "Navigation Element 3 Description",
      "url": "https://example.com/abc"
    },
    {
      "@type": "SiteNavigationElement",
      "position": 4,
      "name": "Navigation Element 4 (Contact us)",
      "description": "Navigation Element 4 Description",
      "url": "https://example.com/pqr"
    }
  ]
}
</script>

The SiteNavigationElement markup can help increase search engines’ understanding of your site structure and navigation and can be used to influence organic sitelinks:

    • Organic sitelinks: (influenced by navigation schema):

Example using Microdata includes “href links”:

Site Navigation Schema Markup For your website

Currently used on 250,000 to 500,000 domains, what is the SiteNavigationElement markup? This schema markup basically represents the navigation element of the page.

HTML Example

<nav class="firstNav">

 <ul itemscope itemtype="https://schema.org/SiteNavigationElement">

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 </ul>

 </nav>

Some web developers, digital agencies and SEO gurus suggest that you should use SiteNavigationElement on the nav element; in fact, there are many WordPress theme developers who add this markup to the nav element of HTML. Let’s say that’s fair, because everyone seems to be an SEO expert in today’s freelancer, Upwork and Fiverr world.

All this sounds logical, doesn’t it? However, upon closer inspection of HTML5, you will realise that your page elements can also carry a role attribute, like this:

role="menu" – offers a list of choices to the user.

role="menuitem" – a link in a menu; an option in a group of choices contained in a menu.

Since accessibility is very important for Google Search (including rankings), and making your pages more meaningful can only be a good thing, our example can become even more meaningful like this:

<nav class="firstNav">

 <ul itemscope itemtype="https://schema.org/SiteNavigationElement" role="menu">

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 <li itemprop="name" role="menuitem"><a href="https://www.example.com/" title="title of hyperlink">Menu Text</a></li>

 </ul>

 </nav>

https://searchengineland.com

https://rankya.com

The example from earlier seems the simplest format to use; it’s perhaps worth adding “description” too.

CSS Flexbox for Beginners [2022]

Flexbox helps you to align elements. It’s much easier and better than floats etc.

Flexbox is generally used for small-scale layouts, whereas Grid is used for larger stuff.

Flexbox has 2 axes: the main axis and the cross axis.

The main axis is defined by the flex-direction property.

The cross axis will automatically run perpendicular to the main axis.

Image Source

Flexbox Main Axis

The main axis is defined by flex-direction.

Flex-direction has 4 possible values:

  • Row
  • Row-reverse
  • Column
  • Column-reverse

In the screenshot below,

display: flex makes container-1 a flex container (and its children flex items)

flex: 1 – applied to each of the boxes, distributes the available width evenly between them. As all the flex items are given a value of 1, they are all the same size.

If you were to change the flex value for .box-1 to 4, then box-1 would take up four-sixths (4/6) of the page:
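Since the screenshot isn’t reproduced here, a sketch of the CSS being described (class names assumed from the text):

```css
.container-1 {
  display: flex; /* makes .container-1 a flex container */
}

.box {
  flex: 1; /* each box takes an equal share of the width */
}

.box-1 {
  flex: 4; /* with two other boxes at 1 each, box-1 takes 4/6 of the row */
}
```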

Change the order of Flex Items

With flexbox you can change the order of the flex-items, (the boxes) without changing the HTML.

To do this use the “order” property.

The code below will put box-1 in the second position, left to right.
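The snippet in question isn’t shown in these notes, so here is a sketch (assuming three boxes with these class names):

```css
/* Explicit order values: box-1 renders second, left to right */
.box-2 { order: 1; }
.box-1 { order: 2; }
.box-3 { order: 3; }
```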

Flex-direction: Column

By giving the container the flex-direction property, with a value of “column”, the boxes / flex items will stack on top of each other

Justify-Content

You can use the “justify-content” property to align the boxes within the flex container.

justify-content: flex-end; will push all the flex items to the right

justify-content: center; will place the boxes in the centre of the container.

justify-content: space-between;

Aligns the content spaced evenly across the entire width of the container, with margins in between:

justify-content: space-around; will add some “space around” the items, so that they have margins between the items and also to the sides of the left-most and right-most items:
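As a single reference sketch of those justify-content values (class name assumed):

```css
.container-1 {
  display: flex;
  justify-content: flex-end;        /* push items to the right */
  /* justify-content: center;          centre the items             */
  /* justify-content: space-between;   even gaps, items touch edges */
  /* justify-content: space-around;    even gaps plus edge spacing  */
}
```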

Flex Grow

If, after the dimensions of the flex items have been set, there is space left over, you can use the “flex-grow” property.

If you give each flex item a flex grow value of 1, then the items will take up an equal amount of the remaining space.

It could be 1 or 100; it won’t make a difference as long as all the items have the same value.

Below, flex item “one” is the only flex item with a “flex-grow” property, so it takes up all the space left over by the three boxes.
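A sketch of that flex-grow behaviour (class names assumed):

```css
.box { width: 100px; }  /* fixed-width items leave space over        */
.one { flex-grow: 1; }  /* only "one" grows, so it absorbs all of it */
```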

Finding & Fixing CSS Spacing Issues with Dev Tools

One of the things I’ve found a bit of a ball ache as a developing developer, is sorting out spacing i.e. padding and margin issues when creating pages using Magento.

For example, let’s say I want to change the spacing above an image.

Best thing I’ve found to do to find the margin or padding that I want to change:

  • Right click on image – choose Inspect Element
  • In the “styles” tab of dev tools, look for margin and padding styles

In this instance, it’s not the image, or the images div or container that’s causing the spacing that I don’t want:

In this instance, I click on the “Div” or container, that the image resides within (the parent element)

Now I’ve found the issue with the spacing at the top that is too large:

  • I can now go into the stylesheet and amend the class “content-image section” or just add an in-line style to this individual incidence of the class, and change “margin-top” to something like 15px.

When you are inspecting an element in Chrome, Dev tools will also tend to highlight (in a dark shade) the spacing around that element too, so it shows you which element is generating the padding or margin.

Smart Slider for WordPress – Simple Guide [2022]

Adding image slider in WordPress

In the admin panel, on the left side there is now an option called “smart slider”.

When you go in there, you can select “Dashboard” from the side menu that appears, and then choose to create a “new project”.

When creating a new project you can give that slider a specific name etc if you want to

The project type you’d likely want to use is “slider”, and the slider type “simple”; you’d probably want the name to reference the post it’s for.

When you add a slide, you can select image which will give you a prompt to add an image from the media library.

Once it’s been set up, you need to add the shortcode in the blog editor section and it should populate itself on the frontend. The slider in this example was given an ID of 2, so you’d add [smartslider3 slider="2"]

CSS Transitions for Beginners [2022]

Image source

What are Transitions in CSS?

Transitions are used to change the properties or the style of an element.

The transition is kind of like the “animation” that occurs between the two states of the element. Although CSS animations are a different thing. Which is confusing. Sorry.

Transitions and animations are great for grabbing people’s attention (for example in a banner ad) or for enhancing User Experience (UX). In other words, transitions look nice.

Transitions are the bit that happens when an element, like a box for example, changes size to a new state.

Original State —Transition—> New State

Box with 200px width —user clicks to cause transition—> Box with 400px

Gif Source

You might use transitions for example, to dictate how an element changes when it is “hovered” over with a mouse pointer.
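The code in the (missing) screenshot looks roughly like this, as a sketch: no transition is declared yet, so the width snaps instantly on hover.

```css
.box {
  width: 200px;
}

.box:hover {
  width: 400px; /* snaps instantly: no transition declared */
}
```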

With the code above, the element given the class of “box”, will change from 200px width to 400px when someone hovers over it with their mouse.

The code above will have no transition, it will just change from one to the other.

With transition-property and transition duration added, the box will move gradually from 200px to 400px, over the course of 1 second.
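With the transition declared, a sketch:

```css
.box {
  width: 200px;
  transition-property: width;
  transition-duration: 1s; /* the width now animates over one second */
}

.box:hover {
  width: 400px;
}
```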

transition-duration can be set in ms (milliseconds) or seconds in CSS, but JavaScript only uses ms.

Transition Properties

To create a CSS transition, you need to specify the transition-property and the transition duration.

The transition-property dictates, what is going to change, for example, the width and height:

transition-property: width, height;

The transition duration dictates the length of time a transition should take, e.g.

 transition-duration: 2s;

transition-delay specifies how long a transition should wait before beginning, and transition-timing-function dictates how fast or slow the transition runs at different points (more info below).
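Put together, the four transition properties look like this (the values are just illustrative):

```css
.box {
  transition-property: width, height;
  transition-duration: 2s;
  transition-delay: 500ms;              /* wait half a second before starting */
  transition-timing-function: ease-in;  /* start slow, speed up */
  /* shorthand equivalent for one property: transition: width 2s ease-in 500ms; */
}
```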

CSS Transition Examples

In the example below, the box is given a class of “box” inside the HTML.

In the CSS sheet, this class is selected with the dot/period/fullstop – “.box” and given styles that include 300px height and width.

Here the box is also given the transition-duration of 350ms and the transition-property style states that the background is what will change.

The .box:hover dictates how the style will change when the box is hovered over with a mouse pointer. In the example, it will rotate 45 degrees and change colour.
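A reconstruction of that example from the text (the actual colours in the screenshot are unknown, so these are placeholders):

```css
.box {
  width: 300px;
  height: 300px;
  background: steelblue;                      /* placeholder colour */
  transition-duration: 350ms;
  transition-property: background, transform;
}

.box:hover {
  background: tomato;                         /* placeholder colour */
  transform: rotate(45deg);
}
```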

The transition-property property specifies the name of the CSS property the transition effect is for (the transition effect will start when the specified CSS property changes).

Tip: A transition effect could typically occur when a user hovers over an element.

Note: Always specify the transition-duration property, otherwise the duration is 0, and the transition will have no effect.

https://www.w3schools.com/cssref/css3_pr_transition-property.php

Here’s a real-life example of a transition:

You can just write “all”, but best-practice is to state all of the properties you want to change with your transition:
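For example (the property names here are just illustrative):

```css
/* Convenient but broad: */
.box { transition: all 300ms ease; }

/* Better practice: name each property you expect to change */
.box { transition: width 300ms ease, background-color 300ms ease; }
```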


Transition-Timing-Function

By default, your transitions will “ease” into the new styles.

ease

Ease is the default value. Ease increases in velocity towards the middle of the transition, slowing back down at the end.

linear

Transitions at an even speed.

ease-in

Starts off slowly, with the transition speed increasing until complete.

ease-out

Starts transitioning quickly, slowing down as the transition continues.

ease-in-out

Starts transitioning slowly, speeds up, and then slows down again.

cubic-bezier(p1, p2, p3, p4)

Cubic-Bezier

If you really want to, you can also define your own transition speeds using cubic-bezier. The easiest way to do this is to use a generator.

Image Source

Testing Transitions with Chrome Dev Tools

You can “visualise” transitions, using Inspect Element/Chrome Developer Tools and clicking on “ease” or the transition-property you’ve created/stated:

You can also play around with rotation

Remember to copy the code before you exit Chrome dev tools.

Warning about CSS Transitions

If you can, limit transitions to transforms and opacity.

A browser can use a graphics card for transforms and opacity.

For other properties, transitions can look strange and very jerky for some users. This is especially true if you set transition-property to “all”, rather than to specific properties.

Be careful when using transitions on box shadows, borders, backgrounds etc.

CSS Positions Explained [2022]

Static

Static is the default position for all the HTML elements.

Static, effectively doesn’t do much, it just means the element like an image or text will follow the normal flow of the page/DOM.

Static elements are a bit shit and kinda lazy: statically positioned elements cannot have the z-index, top, left, right, or bottom properties applied to them.

If Static was a TV character, it would be Onslow from Keeping Up Appearances

Static example

Relative Position

Relative positions are pretty similar to static positions, but you can add z-index, top, left, right, and bottom properties to it.

Relative positions go with the normal flow, but can be taken out of the flow (kind of) by adding properties for its position.

Think of relative position like a dog surfer. For no reason at all.

Only joking, because it goes with the flow, but if you shout at it really loudly, you can make it move in specific directions. Maybe.

Absolute Position

Absolute position, removes the element from the normal document flow.

It goes where it wants, regardless of other elements.

The other elements are not moved or affected by an absolute element.

Absolutely positioned elements have their width defaulted to auto, instead of being full width like a div.


https://blog.webdevsimplified.com/2022-01/css-position/

In the above image, the “one” class/element is given 0 for the top and left properties, so it remains in the default position, on the top left of the screen.
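A sketch of the CSS behind that image (class name taken from the text):

```css
.one {
  position: absolute; /* removed from the normal flow */
  top: 0;
  left: 0;            /* pinned to the top-left of its containing block */
}
```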

Fixed Position

A fixed position in CSS is based on the user’s viewport. A fixed-position element stays in the same place on the user’s screen, even when the user scrolls.

Below is how W3Schools describes fixed positioning:
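A typical fixed-position snippet (my own example, not from W3Schools):

```css
.back-to-top {
  position: fixed;  /* positioned against the viewport */
  bottom: 20px;
  right: 20px;      /* stays in the corner even while the page scrolls */
}
```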

Sticky Positioning

Sticky elements scroll with the page until they reach a set offset, then stick to the user’s viewport.

Here’s an example, the heading “do you know all about sticky element”:
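A minimal sticky-heading sketch:

```css
.section-heading {
  position: sticky;
  top: 0; /* scrolls normally, then sticks once it reaches the top of the viewport */
}
```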

source

source

Block

Block isn’t actually a type of positioning in CSS (it’s a display value), but I still see it a lot when looking at code, and it’s related to positioning, so I thought I’d better cover it.

Most things are blocks. A paragraph for example is a block.

Block elements always stack on top of one another, even if they have room to go side by side, they don’t.

By default, block elements have a width of 100%, meaning they take up the whole width of the page.

The only thing that usually limits their width is the parent element’s padding or margins.

Inline elements are a bit different, in that unlike paragraphs etc. they don’t create a new line. For example, a link is an inline element:

You can’t add vertical margins to inline elements (top and bottom values are ignored), and vertical padding won’t push surrounding content away.

Inline-Block

However! You can give margin and padding to an inline-block.

This can be a good block to use, if you don’t want an element, for example a button, to take up the entire width of the page.

For example, if you want to place three buttons in a row, you can use inline-block.
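A sketch of the three-buttons case (class name assumed):

```css
.btn {
  display: inline-block; /* sits in a row with its siblings */
  padding: 10px 20px;    /* unlike plain inline, padding and margin work */
  margin: 0 8px;
}
```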

Grid Positioning Example

One final, practical example of positioning:

CSS Grid with Image and Text Side by Side Example

Here’s another example from css-tricks.com

Grid and flexbox are classed as layouts rather than positions, which is kind of rubbish (according to my Dunning-Kruger opinion), given that you use them to position elements.