Weekly Digest
My information highlights for the week:
By default Gatsby will embed CSS into the <head> of each HTML page. This is not ideal. In this post I take a look at how to move that CSS into an external file and how the contents of that file can be optimised to remove unused CSS.
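On the "remove unused CSS" side, one way this can be done is with the gatsby-plugin-purgecss plugin. Below is a minimal sketch of that approach; it is illustrative only and not necessarily the exact plugin or options used in the post.

```js
// gatsby-config.js (illustrative only; the post may use a different plugin or options)
module.exports = {
  plugins: [
    // The purge plugin should come after any CSS/SASS plugins.
    {
      resolve: `gatsby-plugin-purgecss`,
      options: {
        printRejected: true, // log the selectors that were stripped out
        develop: false,      // only purge unused CSS in production builds
      },
    },
  ],
}
```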
A short week for me since I’m travelling. A small sample of highlights:
In a previous post we deployed our Gatsby site on Netlify. Now let’s take a look at another platform: Vercel.
Suppose that you want to make your site routing a little more flexible. For example, rather than just going straight to a 404 page if the path is not found, you might want to try and guess an appropriate (and valid!) path. This is where dynamic routing comes into play.
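As a rough illustration of the "guess a valid path" idea, a custom 404 page could compare the requested path against a list of known paths and redirect to the closest match. Everything below (the known paths, the similarity measure) is a hypothetical sketch rather than the post's actual implementation.

```js
// src/pages/404.js (hypothetical sketch of redirecting to a "best guess" path)
import React, { useEffect } from "react"
import { navigate } from "gatsby"

// Assumption: in practice this list would be generated from the site's pages.
const KNOWN_PATHS = ["/blog/", "/docs/", "/about/"]

// Naive similarity: the number of leading characters two paths share.
const overlap = (a, b) => {
  let i = 0
  while (i < a.length && i < b.length && a[i] === b[i]) i++
  return i
}

export default function NotFound({ location }) {
  useEffect(() => {
    const guess = KNOWN_PATHS.reduce((best, p) =>
      overlap(p, location.pathname) > overlap(best, location.pathname) ? p : best
    )
    if (overlap(guess, location.pathname) > 1) {
      navigate(guess, { replace: true })
    }
  }, [location])

  return <p>Page not found. Redirecting to our best guess…</p>
}
```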
Highlights from this week:
Setting up a custom 404 page can add something special to your site. It provides you with the opportunity to do something memorable in the unfortunate event that a user asks for an unknown page.
Deploying a Gatsby site can feel like a daunting task, especially given the array of potential hosting platforms. Among them, Netlify is a strong contender thanks to its seamless build and deployment process, and it continues to improve its support for Gatsby. This post shows how to get your Gatsby site live on Netlify.
One of my standard approaches to scraping content from a dynamic website is to diagnose the API behind the site and then use it to retrieve data directly. This means that I can make efficient HTTP requests using the requests package and I don’t need to worry about all of the complexity around scraping with Selenium. However, it’s often the case that the API requests require a collection of cookies and headers, and those need to be gathered using Selenium.
There are a couple of files which can have an impact on the SEO performance of a site: (1) a sitemap and (2) a robots.txt. In a previous post we set up a sitemap which includes only the canonical pages on the site. In this post we’ll add a robots.txt.
A Gatsby site will not have a robots.txt file by default. There’s a handy package which makes it simple though. We’ll take a look at how to add it to the site and a couple of ways to configure it too.
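For reference, here’s a minimal sketch of what that configuration can look like, assuming the package in question is gatsby-plugin-robots-txt; the domain and options below are placeholders.

```js
// gatsby-config.js (illustrative; the domain and options are placeholders)
module.exports = {
  siteMetadata: {
    siteUrl: "https://www.example.com",
  },
  plugins: [
    {
      resolve: "gatsby-plugin-robots-txt",
      options: {
        host: "https://www.example.com",
        sitemap: "https://www.example.com/sitemap-index.xml",
        policy: [{ userAgent: "*", allow: "/" }],
      },
    },
  ],
}
```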
Highlights from this week (some cloud, a bit of Docker, Spark and AI):
The principal purpose of a sitemap file is to inform search engines about the pages on a website that are available for crawling. It provides a list of URLs along with additional metadata about each URL to help search engines more intelligently crawl the site. If there are multiple page versions on a site then the sitemap should include only the canonical versions of those pages.
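A minimal sketch of how that might be wired up with gatsby-plugin-sitemap is shown below; the excluded paths are hypothetical examples of non-canonical pages, not the post’s actual configuration.

```js
// gatsby-config.js (illustrative; the excluded paths are hypothetical)
module.exports = {
  siteMetadata: {
    siteUrl: "https://www.example.com",
  },
  plugins: [
    {
      resolve: "gatsby-plugin-sitemap",
      options: {
        // Keep non-canonical (older) documentation versions out of the sitemap.
        excludes: ["/docs/0.1.0/*", "/docs/0.2.0/*"],
      },
    },
  ],
}
```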
In the previous post we completed the implementation of multiple site versions. There’s now more than one version of each of the content pages. From a developer and user perspective this is ideal: we have granular documentation for each version of our fictitious site. For SEO purposes, however, it poses a problem.
We’re now going to bring together what we have been building in the previous two blog posts. First we added the raw AsciiDoc source into the GraphQL schema. Next we used AsciiDoc preprocessor directives to include conditional content into the rendered content pages. Specifically, we conditionally included content on pages depending on the value of a version attribute which was dynamically inserted into the raw AsciiDoc front matter. Now we are going to set up a URL structure which includes a version number and list the available documentation versions from the landing page.
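A rough sketch of the kind of page creation involved is below. The node and field names (allAsciidoc, fields.version, fields.slug) and the template path are assumptions for illustration; the post’s actual schema may differ.

```js
// gatsby-node.js (sketch only; field names and template path are assumptions)
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions
  const result = await graphql(`
    {
      allAsciidoc {
        nodes {
          id
          fields {
            slug
            version
          }
        }
      }
    }
  `)
  result.data.allAsciidoc.nodes.forEach((node) => {
    createPage({
      // Embed the documentation version in the URL.
      path: `/docs/${node.fields.version}/${node.fields.slug}/`,
      component: require.resolve("./src/templates/doc.js"),
      context: { id: node.id },
    })
  })
}
```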
Suppose that you have a product which is undergoing rapid development. Each new release of the product is assigned a unique version number. The product documentation is diligently updated in line with the evolving product. Ideally the documentation should be consistent with the latest release of the product. However, not all of your users will be using the latest version, so they should also be able to access older versions of the documentation.
Using AsciiDoc attributes it’s possible to have conditional content, which will appear under some conditions but be absent in others.
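The general mechanism looks something like the preprocessor directives below; the attribute names (version, beta) are just illustrative.

```asciidoc
// Conditional content keyed on illustrative attributes.
ifeval::["{version}" >= "2.0"]
This paragraph only appears when the documentation is built for version 2.0 or later.
endif::[]

ifdef::beta[]
This paragraph only appears when the `beta` attribute is defined.
endif::[]
```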
It’s useful to be able to add fields to the GraphQL schema. In this post I’ll illustrate how to do this by adding nodes for the raw AsciiDoc source and linking the raw data to the processed content.
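A hypothetical sketch of one way to do this is shown below: read the parent file of each Asciidoc node and attach its raw contents as a field. The field name (rawSource) is an assumption, not necessarily the one used in the post.

```js
// gatsby-node.js (sketch; the "rawSource" field name is an assumption)
const fs = require("fs")

exports.onCreateNode = ({ node, getNode, actions }) => {
  const { createNodeField } = actions
  if (node.internal.type === "Asciidoc") {
    // The parent of an Asciidoc node is the File node it was transformed from.
    const fileNode = getNode(node.parent)
    const raw = fs.readFileSync(fileNode.absolutePath, "utf-8")
    // Exposed in GraphQL as asciidoc { fields { rawSource } }.
    createNodeField({ node, name: "rawSource", value: raw })
  }
}
```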
Highlights from this week (mostly cloud with a bit of Docker and CSS thrown in):
It’s often the case that we want pages on a site to be presented in a specific order. It’s possible to do this systematically by sorting on some existing aspect of the content (for example, sort alphabetically by page title) or by introducing a page attribute that’s specifically intended for sorting.
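As an illustration of the second option (a dedicated sorting attribute), the sketch below orders nodes by a hypothetical sortOrder page attribute before creating pages. The attribute, field names, and template path are assumptions rather than the post’s actual code.

```js
// gatsby-node.js (sketch; "sortOrder" and other names are assumptions)
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions
  const result = await graphql(`
    {
      allAsciidoc {
        nodes {
          id
          pageAttributes {
            slug
            sortOrder
          }
        }
      }
    }
  `)
  // Order the nodes by the dedicated sorting attribute.
  const nodes = [...result.data.allAsciidoc.nodes].sort(
    (a, b) => Number(a.pageAttributes.sortOrder) - Number(b.pageAttributes.sortOrder)
  )
  nodes.forEach((node, index) => {
    createPage({
      path: `/${node.pageAttributes.slug}/`,
      component: require.resolve("./src/templates/page.js"),
      // Pass the position along so templates can link to next/previous pages.
      context: { id: node.id, index },
    })
  })
}
```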
It can be useful to embed additional metadata into content pages on a Gatsby site. In this post we’ll take a look at how to add fields to the header of AsciiDoc files. These fields will be accessible via GraphQL.
Redirects instruct web browsers to automatically reroute from one URL to another. They are especially important when a site’s structure changes, pages are deleted, or content moves to a new location. Whether you’re rebranding, restructuring, or simply improving your site’s user experience, Gatsby provides tools for handling redirects. In this post we’ll look at how to implement and manage redirects with Gatsby, so your visitors always land in the right place.
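In Gatsby the building block for this is the createRedirect action, typically called from gatsby-node.js. A minimal sketch (with hypothetical paths) is shown below; note that on most hosts a platform-specific plugin, such as gatsby-plugin-netlify, is needed for these redirects to take effect in production.

```js
// gatsby-node.js (sketch; the paths are hypothetical)
exports.createPages = async ({ actions }) => {
  const { createRedirect } = actions
  createRedirect({
    fromPath: "/old-blog/getting-started/",
    toPath: "/blog/getting-started/",
    isPermanent: true, // issue a 301 rather than a temporary 302
  })
}
```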