
QuickTip: Matching the local version of Hugo on AWS Amplify

When using Hugo to generate a static website, it is important that your local Hugo version matches the version used by AWS Amplify.

If they don’t match, you can end up in a situation where, for example, syntax highlighting works fine locally but not on the deployed site. This happened to me, and this is how I fixed it.

  1. Open your AWS Amplify deploy project and choose «Build Settings» from the left menu.
  2. Scroll all the way down to «Build image settings» and click the edit button on the right side.
  3. Add a package version override:
    1. Select «Hugo» as the package.
    2. Set the version to «latest» (or the version you prefer).
    3. Save.

Now Amplify will use the version of Hugo you specified for all future builds. You’ll need to trigger a new build for any changes to your site to take effect.

PS: Remember to keep your local version of Hugo updated to the same version you selected in the Amplify console.
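
If you want a quick way to verify the match, a small Node script along the lines of the sketch below can compare your local Hugo version with the version you pinned in Amplify. The PINNED_HUGO_VERSION constant is a placeholder; set it to whatever you selected in the console.

// checkHugoVersion.js - a minimal sketch for comparing versions
const { execSync } = require("child_process");

// Placeholder: the version you selected in the Amplify console
const PINNED_HUGO_VERSION = "0.81.0";

// `hugo version` prints something like "hugo v0.81.0-... +extended ..."
const output = execSync("hugo version").toString();
const match = output.match(/v?(\d+\.\d+\.\d+)/);
const localVersion = match ? match[1] : "unknown";

if (localVersion === PINNED_HUGO_VERSION) {
	console.log(`✅ Local Hugo ${localVersion} matches the pinned version.`);
} else {
	console.error(`Local Hugo is ${localVersion}, but Amplify is pinned to ${PINNED_HUGO_VERSION}.`);
	process.exit(1);
}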


JAMStack: Integrating Sanity.io and Hugo

In this article I’m discussing my approach to creating a so-called JAMStack site, using this site (kinderas.com) as an example. The site is integrated with a CMS called Sanity for easy content management.

Goals

I wanted something simple, cheap, fast (load times) and secure. After testing several combinations of CMSes and static site generators, I landed on Sanity.io as the CMS and Hugo as the static site generator.

Sanity

Sanity.io is the platform for structured content. With Sanity.io you can manage your text, images, and other media with APIs. Source: The Sanity.io website

I chose Sanity because I like its approach of defining data structures as code. This makes it much easier to keep a backup and redeploy the CMS to new instances. Sanity is also responsive, great to work with, and has a reasonable pricing model.
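
To give an idea of what defining data structures as code looks like: a simplified Sanity schema for a post type, with roughly the fields used on this site, could look like the sketch below. It is only an illustration, not my exact schema.

// techPost.js - a simplified sketch of a Sanity schema definition
export default {
	name: "techPost",
	title: "Tech post",
	type: "document",
	fields: [
		{ name: "title", title: "Title", type: "string" },
		{ name: "metaDescription", title: "Meta description", type: "text" },
		{ name: "slug", title: "Slug", type: "slug", options: { source: "title" } },
		{ name: "publishedAt", title: "Published at", type: "datetime" },
		// Simplified to plain text here; a real blog would typically
		// use Portable Text (an array of blocks) for the body
		{ name: "body", title: "Body", type: "text" },
	],
};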

Hugo

Hugo is one of the most popular open-source static site generators. With its amazing speed and flexibility, Hugo makes building websites fun again. Source: Hugo website

The main reason I went with Hugo instead of the massively popular Gatsby was build speed and the approach to building pages. Building templates in Hugo consists mostly of writing HTML and CSS. In Gatsby everything is built using React, which can be an advantage if you need to mix build-time and runtime React components. I had no such need since I wanted a static blog site. Then there is the build speed… Hugo is way, way faster than Gatsby. This matters a great deal when the entire site needs to be built every time you click «publish» in the CMS.

With the setup explained below, deploying this site takes about 1 minute and 30 seconds from clicking publish until the change is visible.

AWS Amplify

For building and deploying the site I’m using AWS Amplify Deploy. It has built-in support for Hugo and Node.js and is powered by the AWS Cloudfront CDN, which makes hosting fast and cheap. You can probably use Netlify or a similar service to achieve the same results.

The setup

Normally when using Hugo you would store all the content in a folder named content at the root level. The structure within the content folder then defines the URL structure of the resulting site. For example, a blog post with the URL technology/2020/01/28/something would mean that the markdown file something.md is stored at the /content/technology/2020/01/28 path.

// «/content/technology/2020/01/28/something»

content/
└── technology/
   └── 2020/
      └── 01/
         └── 28/
            └── something.md

Since the content is now stored in Sanity, we need a way to import it into the project at build time. To do this I wrote a small JavaScript file which is executed by Node.js at build time. The script uses the Sanity JS client with a GROQ query (see example below).

In my case the query covers both the technology blog and the work blog in one request. This might have to be tweaked for sites with a large content base by adding pagination.

// PS: This query is based on my Sanity schema

const query = `*[_type == "workPost" || _type == "techPost"] {
	_type,
	title,
	metaDescription,
	slug,
	publishedAt,
	body
  }`;
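
For context, this is roughly how the query is executed with the Sanity JS client. The sketch below is simplified: the project ID and dataset are placeholders, and depending on your client version you may also need to pass an apiVersion option. The fs and path imports are used by the markdown generation shown further down.

// sanityToMarkdownImporter.js - a simplified sketch, not the full script
const fs = require("fs");
const path = require("path");
const sanityClient = require("@sanity/client");

const client = sanityClient({
	projectId: "your-project-id", // placeholder
	dataset: "production", // placeholder
	useCdn: false, // always fetch fresh content at build time
});

async function importPosts() {
	// `query` is the GROQ query shown above
	const posts = await client.fetch(query);
	for (const fields of posts) {
		// For each post, generate a markdown file as shown
		// in the next snippet below
		console.log(`Fetched ${fields._type}: ${fields.title}`);
	}
}

importPosts().catch((error) => {
	console.error(error);
	process.exit(1);
});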

You would then go through each post and generate the appropriate markdown file at the correct path in the content folder.

// Again, this is my setup, yours will probably differ
const slug = fields.slug.current;
const date = new Date(fields.publishedAt);
const year = date.getFullYear().toString();
const month = (date.getMonth() + 1).toString().padStart(2, "0");
const day = date.getDate().toString().padStart(2, "0");

// The full folder path for the markdown file
const dirPath = path.join(contentFolderPath, dirName, year, month, day);
// The name of the markdown file
const fileName = `${slug}.md`;

// The markdown content with front-matter
const content = `---
title: '${fields.title}'
date: ${fields.publishedAt}
draft: false
slug: "${slug}"
description: '${fields.metaDescription}'
---

${fields.body}
`;

try {
	// Create the full path for the markdown file
	// This can be multiple folders, hence the recursive flag
	await fs.promises.mkdir(dirPath, { recursive: true });
} catch (error) {
	// Ignore the folder exists error since this will
	// happen if one day has multiple posts
	if (error.code !== "EEXIST") {
		throw error;
	}
}

try {
	// Write the markdown file
	await fs.promises.writeFile(path.join(dirPath, fileName), content);
	return console.log(`✅ Wrote file ${fileName}`);
} catch (error) {
	throw error;
}

When the script has completed, which will probably take less than a second unless you have a lot of posts, you run the Hugo build and wait for Amplify to do its thing.

The full build setup looks something like this.

[Image: The build setup]

A complete amplify config file can look like this

version: 0.1
frontend:
  phases:
    preBuild:
      commands:
        # Install node modules for the importer script
        - yarn install
        # Import data from Sanity and create Markdown files
        - node sanityToMarkdownImporter.js
    build:
      commands:
        # Run the Hugo build
        - hugo --minify
  artifacts:
    baseDirectory: public
    files:
      - "**/*"
  cache:
    # Cache node_modules for next deploy
    paths: [node_modules/]
  customHeaders:
    # Cache css in the browser for a year
    - pattern: "**/*.css"
      headers: [{ key: "Cache-Control", value: "public,max-age=31536000,immutable" }]
    # Cache images in the browser for a year
    - pattern: "**/*.gif"
      headers: [{ key: "Cache-Control", value: "public,max-age=31536000,immutable" }]
    - pattern: "**/*.jpg"
      headers: [{ key: "Cache-Control", value: "public,max-age=31536000,immutable" }]
    - pattern: "**/*.png"
      headers: [{ key: "Cache-Control", value: "public,max-age=31536000,immutable" }]
    - pattern: "**/*.svg"
      headers: [{ key: "Cache-Control", value: "public,max-age=31536000,immutable" }]

The result is... well, the site you’re reading now. It loads fast, it is cheap to host and it works pretty well.

Hosting a static web-app with AWS S3 and Cloudfront

This is a step-by-step guide on hosting a static website on AWS S3 and serving it through AWS Cloudfront over https.

This article contains much of the same content as the «Hosting a static web-app on a custom domain with AWS S3 and Cloudflare» article, the main difference is that this article uses AWS Cloudfront instead of Cloudflare.

Setting up a basic web-app using AWS S3

The first step is to create the code for a web app. For this example, I’ll be using a simple index.html file which prints “An example web app …..”. The only thing to note here is that you do need an «index» file to act as the entry point / root for your web-app.

Configuring Amazon S3

Log in to the AWS console and navigate to the S3 service. Here you’ll want to create a new bucket. If you ever want to expose this app on a custom domain, the name of the bucket must exactly match the URL of your web app, not including the protocol (http(s)), because S3 uses the bucket name for routing. Let’s for instance say that we want the URL to be test.kinderas.com.

Since the example URL is test.kinderas.com, we’ll create a bucket with that name. Choose a region and click next and then next again.

On the third screen in the bucket creation wizard, uncheck the boxes under «Manage public Access control lists (ACLs) for selected buckets». If you forget this you can do it later by selecting the bucket and clicking the «Edit public access setting» button on the overview page.

[Image: S3 website setup]

In order to have S3 use this bucket for website hosting, we need to configure a couple of things. Click on the name of the bucket and select the «Properties» tab at the top. Find the «Static website hosting» tile and click it. For the «Index document», enter the filename of the entry point file; in this example it is called «index.html». You can also specify a document to be used for errors.

Note the «Endpoint» URL at the top of the dialog. We’ll need this URL for configuring the Cloudfront distribution.

[Image: S3 website setup]
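
If you prefer to script this step, the same configuration can be applied with the AWS SDK for JavaScript (v2). This is only a sketch; the bucket name and region are placeholders for your own values.

// configureWebsiteHosting.js - a sketch, not part of the console walkthrough
const AWS = require("aws-sdk");

const s3 = new AWS.S3({ region: "eu-west-1" }); // placeholder region

async function enableWebsiteHosting() {
	await s3
		.putBucketWebsite({
			Bucket: "test.kinderas.com", // placeholder bucket name
			WebsiteConfiguration: {
				IndexDocument: { Suffix: "index.html" },
				ErrorDocument: { Key: "error.html" }, // optional error document
			},
		})
		.promise();
	console.log("✅ Static website hosting enabled");
}

enableWebsiteHosting().catch(console.error);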

Now we need to upload our web-app to have something to use for testing. Select the «Overview» tab in your bucket and upload your app, making sure that the «index.html» file is at the root of the bucket.
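
Uploading can also be scripted with the SDK. A minimal sketch, assuming the same placeholder bucket and a local index.html, is shown below; note that when uploading programmatically you have to set the content type yourself.

// uploadIndex.js - a sketch; bucket name and file path are placeholders
const fs = require("fs");
const AWS = require("aws-sdk");

const s3 = new AWS.S3({ region: "eu-west-1" }); // placeholder region

async function uploadIndex() {
	await s3
		.upload({
			Bucket: "test.kinderas.com", // placeholder bucket name
			Key: "index.html",
			Body: fs.createReadStream("./index.html"),
			// Set the content type so the browser renders the file as HTML
			ContentType: "text/html",
		})
		.promise();
	console.log("✅ Uploaded index.html");
}

uploadIndex().catch(console.error);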

The web-app will not work just yet. If you visit the endpoint URL from the static website dialog it will just give you a permission error. To make it work we’ll need to allow access to the bucket from the outside world.

Select the «Permissions» tab at the top of the bucket settings and then select the «Bucket policy» tab. You should now see the «Bucket policy editor». Paste the JSON below into the editor, replacing «YOUR BUCKET NAME HERE» with your bucket name. Also replace «SOME LONG STRING» with an actual (different) long string. This string will be used in Cloudfront to make sure no one can visit the S3 bucket directly over http. See the image below for a full example using the name from the example web-app.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "PublicReadGetObject",
			"Effect": "Allow",
			"Principal": "*",
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::YOUR BUCKET NAME HERE/*",
			"Condition": {
				"StringEquals": {
					"aws:UserAgent": "SOME LONG STRING"
				}
			}
		}
	]
}
[Image: S3 website setup]

Configuring Cloudfront

Navigate to Cloudfront in the AWS console and click the «Create Distribution» button. You’ll want a «Web» distribution.

Origin settings

Origin Domain Name:

Paste the URL (without http) from the S3 static website hosting dialog here.

Origin Custom Headers:

The header name should be User-Agent and the value is that «SOME LONG STRING» you added in the S3 Bucket Policy json.

Default cache behavior setting

If you don’t want caching then set «Minimum TTL», «Maximum TTL» and «Default TTL» to 0 (zero). You can also configure how to handle http to https requests and forwarding cookies and query strings here.

Distribution Settings

(Optional) If you want to use a custom domain name you need to list it under «Alternate Domain Names». You’ll also need a custom SSL certificate for that domain.

When you’re done, click «Create distribution» and wait… quite a while. When it’s done you can visit your web-app at either the custom domain or the default Cloudfront domain; you’ll find the latter by opening the distribution and looking for «Domain Name».

[Image: S3 website setup]

Optimizing your AWS Lambda bundle size - Quicktip

When working with Node.js on AWS Lambda there are some limits one should be aware of. One of these concerns the bundle size, in other words the total file size of your code folder running on the Lambda function. Currently your code bundle must be below 250 MB (unzipped, including layers).

The notorious «node_modules» folder might cause you some grievance in the bundle size department. Especially if you’re importing the aws-sdk and/or other humongous dependencies. However, there is one really simple method you can use to minimize the size of the «node_modules» folder.

In two steps

  1. Move all the dependencies not needed for actually executing the code* to either «devDependencies» or «optionalDependencies». This might be things like TypeScript, testing frameworks, serverless and so on.
  2. When you build for deploy, install dependencies using «npm install --only=prod --no-optional». This will skip all those dependencies you moved in step 1.
  3. Watch your «node_modules» folder shrink in size

(*) Node.js projects often contain dependencies used for building and testing the project during development. Packages like TypeScript, Jest, Mocha, Serverless are usually not needed when the code is running in the AWS Lambda environment. Hence the code for such dependencies will just add unnecessary bytes to the bundle size and should be omitted from the production bundle.
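
If you want to see the effect in numbers, a small Node script like the sketch below can report the size of «node_modules» before and after pruning. This is just a convenience; du or your file manager works too.

// nodeModulesSize.js - a sketch that sums the size of node_modules
const fs = require("fs");
const path = require("path");

// Recursively sum the size of all files in a folder
function folderSize(dir) {
	let total = 0;
	for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
		const fullPath = path.join(dir, entry.name);
		if (entry.isDirectory()) {
			total += folderSize(fullPath);
		} else if (entry.isFile()) {
			total += fs.statSync(fullPath).size;
		}
	}
	return total;
}

const bytes = folderSize(path.join(process.cwd(), "node_modules"));
console.log(`node_modules is ${(bytes / 1024 / 1024).toFixed(1)} MB`);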

Carry on.

AWS: Hosting a static web-app on a custom domain with AWS S3 and Cloudflare

This is a step by step guide on hosting a static web-app on Amazon S3 and exposing it using a custom (https) domain through Cloudflare.

This article assumes some prior knowledge of both Amazon S3 and Cloudflare.

The example web-app used in the article will be hosted using a subdomain, however, the same approach will work for root domains as well.

About Amazon S3 and Cloudflare

If you are familiar with both S3 and Cloudflare you can skip this section.

Amazon S3 is a hosted storage service which allows you to upload files and configure permissions for these files. S3 also allows a so-called bucket (a container for files) to be used as a static web hosting service. You can read more about Amazon S3 by visiting their site. https://aws.amazon.com/s3/

Cloudflare is a service which offers CDN, DNS and many other products. We’ll be using it for its DNS product in this article. Read more about Cloudflare on their own site. https://www.cloudflare.com

Setting up a basic web-app

The first step is to create the code for a web app. For this example, I’ll be using a simple index.html file which prints “An example web app …..”. The only thing to note here is that you do need an «index» file to act as the entry point / root for your web-app.

The example web-app will be hosted at https://web-app-test.kinderas.com.

Configuring Amazon S3

Log in to the AWS console and navigate to the S3 service. Here you’ll want to create a new bucket with the same name as the URL for your web app. It’s important that the name of the bucket matches the name of the URL exactly, without the protocol (HTTP(s)), since S3 uses the bucket name for routing.

The example URL is web-app-test.kinderas.com, so we’ll create a bucket with that name. Choose a region and click next and then next again.

[Image: The bucket name must match the URL]

On the third screen for the bucket creation wizard, uncheck the boxes under «Manage public Access control lists (ACLs) for selected buckets». If you forget this you can do it later by selecting the bucket and clicking the «Edit public access setting» button on the overview page.

[Image: Allow the bucket to be public]

In order to have S3 use this bucket for website hosting, we need to configure a couple of things. Click on the name of the bucket and select the «Properties» tab at the top. Find the «Static website hosting» tile and click it. For the «Index document», enter the filename of the entry point file; in this example it is called «index.html». You can also specify a document to be used for errors.

Note the «Endpoint» URL at the top of the dialog. We’ll need this URL for testing and for configuring the domain in Cloudflare.

[Image: Website hosting dialog]

Now we need to upload our web-app to have something to use for testing. Select the «Overview» tab in your bucket and upload your app, making sure that the «index.html» file is at the root of the bucket.

[Image: S3 file console]

The web-app will not work just yet. If you visit the endpoint URL from the static website dialog it will just give you a permission error. To make it work we’ll need to allow access to the bucket from the outside world.

Select the «Permissions» tab at the top of the bucket settings and then select the «Bucket policy» tab. You should now see the «Bucket policy editor». First we’ll create a simple policy to verify that the basics are working. Paste the JSON below into the editor, replacing «YOUR BUCKET NAME HERE» with your bucket name. See the image below for a full example using the name from the example web-app.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "PublicReadGetObject",
			"Effect": "Allow",
			"Principal": "*",
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::YOUR BUCKET NAME HERE/*"
		}
	]
}
[Image: Replace the bucket name with your own bucket name]

The policy above grants public access to anybody wanting to visit your web-app at the S3-generated URL. If this is all you want, then you are done. However, if you want to use browser features that depend on https, you’ll want to read on.

Next, we’ll modify the permissions to only allow traffic from Cloudflare. We’ll do this by adding a condition to the policy stating that only IP addresses from Cloudflare should be able to access the bucket.

The new full policy keeps the statement above but adds that condition, whitelisting the Cloudflare IP ranges found here: https://www.cloudflare.com/ips/. Notice the «YOUR BUCKET NAME HERE» part; you know what to do. Update it, save it, and you will no longer be able to access the bucket directly from the S3 URL. Then move on to the Cloudflare setup below.
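
For reference, this is roughly what the full policy looks like when applied with the AWS SDK for JavaScript (v2) instead of the console. It is only a sketch: the bucket name is a placeholder and the IP list is truncated, so copy the complete, current list of ranges from the Cloudflare page linked above.

// applyCloudflareOnlyPolicy.js - a sketch; adjust the placeholders
const AWS = require("aws-sdk");

const s3 = new AWS.S3({ region: "eu-west-1" }); // placeholder region
const bucketName = "web-app-test.kinderas.com"; // placeholder bucket name

const policy = {
	Version: "2012-10-17",
	Statement: [
		{
			Sid: "PublicReadGetObject",
			Effect: "Allow",
			Principal: "*",
			Action: "s3:GetObject",
			Resource: `arn:aws:s3:::${bucketName}/*`,
			Condition: {
				IpAddress: {
					// Truncated: add the full list from https://www.cloudflare.com/ips/
					"aws:SourceIp": ["173.245.48.0/20", "103.21.244.0/22"],
				},
			},
		},
	],
};

async function applyPolicy() {
	await s3
		.putBucketPolicy({ Bucket: bucketName, Policy: JSON.stringify(policy) })
		.promise();
	console.log("✅ Bucket policy updated");
}

applyPolicy().catch(console.error);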

Configuring Cloudflare

Important note about security! Configuring Cloudflare to expose your web-app on an https URL does not mean that the traffic from Cloudflare to the S3 bucket is encrypted. It only tells the browser that the web-app is served over https, allowing certain features which would otherwise be unavailable.

Log in to Cloudflare and navigate to the DNS tab. You’ll need the URL from the static website dialog in the S3 setup section.

Add a new CNAME record. In this example, I’m using a subdomain so I’ll add «web-app-test» for the «Name» field and then the URL (without http://) from the static website dialog in the S3 setup in the «Domain» field. Leave the little orange cloud on. Click «Add record».

[Image: CloudFlare setup]
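
If you’d rather script the DNS record, the same thing can be done through the Cloudflare v4 API. This is only a sketch: the zone ID, the API token (which needs DNS edit permission) and the S3 endpoint are placeholders, and it assumes the node-fetch package is installed.

// createCnameRecord.js - a sketch against the Cloudflare v4 API
const fetch = require("node-fetch");

const ZONE_ID = "your-zone-id"; // placeholder
const API_TOKEN = "your-api-token"; // placeholder, needs DNS edit permission

async function createCname() {
	const response = await fetch(
		`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`,
		{
			method: "POST",
			headers: {
				Authorization: `Bearer ${API_TOKEN}`,
				"Content-Type": "application/json",
			},
			body: JSON.stringify({
				type: "CNAME",
				name: "web-app-test",
				// The endpoint from the S3 static website dialog, without http://
				content: "YOUR-S3-WEBSITE-ENDPOINT",
				proxied: true, // the little orange cloud
			}),
		}
	);
	const result = await response.json();
	console.log(result.success ? "✅ CNAME record created" : result.errors);
}

createCname().catch(console.error);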

If you have the Crypto -> SSL section of Cloudflare set to “Full” you’ll need to take an additional step. If it’s set to “Flexible” you are done and the URL should work. Please don’t change this setting in the SSL tab to Flexible if it’s set to Full; doing so will affect your entire domain!

[Optional - see above]. In Cloudflare, go to «Page Rules» and create a new rule. The rule should state that when a request to the web-app is encountered it should use “Flexible” SSL. See the image below for how this applies to the example URL.

[Image: CloudFlare setup - Replace the URL used with your own URL]

You can now test your web-app at the proper URL; for this example that would be https://web-app-test.kinderas.com
