Hosting a static web-app on a custom domain with AWS S3 and Cloudflare

This is a step-by-step guide on hosting a static web-app on Amazon S3 and exposing it on a custom (https) domain through Cloudflare.

This article assumes some prior knowledge of both Amazon S3 and Cloudflare.

The example web-app used in the article is hosted on a subdomain; however, the same approach works for root domains as well.

About Amazon S3 and Cloudflare

If you are familiar with both S3 and Cloudflare you can skip this section.

Amazon S3 is a hosted storage service which lets you upload files and configure permissions for them. S3 also allows a so-called bucket (a container for files) to be used as a static web hosting service. You can read more about Amazon S3 on their site: https://aws.amazon.com/s3/

Cloudflare is a service which offers CDN, DNS and many other products. We'll be using its DNS product in this article. Read more about Cloudflare on their own site: https://www.cloudflare.com

Setting up a basic web-app

The first step is to create the code for a web app. For this example, I'll be using a simple `index.html` file which prints "An example web app …..". The only thing to note here is that you do need an «index» file to act as the entry point / root for your web-app.

The example web-app is hosted at https://web-app-test.kinderas.com.

Configuring Amazon S3

Log in to the AWS console and navigate to the S3 service. Here you'll want to create a new bucket with the same name as the URL for your web-app. It's important that the bucket name matches the URL exactly, without the protocol (http(s)://), since S3 uses the bucket name for routing.

The example URL is web-app-test.kinderas.com, so we'll create a bucket with that name. Choose a region and click next and then next again.

The bucket name must match the URL

On the third screen for the bucket creation wizard, uncheck the boxes under «Manage public Access control lists (ACLs) for selected buckets». If you forget this you can do it later by selecting the bucket and clicking the «Edit public access setting» button on the overview page.

Allow the bucket to be public

In order to have S3 use this bucket for website hosting, we need to configure a couple of things. Click on the name of the bucket and select the «Properties» tab at the top. Find the «Static website hosting» tile and click it. For the «index document», enter the filename of the entry point file; in this example it's called «index.html». You can also specify a document to be used for errors.

Note the «Endpoint» URL at the top of the dialog. We'll need this URL for testing and for configuring the domain in Cloudflare.

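If you prefer to script this step, the same settings can be applied with `aws s3api put-bucket-website --bucket web-app-test.kinderas.com --website-configuration file://website.json`, where website.json looks roughly like this (the «error.html» key is only an example; point it at whatever error document you uploaded, or leave the «ErrorDocument» part out):

{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "error.html" }
}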

Now we need to upload our web-app to have something to use for testing. Select the «Overview» tab in your bucket and upload your app, making sure that the «index.html» file is at the root of the bucket.


The web-app will not work just yet. If you visit the endpoint URL from the static website dialog it will just give you a permission error. To make it work we'll need to allow access to the bucket from the outside world.

Select the «Permissions» tab at the top of the bucket settings and then select the «Bucket policy» tab. You should now see the «Bucket policy editor». First we'll create a simple policy to verify that the basics are working. Paste the JSON below into the editor and notice the «YOUR BUCKET NAME HERE» part; replace it with your bucket name. See the image below for a full example using the name from the example web-app.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR BUCKET NAME HERE/*"
        }
    ]
}
Replace the bucket name with your own bucket name

The policy above grants public access to anybody visiting your web-app at the S3-generated URL. If this is all you want, then you are done. However, if you want to use browser features that depend on https, you'll want to read on.

Next, we'll modify the permissions to only allow traffic from Cloudflare. We'll do this by adding a condition to the policy stating that only IP addresses from Cloudflare should be able to access the bucket.

The new full policy looks like this (below). Notice the «YOUR BUCKET NAME HERE» on line 9 («Resource»); you know what to do. Update it, save it. You will no longer be able to access the bucket from the S3 URL. In the policy we are whitelisting the Cloudflare IP ranges found here: https://www.cloudflare.com/ips/. After that, move on to the Cloudflare setup below the policy.
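
The ranges in the condition below are Cloudflare's published IPv4 ranges at the time of writing; copy the current list from the link above before saving the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR BUCKET NAME HERE/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "173.245.48.0/20",
                        "103.21.244.0/22",
                        "103.22.200.0/22",
                        "103.31.4.0/22",
                        "141.101.64.0/18",
                        "108.162.192.0/18",
                        "190.93.240.0/20",
                        "188.114.96.0/20",
                        "197.234.240.0/22",
                        "198.41.128.0/17",
                        "162.158.0.0/15",
                        "104.16.0.0/13",
                        "104.24.0.0/14",
                        "172.64.0.0/13",
                        "131.0.72.0/22"
                    ]
                }
            }
        }
    ]
}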

Configuring Cloudflare

Important note about security! Configuring Cloudflare to expose your web-app on an https URL does not mean that the traffic from Cloudflare to the S3 bucket is encrypted; only the connection between the browser and Cloudflare is. It simply lets the browser treat the web-app as hosted on an https server, allowing certain features which would otherwise be unavailable.

Log in to Cloudflare and navigate to the DNS tab. You'll need the URL from the static website dialog in the S3 setup section.

Add a new CNAME record. In this example, I'm using a subdomain, so I'll enter «web-app-test» in the «Name» field and the endpoint URL (without http://) from the static website dialog in the S3 setup in the «Domain» field. Leave the little orange cloud on. Click «Add record».

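The same record can be created through the Cloudflare API; the request body for POST /zones/&lt;zone id&gt;/dns_records would look roughly like this. The «content» value is the S3 website endpoint for the example bucket (the region part depends on where you created your bucket), a ttl of 1 means «automatic», and «proxied»: true corresponds to the orange cloud:

{
    "type": "CNAME",
    "name": "web-app-test.kinderas.com",
    "content": "web-app-test.kinderas.com.s3-website-eu-west-1.amazonaws.com",
    "ttl": 1,
    "proxied": true
}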

If you have the Crypto -> SSL section of Cloudflare set to "Full" you'll need to take an additional step. If it's set to "Flexible" you are done and the URL should work. Please don't change this setting in the SSL tab to Flexible if it's set to Full; doing so will affect your entire domain!

[Optional - see above]. In Cloudflare, go to «Page Rules» and create a new rule. The rule should match requests to the web-app URL and set SSL to «Flexible». See the image below for how this applies to the example URL.

Replace the URL used with your own URL
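
The same rule can be expressed as a Cloudflare API request body (POST /zones/&lt;zone id&gt;/pagerules); a sketch using the example URL pattern, which you should replace with your own:

{
    "targets": [
        {
            "target": "url",
            "constraint": {
                "operator": "matches",
                "value": "web-app-test.kinderas.com/*"
            }
        }
    ],
    "actions": [
        {
            "id": "ssl",
            "value": "flexible"
        }
    ],
    "status": "active"
}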

You can now test your web-app at the proper URL. For this example that would be https://web-app-test.kinderas.com


AWS: Using Elastic Beanstalk as a private backend for the API Gateway

When working with an API Gateway in Amazon Web Services (AWS) you typically don't want the services themselves to be publicly available on the Internet. A public service would only create the need for extra layers of protection such as client certificates or service keys, neither of which would properly prevent, for example, a DoS attack. A better approach is to either use Lambdas (which are private by default) or, if you can't use Lambdas, simply keep the services within a private cloud (a «VPC» in Amazon terminology) and provide access to them only through the API itself.

This quick-guide intends to explain the basics of configuring a private backend/service utilizing an Elastic Beanstalk app (EBS) residing within a VPC. The methods of the EBS will be made available through the API Gateway using a VPC Link.

Note: As of this writing the AWS API Gateway only allows a maximum of 5 VPC Links for one API Gateway in the same account. You can ask to have this limit increased through AWS support.

VPC

First, you will need to configure a Virtual Private Cloud (VPC). I will not cover this step here since it would require a separate post; however, I would recommend watching a video walkthrough of VPC setup if you're unsure how to accomplish this step.

Elastic Beanstalk

The second step is setting up an Elastic Beanstalk (EBS) application. (This guide assumes you're using the AWS console.) One of the advantages of using EBS over pure EC2 instances is that EBS comes with a bunch of pre-configured features such as auto scaling and rolling updates. You can achieve the same goal (a private service) using just EC2 instances and a Network Load Balancer, of course, but for this guide I'll assume that you are using an EBS application.

The first step is to create an EBS application, followed by creating a new environment. The environment is actually where all the configuration takes place. Refer to https://aws.amazon.com/elasticbeanstalk for more information on how EBS actually works.

When setting up a new environment choose the "Web server environment" for now. Give your environment a name and choose a platform (I'm using Node.js). Just use the "Sample application" if you don't have a finished app bundle.

Important: Click the «Configure more options» button, not the «Create Environment»!

Associating an Elastic Beanstalk environment with a VPC must be done during initial setup of the environment. It cannot be done after the environment has been created.


After you have clicked the «Configure more options» button you'll find yourself on the «Configure <your env name>» page, where you'll see three radio buttons titled «Configuration presets». Choose the «High availability» option; this gives us the option to configure a load balancer. Now open / «modify» the «Load Balancer» tile, choose the «Network Load Balancer» and click «Save». The API Gateway VPC Link will only work with Network Load Balancers.

In the «Network» tile (bottom left), select the VPC you have already created. Under «Load Balancer settings» the «Visibility» setting should be «Internal», since this service should not be accessible from the Internet. Select the private subnets (for all availability zones in your VPC) for both the load balancer and the instances. No public subnets should be checked. Click «Save».

Set up anything else, like scaling and so on, and then click «Create environment».
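
For reference, the network settings above can also be captured as Elastic Beanstalk option settings (for example in a saved configuration, or passed along when creating the environment through the API). A minimal sketch; the VPC and subnet ids are placeholders and must be replaced with your own:

[
    {
        "Namespace": "aws:elasticbeanstalk:environment",
        "OptionName": "LoadBalancerType",
        "Value": "network"
    },
    {
        "Namespace": "aws:ec2:vpc",
        "OptionName": "VPCId",
        "Value": "vpc-xxxxxxxx"
    },
    {
        "Namespace": "aws:ec2:vpc",
        "OptionName": "Subnets",
        "Value": "subnet-aaaaaaaa,subnet-bbbbbbbb"
    },
    {
        "Namespace": "aws:ec2:vpc",
        "OptionName": "ELBSubnets",
        "Value": "subnet-aaaaaaaa,subnet-bbbbbbbb"
    },
    {
        "Namespace": "aws:ec2:vpc",
        "OptionName": "ELBScheme",
        "Value": "internal"
    }
]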

API Gateway

Create an API Gateway if you haven't done so already.

  1. In the left menu, select «VPC links»

  2. Give the VPC link a name and select the load balancer you created under the EBS section in the «Target NLB» field. Only the (cryptic) generated name of the load balancer is shown here, but if you go to the EC2 service and select «Load Balancers» you can match any tags to the load balancer name. The equivalent API request is sketched after this list.

  3. Click «Create».
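
If you'd rather create the VPC link through the API Gateway API, the request body is small; a sketch, where the name, description and load balancer ARN are placeholders (the ARN is the one you can look up in the EC2 «Load Balancers» view):

{
    "name": "my-ebs-backend-link",
    "description": "VPC link to the internal NLB in front of the EBS environment",
    "targetArns": [
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/my-internal-nlb/0123456789abcdef"
    ]
}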

While you wait for the connection to be created you can configure the API. I would recommend using stage variables to store a reference to the VPC link id, since this makes it easier to maintain several stages pointing to separate backends. See the AWS documentation on using stage variables to refer to a VPC link.

To configure a method in the API using the VPC link, set up the integration as shown below.

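Expressed as the OpenAPI («Swagger») extension that API Gateway uses for imports, the integration for a single method looks roughly like this; the stage variable names (vpcLinkId, backendUrl) and the example path are placeholders:

"x-amazon-apigateway-integration": {
    "type": "http_proxy",
    "connectionType": "VPC_LINK",
    "connectionId": "${stageVariables.vpcLinkId}",
    "httpMethod": "GET",
    "uri": "http://${stageVariables.backendUrl}/pets",
    "passthroughBehavior": "when_no_match"
}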

In this example the «VPC link» is set as a stage variable, and the same goes for the «Endpoint URI». As already stated, but worth repeating, this is a good idea because it makes it considerably easier to deal with multiple stages. You have to set this up for each of the methods in your API.

That’s it.
