An AWS Journey

John Tucker
10 min read · Mar 8, 2020


I thought I would document my journey in learning Amazon Web Services (AWS) by building a hypothetical product company from a startup into an enterprise. The material is principally delivered in the form of videos; I expect to deliver a video or two each week over several months.

note: While this material has educational value, it is not organized as a training curriculum. I would highly recommend A Cloud Guru for that.

Also, the entire set of videos is available as a YouTube playlist for convenience.

Account Setup

We start by creating an AWS account and configuring the recommended security settings, including creating an IAM user for day-to-day tasks.

Static Website Using a Custom Domain Name

We set up a static website served using HTTP (not HTTPS) on both a naked domain, e.g., example.com, and a subdomain, e.g., www.example.com. We closely follow the relevant AWS article:

Example: Setting Up a Static Website Using a Custom Domain Name Registered with Route 53

The React application used in this example is available for download.
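For those who prefer scripting to pointing and clicking, a minimal boto3 sketch of the core S3 step (the bucket name is hypothetical; the Route 53 aliasing is covered in the article):

import boto3

s3 = boto3.client("s3")
# Enable static website hosting on the bucket; per the AWS article,
# the bucket name matches the domain name.
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},
    },
)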

Static Website Using a Custom Domain Name (Revisited)

Here we revisit the previous static website, serving it using HTTPS, adding cache-control headers, and addressing error handling (for use with front-end routing, e.g., React Router).

note: I found it surprisingly challenging to find complete documentation on this topic; I ended up piecing together a number of sources to come up with this solution.
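As a sketch of the two less obvious pieces (the bucket and key names are hypothetical): cache-control headers are set as S3 object metadata at upload time, and the front-end routing support comes from CloudFront custom error responses that serve index.html with a 200 status.

import boto3

s3 = boto3.client("s3")
# Cache-control is object metadata, set when uploading the asset.
s3.put_object(
    Bucket="example.com",
    Key="static/js/main.js",
    Body=open("build/static/js/main.js", "rb"),
    ContentType="application/javascript",
    CacheControl="public, max-age=31536000",
)
# A CloudFront CustomErrorResponses fragment (part of a distribution
# configuration) that returns index.html for 404s so that front-end
# routes resolve client-side.
custom_error_responses = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 404,
        "ResponsePagePath": "/index.html",
        "ResponseCode": "200",
        "ErrorCachingMinTTL": 0,
    }],
}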

Elastic Beanstalk Back-End

We now use Elastic Beanstalk to deliver a back-end supporting an updated React application, persisting the todos in a DynamoDB table.

The particulars of the back-end code are outside the scope of this article, but here are some details:

  • It is written in Python, targeting version 3.6.8, the current Python version provided by Elastic Beanstalk’s Python platform
  • Local development uses pyenv, pipenv, the AWS CLI, and Docker (with Docker Compose); I was able to use the amazon/dynamodb-local Docker image for local DynamoDB development
  • The Elastic Beanstalk deployment requires only two files, application.py and requirements.txt (a minimal sketch of application.py follows this list)
  • We will set one environment variable, REGION, to refer to the AWS region holding the DynamoDB table, e.g., us-east-1
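A minimal sketch of what such an application.py can look like (the table name and route are hypothetical; the actual back-end code is linked below):

import os
import boto3
from flask import Flask, jsonify

# Elastic Beanstalk's Python platform looks for a callable named
# "application" in application.py.
application = Flask(__name__)
# REGION is the one environment variable we set; it identifies the
# AWS region holding the DynamoDB table.
dynamodb = boto3.resource("dynamodb", region_name=os.environ["REGION"])
table = dynamodb.Table("Todos")  # hypothetical table name

@application.route("/todos")
def list_todos():
    # Assumes string attributes; DynamoDB numbers come back as
    # Decimal and would need conversion before serialization.
    return jsonify(table.scan()["Items"])

if __name__ == "__main__":
    application.run()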

The back-end code is available for download. The updated front-end code is also available for download.

Infrastructure as Code with CloudFormation (Sort-Of)

One of the pain points of using Elastic Beanstalk is all the pointing and clicking we had to do to deploy our back-end API. Here we use CloudFormation to deploy our back-end API using an infrastructure-as-code approach.

In actuality, we will only deploy the Virtual Private Cloud (VPC) resources of the solution (see diagram below) using CloudFormation, as we will quickly pivot to Terraform in the next section to deliver the complete solution.

The CloudFormation configuration is available for download. Later, I recreated the same configuration in YAML; download.
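The payoff of infrastructure as code is that the deployment itself is scriptable; a boto3 sketch of deploying such a template (the stack and file names are hypothetical):

import boto3

cloudformation = boto3.client("cloudformation")
# Create the VPC stack from the template on disk.
with open("vpc.yaml") as f:
    cloudformation.create_stack(
        StackName="todosrus-vpc",  # hypothetical stack name
        TemplateBody=f.read(),
    )
# Block until the stack is fully created.
cloudformation.get_waiter("stack_create_complete").wait(
    StackName="todosrus-vpc"
)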

Infrastructure as Code with Terraform (Sort-Of)

note: In order to follow along with this section, you will need to install and configure the AWS CLI and install Terraform.

In this section, we first create the VPC resources using Terraform. We then use Terraform to build out a security group, auto scaling group, and application load balancer (see diagram below). In this example, however, we only deploy a static website using an Apache HTTP server.

After a bit of exploration, I learned that setting up a production-worthy EC2 based solution for deploying and maintaining a Python Flask application has some additional complexities. The steps roughly consist of:

  • Create an AMI configured with an Apache or NGINX HTTP server with WSGI support
  • Regularly apply patches for the OS and software used in the AMI
  • Configure AWS CodeDeploy to deploy the code updates

So, we will change gears yet again and use Docker containers to deliver our solution in the next section.

The Terraform configuration is available to download.

Bringing Up Back-End API Using Docker and Elastic Container Service (ECS)

note: To follow along with this section, you will need to install Docker.

This longish section brings up the back-end API using Docker and Elastic Container Service (ECS) with the Fargate launch type.

The work essentially breaks down into a series of steps, some of which are visualized in the diagram below:

  1. We will use an open-source Docker image, uwsgi-nginx-flask-docker, to package our Flask application into a Docker image
  2. We will create an AWS Elastic Container Registry (ECR) repository that we will use to store our Docker image (a boto3 sketch of this step follows this list)
  3. We will manually create an IAM role to run our tasks with, to gain access to the DynamoDB table
  4. Using Terraform, we will build out the core network (the same network we built out previously)
  5. Using Terraform, we will build out the ECS cluster (and related resources)
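A boto3 sketch of step 2 (the repository name is hypothetical); the Docker image itself is built and pushed with the Docker CLI against the returned repository URI:

import boto3

ecr = boto3.client("ecr")
# Create the repository that will hold our back-end image.
response = ecr.create_repository(repositoryName="todosrus-backend")
# The URI to tag and push the Docker image against.
print(response["repository"]["repositoryUri"])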

The updated Docker-compatible back-end code is available for download.

The core network is created with this Terraform configuration.

The ECS cluster and related resources are created with this Terraform configuration.

note: In a separate article, AWS Elastic Kubernetes Service (EKS) By Example, I explored using Elastic Kubernetes Service (EKS) and concluded that for this exercise ECS with Fargate is a better fit (less complicated and less expensive).

Automating the Back-End Build with CodeCommit and CodeBuild

We store our code in CodeCommit (source code management). With CodeCommit as the source, we use CodeBuild to automate building the Docker images and pushing them to ECR. We then manually trigger ECS to deploy the latest version of the Docker image.

note: To follow along with this section, you will need to install Git.

There are numerous steps in this video:

  1. We start by creating a new CodeCommit repository for the back-end code
  2. We follow the instructions provided by Setup Steps for SSH Connections to AWS CodeCommit Repositories on Linux, macOS, or Unix to create and configure a new user with specific access for when we want to commit code; it is good practice to limit the use of our administrator credentials to administrator tasks
  3. We finish following the article by cloning the back-end code repository to a local repository on our development machine
  4. We then copy the back-end code into the local repository, commit the changes, and push them to the CodeCommit repository
  5. We configure the development user to be able to push images to the Elastic Container Repository and do a test push
    note: It turns out that this step is not required and, on-camera, I neglected to run the aws configure command to log in the development user (although I did this off-camera)
  6. We create a CodeBuild configuration file, buildspec.yml, commit it, and push it to the CodeCommit repository
  7. We create a CodeBuild project tied to our CodeCommit repository; including creating an IAM role, codebuild-backend-service-role, with a policy to allow pushing to ECR
  8. We manually trigger a CodeBuild build and observe the new image in ECR
  9. We finish by updating the ECS service with the Force a new deployment option to deploy the latest ECR image (a boto3 sketch of steps 8 and 9 follows this list)
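Steps 8 and 9 are performed in the console in the video; a boto3 sketch of their equivalents (the project, cluster, and service names are hypothetical):

import boto3

# Step 8: manually trigger a CodeBuild build.
boto3.client("codebuild").start_build(projectName="todosrus-backend")
# Step 9: force the ECS service to redeploy, picking up the latest
# image in ECR.
boto3.client("ecs").update_service(
    cluster="todosrus",
    service="backend",
    forceNewDeployment=True,
)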

The updated back-end code is available for download.

Automating the Back-End Deployment with CodePipeline

Here we replace our CodeBuild project with one integrated into CodePipeline, which automates the full deployment cycle: from a new commit in CodeCommit through to ECS deployment.

This solution closely follows the AWS article Tutorial: Continuous Deployment with CodePipeline.

The updated back-end code is available for download.

Automating the Front-End Deployment with CodePipeline

Following the same pattern as the back-end, we automate the front-end deployment using CodeCommit and CodePipeline.

The updated front-end code is available for download.

User Authentication with Amazon Cognito User Pools

Here we want users to be able to authenticate in the front-end application and to securely use those credentials with the back-end application.

The solution consists of:

  1. Setting up an Amazon Cognito User Pool
  2. Updating the front-end code; solution available for download
  3. Updating the back-end code; solution available for download

An explanation of the front-end and back-end code is available in the article: OAuth 2.0: Authorization Code Grant Flow with PKCE for Web Applications By Example.

Addendum (Mar. 19, 2020): During the creation of the video, I questioned storing the public keys in the back-end environment, thinking I should pull them live (presumably caching them). But the AWS documentation suggests that downloading these keys is a one-time step, so I did not make any changes.

This is a one time step before your web APIs can process tokens. Now you can perform the following steps each time the ID token or the access token are used with your web APIs.

— AWS — Verifying a JSON Web Token
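As a sketch of what this back-end verification can look like for an ID token, assuming the python-jose library (the region, pool, and client values are hypothetical placeholders):

import json
import urllib.request
from jose import jwt

REGION = "us-east-1"  # hypothetical
USER_POOL_ID = "us-east-1_XXXXXXXXX"  # hypothetical
APP_CLIENT_ID = "XXXXXXXXXXXXXXXXXXXXXXXXXX"  # hypothetical
# The one-time step: download the user pool's public keys.
url = ("https://cognito-idp.%s.amazonaws.com/%s/.well-known/jwks.json"
       % (REGION, USER_POOL_ID))
with urllib.request.urlopen(url) as response:
    keys = json.load(response)["keys"]

def verify(token):
    # Match the token's key id to one of the downloaded keys.
    kid = jwt.get_unverified_header(token)["kid"]
    key = next(k for k in keys if k["kid"] == kid)
    # Verifies the signature, expiry, and audience in one call.
    return jwt.decode(token, key, algorithms=["RS256"],
                      audience=APP_CLIENT_ID)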

Addendum (Mar. 19, 2020): During deployment, I learned some things that required changes.

  • Previously, the load balancer for our ECS tasks was checking the /todos endpoint. But with this change, that endpoint is secured with a token, so the health checks were failing. The solution was to update the code and the load balancer configuration (Terraform) to use an open /hc endpoint (a sketch follows)
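The open health-check endpoint amounts to little more than the following sketch:

@application.route("/hc")
def health_check():
    # Intentionally unauthenticated so that the load balancer's
    # health checks succeed without a token.
    return "", 200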

User Authorization with Amazon Cognito Identity Pools

Here we build upon the authentication work from the previous video, using Amazon Cognito Identity Pools so that each user can only read, create, and delete their own todos.

The updated back-end code is available for download.
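The enforcement hinges on keying the todos by the caller's Cognito identity id; a sketch of such a back-end query, assuming the table's partition key is the identity id (the names are hypothetical):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Todos")  # hypothetical

def list_todos(identity_id):
    # Only the partition belonging to the calling user's identity id
    # is read; an IAM policy on the identity pool role can enforce
    # the same boundary with the dynamodb:LeadingKeys condition key.
    return table.query(
        KeyConditionExpression=Key("IdentityId").eq(identity_id)
    )["Items"]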

Email Notifications with Simple Notification Service and Lambda

First, the changes in this section are fairly substantial; it was tough to figure out how to communicate them. Broadly, however, this section introduces a feature that enables users to receive email notifications when todos are created.

The relevant elements and flow are captured in the following diagram (explained in the video).

The updated Terraform configuration is available for download.

The updated backend code is available for download.

The Lambda function is available for download.
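One plausible shape of the two ends of the flow, as a sketch (the topic ARN and addresses are hypothetical; the actual wiring is explained in the video): the back-end publishes to an SNS topic when a todo is created, and the Lambda function, subscribed to the topic, sends the email.

import json
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:todos"  # hypothetical

def on_todo_created(email, text):
    # Back-end side: publish the event to the topic.
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"email": email, "text": text}),
    )

def handler(event, context):
    # Lambda side: subscribed to the topic; one record per message.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    boto3.client("ses").send_email(
        Source="noreply@example.com",  # hypothetical verified sender
        Destination={"ToAddresses": [message["email"]]},
        Message={
            "Subject": {"Data": "New todo created"},
            "Body": {"Text": {"Data": message["text"]}},
        },
    )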

Automating the Lambda Deployment; Introducing CodeDeploy

First, I want to clarify that this solution is not built using the AWS Serverless Application Model (SAM), which provides a standard approach to deployments. Rather, it uses CodeCommit, CodeBuild, CodePipeline, and CodeDeploy to deploy the Lambda function without the SAM overhead.

In a separate exercise, I explored SAM and found it a much more guided approach to the development of Lambda functions; I was able to essentially recreate what is done here with it.

On the other hand, it introduces a whole other layer of abstraction that I found a bit overwhelming.

The flow in this solution is somewhat unusual (see diagram below), as CodePipeline cannot (for Lambda deployments) integrate with CodeDeploy. Rather, we use CodePipeline to integrate CodeCommit and CodeBuild; CodeBuild then directly interacts with CodeDeploy.
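A sketch of the CodeBuild-side call (the application, group, function, alias, and version values are hypothetical); CodeDeploy shifts the Lambda alias to the new version per the inline AppSpec:

import json
import boto3

# AppSpec for a Lambda deployment: shift the alias from the current
# version to the newly published one.
appspec = {
    "version": 0.0,
    "Resources": [{
        "NotifyFunction": {  # hypothetical logical name
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Name": "todosrus-notify",  # hypothetical
                "Alias": "live",  # hypothetical
                "CurrentVersion": "1",
                "TargetVersion": "2",
            },
        },
    }],
}
boto3.client("codedeploy").create_deployment(
    applicationName="todosrus-notify",  # hypothetical
    deploymentGroupName="todosrus-notify-dg",  # hypothetical
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": json.dumps(appspec)},
    },
)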

The updated Terraform configuration is available for download.

The Lambda code (and build script) is available for download.

Continuous Deployment of Terraform Configurations

The last bit of continuous deployment involves the two Terraform configurations: network and application.

The updated Terraform network configuration is available for download.

The updated Terraform application configuration is available for download.

Cleaning Up Developer Access

Earlier we created a Developer group and a developer user. Given that we now have much of our infrastructure being built using CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, we can now grant read-only access to these resources to this group.

note: Technically, the todosrus-net and todosrus-app repositories are particularly powerful, as they trigger builds with administrator access; likely not something a regular developer should have access to. But for the purposes of this exercise, we are not going to worry too much about this.

Securing the VPC

We now move the load balancer into a new security group, and move the ECS tasks into a private subnet and into another security group; the diagram is explained in the video.

The updated Terraform network configuration is available for download.

The updated Terraform application configuration is available for download.

A Legacy EC2 Web Application

Here we introduce a legacy (imagined, of course) web application that needs to continue running on an EC2 instance (not suitable for porting to Docker). Along the way, we build an AWS Auto Scaling group and a bastion host.

We build the AMI that is the basis of our EC2 instances using a Packer configuration; download.

The updated Terraform network configuration is available for download.

The updated Terraform application configuration is available for download.

The article, Securely Connect to Linux Instances Running in a Private Amazon VPC, was helpful in understanding how to properly use the bastion host.

Securing the Bastion Host

Here we secure the bastion host based on the material in Linux Bastion Hosts on the AWS Cloud: Quick Start Reference Deployment.

The updated Terraform application configuration is available for download.

EC2 Operations Automation with Ansible

Here we take a side-trip and explore using a popular automation tool, Ansible, to help with EC2 operations.

Updated Packer configuration, download; including the playbook, download.

The updated Terraform application configuration is available for download.

An example SSH command that connects to a private instance through the bastion host:

ssh ubuntu@10.0.10.133 -o "ProxyCommand ssh -W %h:%p ubuntu@54.87.165.82"

Ansible hosts file; download.

Example Ansible ping playbook; download.

Example Ansible upgrade playbook; download.

EC2 Operations Automation with Systems Manager

We switch to using Systems Manager to automate EC2 operations.

The updated Terraform application configuration is available for download.

The updated Packer Ansible playbook, download.

The updated Ansible ping.yml and update.yml files used with Systems Manager.
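With Systems Manager, the commands we previously ran over SSH become API calls; a boto3 sketch (the instance id is hypothetical):

import boto3

# Run the same sort of upgrade we scripted with Ansible, but through
# the SSM agent rather than SSH (no bastion host required).
boto3.client("ssm").send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance id
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo apt-get update",
                             "sudo apt-get -y upgrade"]},
)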

Miscellaneous Operations Topics; State Manager, CloudTrail, and Config

The title pretty much says it all.

Creating a Developer API Using AWS API Gateway

This section was sufficiently complex that I instead wrote a lengthy article on the topic, AWS API Gateway By Example; the video essentially covers the same material.

Next Steps

My plan is to create one or two videos per week and post them here. Stay tuned if you are following along.
