Simplify Your Deployment: GitHub Workflow with EC2, ECR, and Docker

In software development, efficient deployment workflows are crucial to ensure seamless and reliable application delivery.
Aug 1 2023 · 7 min read

Introduction

In software development, efficient deployment workflows are crucial to ensure seamless and reliable application delivery. With the rise of containerization and cloud computing, developers have a powerful arsenal of tools at their disposal to streamline the deployment process. 

In this blog, we will explore how to set up a deployment workflow using GitHub, Docker, Nginx, Amazon Elastic Container Registry (ECR), and Amazon Elastic Compute Cloud (EC2).

Whether you are a seasoned DevOps professional or just getting started with application deployment, this guide will walk you through the entire process step-by-step.

So, let’s roll up our sleeves and embark on this exciting journey of mastering GitHub Workflow for deploying your applications.

Refer to our ready-to-use blog platform if you want to see a working example of the workflow we’re going to implement in this blog.



Note: In this example, we will deploy a Node.js application on AWS EC2. You can deploy applications written in any language, such as Go, Python, or PHP, using the same approach.

Create Dockerfile

To deploy the application as a container, we will first create a Dockerfile, which builds our application into an image.

Create a file named Dockerfile in your app’s root directory and add the code below,

# Base node image
FROM node:20

# Initialize work directory
WORKDIR /app

# Bundle app source
COPY . .

# Install dependencies
RUN yarn install --frozen-lockfile

# Declare node env
ENV NODE_ENV=production

# Build application
RUN yarn build

# Register application port
EXPOSE 3000

# Start application
CMD ["yarn", "start"]
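Since the Dockerfile uses COPY . ., it may also help to add a .dockerignore file next to it so that local artifacts are not sent to the Docker daemon and baked into the image (a minimal sketch; adjust the entries to your project):

```
node_modules
.git
*.log
```

This keeps the build context small and ensures dependencies are installed fresh inside the image by yarn install.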

Set up GitHub workflow

Since we are using GitHub Actions, create a deploy.yml file in the .github/workflows directory at your project’s root.

By doing this, the workflow will trigger automatically on the events you configure.

name: Deploy application

# run workflow on every push
on:
  push:


jobs:
  deploy:
    # run workflow on ubuntu with permissions
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.3

      # set up Node.js 20
      - uses: actions/setup-node@v1
        with:
          node-version: "20"

      # We need to configure AWS creds using aws actions to deploy code on AWS EC2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/<role-name>
          aws-region: ${{ secrets.AWS_REGION }}
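The id-token: write permission above enables OpenID Connect (OIDC) authentication, so the IAM role referenced in role-to-assume must trust GitHub’s OIDC provider. The trust policy typically looks along these lines (a sketch; <account-id>, <org>, and <repo> are placeholders for your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
      "StringLike": { "token.actions.githubusercontent.com:sub": "repo:<org>/<repo>:*" }
    }
  }]
}
```

With this in place, the workflow obtains short-lived credentials without storing long-lived AWS access keys as secrets.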

Deploy on ECR

Now that we are ready with the Dockerfile and deploy.yml, it's time to build the Docker image and push it to AWS ECR.

Note: You can push this image to Docker Hub, the GitLab registry, or any other container registry, as per your preference and tech stack.

We need to create an ECR repository in the AWS console. Here are the steps:

  • Log in to the AWS console and go to the Amazon Elastic Container Registry.
  • From the left sidebar, go to Repositories and create a repository.
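Alternatively, the same repository can be created from the AWS CLI, assuming your credentials and region are already configured (the repository name here is a placeholder):

```shell
# create an ECR repository named my-app
aws ecr create-repository --repository-name my-app --region us-east-1
```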

We will use the AWS ECR action to push the Docker image. Append the code below to the deploy.yml file.

      # AWS ECR action for login, will automatically
      # log in using the AWS credentials to push the docker image
      - name: Login to Amazon ECR
        # id for the ECR step. You can use any id
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        env:
          # registry URL, taken automatically from the login step output
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: repository-name-which-you-created-on-console
          IMAGE_TAG: image-tag-name
        run: |
          # build image
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .         

          # push docker image
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
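Rather than a fixed IMAGE_TAG, it is common to tag the image with the commit SHA so every push produces a traceable image; the env block above could use the built-in github.sha context for this:

```yaml
          IMAGE_TAG: ${{ github.sha }}
```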

NGINX configuration

We need to add the Nginx configuration on the EC2 instance to use Nginx as a reverse proxy.

Create nginx/nginx.conf and nginx/conf.d/application.conf in the project’s root.

You can find the default nginx.conf file content on GitHub.

Below is the basic application.conf, which forwards HTTP requests to the application server.

server {
    listen              80;
    listen              [::]:80;
    server_name         SERVER_URL; # add URL of your server (IP/domain name)

    location / {
        proxy_pass         http://localhost:3000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
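If you terminate TLS on Nginx, you can add a second server block listening on 443 that uses the certificates placed under /etc/nginx/certs by the deploy script (a sketch; the certificate file names are placeholders for your own):

```nginx
server {
    listen              443 ssl;
    server_name         SERVER_URL;
    ssl_certificate     /etc/nginx/certs/your-domain.cert;
    ssl_certificate_key /etc/nginx/certs/your-domain.key;

    location / {
        proxy_pass         http://localhost:3000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```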

Prepare docker-compose

We will use docker-compose to create and start Docker containers via Docker Swarm.

Note: We assume you have already created an EC2 instance in the AWS console. If not, follow the AWS documentation to create one.

To enable Docker Swarm on EC2,

  • SSH into the EC2 instance using the SSH private key
  • Run docker swarm init
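After docker swarm init, you can verify that the node is now a Swarm manager; the following should print active:

```shell
# check whether swarm mode is enabled on this node
docker info --format '{{.Swarm.LocalNodeState}}'
```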

Steps to get the image URI from ECR,

  • Go to the AWS console -> ECR -> Repositories -> your-repository-name.
  • From the list of images, copy the Image URI you want to use.
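The image tags can also be listed from the CLI instead of the console (the repository name is a placeholder):

```shell
# list the tags pushed to the repository
aws ecr describe-images --repository-name my-app \
  --query 'imageDetails[*].imageTags' --output table
```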

Now create a docker-compose.yaml file in the project’s root and add the below code into it.

Using this file, Docker will create the Node.js, database, and Nginx containers on EC2, along with their volumes and networks.

version: "3.8"

services:
  # below steps will create the node.js container on port 3000
  node-app:
    image: <ecr-image-uri>
    deploy:
      replicas: 1
    depends_on:
      - db
    volumes:
      - uploads:/srv/app/public/uploads
    ports:
      - "3000:3000"
    environment:
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      ...

  # below steps will create the postgres database container on port 5432
  db:
    image: postgres:15.1-alpine
    environment:
      POSTGRES_USER: ${DATABASE_USERNAME}
      ...
    volumes:
      - dbData:/var/lib/postgresql/data/
    ports:
      - "5432:5432"

  # below steps will create the nginx container on the default port 80
  nginx:
    image: nginx:latest
    deploy:
      replicas: 1
    volumes:
      - /etc/nginx/certs/:/etc/nginx/certs/
      - /etc/nginx/conf.d/:/etc/nginx/conf.d/
      - /etc/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - outside

networks:
  outside:
    external:
      name: "host"

volumes:
  dbData:
  uploads:
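One caveat: depending on your Docker version, docker stack deploy may not interpolate ${VAR} placeholders in the compose file itself. A common workaround is to render the file with the variables expanded before deploying (a sketch, assuming docker compose is available on the instance and the variables are exported in the shell):

```shell
# expand ${VAR} placeholders, then deploy the rendered file
docker compose -f docker-compose.yaml config > docker-compose.rendered.yaml
docker stack deploy --with-registry-auth -c docker-compose.rendered.yaml application-stack-name
```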

Deploy an application to EC2

Now that we have all the ingredients ready, let’s start preparing the recipe.

First, create deploy.sh in the project’s root as below,

#!/bin/bash

# log in to ECR to pull the pushed docker image
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

# copy all nginx files to the /etc/nginx directory
sudo mkdir -p /etc/nginx/conf.d && sudo mv nginx.conf /etc/nginx/ && sudo mv conf.d/application.conf /etc/nginx/conf.d/
sudo mkdir -p /etc/nginx/certs && echo -e $SSL_PRIVATE_KEY > /etc/nginx/certs/blog.live.domain.name.key && echo -e $SSL_PUBLIC_KEY > /etc/nginx/certs/blog.live.domain.name.cert

# deploy docker stack with 3 containers from docker-compose.yml
docker stack deploy --with-registry-auth -c ./docker-compose.yaml application-stack-name

This script installs the Nginx configuration and certificates, then starts all the containers on the EC2 instance by running the given commands.
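Once deploy.sh has run, the stack can be inspected on the instance; note that Swarm prefixes each service name with the stack name:

```shell
docker stack services application-stack-name         # replica status of each service
docker service logs application-stack-name_node-app  # logs of the node-app service
```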

Now, we will write a workflow step to deploy the Docker containers using the Swarm manager on AWS EC2.

Add the below code to the deploy.yml file,

      - name: Deploy
        env:
          # add ssh-private-key to env variable
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          # Copy ssh private key to the file with permissions
          echo "$SSH_PRIVATE_KEY" > ssh_private_key && chmod 600 ssh_private_key
         
          # Create application folder on ec2 and copy nginx directory into it
          scp -i ssh_private_key -r nginx ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }}:application

          # Copy the deploy.sh file into the application folder on the EC2 instance
          cat deploy.sh | ssh -i ssh_private_key ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }} 'cat > ./application/deploy.sh'
          
          # Copy docker-compose to EC2, configure environment variables, and run deploy.sh
          cat docker-compose.yaml | ssh -i ssh_private_key ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }} sudo PUBLISH_PORT=${{secrets.PUBLISH_PORT}} 'bash -c "cd application && cat > docker-compose.yaml && chmod -R 755 ./deploy.sh && ./deploy.sh && cd .. && rm -rf application"'
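One thing to watch: a fresh GitHub runner has no entry for your server in known_hosts, so the scp/ssh commands above may fail host-key verification. A common fix is to add the host key at the start of the run block (a sketch):

```shell
# trust the EC2 host key before running scp/ssh
mkdir -p ~/.ssh
ssh-keyscan -H ${{ secrets.SERVER_ADDRESS }} >> ~/.ssh/known_hosts
```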

Final workflow file

name: Deploy application

# run workflow on every push
on:
  push:

jobs:
  deploy:
    # run workflow on ubuntu with permissions
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.3

      # set up Node.js 20
      - uses: actions/setup-node@v1
        with:
          node-version: "20"


      # We need to configure AWS creds using aws actions to deploy code on AWS EC2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/<role-name>
          aws-region: ${{ secrets.AWS_REGION }}


      # AWS ECR action for login, will automatically 
      # login using AWS credentials to push docker image
      - name: Login to Amazon ECR
        # id for the ECR step. You can use any id
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1


      - name: Build, tag, and push image to Amazon ECR
        env:
          # registry URL, taken automatically from the login step output
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: repository-name-which-you-created-on-console
          IMAGE_TAG: image-tag-name
        run: |
          # build image
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .         
          # push docker image
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG


      - name: Deploy
        env:
          # add ssh-private-key to env variable
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          # Copy ssh private key to the file with permissions
          echo "$SSH_PRIVATE_KEY" > ssh_private_key && chmod 600 ssh_private_key
         
          # Create application folder on ec2 and copy nginx directory into it
          scp -i ssh_private_key -r nginx ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }}:application

          # Copy the deploy.sh file into the application folder on the EC2 instance
          cat deploy.sh | ssh -i ssh_private_key ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }} 'cat > ./application/deploy.sh'
          
          # Copy docker-compose to EC2, configure environment variables, and run deploy.sh
          cat docker-compose.yaml | ssh -i ssh_private_key ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_ADDRESS }} sudo PUBLISH_PORT=${{secrets.PUBLISH_PORT}} 'bash -c "cd application && cat > docker-compose.yaml && chmod -R 755 ./deploy.sh && ./deploy.sh && cd .. && rm -rf application"'

Set up the required secrets for the workflow on GitHub, then push your code.

On push, the workflow will start and deploy your application to the EC2 instance. The application will be available at,

http://ec2-instance-ip-address:3000

SSH into the EC2 instance to see all the running containers.


Conclusion

By following the above step-by-step guide, developers can gain valuable insights into building resilient production environments. Embracing this powerful workflow streamlines the development process, enabling teams to deliver applications with confidence and speed.

Your suggestions and feedback are highly appreciated. Add them in the comment section 💬.

That’s it for today. Keep exploring for the best✌️.



Sumita Kevat
Sumita is an experienced software developer with 5+ years in web development. Proficient in front-end and back-end technologies for creating scalable and efficient web applications. Passionate about staying current with emerging technologies to deliver.


