Docker Learning Note (v)

E.Y.
5 min read · Aug 8, 2020


Photo by Will Turner on Unsplash

This is a series of blog posts about Docker and Kubernetes; this is the 5th post, following up on my last one.

In the last post, we discussed moving a Docker image to the production environment.

So what is the process of deploying a Dockerised app to the cloud (using AWS)?

At a high level:

  • push code to Github
  • Travis pulls the code
  • Travis builds a test image to test code
  • Travis builds prod images
  • Travis pushes built prod images to Docker Hub
  • Travis pushes project to AWS elastic beanstalk (EB)
  • EB pulls images from Docker Hub and deploys

A necessary step here is using Travis to build our CI/CD pipeline.

On Travis, create a .travis.yml file to tell Travis to run a copy of Docker:

# .travis.yml
sudo: required
services:
  - docker

# build our image using Dockerfile.dev
before_install:
  - docker build -t elfiy/docker-frontend -f Dockerfile.dev .
# use Dockerfile.dev because we only want to run tests in the dev env;
# the prod Dockerfile has no such capabilities (see the Dockerfiles below)

# tell Travis how to run the test suite
script:
  - docker run -e CI=true elfiy/docker-frontend npm run test

# tell Travis how to deploy to AWS
deploy:
  provider: elasticbeanstalk
  region: "us-east-2"
  app: "docker-frontend"
  env: "DockerFrontend-env"
  bucket_name: "elasticbeanstalk-us-east-2-838626446375" # S3 bucket
  bucket_path: "docker-frontend"
  on:
    branch: master # don't let feature branches get deployed
  access_key_id: $AWS_ACCESS_KEY # env vars set in Travis settings
  secret_access_key: $AWS_SECRET_KEY

Referenced Dockerfiles:

# =========== Dockerfile.dev ===========
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]

# =========== Dockerfile (for prod) ===========
FROM node:alpine as builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 80
# copy the build folder into /usr/share/nginx/html
COPY --from=builder /app/build /usr/share/nginx/html
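To try these locally, the images can be built and run with commands along these lines (the prod tag and host port are illustrative choices, not from the article):

```shell
# dev image: build from Dockerfile.dev and run the test suite in it
docker build -t elfiy/docker-frontend -f Dockerfile.dev .
docker run -e CI=true elfiy/docker-frontend npm run test

# prod image: nginx serves the built assets on container port 80
docker build -t elfiy/docker-frontend-prod .
docker run -p 8080:80 elfiy/docker-frontend-prod
```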

In a more complex project, say you have a React-Express app with client, server and worker folders, using Redis and Postgres. To deploy this app to AWS using Docker, you first need to:

  • Create your application (e.g. react-express-redis/postgres )

Docker compose (dev/test)

  • Dockerise your React/server/worker apps (create a Dockerfile.dev in each folder)
  • Add a docker-compose.yml (with 5 images: redis/postgres/client/api/worker)
  • Add Nginx path routing to route incoming requests to the React or Express server (requests for the client go to "/", requests for the server go to "/api/")
  • Add the Nginx image to docker-compose.yml
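The dev setup above can be sketched as a docker-compose.yml roughly like this (service names, build contexts, and the 3050 host port are assumptions for illustration; the real file depends on your project layout):

```yaml
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=postgres_password
  redis:
    image: redis:latest
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /app/node_modules
      - ./worker:/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
```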

Push image to Docker Hub

  • Add production Dockerfiles for each of client/server/worker/nginx
  • For Nginx in particular, add a 2nd Nginx instance (besides the routing one on port 80) that exposes port 3000 to serve the React frontend UI
  • Set up .travis.yml and activate the repo on Travis
  • Push to Github and let Travis run CI and push to Docker Hub
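The "2nd Nginx instance" mentioned above can be a small config baked into the production client image. Here is a hedged sketch (the file path and port follow the description above; nothing here comes from the article's repo):

```nginx
# client/nginx/default.conf: serves the built React assets on port
# 3000, separate from the routing Nginx listening on port 80
server {
  listen 3000;
  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    # let React Router handle unknown paths
    try_files $uri $uri/ /index.html;
  }
}
```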

Deploy to AWS

  • Add the multi-container config file Dockerrun.aws.json
  • Create the RDS database
  • Create the ElastiCache Redis instance
  • Create a custom security group (linked to the VPC)
  • Apply the security group to ElastiCache
  • Apply the security group to Elastic Beanstalk
  • Set environment variables
  • Create IAM keys for deployment
  • Add the AWS keys in Travis
The corresponding .travis.yml:

sudo: required
services:
  - docker

# build a test image for the client using Dockerfile.dev
before_install:
  - docker build -t elfiy/docker-complex/nginx-redis -f ./client/Dockerfile.dev ./client

# tell Travis how to run the test suite
script:
  - docker run -e CI=true elfiy/docker-complex/nginx-redis npm test

# build prod containers using each Dockerfile
after_success:
  - docker build -t elfiy/multi-client ./client
  - docker build -t elfiy/multi-nginx ./nginx
  - docker build -t elfiy/multi-server ./server
  - docker build -t elfiy/multi-worker ./worker
  # log in to the Docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # take those images and push them to Docker Hub
  - docker push elfiy/multi-client
  - docker push elfiy/multi-nginx
  - docker push elfiy/multi-server
  - docker push elfiy/multi-worker

# deploy to AWS
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: multi-docker
  env: MultiDocker-env
  bucket_name: elasticbeanstalk-us-east-2-838626446375
  bucket_path: docker-multi
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY

Apart from Travis, the other key participant is AWS. We are using the following AWS services in our project:

Elastic Beanstalk

Elastic Beanstalk is an orchestration service offered by Amazon Web Services for deploying applications; it orchestrates various AWS services, including EC2, S3, Simple Notification Service, CloudWatch, autoscaling, and Elastic Load Balancers.

S3

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services’ (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

IAM

Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge.

Elastic Container Service

Amazon Elastic Container Service (ECS) is a fully managed container orchestration service. ECS is a great choice for running containers for several reasons. First, you can choose to run your ECS clusters using AWS Fargate, which is serverless compute for containers. Second, ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com's recommendation engine, ensuring ECS is tested extensively for security, reliability, and availability.

ElastiCache

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. It is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing.

Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.

// Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "client",
      "image": "elfiy/multi-client",
      "hostname": "client",
      "essential": false,
      "memory": 128
    },
    {
      "name": "server",
      "image": "elfiy/multi-server",
      "hostname": "api",
      "essential": false,
      "memory": 128
    },
    {
      "name": "worker",
      "image": "elfiy/multi-worker",
      "hostname": "worker",
      "essential": false,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "elfiy/multi-nginx",
      "hostname": "host",
      "essential": true,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "links": ["client", "server"],
      "memory": 128
    }
  ]
}

Since JSON does not allow comments, two notes on the nginx entry: "essential": true means that if the nginx container fails, all the other containers in the task are shut down; the "links" entry forms container links so Nginx can reach the client and server containers by name.

So that's it for today! Happy reading :).

🐬 🐳 🎽 ❄️ 💦 🌊
