Day 50: AWS ECS (Elastic Container Service)

#90daysofdevops

🚀 Introduction

In this blog, I will provide an insightful guide on how to effectively use Docker with AWS services like Elastic Container Registry (ECR) and Elastic Container Service (ECS) to deploy containerized applications. Whether you're a beginner or looking to refine your skills, this post walks you through the prerequisites, the deployment process, and best practices for managing Docker containers on AWS.


🔸AWS ECS (Elastic Container Service)

AWS ECS is a service that helps you run and manage containers (small, lightweight environments for your applications). Think of it as a container manager that allows you to easily deploy, organize, and scale applications. Instead of worrying about setting up your own servers, ECS does all the heavy lifting for you.

🔸AWS Fargate

Fargate is a feature of ECS that makes things even easier. Normally, when you run containers on ECS, you need to manage the servers (EC2 instances) where the containers run. With Fargate, you don’t have to do that—it automatically handles the servers for you. You just tell Fargate how much CPU and memory your container needs, and it takes care of running it.

In Simple Terms:

  • ECS: The "boss" that organizes and manages your containers.

  • Fargate: The "assistant" that takes care of the servers, so you don’t have to.

Example:

  • ECS is like the manager who schedules and organizes the chefs (containers).

  • Fargate is like having a fully automated kitchen where you don’t worry about the equipment—it’s ready to use whenever you need it.
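Concretely, with Fargate you just declare the CPU and memory your container needs in the task definition. A minimal sketch of such a task definition is below; the family name, container name, and image URI are placeholders you would replace with your own:

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "<your-ecr-image-uri>",
      "portMappings": [{ "containerPort": 8000 }]
    }
  ]
}
```

Fargate reads the `cpu` and `memory` values and provisions the compute for you; there is no EC2 instance to choose or manage.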


🔸Difference Between Amazon ECS And AWS Fargate

| Feature | Amazon ECS (EC2 launch type) | AWS Fargate |
| --- | --- | --- |
| Scaling | Requires scaling EC2 instance capacity yourself | Auto-scales based on container requirements, more dynamic |
| Encryption | Data at rest can be encrypted with AWS KMS-managed keys | Same KMS-based encryption options for data at rest |
| Use cases | Granular control, specific EC2 needs, existing investments | Serverless approach, no EC2 management, dynamic workloads |
| Ease of use | Involves manual infrastructure management | Simplified container management, no infrastructure to manage |
| Cost model | You pay for the EC2 instances you run and manage | Granular per-second billing per vCPU and GB of memory |

  1. First, install Docker and Node.js on your system:

    1. sudo apt update

    2. sudo apt install -y docker.io

    3. sudo apt install -y nodejs

    4. sudo apt install -y npm

    5. To update npm to the latest version: sudo npm install -g npm@latest
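After installing, you can quickly check that the tools are on your PATH. A small sketch (exact versions will vary by system):

```shell
# Report whether each required tool is installed.
missing=""
for tool in docker node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing"
    missing="$missing $tool"
  fi
done
```

If anything shows up as missing, re-run the corresponding apt command above before continuing.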

  2. After that, clone the repository with git clone <url> and cd into it

  3. After changing directory, build the Docker image with docker build -t my-img .

  4. After building the image, start a container with docker run -d -p 8000:8000 my-img, run docker ps to check that the container is running, and then test the image locally with curl http://localhost:8000
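The local build-and-test steps can be combined into one sketch. The image name and port here are assumptions from the steps above; the Docker commands are guarded so the script only attempts them where Docker and a Dockerfile are actually present:

```shell
# Build, run, and smoke-test the image locally.
IMAGE=my-img
PORT=8000
echo "image=${IMAGE} port=${PORT}"

# Only run the Docker steps where Docker and a Dockerfile are available:
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "$IMAGE" .
  docker run -d --name my-app -p "${PORT}:${PORT}" "$IMAGE"
  docker ps                          # confirm the container is running
  sleep 2
  curl -s "http://localhost:${PORT}" # the app should respond here
fi
```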

  5. After that, create a repository in ECR and push the Docker image we just built. In the previous blog we saw how to push a Docker image to ECR: go to the ECR console → Create repository → click the created repository → click View push commands, and copy-paste those commands into your system or EC2 instance.

  6. The image is now pushed to ECR.
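The push commands ECR shows you typically look like the sketch below. The account ID, region, and repository name here are hypothetical placeholders; the AWS and Docker calls are guarded since they need working credentials and a Docker daemon:

```shell
# Hypothetical values; replace with your own account ID, region, and repo name.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=my-img
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"
echo "$ECR_URI"

# Only attempt the real push when the AWS CLI, credentials, and Docker exist:
if command -v aws >/dev/null 2>&1 && command -v docker >/dev/null 2>&1 \
   && aws sts get-caller-identity >/dev/null 2>&1; then
  # Authenticate Docker against your private ECR registry
  aws ecr get-login-password --region "$REGION" \
    | docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
  # Tag the local image with the ECR URI, then push it
  docker tag "${REPO}:latest" "${ECR_URI}:latest"
  docker push "${ECR_URI}:latest"
fi
```

The tagged URI is the same image URI you will paste into the task definition in the next steps.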

  7. After that, go to ECS and create a cluster.

  8. After creating the cluster, create a task definition.

    Copy the image URI from ECR; we will need it later.

    Set the container port to 8000, because the application listens on port 8000.

    Under Health check, click Info to see a sample command; copy it and substitute your own port number.
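For a container listening on port 8000, the resulting health-check section of the task definition typically looks like this sketch (adjust the path and port to your app):

```json
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:8000/ || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}
```

ECS runs this command inside the container; a non-zero exit marks the task unhealthy and triggers a replacement.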


  9. The task definition is now created.

  10. Go back to Clusters and click the cluster you created. Scroll down to the Services section and click Create to create a service.

    You can keep Desired tasks at 1 and leave Availability Zone Rebalancing unticked (OFF).

  11. What are desired tasks?

    In AWS ECS, while creating a service, the desired tasks refer to the number of container instances (tasks) you want to run at any given time.

    Think of it like this:

    • A task is a running instance of your container (like a mini version of your application).

    • The desired tasks tell ECS how many copies of your containerized application you want running simultaneously.

Example:

  1. If you set desired tasks = 2, ECS will make sure 2 containers of your application are always running.

  2. If one container crashes or stops, ECS will automatically create a new one to maintain 2 running tasks.

Why is it Important?

The desired task count ensures that your application is always available and can handle the workload. For example:

  • Low traffic: You might need just 1 or 2 tasks.

  • High traffic: You might set a higher number (like 5 or 10 tasks).

Key Points:

  • Desired tasks are part of service configuration.

  • ECS maintains the desired number of tasks for reliability and scaling.

  • If you enable auto-scaling, ECS can adjust the number of tasks based on traffic, but the desired tasks act as the baseline.

This concept ensures your application is scalable.
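The desired count can also be changed after the fact from the AWS CLI. A sketch, with hypothetical cluster and service names, and the real call guarded since it needs AWS credentials:

```shell
# Hypothetical names; replace with your own cluster and service.
CLUSTER=my-cluster
SERVICE=my-service
DESIRED=2
echo "setting ${SERVICE} desired count to ${DESIRED}"

# Only attempt the real call when the AWS CLI and credentials are available:
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # ECS will start or stop tasks until DESIRED copies are running
  aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" --desired-count "$DESIRED"
fi
```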


Under Auto Scaling:

  1. What is the number of tasks in auto scaling?

    In ECS auto-scaling, the number of tasks refers to the range of container instances (tasks) that your service can scale to, based on the workload or traffic.

    Key Concepts:

    1. Minimum Tasks: The smallest number of tasks that ECS should always run, even if there is no traffic. This ensures your application is always available.

    2. Maximum Tasks: The largest number of tasks ECS can scale up to during high traffic or heavy workload.

    3. Desired Tasks: The starting or target number of tasks that ECS maintains under normal conditions. Auto-scaling adjusts this number as needed.

How it Works:

  • Scaling Up: If traffic or resource usage increases (e.g., CPU or memory utilization exceeds a set threshold), ECS auto-scaling will add more tasks, up to the maximum limit.

  • Scaling Down: If traffic decreases and resource usage drops below a threshold, ECS auto-scaling will reduce the number of running tasks, but not below the minimum limit.

Example:

Let’s say you configure auto-scaling like this:

  • Minimum tasks: 2

  • Maximum tasks: 10

  • Desired tasks: 4

  • During normal traffic, ECS will maintain 4 tasks.

  • If traffic spikes, ECS can scale up to 10 tasks.

  • During low traffic, ECS can scale down to 2 tasks, but never below this limit.

Why is it Useful?

Auto-scaling ensures your application:

  • Handles high traffic by adding tasks dynamically.

  • Saves costs during low traffic by running fewer tasks.

  • Always stays available by maintaining the minimum number of tasks.

In summary, the number of tasks in auto-scaling helps you balance application performance, cost-efficiency, and availability.
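The min/max example above maps directly onto ECS Service Auto Scaling via Application Auto Scaling. A sketch with hypothetical cluster and service names; the real calls are guarded since they require AWS credentials:

```shell
# Hypothetical names; replace with your own cluster and service.
CLUSTER=my-cluster
SERVICE=my-service
MIN=2
MAX=10
echo "scaling service/${CLUSTER}/${SERVICE} between ${MIN} and ${MAX} tasks"

if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # Register the service's DesiredCount as a scalable target with min/max limits
  aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id "service/${CLUSTER}/${SERVICE}" \
    --min-capacity "$MIN" --max-capacity "$MAX"

  # Target tracking: ECS adds/removes tasks to hold average CPU near 70%
  aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id "service/${CLUSTER}/${SERVICE}" \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration \
      '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'
fi
```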

  12. After that, go to the Load Balancer console, copy its DNS name, and paste it into your browser to access the application.


🚀 Conclusion

In this blog, we saw how to deploy Docker containers using AWS services such as ECR and ECS. We explored the step-by-step process of building and pushing Docker images, creating task definitions, and managing containers using ECS clusters and services. We also covered important features like health checks, load balancing, and auto scaling to ensure smooth and efficient operations. By leveraging these tools and best practices, you can simplify container management and focus on delivering high-quality, scalable applications.


Thanks for reading to the end; I hope you gained some knowledge.❤️🙌

LinkedIn

Twitter

Github


YouTube video for reference

This is the GitHub repository link for the project we deployed above

Additional Links for ECR & Fargate
