Imagine transforming your deployment process from a time-consuming manual chore into an automated breeze. That's exactly what this Bitbucket Pipelines script delivers! We'll walk through building a Docker image, pushing it to Amazon Elastic Container Registry (ECR), and updating an Amazon ECS service, all in a single automated run. Let's dive into the details.
Pipeline Overview
Our pipeline has four main stages:
- Building the Docker image
- Pushing the Docker image to ECR
- Creating a new task definition
- Updating the ECS service
Let’s break down the script:
1. Setting the Base Docker Image
```yaml
image: python:3.8-slim
```
- Purpose: Defines the base Docker image for the pipeline, which is python:3.8-slim. This image includes Python 3.8 and a minimal set of libraries, ensuring a lightweight and consistent environment.
2. Defining the Pipeline Steps
```yaml
pipelines:
  default:
    - step:
        name: Build, Push Docker Image, and Update ECS Service
```
- Purpose: Sets up the default pipeline with a single step named “Build, Push Docker Image, and Update ECS Service”. This makes the pipeline’s goal clear and manageable.
3. Configuring Services and Caches
```yaml
        services:
          - docker
        caches:
          - pip
```
- Purpose:
- Services: Specifies that the Docker service is required for building and pushing the image.
- Caches: Uses a pip cache to speed up the installation of Python packages.
4. Displaying System Information (Optional)
```yaml
        script:
          - echo "System Information:"
          - echo "CPU Info:"
          - lscpu
          - echo "Memory Info:"
          - cat /proc/meminfo
```
- Purpose: Prints out system information (CPU and memory details) for debugging and verification purposes. Knowing the system’s specifications can help in diagnosing build issues.
5. Installing Dependencies
```yaml
          - echo "Installing dependencies..."
          - apt-get update
          - apt-get install -y python3-pip jq git
          - pip3 install awscli
```
- Purpose: Updates the package list and installs required dependencies (python3-pip, jq, git, and awscli). These tools are essential for subsequent steps, such as interacting with AWS services and processing JSON.
6. Configuring AWS CLI
```yaml
          - echo "Configuring AWS CLI..."
          - aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
          - aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
          - aws configure set default.region ap-south-1
```
- Purpose: Configures the AWS CLI with the necessary credentials and sets the default region. This enables the pipeline to authenticate and interact with AWS services.
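Before moving on, it can be worth confirming that the credentials actually resolve to the expected account. A minimal sanity check you could append to the script (an optional addition, not part of the original pipeline):

```bash
# Optional sanity check: print the AWS account the pipeline is authenticated as.
# Fails fast if the credentials are missing or invalid.
aws sts get-caller-identity --query 'Account' --output text
```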
7. Fetching Dockerfile from AWS Secrets Manager (Optional)
```yaml
          - echo "Fetching Dockerfile from AWS Secrets Manager..."
          - DOCKERFILE_SECRET=$(aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:ap-south-1:************:secret:Dockerfile --query 'SecretString' --output text)
          - echo "$DOCKERFILE_SECRET" > Dockerfile
          - cat Dockerfile
```
- Purpose: Retrieves the Dockerfile stored in AWS Secrets Manager and writes it to a local file. This ensures that the Dockerfile remains secure and can be dynamically updated without modifying the pipeline.
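If you want to adopt this pattern, the secret has to exist first. A one-time setup sketch, assuming a local Dockerfile and the same ap-south-1 region (the secret name and description here are illustrative; yours will differ):

```bash
# Hypothetical one-time setup: store a local Dockerfile as a plaintext secret.
aws secretsmanager create-secret \
  --name Dockerfile \
  --description "Dockerfile consumed by the Bitbucket pipeline" \
  --secret-string file://Dockerfile \
  --region ap-south-1
```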
8. Fetching Environment Variables from AWS Secrets Manager
```yaml
          - echo "Fetching environment variables from AWS Secrets Manager..."
          - ENV_SECRET=$(aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:ap-south-1:************:secret:env --query 'SecretString' --output text)
          - echo "$ENV_SECRET" > .env
          - echo "Printing the contents of .env file..."
          - cat .env
```
- Purpose: Retrieves environment variables from AWS Secrets Manager and writes them to a .env file. This keeps sensitive information out of the repository and lets the Docker build consume these variables. Be aware that cat .env prints the secret values into the build log; keep that line only while debugging and remove it for production pipelines.
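One subtlety worth knowing: if the secret was created as key/value pairs in the AWS console, SecretString comes back as a JSON object rather than dotenv text. A hedged sketch that converts JSON to .env format using the jq installed earlier (assumes a secret named env with flat string values):

```bash
# Convert a JSON key/value secret into dotenv lines,
# e.g. {"API_KEY":"x"} becomes API_KEY=x
aws secretsmanager get-secret-value \
  --secret-id env --query 'SecretString' --output text \
  | jq -r 'to_entries[] | "\(.key)=\(.value)"' > .env
```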
9. Building the Docker Image
```yaml
          - echo "Building the Docker image..."
          - IMAGE_TAG=$(git rev-parse --short HEAD)
          - echo "IMAGE_TAG=${IMAGE_TAG}"
          - docker build --file Dockerfile --build-arg ENV_FILE=env -t ECR-name:${IMAGE_TAG} .
```
- Purpose: Builds the Docker image using the retrieved Dockerfile and environment variables. The image is tagged with the current Git commit hash for versioning.
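A quick smoke test of the freshly built image can catch broken builds before they reach ECR. A sketch, assuming the image contains a Python runtime (swap in a command appropriate for your application):

```bash
# Hypothetical smoke test: run a throwaway container from the new image.
docker run --rm ECR-name:${IMAGE_TAG} python --version
```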
10. Tagging and Pushing the Docker Image
```yaml
          - echo "Tagging the Docker image..."
          - docker tag ECR-name:${IMAGE_TAG} ************.dkr.ecr.ap-south-1.amazonaws.com/ECR-name:${IMAGE_TAG}
          - echo "Logging in to Amazon ECR..."
          - aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin ************.dkr.ecr.ap-south-1.amazonaws.com
          - echo "Pushing the Docker image to ECR..."
          - docker push ************.dkr.ecr.ap-south-1.amazonaws.com/ECR-name:${IMAGE_TAG}
```
- Purpose: Tags the Docker image appropriately and pushes it to Amazon ECR. This step ensures that the latest image is available for deployment.
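To confirm the push landed, you could query ECR for the new tag. A minimal sketch using the same repository placeholder as the pipeline:

```bash
# Verify the image now exists in ECR under the expected tag.
aws ecr describe-images \
  --repository-name ECR-name \
  --image-ids imageTag=${IMAGE_TAG} \
  --query 'imageDetails[0].imagePushedAt'
```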
11. Fetching and Updating the ECS Task Definition
```yaml
          - echo "Fetching the latest task definition..."
          - |
            TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition ECR-name)
            echo "$TASK_DEFINITION" > current-taskdef.json
            echo "Current task definition:"
            cat current-taskdef.json
          - echo "Updating the task definition with the new Docker image..."
          - |
            CONTAINER_DEFINITIONS=$(echo "$TASK_DEFINITION" | jq --arg IMAGE "************.dkr.ecr.ap-south-1.amazonaws.com/ECR-name:${IMAGE_TAG}" '
              .taskDefinition.containerDefinitions[0].image = $IMAGE |
              {
                containerDefinitions: .taskDefinition.containerDefinitions,
                family: .taskDefinition.family,
                executionRoleArn: .taskDefinition.executionRoleArn,
                taskRoleArn: .taskDefinition.taskRoleArn,
                networkMode: .taskDefinition.networkMode,
                volumes: .taskDefinition.volumes,
                requiresCompatibilities: .taskDefinition.requiresCompatibilities,
                cpu: .taskDefinition.cpu,
                memory: .taskDefinition.memory,
                runtimePlatform: .taskDefinition.runtimePlatform
              }')
            echo "$CONTAINER_DEFINITIONS" > updated-taskdef.json
            echo "Updated task definition:"
            cat updated-taskdef.json
```
- Purpose:
- Fetching: Retrieves the current ECS task definition and saves it to a file.
- Updating: Uses jq to point the first container definition at the new Docker image, then rebuilds the JSON with only the writable fields. This filtering is necessary because describe-task-definition returns read-only fields (such as taskDefinitionArn, revision, and status) that register-task-definition rejects.
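Before registering, it is cheap to validate that the generated file parses and that the image was actually swapped. A sketch using jq (same placeholder names as above):

```bash
# jq empty exits non-zero if the file is not valid JSON.
jq empty updated-taskdef.json
# Confirm the first container now points at the new image tag.
jq -r '.containerDefinitions[0].image' updated-taskdef.json
```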
12. Registering the New Task Definition
```yaml
          - echo "Registering new task definition..."
          - |
            TASK_DEF_ARN=$(aws ecs register-task-definition --cli-input-json file://updated-taskdef.json --query "taskDefinition.taskDefinitionArn" --output text)
            echo "New Task Definition ARN: $TASK_DEF_ARN"
```
- Purpose: Registers the updated task definition with ECS and retrieves the new task definition ARN. This step is crucial for updating the ECS service with the latest image.
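You can double-check the registration by describing the new revision back. A sketch:

```bash
# Confirm ECS knows about the new revision and report its number.
aws ecs describe-task-definition \
  --task-definition "$TASK_DEF_ARN" \
  --query 'taskDefinition.revision' --output text
```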
13. Updating the ECS Service
```yaml
          - echo "Checking current service desired task count..."
          - |
            DESIRED_TASK_COUNT=$(aws ecs describe-services --services service-name --cluster cluster-name --query "services[0].desiredCount" --output text)
            echo "Current Desired Task Count: $DESIRED_TASK_COUNT"
          - echo "Updating ECS service to use the latest task definition revision..."
          - |
            if [ -z "$TASK_DEF_ARN" ]; then
              echo "Task Definition ARN is empty. Cannot proceed with update."
              exit 1
            fi
            if [ "$DESIRED_TASK_COUNT" -eq 0 ]; then
              echo "Desired task count is 0. Updating to 1..."
              aws ecs update-service --cluster cluster-name --service service-name --task-definition $TASK_DEF_ARN --desired-count 1 --force-new-deployment
            else
              echo "Desired task count is already greater than 0. No update needed."
              aws ecs update-service --cluster cluster-name --service service-name --task-definition $TASK_DEF_ARN --force-new-deployment
            fi
```
- Purpose:
- Checking: Retrieves the current desired task count of the ECS service.
- Updating: Updates the ECS service to use the new task definition. If the desired task count is zero, it sets it to one to ensure the service runs.
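Note that update-service returns as soon as the deployment is initiated, not when it finishes. If you want the pipeline to fail when the rollout does, a hedged addition using the same service-name/cluster-name placeholders (this waiter can block for several minutes):

```bash
# Block until the service reaches a steady state, or fail the step on timeout.
aws ecs wait services-stable \
  --cluster cluster-name \
  --services service-name
```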
14. Defining the Docker Service Memory Allocation
```yaml
definitions:
  services:
    docker:
      memory: 2048
```
- Purpose: Allocates 2048 MB of memory to the Docker service. This ensures the pipeline has sufficient resources for building and pushing the Docker image.
Conclusion
By using this Bitbucket Pipeline, we automate the entire process of building, tagging, and pushing a Docker image, and then updating an ECS service. This not only saves time but also reduces the potential for human error, ensuring a consistent and reliable deployment process.
The benefits of this pipeline include:
- Automation: Reduces manual intervention, increasing efficiency and consistency.
- Security: Uses AWS Secrets Manager to securely manage sensitive information.
- Scalability: Easily integrates with other AWS services, allowing for scalable deployments.
- Debugging: Provides system information and logs at each step for easier debugging and monitoring.
Implementing this pipeline can significantly streamline your deployment process, allowing your team to focus on developing features rather than managing deployments.