Running Kumologica container on AWS ECS Fargate
We have observed several examples and use cases of building a low-code API using Kumologica and deploying it on AWS Lambda and Azure Functions. In this article, we will guide you through the process of building and deploying a simple “Hello World” service as a Docker container within AWS Elastic Container Service (ECS) Fargate.
Architecture
Based on the design outlined in the diagram (Fig 1), we will be building Kumologica applications as Docker containers. Once the Docker image is ready, we will push it to AWS Elastic Container Registry (ECR), which stores all the Docker images. The Elastic Container Service (ECS) is AWS’s container management platform, responsible for managing workload availability, scaling, and networking. To deploy a workload to ECS, we need to create an ECS task definition, which specifies the Docker image to be used, the service name, and the launch type. In this case, we will select Fargate as the launch type.
Incoming request traffic to the deployed containers is load-balanced by the AWS Application Load Balancer. This tutorial focuses specifically on exposing services through the Application Load Balancer, although a standard approach would involve integrating the load balancer with an API Gateway, which can then be connected to Route 53.
Implementation
In this section we will cover the prerequisites and then walk step by step through building the flow with Kumologica and deploying it on ECS Fargate.
Pre-requisites
In order to implement the architecture we need the following platforms and tooling available.
1. Access to an AWS account.
2. Install the AWS CLI on your local machine.
3. Download and install Kumologica.
4. Download and install Docker.
Once the above pre-requisites are met, we can start the implementation steps.
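Before moving on, a quick version check from the terminal is an easy way to confirm that the AWS CLI and Docker are installed correctly; the exact output will vary with the versions you have installed.

aws --version
docker --version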
Steps
Building the flow
We need to start by building a simple API service flow in Kumologica, which we will later dockerize. To do this, let's open the Kumologica Designer.
1. File > New Project. This will open a popup for providing the project name and file location. Give the name as hello-kl.
2. Drag and drop an Event Listener node from the palette to the canvas. Open the node and provide the following settings.
Display Name : [GET] /hello
Provider : AWS
Event Source : Amazon API Gateway
Verb : GET
URL : /hello
3. Wire the Event Listener node to a Logger node.
Display Name: Log_Entry
Level : INFO
Message : Inside the service
Log format : String
4. Finally wire an Event Listener End node to the logger to complete the flow.
Display Name : Success
Response : Http response
Status code : 200
Content-Type : application/json
Payload : {"status" : "HelloWorld"}
The flow should look like the one shown in Fig 2 below.
Dockerizing the API flow
In this section we will see how to dockerize the Kumologica flow. Ensure that the Dockerfile and the index.js file are present in the root folder of your Kumologica project (Fig 4); these two files are not created by default in a Kumologica project.
Following is the Dockerfile.
# Lightweight Node.js base image
FROM node:16-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
ENV PATH /app/node_modules/.bin:$PATH
# Copy the flow file, index.js and the rest of the project
COPY . .
# Port on which the flow listens
EXPOSE 1880
CMD ["node","index.js"]
Following is the index.js file. Here hello-world-service-flow.json is the flow file in the project folder.
const { NodeJsFlowBuilder } = require('@kumologica/runtime');
new NodeJsFlowBuilder('hello-world-service-flow.json').listen();
1. Now we will do the docker build with the following command.
docker build . -t hello-kl-docker-app
2. Once the Docker build is completed we need to push the image to AWS ECR. For this we first need to log in to ECR via the Docker client using the following command. The command below is for a private registry; it may vary for a public ECR registry.
aws ecr get-login-password --region <<aws region>> | docker login --username AWS --password-stdin <<aws accountid>>.dkr.ecr.<<aws region>>.amazonaws.com
3. Once the client login is established with ECR, we will tag the image using the following command.
docker tag hello-kl-docker-app <<aws accountid>>.dkr.ecr.<<aws region>>.amazonaws.com/hello-kl-docker-app:latest
4. Create a private ECR repository in AWS, either via the console or via the command line (see the sketch after this list). The repository name must match the image name used when pushing the Docker image.
5. Once we have the repository created, let's push the Docker image with the following command.
docker push <<aws accountid>>.dkr.ecr.<<aws region>>.amazonaws.com/hello-kl-docker-app:latest
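For step 4, if you prefer the command line, a repository with a matching name can be created with a call like the one below; the repository name simply mirrors the image tag used in this tutorial, and the region placeholder follows the same convention as the earlier commands.

aws ecr create-repository --repository-name hello-kl-docker-app --region <<aws region>>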
Setting up ECS
The first step in setting up ECS Fargate is creating a cluster by providing a cluster name.
The next step is to create a task definition to deploy the container. Go to AWS ECS and create a task definition.
a. Provide the task definition family name.
b. Select AWS Fargate as the launch type.
c. Select the repository image URI and provide the container name.
d. Provide the port mapping with the container port as 1880.
Leave the memory, CPU and other parameters as default. A CLI sketch of these two steps follows this list.
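For readers who prefer scripting these steps, a rough CLI equivalent is sketched below. The cluster name, task family, CPU/memory sizing and execution role are illustrative assumptions; only the container port (1880) and the image URI follow from the steps above.

# Create the ECS cluster (the name is an assumed example)
aws ecs create-cluster --cluster-name hello-kl-cluster

# Register a Fargate task definition for the container
cat > taskdef.json <<'EOF'
{
  "family": "hello-kl-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<<aws accountid>>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "hello-kl-docker-app",
      "image": "<<aws accountid>>.dkr.ecr.<<aws region>>.amazonaws.com/hello-kl-docker-app:latest",
      "portMappings": [{ "containerPort": 1880, "protocol": "tcp" }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json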
Deploying the service
To make a deployment to ECS we need to choose between a service and a standalone task. In this tutorial we are deploying an API, so we select a service.
Create a service under the cluster by selecting the launch type as Fargate.
a. Provide the service name.
b. Provide the launch type as Fargate.
c. Select the task definition that was created in the earlier step.
d. Leave the execution role as default.
e. Configure load balancing:
1. Select Application Load Balancer (since we are exposing an API).
2. Provide the listener port as HTTP 80.
3. Provide the health check port as 1880 and the health check path as the one defined in the Kumologica flow (/hello).
Leave the rest of the settings as default. A CLI sketch of the service creation follows this list.
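For reference, the same service creation can be scripted roughly as shown below. The subnet, security group and target group ARN are placeholders to be substituted from your own VPC and load balancer setup; the cluster, service and task definition names match the assumed examples from the previous sketch, and the container name and port come from the task definition.

aws ecs create-service \
  --cluster hello-kl-cluster \
  --service-name hello-kl-service \
  --task-definition hello-kl-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<<subnet id>>],securityGroups=[<<security group id>>],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<<target group arn>>,containerName=hello-kl-docker-app,containerPort=1880"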
Ensure that the security group attached to the Application Load Balancer allows inbound HTTP traffic on the listener port allocated in the earlier service creation step.
Once deployed, you can go to the Application Load Balancer section to fetch the generated DNS name (A record). Use the path /hello with this DNS name to invoke the endpoint. This will return the message: {"status" : "HelloWorld"}
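Assuming the same placeholder convention as the earlier commands, the endpoint can be exercised from the command line as shown below; a successful call returns the JSON payload configured in the Event Listener End node.

curl http://<<alb dns name>>/hello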
Conclusion
I hope the tutorial gave you a thorough understanding of how to build and deploy a Kumologica API flow into ECS Fargate. However, it does not include details on script-based provisioning of the ECS cluster, task definition, or ECS service deployment. These topics will be addressed in a separate article.