Running applications in a Kubernetes cluster has many advantages, including scalability, flexibility, and ease of management. However, to make our applications highly available and resilient, we often need a load balancer to distribute the incoming traffic across multiple pods or nodes. Amazon Web Services (AWS) offers the Elastic Load Balancer (ELB) service, which can be integrated with our Kubernetes cluster to achieve this. This blog post will explore how to manage ELB for a Kubernetes cluster using the AWS Load Balancer Controller.
The AWS Load Balancer Controller is an open-source project that simplifies the integration of AWS Elastic Load Balancers with Kubernetes clusters. Running in the cluster as a Kubernetes controller, it automates the creation and management of AWS load balancers: it watches Ingress and Service resources and provisions Application Load Balancers and Network Load Balancers to match, so we can configure and manage AWS load balancers directly from our Kubernetes cluster.
Before we start managing ELBs with the AWS Load Balancer Controller, we should have the following prerequisites in place: a running Amazon EKS cluster, the AWS CLI configured with credentials for the cluster's account, kubectl, and Helm v3.
After installing the controller, we must configure it to manage our AWS load balancers. We can do this by creating an IAM policy, role, and ServiceAccount for the controller, as well as defining the necessary AWS annotations in our Kubernetes resources.
1. Create an IAM policy granting the controller the necessary permissions to manage AWS load balancers. We can use the AWS CLI to create this policy.
# curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
# aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam-policy.json
2. Create an IAM role whose trust policy allows the controller's Kubernetes service account to assume it through the cluster's OIDC provider. First, look up the cluster's OIDC issuer:
# aws eks describe-cluster --name <CLUSTER_NAME> \
--query "cluster.identity.oidc.issuer" --output text
https://oidc.eks.<REGION_CODE>.amazonaws.com/id/EXAMPLE1234OI5DC1234OI5DCEXAMPLE
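Before the role can be assumed, the cluster needs an IAM OIDC identity provider matching this issuer URL. As a quick check (assuming the example issuer ID above), we can list existing providers, and, if none matches, one way to associate a provider is with eksctl:
# aws iam list-open-id-connect-providers | grep EXAMPLE1234OI5DC1234OI5DCEXAMPLE
# eksctl utils associate-iam-oidc-provider --cluster <CLUSTER_NAME> --approve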
# cat >load-balancer-role-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_URL>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_URL>:aud": "sts.amazonaws.com",
          "<OIDC_URL>:sub": "system:serviceaccount:aws-load-balancer-controller:aws-load-balancer-controller"
        }
      }
    }
  ]
}
EOF
# aws iam create-role \
--role-name AmazonEKSLoadBalancerControllerRole \
--assume-role-policy-document file://load-balancer-role-trust-policy.json
3. Attach the AWSLoadBalancerControllerIAMPolicy created in step 1 to the IAM role. Replace <ACCOUNT_ID> with our AWS account ID.
# aws iam attach-role-policy \
--policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--role-name AmazonEKSLoadBalancerControllerRole
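To confirm the attachment before moving on, we can list the role's attached policies:
# aws iam list-attached-role-policies --role-name AmazonEKSLoadBalancerControllerRole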
4. Installing the AWS Load Balancer Controller add-on
1. Run the AWS update-kubeconfig command to add the cluster's context to the kubeconfig file, and confirm that it updates ~/.kube/config:
# aws eks --region <REGION_CODE> update-kubeconfig --name <CLUSTER_NAME>
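A quick sanity check that kubectl is now pointed at the right cluster:
# kubectl config current-context
# kubectl get nodes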
2. Create the Kubernetes service account manifest for our cluster. The service account named aws-load-balancer-controller is annotated with the ARN of the AmazonEKSLoadBalancerControllerRole IAM role created earlier.
# cat >aws-load-balancer-controller-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: aws-load-balancer-controller
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/AmazonEKSLoadBalancerControllerRole
EOF
3. Run the kubectl commands below to create the namespace (which must exist first) and the service account:
# kubectl create namespace aws-load-balancer-controller
# kubectl apply -f aws-load-balancer-controller-service-account.yaml
4. Install the AWS Load Balancer Controller using Helm v3
To install the AWS Load Balancer Controller, follow these steps:
# helm repo add eks https://aws.github.io/eks-charts
# helm repo update
# helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--namespace=aws-load-balancer-controller \
--set clusterName=<CLUSTER_NAME> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
5. Verify the deployment
# kubectl get deployment -n aws-load-balancer-controller aws-load-balancer-controller
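If the deployment reports ready replicas, the controller pods should also be running:
# kubectl get pods -n aws-load-balancer-controller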
# Sample nginx deployment (saved together with the service below as nginx_deploy.yml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
# Service exposing nginx as ClusterIP
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: nginx
Apply the deployment and service configuration by running the kubectl command below:
# kubectl apply -f nginx_deploy.yml
To verify the deployment, run:
# kubectl get deployment nginx
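Assuming the pod is ready, the nginx service should now have endpoints behind it:
# kubectl get svc nginx
# kubectl get endpoints nginx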
Next, create an Ingress resource with AWS-specific annotations to control how the controller configures the load balancer.
For example, we can specify the scheme (internet-facing or internal), listen ports, SSL termination, authentication, and other settings; the controller provisions an Application Load Balancer for Ingress resources and a Network Load Balancer for Services of type LoadBalancer. Here we'll be using AWS Certificate Manager (ACM) to configure HTTPS, so we need to provide the ACM certificate ARN in the annotation alb.ingress.kubernetes.io/certificate-arn. Because the nginx service is a ClusterIP service, we also set alb.ingress.kubernetes.io/target-type: ip so the ALB targets the pod IPs directly (the default instance target type requires a NodePort service).
Here's an example of an Ingress resource with AWS annotations; the ssl-redirect action is wired to the HTTP listener by referencing the special backend service name ssl-redirect with port name use-annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: <acm_ssl_arn>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: >-
      {
        "Type": "redirect",
        "RedirectConfig": {
          "Protocol": "HTTPS",
          "Port": "443",
          "Host": "#{host}",
          "Path": "/#{path}",
          "Query": "#{query}",
          "StatusCode": "HTTP_301"
        }
      }
spec:
  rules:
    - host: demo-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
Once the controller is installed and configured, it will automatically create and manage AWS load balancers based on our Kubernetes resources. This means we can define and update our load balancers directly in our cluster’s YAML files, making it easier to manage our application’s network traffic.
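For example, once the ALB for the Ingress above has been provisioned (this can take a minute or two), its DNS name appears in the Ingress status and can be used as the DNS target for demo-app.example.com:
# kubectl get ingress my-ingress
# kubectl get ingress my-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'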
Managing Elastic Load Balancers for a Kubernetes cluster using the AWS Load Balancer Controller simplifies load balancer configuration and management. By integrating the controller with our cluster, we can define our load balancers as Kubernetes resources and let the controller handle the rest. This approach streamlines operations, increases automation, and ensures a consistent and reliable network infrastructure for our applications in the AWS cloud.