simple-eks-cluster
Can you show the developers at Miztiik Unicorn Corp how to launch a Kubernetes cluster in AWS and use kubectl to interact with the cluster?
In this demo, let us launch an EKS cluster in a custom VPC using AWS CDK. The cluster will have the following attributes:
- c_admin IAM role added to the aws-auth ConfigMap to administer the cluster from the CLI
- t3.medium instances running Amazon Linux 2
- 2 desired instances

This demo, the instructions, scripts and CloudFormation template are designed to be run in us-east-1. With a few modifications you can try it out in other regions as well (not covered here).
# Install python3, pip and virtualenv (run as root on Amazon Linux)
yum install -y python3
yum install -y python3-pip
pip3 install virtualenv
Get the application code
git clone https://github.com/miztiik/simple-eks-cluster
cd simple-eks-cluster
We will use cdk
to make our deployments easier. Let's go ahead and install the necessary components.
# You should have npm pre-installed
# If you DONT have cdk installed
npm install -g aws-cdk
# Make sure you are in the root directory of the repo
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
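Before bootstrapping, you can quickly confirm the toolchain is in place. These are plain version checks, nothing repo-specific,
# Confirm the toolchain is available
cdk --version
python3 -V
pip3 -V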
The very first time you deploy an AWS CDK app into an environment (account/region), you'll need to install a bootstrap stack. Otherwise, just go ahead and deploy using cdk deploy.
cdk bootstrap
cdk ls
# Follow on screen prompts
You should see an output of the available stacks,
eks-cluster-vpc-stack
eks-cluster-stack
Let us walk through each of the stacks.
Stack: eks-cluster-vpc-stack
This stack provisions the custom VPC in which the cluster will be launched.
Stack: eks-cluster-stack
As we are starting out with a new cluster, we will use mostly defaults. No logging or add-ons are configured.
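If you would like to review what CDK is about to create before deploying, you can synthesize and diff the stacks first. These are standard CDK toolkit commands, not something specific to this repo,
# Inspect the generated CloudFormation template (optional)
cdk synth eks-cluster-stack
# Show what would change relative to any deployed state
cdk diff eks-cluster-stack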
Initiate the deployment with the following commands,
cdk deploy eks-cluster-vpc-stack
cdk deploy eks-cluster-stack
After successfully deploying the stack, check the Outputs
section of the stack. You will find the *ConfigCommand*
that allows you to interact with your cluster using kubectl
Connect To EKS Cluster:
Run the *ConfigCommand* from the stack outputs to update your kubeconfig. It uses the c_admin role that was added to the aws-auth ConfigMap, so your kubectl requests are authorized as a cluster administrator.
Let us verify we can reach the cluster and list its nodes
# Set kubeconfig
# Replace <YOUR_ACCOUNT_ID> with your AWS account id; the role name comes from the stack output
aws eks update-kubeconfig \
    --name 1_cdk_c \
    --region us-east-1 \
    --role-arn arn:aws:iam::<YOUR_ACCOUNT_ID>:role/eks-cluster-stack-cAdminRole655A13CE-XBF2V3PPV4FI
# List nodes
kubectl get no
# Sample Output
(.venv) simple-eks-cluster]# kubectl get no
NAME STATUS ROLES AGE VERSION
ip-10-10-0-90.ec2.internal Ready <none> 36h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 36h v1.18.9-eks-d1db3c
# Watching the status of nodes,
(.venv) simple-eks-cluster]# kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-10-10-0-90.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 15h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-0-90.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
ip-10-10-1-229.ec2.internal Ready <none> 16h v1.18.9-eks-d1db3c
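As an additional sanity check, you can look at the system pods EKS runs for you; on a default cluster you should see aws-node, kube-proxy and coredns entries. These are generic kubectl checks, not specific to this stack,
# Confirm the API server endpoint
kubectl cluster-info
# List the system pods managed by EKS
kubectl get pods -n kube-system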
You may face an error in the AWS Management Console. For example, you may not be able to see workloads or nodes.
Make sure you are using the same user/role you used to deploy the cluster. If they are different, then you need to add the console user/role to the aws-auth ConfigMap. This doc has the instructions for the same
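A minimal sketch of that mapping, assuming a hypothetical console role; replace the placeholders with the identity your console session actually uses,
# Open the aws-auth ConfigMap for editing
kubectl edit configmap aws-auth -n kube-system
# Under data.mapRoles, add an entry like the following (placeholder ARN):
#   - rolearn: arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_CONSOLE_ROLE>
#     username: console-admin
#     groups:
#       - system:masters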
Here we have demonstrated how to use AWS CDK to launch a highly available EKS cluster. You can extend this by launching your workloads using service
and deployment
manifests, as sketched below.
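As an illustration, a minimal nginx workload could be deployed like this. The names and image here are hypothetical placeholders, not part of this repo,
# Deploy a sample nginx deployment and service (hypothetical example)
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx
spec:
  type: LoadBalancer
  selector:
    app: hello-nginx
  ports:
  - port: 80
    targetPort: 80
EOF
# Watch for the external load balancer address to appear
kubectl get svc hello-nginx --watch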
If you want to destroy all the resources created by the stack, execute the below command to delete the stack, or you can delete the stack from the console as well
# Delete from cdk
cdk destroy
# Follow any on-screen prompts
# Delete the CF stack, if you used CloudFormation to deploy the stack
aws cloudformation delete-stack \
--stack-name "MiztiikAutomationStack" \
--region "${AWS_REGION}"
This is not an exhaustive list, please carry out other necessary steps as maybe applicable to your needs.
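For example, any LoadBalancer-type services you created provision load balancers outside the CloudFormation stack; one quick way to check for such leftovers (standard AWS CLI calls),
# Look for load balancers that may outlive the stack
aws elb describe-load-balancers --region us-east-1
aws elbv2 describe-load-balancers --region us-east-1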
This repository aims to show new developers, Solution Architects & Ops Engineers in AWS how to use AWS EKS. Building on that knowledge, these Udemy courses (course #1, course #2) help you build complete architectures in AWS.
Thank you for your interest in contributing to our project. Whether it is a bug report, new feature, correction, or additional documentation or solutions, we greatly value feedback and contributions from our community. Start here
Buy me a coffee ☕.
Level: 200