How to launch and use Amazon EKS

Amazon EKS is a managed service that makes it easier to run Kubernetes on AWS. With EKS, organizations can run Kubernetes without installing and operating their own Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) offering that drastically simplifies Kubernetes deployment on AWS.
With Amazon EKS, the Kubernetes control plane — including the backend persistence layer and the API servers — is provisioned and scaled across various AWS availability zones, resulting in high availability and eliminating a single point of failure. Unhealthy control plane nodes are detected and replaced, and patching is provided for the control plane. The result is a resilient AWS-managed Kubernetes cluster that can withstand even the loss of an availability zone.

Amazon EKS Features
The main benefit of Amazon EKS is that organizations can take full advantage of the reliability, availability, performance, and scale of the AWS platform, including its integrations with AWS security and networking services. Key features include:
- Managed Control Plane
- Managed Worker Nodes
- Service Discovery
- VPC Support
- Load Balancing
- Managed Cluster Updates
Steps to Set Up Amazon EKS
Getting started with Amazon EKS is straightforward, but it does come with a short list of prerequisites. If you’ve been running on AWS for quite a while, chances are you already have the prerequisite components set up.
Prerequisites
The following components must be installed and set up on your AWS account before you can get started with Amazon EKS:
- kubectl
- eksctl (used below to create the cluster)
- AWS CLI
- aws-iam-authenticator
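A quick sanity check that the tools are installed and on your PATH (version numbers will vary):

```shell
# Verify the prerequisite tooling is available
kubectl version --client
aws --version
aws-iam-authenticator version
eksctl version
```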
Create the IAM Role
1. Open the IAM console.
2. Choose Roles > Create role.
3. Select EKS from the list of services, choose Allows Amazon EKS to manage your clusters on your behalf for your use case, then click Next: Permissions.
4. Click Next: Tags.
5. Click Next: Review.
6. Enter a role name (for example, eksrole) and click Create role.
Create Amazon EKS Cluster
The following eksctl configuration file creates an EKS cluster with three t2.micro worker nodes. Adjust the node count and instance type to suit your requirements.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ekscluster
  region: ap-south-1
nodeGroups:
  - name: node1
    desiredCapacity: 3
    instanceType: t2.micro
    ssh:
      publicKeyName: eks
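Assuming the configuration above is saved as cluster.yaml (the filename is my choice), the cluster can be created with eksctl, which also writes the new context into your kubeconfig:

```shell
# Create the cluster from the config file; this typically takes 10-20 minutes
eksctl create cluster -f cluster.yaml

# eksctl updates ~/.kube/config automatically; verify connectivity
kubectl get nodes
```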


Once the cluster is up, we can see in the EC2 console that three instances are launched, matching the node group configuration we asked for.

Creating EFS for storing data


A VPC is automatically created when the cluster is created. We create our EFS file system inside that VPC so the worker nodes can reach it.
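The file system can also be created from the CLI. All IDs below are placeholders: the mount targets must use the cluster's subnets, and the security group must allow inbound NFS (TCP 2049) from the worker nodes.

```shell
# Create the EFS file system in the cluster's region
aws efs create-file-system --creation-token eks-efs --region ap-south-1

# Create one mount target per cluster subnet (repeat for each subnet);
# subnet and security-group IDs are placeholders for the ones eksctl created
aws efs create-mount-target \
    --file-system-id fs-1d9f15cc \
    --subnet-id subnet-0abc1234 \
    --security-groups sg-0abc1234 \
    --region ap-south-1
```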
Creating EFS Provisioner
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-1d9f15cc
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: eks/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-1d9f15cc.efs.ap-south-1.amazonaws.com
            path: /

Replace the FILE_SYSTEM_ID value with the ID shown on the EFS service page in the AWS Management Console (or print it as an output from your Terraform code, if that is how you provision EFS). Also replace the NFS server address accordingly.
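Assuming the manifest is saved as efs-provisioner.yaml (filename is my choice), apply it and confirm the pod comes up:

```shell
kubectl apply -f efs-provisioner.yaml
kubectl get pods -l app=efs-provisioner
```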
Modifying RBAC
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: thegreat
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

The YAML above uses role-based access control (RBAC) to grant the default service account in the thegreat namespace cluster-admin permissions, so the EFS provisioner can create and bind PersistentVolumes. Note that cluster-admin is a very broad role; in production, scope this down to only the permissions the provisioner needs.
Creating Storage Class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: eks/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-jenkins
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
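Apply the storage class and claims (saved here as storage.yaml, a name I chose) and check that both PVCs move from Pending to Bound once the provisioner is running:

```shell
kubectl apply -f storage.yaml
kubectl get sc aws-efs
kubectl get pvc efs-jenkins efs-mysql
```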

Deployments
MySQL
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jenkins-mysql
  labels:
    app: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: confidential
                  key: sqlrootpassword
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: confidential
                  key: sqluser
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: confidential
                  key: sqlpassword
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: confidential
                  key: sqldb
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-ps
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-ps
          persistentVolumeClaim:
            claimName: efs-mysql
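The Deployment reads all of its credentials from a Secret named confidential, which must exist before the pod can start. A minimal sketch that creates it (all values are placeholders you should replace):

```shell
kubectl create secret generic confidential \
    --from-literal=sqlrootpassword='changeme-root' \
    --from-literal=sqluser='jenkins' \
    --from-literal=sqlpassword='changeme-user' \
    --from-literal=sqldb='jenkinsdb'
```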

Jenkins
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jenkins
  labels:
    app: jenkinso
spec:
  selector:
    matchLabels:
      app: jenkinso
      tier: deploy
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkinso
        tier: deploy
    spec:
      containers:
        - image: jenkins
          name: jenkins
          env:
            - name: JENKINS_SLAVE_AGENT_PORT
              valueFrom:
                secretKeyRef:
                  name: jen
                  key: dcpassword
            - name: database_connection_database
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: slave
          ports:
            - containerPort: 8080
              name: jenkins
          volumeMounts:
            - name: jenkins-ps
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-ps
          persistentVolumeClaim:
            claimName: efs-jenkins
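This Deployment likewise references Secrets named jen and mysecret. Create them with your own values, then expose Jenkins outside the cluster, for example with a LoadBalancer Service (the expose step is my suggestion; the secret values below are placeholders):

```shell
# Placeholder secrets matching the keys referenced in the Deployment
kubectl create secret generic jen --from-literal=dcpassword='50000'
kubectl create secret generic mysecret --from-literal=slave='jenkinsdb'

# Expose the Jenkins UI via an AWS load balancer
kubectl expose deployment jenkins --type=LoadBalancer --port=8080
kubectl get svc jenkins   # note the EXTERNAL-IP hostname
```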

Output

And our whole setup is ready.
For queries or suggestions, you can DM me.