
Deploying Kubernetes on AWS using Terraform Pt. 2

Today I’m going to walk you through how to deploy Kubernetes on AWS using Terraform. In Part 1 of this series, I mentioned that I had a tough time getting my Terraform demonstration to work because of my auto-scaling configuration: it spun up an endless stream of instances on my AWS account and prevented me from creating more once I hit my instance limit. This time around, I was able to get the demo working successfully! The procedure and setup are different from my first attempt, but since the first webinar included an in-depth overview of Kubernetes, I’m keeping both in the same series.

The bulk of this blog post will be a guide for the demo, but here are my blog posts on Terraform and Kubernetes, which give a bit more context on the tools and technology used in the demo. Now let me walk you through how to deploy Kubernetes on AWS using Terraform.

Prerequisites:

  • An AWS account with the IAM permissions listed in the EKS module documentation
  • Install and configure AWS CLI
  • Install and configure AWS IAM Authenticator
  • Install kubectl
  • Install wget (required for the eks module)

These prerequisites only require a handful of commands, so they shouldn’t be too hard to set up. I won’t be going over them in this demo, but all of the commands are included in my blog post for this webinar.
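
For reference, here’s roughly what that setup looks like on Linux. This is a sketch, not the authoritative procedure: the exact download URLs and versions come from the official AWS and Kubernetes docs and change over time, so check those for your platform.

# Configure the AWS CLI with your access key, secret key, and default region
aws configure

# Install kubectl (example for Linux x86_64, latest stable release)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# Install the AWS IAM Authenticator (version and URL may differ; see the AWS docs)
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
chmod +x aws-iam-authenticator && sudo mv aws-iam-authenticator /usr/local/bin/

# Install wget (Debian/Ubuntu shown; use "brew install wget" on macOS)
sudo apt-get install -y wget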

So for this demo we’re going to use the following repository:

https://github.com/hashicorp/learn-terraform-provision-eks-cluster

Clone that project then open the root directory in your code editor and your command line.
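
For example:

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster.git
cd learn-terraform-provision-eks-cluster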

Here you will find the files we are going to use to set up our environment on AWS.

  • “vpc.tf” is where we are going to provision our VPC, subnets, and availability zones using the AWS VPC Module.
  • “security-groups.tf” is where we are going to provision the security groups for our EKS cluster.
  • “eks-cluster.tf” is where we are going to provision all the resources required to set up an EKS cluster in the private subnets, such as Auto Scaling groups, along with bastion servers to access the cluster, using the AWS EKS Module.
  • If we take a quick look at line 14, you can see that in our Auto Scaling group section we’ve declared that three nodes should be created for our Kubernetes cluster. You’ll see them later once we spin up the Kubernetes dashboard.
  • “outputs.tf” defines the output configuration.
  • “versions.tf” sets the required versions for all of the services & modules that we are going to be using today.

So overall, we are going to have one virtual private cloud, three availability zones, and six subnets (three public and three private).

Getting Started: Launch Kubernetes Cluster on AWS

Run the following command:

terraform init

This is going to initialize your working directory and download the provider plugins and modules your configuration needs. Then run the following command:

terraform apply

This command executes your configuration files. Once you accept the prompt, Terraform will go off and provision your cluster on AWS.
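
Provisioning an EKS cluster can take ten minutes or more. Once the apply completes, you can print the values Terraform exported (as defined in “outputs.tf”):

terraform output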

Configuring Kubectl

Great, so now that our EKS cluster is running, we need to configure kubectl, the command-line tool we are going to use to control our Kubernetes cluster. Enter the following command:

aws eks --region us-east-2 update-kubeconfig --name {insert cluster name}

This is going to connect to the EKS cluster we just deployed, fetch the access credentials, then save them to our local machine. Please note that you will need to replace the region and cluster name in the command above with the values from your Terraform output.
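
If your configuration defines region and cluster_name outputs (the example repository does), you can let Terraform fill those values in for you; note that the -raw flag requires Terraform 0.14 or newer:

aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)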

Deploying the Kubernetes Metrics Server & Dashboard

Next, to verify that our Kubernetes cluster is configured correctly and running, we’re going to deploy and access the Kubernetes dashboard. Before that, we need to download and unzip the Kubernetes metrics server, since it is not deployed by default on an EKS cluster. You can do that with the following command:

wget -O v0.3.7.tar.gz https://codeload.github.com/kubernetes-sigs/metrics-server/tar.gz/v0.3.7 && tar -xzf v0.3.7.tar.gz

Next, we’ll deploy it with the following command:

kubectl apply -f metrics-server-0.3.7/deploy/1.8+/

Finally, we’ll check that it was deployed successfully:

kubectl get deployment metrics-server -n kube-system
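
Once the metrics server reports as available, you can also sanity-check it by requesting live node metrics (this only works when metrics-server is running):

kubectl top nodes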

Now it is time to deploy our Kubernetes dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

This command applies the manifests that create all of the resources the dashboard needs.
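
Before moving on, you can optionally confirm that the dashboard pods came up (the v2.0.4 manifest deploys them into the kubernetes-dashboard namespace):

kubectl get pods -n kubernetes-dashboard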

Finally, we’ll create a proxy server that will allow us to navigate to the dashboard from the browser:

kubectl proxy

Once that is running, we can view the dashboard at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ (kubectl proxy serves the cluster API on localhost:8001 by default).

In a new terminal window, let’s create a new Cluster Role Binding with the following command:

kubectl apply -f https://raw.githubusercontent.com/hashicorp/learn-terraform-provision-eks-cluster/master/kubernetes-dashboard-admin.rbac.yaml

Then we’ll retrieve an authentication token with the following command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep service-controller-token | awk '{print $1}')

Copy that token and paste it into the token field on the dashboard login screen; you should then be able to log in and see all of your nodes. With that, you have successfully launched a Kubernetes cluster on AWS with Terraform. Be sure to keep an eye out for my next webinar, where I’ll walk you through how to deploy an app on your newly launched Kubernetes cluster using Terraform! Below you’ll find links to my slide deck for this presentation and a recording of my webinar.

Cassandra.Link

Cassandra.Link is a knowledge base that we created for all things Apache Cassandra. Our goal with Cassandra.Link was not only to fill the gap left by Planet Cassandra, but also to bring the Cassandra community together. Feel free to reach out if you wish to collaborate with us on this project in any capacity.

We are a technology company that specializes in building business platforms. If you have any questions about the tools discussed in this post or about any of our services, feel free to send us an email!