Create a Kubernetes cluster with Terraform

Introduction

This article is the first of a series explaining how to install a vault solution, vaultwarden, on a GKE cluster, and how to connect to it with a VPN: algo-vpn.

I followed these steps for professional purposes, to set up a vault solution for a team of fewer than 10 people. If it can be useful to anyone else, that's great! 😁

In this article, we will see how to create a Kubernetes cluster on GCP. This cluster will be used to deploy the algo VPN. If you already have a Kubernetes cluster, you can skip this article and go to the next step (installing algo VPN on the cluster).

The overall steps will be:

  1. Create Kubernetes cluster
  2. Install algo VPN
  3. Install vault solution

Cluster Creation

The cluster will be a zonal cluster, in zone europe-west4-a. Zonal clusters are less expensive and, with several nodes, still offer some redundancy; however, if the entire zone goes down, the service will be unavailable until the zone is back up. Regional clusters prevent this kind of failure, but they are roughly 3x more expensive.

The cluster will contain one node pool with a single node, as it is a demo cluster. For a production cluster, you should have more nodes (at least three).

We will use a preemptible node to avoid spending too much money 😁. Preemptible nodes have a limited lifetime, so Google will shut them down after at most 24 hours, but we will later install a solution to always keep a node running.

Source

The source for this project can be found here: https://github.com/daniel-jantrambun/roadToVault

Requirements

A few requirements are needed to install a GKE cluster:

  • an active GCP account, linked to a billing account (note that when you create a new GCP account, Google offers you $300 of credit for 90 days)
  • the following APIs should be enabled in GCP (they can be enabled with the gcloud command shown after this list):
    • Compute Engine API (compute.googleapis.com)
    • Cloud Storage API (storage.googleapis.com)
    • Kubernetes Engine API (container.googleapis.com)
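If some of them are not enabled yet, you can enable all three at once with gcloud:

export PROJECT_ID=<your_project_id>

gcloud services enable compute.googleapis.com \
    storage.googleapis.com \
    container.googleapis.com \
    --project=$PROJECT_ID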

Terraform service account

Service account

Create a service account named terraform in GCP IAM with the following roles:

  • Compute Admin
  • Compute Network Admin
  • Kubernetes Engine Cluster Admin
  • Service Account User
  • Storage Admin

You can create the service account and grant the roles with the following script:

export PROJECT_ID=<your_project_id>
export SERVICE_ACCOUNT=terraform

gcloud iam service-accounts create $SERVICE_ACCOUNT --display-name="$SERVICE_ACCOUNT"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.admin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.networkAdmin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/container.clusterAdmin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/storage.admin"

Service account key

In the Service Accounts menu, create a new JSON key and store it on your machine (Terraform will read it later). You can also create the key from the command line:

export PROJECT_ID=<your_project_id>
export SERVICE_ACCOUNT=terraform
export KEY_FILE=~/terraform-sa-private-key.json

gcloud iam service-accounts keys create $KEY_FILE \
    --iam-account=$SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com

Google Storage bucket

The terraform state file will be stored in a GCS bucket backend.

On the GCP console, create a bucket (you can name it as you want, but remember that bucket names are globally unique). We only need a regional bucket.

Choose Standard as the default storage class.
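You can also create the bucket from the command line with gsutil (the bucket name below is a placeholder; the region matches the zone chosen earlier):

gsutil mb -l europe-west4 -c standard gs://<your_bucket_name>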

Initialise terraform

Make sure your gcloud is authenticated with:

gcloud auth application-default login

Once you’re connected, go to the terraform directory of your project and initialise your terraform project:

terraform init -backend-config="bucket=<your_bucket_name>"

In the previously created bucket, there should now be a state folder containing a default.tfstate file.
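You can check it from the command line (using the bucket name from the init step):

gsutil ls gs://<your_bucket_name>/state/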

Create Cluster

In the terraform.tfvars file, replace the values of the following variables with your own:

  • terraform_json_key
  • project_id
  • terraform_bucket_name
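For example (all three values below are placeholders to replace with yours):

terraform_json_key    = "~/terraform-sa-private-key.json"
project_id            = "my-project-id"
terraform_bucket_name = "my-terraform-state-bucket"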

In the main.terraform file, we initialise the google provider and set gcs as a backend to store the tfstate file.
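A minimal sketch of what this can look like (the repository holds the authoritative version; the bucket name is passed at init time with -backend-config, as above, and pathexpand handles the ~ in the key path):

terraform {
  backend "gcs" {
    prefix = "state"
  }
}

provider "google" {
  credentials = file(pathexpand(var.terraform_json_key))
  project     = var.project_id
  region      = var.region
}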

We create a new VPC, as the default VPC would conflict with the VPC created by algo in the next step.

# VPC
resource "google_compute_network" "vpc" {
  name                    = "${var.project_id}-vpc"
  auto_create_subnetworks = false
}

# Random suffixes to keep the secondary range names unique
resource "random_id" "cluster" {
  byte_length = 4
}

resource "random_id" "services" {
  byte_length = 4
}

# Subnet
resource "google_compute_subnetwork" "subnet" {
  name                     = "${var.project_id}-subnet"
  region                   = var.region
  network                  = google_compute_network.vpc.name
  ip_cidr_range            = "192.168.1.0/24"
  private_ip_google_access = true

  # Secondary range for the pods
  secondary_ip_range {
    range_name    = "${var.cluster_secondary_range_name}-${random_id.cluster.hex}"
    ip_cidr_range = "10.56.0.0/14"
  }

  # Secondary range for the services
  secondary_ip_range {
    range_name    = "${var.services_secondary_range_name}-${random_id.services.hex}"
    ip_cidr_range = "10.60.0.0/20"
  }
}
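The cluster and its node pool are declared in the repository as well; below is a minimal sketch of what they can look like (var.zone, the machine type, and the resource names here are assumptions, check the repository for the authoritative version):

resource "google_container_cluster" "primary" {
  name     = "${var.project_id}-gke"
  location = var.zone # zonal cluster, e.g. europe-west4-a

  # The node pool is managed separately, so drop the default one.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  # Use the secondary ranges declared on the subnet for pods and services.
  ip_allocation_policy {
    cluster_secondary_range_name  = "${var.cluster_secondary_range_name}-${random_id.cluster.hex}"
    services_secondary_range_name = "${var.services_secondary_range_name}-${random_id.services.hex}"
  }
}

resource "google_container_node_pool" "primary" {
  name       = "${var.project_id}-node-pool"
  location   = var.zone
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true # max 24h lifetime, much cheaper for a demo
    machine_type = "e2-medium"
  }
}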

In addition to the cluster, we create a NAT configuration using a Cloud Router, so that the nodes will be able to pull images when we deploy the vault solution later.
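A minimal sketch of that NAT configuration, reusing the VPC and variables defined above (the resource names are placeholders):

resource "google_compute_router" "router" {
  name    = "${var.project_id}-router"
  region  = var.region
  network = google_compute_network.vpc.name
}

resource "google_compute_router_nat" "nat" {
  name                               = "${var.project_id}-nat"
  router                             = google_compute_router.router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}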

We are now ready to create the cluster. First, check that the terraform configuration is correct:

terraform plan

If everything is OK, you can create the cluster by applying your terraform plan:

terraform apply

The installation should take a few minutes.

At the end, you should see the following line:

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

The output variables should also be printed.
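If you need them again later, terraform can re-print them at any time:

terraform output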

This cluster will be used to install our vault. 🙂

Clean up

To avoid unnecessary billing, we can destroy the cluster:

terraform destroy

More info:

using_gke_with_terraform

