Bootstrapping a GKE Cluster (Part 1)

For those wishing to get some hands-on experience running containers, Google Cloud offers new users a $300 credit valid for 12 months. Google Kubernetes Engine (GKE) is tightly integrated with the cloud console and the gcloud client, making tools like kops and kubernetes-dashboard unnecessary. Coupled with offerings like the Container Registry, GKE is a convenient choice for getting a test environment up quickly.

Prerequisites

  • Enable billing on your account
  • Install the Cloud SDK for your distribution and initialize it
  • Add the kubectl component with gcloud components install kubectl
  • Install the latest version of Terraform
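
On a typical Linux workstation those steps boil down to a handful of commands; the SDK install itself varies by distribution, so treat this as a sketch:

$ gcloud init                        # authenticate and set a default project
$ gcloud components install kubectl  # kubectl ships as a Cloud SDK component
$ gcloud components update           # bring all components up to date
$ terraform version                  # confirm Terraform is on your PATH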

Initializing the Cluster

Manual

As an ad-hoc test, the gcloud client makes it very simple to stand up a cluster:

$ gcloud container clusters create <cluster_name> \
    --num-nodes 1 \
    --zone us-east1-b \
    --additional-zones=us-east1-c,us-east1-d \
    --machine-type n1-standard-1 \
    --enable-cloud-logging \
    --enable-cloud-monitoring \
    --enable-master-authorized-networks \
    --master-authorized-networks=$(curl -s http://pasteip.me/api/cli)/32

That creates a zonal cluster with three worker nodes, one per zone (--num-nodes is per zone). The control plane does not show up in your node list: GKE provides and manages the master for free and scales it as needed. The final flag restricts API-server access to your current public IP, fetched here via pasteip.me.

Then set up your environment for kubectl:

$ gcloud container clusters get-credentials <cluster_name> --zone us-east1-b
$ kubectl config current-context
$ kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
gke-<cluster_name>-default-pool-5522b96b-4n2s   Ready     <none>    21m       v1.9.7-gke.6
gke-<cluster_name>-default-pool-5522b96b-hb6m   Ready     <none>    19m       v1.9.7-gke.6
gke-<cluster_name>-default-pool-5522b96b-k09x   Ready     <none>    19m       v1.9.7-gke.6
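
If this was purely a throwaway test, you can tear the ad-hoc cluster down before recreating it with Terraform:

$ gcloud container clusters delete <cluster_name> --zone us-east1-b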

To be able to replicate your environment exactly, whether for disaster recovery or to audit problems, getting used to deploying infrastructure as code pays off in the long run. Let’s do the same thing in Terraform.

Terraform

We’ll be using an existing Terraform module for GKE. Your main.tf will look like this:

provider "google" {
  credentials = "${file("credentials.json")}"
  project = "yourproject"
  region = "us-east1"
}

provider "google-beta" {
  credentials = "${file("credentials.json")}"
  project = "yourproject"
  region = "us-east1"
}

module "gke-cluster" {
  source = "google-terraform-modules/kubernetes-engine/google"
  version = "1.19.1"

  general = {
    name = "cluster-name"
    env  = "prod"
    zone = "us-east1-b"
  }

  master = {
    username = "admin"
    password = "${random_string.password.result}"
  }

  default_node_pool = {
    node_count   = 3
    machine_type = "n1-standard-1"
    remove       = false
  }

  node_additional_zones = [
    "us-east1-c",
    "us-east1-d"
  ]
}

resource "random_string" "password" {
  length  = 16
  special = true
  number  = true
  lower   = true
  upper   = true
}

Go to the Credentials section of the GCP console and create a service account key (the default Compute Engine service account is fine for testing). Save the credentials.json to disk and point the file() calls in the provider blocks at its location.
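
If you would rather stay on the command line, gcloud can generate an equivalent key; the service account email below is a placeholder for whichever account you pick:

$ gcloud iam service-accounts keys create credentials.json \
    --iam-account <service-account>@yourproject.iam.gserviceaccount.com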

Run terraform init to download the provider plugins and the GKE module. Then run terraform plan in the directory containing main.tf and, if you are satisfied with the proposed changes, run terraform apply.
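
Put together, the workflow from that directory looks like this:

$ terraform init    # downloads the google providers and the GKE module
$ terraform plan    # review the proposed changes
$ terraform apply   # build the cluster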

The Core Components

Installing Helm

Helm is a package manager of sorts for Kubernetes applications: a chart bundles the manifests for an application’s resources into a set of templates driven by a single values file. The public collection of Helm charts is constantly growing, so more than likely an application you need already has a chart you can use or adapt.

First download the Helm binary, then create a service account for Tiller (the in-cluster component Helm uses to orchestrate releases) and bind it to the cluster-admin role:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Then install Helm into your k8s cluster:

$ helm init --service-account=tiller
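
Once Tiller is up you can sanity-check the installation: helm version should report matching client and server versions, and helm search should find charts in the default stable repository, for example:

$ helm version
$ helm search grafana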

Optionally, install the helm diff plugin, which will show us diffs of proposed changes to our charts:

$ helm plugin install https://github.com/databus23/helm-diff --version master
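
With the plugin in place, helm diff upgrade shows what an upgrade would change before you apply it. The release, chart, and values file here are just placeholders:

$ helm diff upgrade <release_name> <chart> -f values.yaml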

Installing Weave Flux

Although you can deploy containers by calling helm install on the command line, there are benefits to representing your Kubernetes cluster as version-controlled code in a repository. It gives you an audit trail for troubleshooting and makes it possible to roll back changes. Duplicating or recovering your entire cluster then becomes a matter of pointing an operator at your git repo and letting it handle the deployments.

We’ll use a tool called Weave Flux that leverages Helm charts and a git repository in a method known as GitOps. First create a new repository that will host your manifests and then install flux with:

$ kubectl create namespace flux
$ helm repo add weaveworks https://weaveworks.github.io/flux
$ helm install --name flux \
    --set rbac.create=true \
    --set helmOperator.create=true \
    --set git.url=git@github.com:<youruser>/<newrepo> \
    --namespace flux weaveworks/flux

Change <youruser> and <newrepo> to your GitHub account and the repository you just created, then follow the deployment until it reports as ready with kubectl -n flux get deployment/flux -w.

At startup, Flux generates an SSH key and logs the public half. Retrieve the public key with:

$ kubectl -n flux logs deployment/flux | grep identity.pub | cut -d '"' -f2

In order to sync your cluster state with your repository you need to copy the public key and add it as a deploy key with write access to your GitHub repository.

Open GitHub, navigate to your repo, go to Settings > Deploy keys, click Add deploy key, check Allow write access, paste the Flux public key, and click Add key.

From here onward, all the remaining components will be checked into git as HelmRelease resource definitions, with the Flux Helm Operator monitoring the repository for changes and deploying charts.

An additional exercise for the reader would be to install Helm and Flux through the Terraform config as well, since Terraform now has a Helm provider that can deploy Tiller and charts. This would further automate the bootstrapping; a rough sketch follows.
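
A minimal sketch of that approach, assuming the tiller service account and cluster role binding created earlier already exist (the argument names reflect the pre-1.0 Helm provider, so double-check them against the provider docs):

provider "helm" {
  # Uses your current kubeconfig context by default and installs Tiller
  # under the service account created earlier.
  install_tiller  = true
  service_account = "tiller"
}

resource "helm_release" "flux" {
  name       = "flux"
  namespace  = "flux"
  repository = "https://weaveworks.github.io/flux"
  chart      = "flux"

  set {
    name  = "rbac.create"
    value = "true"
  }

  set {
    name  = "helmOperator.create"
    value = "true"
  }

  set {
    name  = "git.url"
    value = "git@github.com:<youruser>/<newrepo>"
  }
}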

Structuring Your Repository

This is largely a matter of personal preference: Flux inspects every YAML file in the repository for recognizable Kubernetes definitions, and the Helm operator acts on the HelmRelease resources among them. At the top level, I create directories for each resource type. At the second level, I define all the namespaces that the cluster will be using. Below that, I place the actual manifests:

$ tree
.
├── namespaces
│   ├── build.yaml
│   ├── core.yaml
│   ├── dev.yaml
│   ├── prod.yaml
│   └── stage.yaml
├── releases
│   ├── build
│   │   └── drone.yaml
│   ├── core
│   │   ├── cert-manager.yaml
│   │   ├── external-dns.yaml
│   │   ├── grafana.yaml
│   │   ├── nginx-ingress.yaml
│   │   ├── oauth2-proxy.yaml
│   │   ├── prometheus.yaml
│   │   ├── sealed-secrets.yaml
│   │   └── weave-scope.yaml
│   ├── dev
│   │   ├── gcs-proxy.yaml
│   │   └── redis-ha.yaml
│   ├── prod
│   └── stage
├── resources
│   ├── certificates
│   │   ├── production.issuer.yaml
│   │   └── staging.issuer.yaml
│   ├── crds
│   │   ├── dev
│   │   ├── prod
│   │   └── test
│   └── credentials
│       ├── build
│       │   └── drone-github-creds.yaml
│       ├── core
│       │   ├── dns-credentials.yaml
│       │   ├── grafana-admin-creds.yaml
│       │   └── oauth2-proxy-creds.yaml
└── sealed.crt

Which namespaces you define is up to you; the manifests themselves are in exactly the format you would pass to kubectl apply:

---
apiVersion: v1
kind: Namespace
metadata:
  name: core
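
Flux will pick these up from the repository, but while you are still wiring things together nothing stops you from bootstrapping them by hand:

$ kubectl apply -f namespaces/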

Those with a keen eye may have spotted the credentials directories and had a red-flag moment, and for good reason: under no circumstances should plain Kubernetes secrets be checked into a repository. Ideally you would use Google’s Cloud KMS or a production Vault cluster. As a day-one solution for smaller projects, however, Bitnami created an operator called sealed-secrets: sensitive data is encrypted with a keypair into a SealedSecret resource definition before it is checked into the repository.

The sealed-secrets controller holds the private key; it decrypts each SealedSecret applied to the cluster and creates a regular Kubernetes Secret for deployments to use.

Deploying SealedSecrets

In order to create sealed secrets from a client workstation, install the kubeseal binary with:

$ sudo curl -sL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.8.3/kubeseal-linux-amd64 -o /usr/local/bin/kubeseal
$ sudo chmod 0755 /usr/local/bin/kubeseal

Then add the following HelmRelease under your releases directory:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: sealed-secrets
  namespace: core
  annotations:
    flux.weave.works/automated: "false"
spec:
  releaseName: sealed-secrets
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: sealed-secrets
    version: 1.0.1
  values:
    image:
      repository: quay.io/bitnami/sealed-secrets-controller
      tag: v0.7.0

Then retrieve the public key that the operator generated:

$ kubeseal --fetch-cert \
    --controller-name=sealed-secrets \
    --controller-namespace=core \
    > sealed.crt

Creating a SealedSecret

The first step in creating a sealed secret is the same as for a regular Kubernetes secret, except that --dry-run keeps it from being applied to the cluster. Then use kubeseal with the public key to seal it:

$ kubectl -n core create secret generic app-credentials \
    --from-literal=clientSecret=639ca5a2f4b --dry-run -o json > /tmp/app-credentials.json
$ kubeseal --format=yaml --cert=sealed.crt < /tmp/app-credentials.json > credentials/app-credentials.yaml
$ rm -f /tmp/app-credentials.json

You can then commit the sealed secret to the repository: Flux will apply it, and the sealed-secrets controller will decrypt it and create a standard Kubernetes Secret.
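
Once unsealed, the resulting Secret behaves like any other. As an illustration (the names simply mirror the hypothetical app-credentials example above), a pod in the core namespace could pull the value into an environment variable like so:

apiVersion: v1
kind: Pod
metadata:
  name: app-credentials-demo
  namespace: core
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "echo $CLIENT_SECRET && sleep 3600"]
      env:
        - name: CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: app-credentials
              key: clientSecret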

Conclusion

In the next installment we’ll deploy the core services that will let us build the cluster out further.