Using Terraform and GitHub Actions to orchestrate our infrastructure and deployments, we will create a two-node Kubernetes cluster with a load balancer and deploy a simple Hello World app.

What you’ll need

  • To have completed part 1 of this series and to have access to the accounts used there
  • A domain name and access to the DNS settings

Kubernetes on DigitalOcean is very reasonably priced, as you only pay for the compute you assign to your cluster; the control plane is managed by DigitalOcean and is free. In this tutorial we will deploy two nodes and one load balancer, so the cost will be $30/month at current prices. If you sign up using this link, you will receive $100 of credit, which will comfortably cover the cost of this tutorial.

Create the Kubernetes cluster

To create a Kubernetes cluster, we can use a digitalocean_kubernetes_cluster resource. Edit main.tf and append the following to the end, keeping the formatting exactly as shown and leaving a single blank line after the previous declaration. Terraform is very particular about formatting and whitespace - see here for details on the style conventions.

This will provision a two-node cluster using the smallest node size available (1 vCPU, 2GB RAM).

resource "digitalocean_kubernetes_cluster" "mycluster" {
  name   = "mycluster"
  region = "lon1"
  # Get latest version from https://docs.digitalocean.com/products/kubernetes/changelog/
  version = "1.20.7-do.0"

  node_pool {
    name       = "nodepool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }
}
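
If you have the Terraform CLI installed locally, you can let it take care of the canonical formatting for you before committing:

# Rewrite the .tf files in the current directory into the canonical style
terraform fmt

# Or only report files whose formatting differs, without changing them
terraform fmt -check -diff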

When you commit, you should see your action being executed in the GitHub Actions tab, and eventually a Kubernetes cluster will be provisioned in your DigitalOcean account.
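
Once the run completes, you can optionally confirm the cluster from your own machine; this assumes you have doctl and kubectl installed and doctl authenticated against your account:

# "mycluster" should be listed as running
doctl kubernetes cluster list

# Fetch a kubeconfig for the new cluster and check that both nodes are Ready
doctl kubernetes cluster kubeconfig save mycluster
kubectl get nodes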

Create ingress controller

Next, we will install the NGINX ingress controller. This will create a load balancer with a public IP address and will allow us to direct traffic to our in-cluster services.

Copy the following to the end of main.tf, once again leaving a single blank line after the previous declaration.

provider "helm" {
  kubernetes {
    host                   = digitalocean_kubernetes_cluster.mycluster.endpoint
    token                  = digitalocean_kubernetes_cluster.mycluster.kube_config[0].token
    cluster_ca_certificate = base64decode(digitalocean_kubernetes_cluster.mycluster.kube_config[0].cluster_ca_certificate)
  }
}

resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
}

When you commit, the action should run and eventually complete. If you look in your DigitalOcean account, you will see that a load balancer has been created.

If you browse to the IP address associated with the load balancer, you should now see the standard NGINX 404 page.
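
If you are not sure which IP address the load balancer was given, you can read it from the ingress controller's service; the service name below assumes the chart's default naming for a release called ingress-nginx:

# The EXTERNAL-IP column is the load balancer's public IP
kubectl get svc ingress-nginx-controller -n ingress-nginx

# Or list load balancers directly in your DigitalOcean account
doctl compute load-balancer list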

Create GitHub Actions workflow for cluster workload

Create a new personal access token (PAT) in DigitalOcean so that GitHub Actions can automate workload deployments.


Create a secret named DIGITALOCEAN_ACCESS_TOKEN in your GitHub repository containing the DigitalOcean token you just created.

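If you prefer the command line to the web UI, the GitHub CLI can create the secret for you; this is optional and assumes gh is installed and authenticated against your repository:

# Prompts for the secret value - paste in the DigitalOcean token
gh secret set DIGITALOCEAN_ACCESS_TOKEN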

Set up the domain

Create a DNS record for your test hostname (e.g. test.mydomain.com) in your domain registrar or DNS provider’s portal, pointing it at the load balancer’s public IP address.
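
DNS changes can take a while to propagate; you can check whether your record is resolving yet (replace the hostname with your own):

# Should eventually print the load balancer's public IP
dig +short test.mydomain.com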

Deploy the app

Create a new file called hello-world.yaml and commit it to the root of your repository.

You must replace “__YOUR_DOMAIN__” with your domain name (e.g. test.mydomain.com).

apiVersion: v1
kind: Namespace
metadata:
  name: hello-world
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: hello-world
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hashicorp/http-echo
        args:
        - "-text=Hi there!"
        ports:
        - containerPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: hello-world
spec:
  # Must match the IngressClass created by the ingress-nginx chart ("nginx" by default)
  ingressClassName: nginx
  rules:
  - host: __YOUR_DOMAIN__
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
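
If you have kubectl pointed at the cluster (for example via the doctl kubeconfig command used earlier), you can optionally check the manifest for basic mistakes before committing:

# Client-side dry run: validates the manifest without creating anything
kubectl apply --dry-run=client -f hello-world.yaml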

Edit the file .github/workflows/terraform.yml and add these steps to the end:

    - name: Install doctl
      uses: digitalocean/action-doctl@v2
      with:
        token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

    - name: Save DigitalOcean kubeconfig with short-lived credentials
      run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 mycluster

    - name: Deploy to DigitalOcean Kubernetes
      run: kubectl apply -f $GITHUB_WORKSPACE/hello-world.yaml

When you commit, your action should run and the test workload should be deployed to the cluster.

If all has gone well, after a short while you should see the text “Hi there!” when you browse to your test domain.
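
You can also check the rollout from the command line; the names below assume the manifest above:

# Both replicas should show STATUS Running and READY 1/1
kubectl get pods -n hello-world

# Expect "Hi there!" in the response body (use your own hostname)
curl http://test.mydomain.com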

Troubleshooting

If you do not see “Hi there!”, there could be a number of reasons. Firstly, DNS can take a while to propagate. If the DNS does resolve and the workflow completed, the best place to start is the Kubernetes dashboard, available via the DigitalOcean portal. From there you should be able to see any errors relating to your deployment.
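
If you would rather troubleshoot from the command line than the dashboard, a few kubectl commands usually narrow things down; the names assume the manifest and chart defaults used above:

# Inspect the ingress and recent events in the app namespace
kubectl describe ingress hello-world-ingress -n hello-world
kubectl get events -n hello-world

# Check the ingress controller's logs for routing errors
kubectl logs deploy/ingress-nginx-controller -n ingress-nginx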

Tear the infrastructure down

To ensure that you are not billed for your infrastructure for longer than necessary, reset your repository to the state it was in when we started and then commit. The action will run and the cluster will be torn down. Make sure that the action completes, and check your DigitalOcean account to avoid any nasty surprises.

If you wish to be able to deploy again in order to do part 3 of this series, create a branch or tag in git before you reset your repository so you can easily get back to a working state by running your workflow again.
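
For example, assuming you want to come back to this exact state later:

# Record the current working state before resetting the repository
git tag part-2-complete
git push origin part-2-complete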

Alternatively, in Terraform Cloud, under Settings -> “Destruction and Deletion”, you can queue a destroy plan, which will destroy all resources managed by Terraform in that workspace.