Spinnaker by Example: Part 1

John Tucker
Published in codeburst · 9 min read · Nov 6, 2020


A step-by-step walk-through of installing and using Spinnaker to deploy applications to a Google Kubernetes Engine (GKE) cluster.

First, what is Spinnaker?

Spinnaker is an open-source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence.

— Spinnaker — Concepts

Why should you care?

This step-by-step walk-through closely follows the official Install and Configure Spinnaker documentation; the difference is that it provides much more detail for a particular scenario, that of deploying applications to a GKE cluster.

The final set of configuration files provided throughout this series of articles is available for download.

Prerequisites

If you wish to follow along, you will need the following:

  • A Google Cloud Platform (GCP) project
  • A workstation; preferably macOS or Linux (I used Linux in this article)
  • Google Cloud SDK, including gcloud CLI, installed on a workstation (I used version 316.0.0)
  • gcloud CLI initialized against the GCP project, using a user with the Project/Owner role, i.e., roles/owner
  • Terraform CLI 0.13.5 or a later 0.13.x patch release (I used version 0.13.5)

Install Halyard

One of the challenges of using Spinnaker is navigating its myriad components; the first is Halyard.

Halyard is a command-line administration tool that manages the lifecycle of your Spinnaker deployment, including writing & validating your deployment’s configuration, deploying each of Spinnaker’s microservices, and updating the deployment.

— Spinnaker — Install Halyard

Here you should be thinking Halyard is to Spinnaker as kubeadm is to Kubernetes; i.e., it is a CLI tool for installing and managing other software components. The documentation recommends that you run Halyard on a machine with 12GB of RAM. Wow, that is a lot for most any software, much less for a CLI administration tool. Also, as we will observe, Halyard maintains state on the machine it runs on. The importance here is that we will have to consider how to persist and share this state so that multiple individuals can use Halyard.

As a first pass, we will simply install and use Halyard on a Google Compute Engine (GCE) instance. We use Terraform to automate this process.

We create an empty folder with the following folders/files in it:

gcp/terraform.tfvars

Update this file with your GCP project and preferred region/zones; provide three zones (the additional two zones will be needed later for our GKE cluster).
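While the downloadable configuration is the authoritative version, a minimal sketch of gcp/terraform.tfvars might look like the following (the variable names and example values here are assumptions):

project = "my-gcp-project"
region  = "us-central1"
zones   = ["us-central1-a", "us-central1-b", "us-central1-c"]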

gcp/variables.tf

gcp/main.tf

gcp/modules/halyard/variables.tf

gcp/modules/halyard/main.tf

Things to observe:

  • As a best practice, we create a GCP service account with minimal access and associate it with our GCE instance
  • We install the Halyard, gcloud, and kubectl CLIs using the instance’s startup script (see the sketch below)
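To make these observations concrete, here is a minimal sketch of gcp/main.tf and gcp/modules/halyard/main.tf, assuming the inputs are declared in the corresponding variables.tf files; the machine type, image, and startup-script details are assumptions, and the downloadable configuration remains the authoritative version:

# gcp/main.tf (sketch)
provider "google" {
  project = var.project
  region  = var.region
}

module "halyard" {
  source = "./modules/halyard"
  zone   = var.zones[0]
}

# gcp/modules/halyard/main.tf (sketch)
resource "google_service_account" "halyard" {
  account_id   = "halyard"
  display_name = "Halyard"
}

resource "google_compute_instance" "halyard" {
  name         = "halyard"
  machine_type = "n1-standard-4" # 15 GB RAM; Halyard recommends at least 12 GB
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  # minimal-access service account attached to the instance
  service_account {
    email  = google_service_account.halyard.email
    scopes = ["cloud-platform"]
  }

  # startup script: install kubectl and stage the Halyard installer at
  # /InstallHalyard.sh (we run it manually with sudo below)
  metadata_startup_script = <<-EOT
    #!/bin/bash
    apt-get update
    apt-get install -y kubectl
    curl -o /InstallHalyard.sh \
      https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
  EOT
}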

From the gcp folder, we initialize the Terraform project by executing:

$ terraform init

We then create resources by executing:

$ terraform apply

We can now log in to the GCE instance using:

$ gcloud compute ssh halyard

Now we complete the installation, answering a couple of straightforward questions, by executing:

$ sudo bash /InstallHalyard.sh

Create Google Kubernetes Engine (GKE) Cluster

Spinnaker supports deploying applications to a variety of computing architectures, e.g., AWS EC2, GCP GCE, and Kubernetes. In this example, we will be using Kubernetes provided by GKE.

Here we create a GKE cluster by first updating gcp/terraform.tfvars; supplying appropriate values for the region you selected earlier.

Please note: Halyard’s startup script installed kubectl version 1.17.x, which is compatible with 1.16.x, 1.17.x, and 1.18.x clusters.

We create the following folders/files:

gcp/modules/cluster/variables.tf

gcp/modules/cluster/main.tf

Things to observe:

  • The initial_node_count and remove_default_node_pool arguments are used to delete the default node pool, allowing us to separately manage a node pool resource
  • The ip_allocation_policy argument configures a recommended VPC-native cluster; the private CIDR blocks were chosen to not overlap each other or the blocks in GCP’s default VPC
  • The master_auth argument disables any authentication other than OAuth authentication
  • The workload_identity_config and workload_metadata_config arguments enable Workload Identity
  • The auto_upgrade and upgrade_settings arguments allow us to manually control upgrades
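Pulling these observations together, a minimal sketch of gcp/modules/cluster/main.tf might look like the following (argument names reflect the Google provider 3.x releases current at the time; the CIDR blocks, machine type, and resource names are assumptions, and the inputs are assumed to be declared in variables.tf):

# gcp/modules/cluster/main.tf (sketch)
resource "google_container_cluster" "cluster" {
  name           = "my-cluster"
  location       = var.region
  node_locations = var.zones

  # delete the default node pool so the node pool below can be managed separately
  remove_default_node_pool = true
  initial_node_count       = 1

  # VPC-native cluster; CIDR blocks chosen not to overlap each other or the default VPC
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "10.8.0.0/14"
    services_ipv4_cidr_block = "10.12.0.0/20"
  }

  # disable client-certificate authentication, leaving only OAuth
  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  # enable Workload Identity (identity_namespace is the 3.x-era argument name)
  workload_identity_config {
    identity_namespace = "${var.project}.svc.id.goog"
  }
}

resource "google_container_node_pool" "node_pool" {
  name       = "my-node-pool"
  location   = var.region
  cluster    = google_container_cluster.cluster.name
  node_count = 1 # one node per zone in a regional cluster

  # upgrades are controlled manually
  management {
    auto_upgrade = false
  }

  upgrade_settings {
    max_surge       = 1
    max_unavailable = 0
  }

  node_config {
    machine_type = "e2-standard-2"

    # enable the GKE metadata server for Workload Identity (3.x-era argument name)
    workload_metadata_config {
      node_metadata = "GKE_METADATA_SERVER"
    }
  }
}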

We then update the file gcp/main.tf by adding the cluster module block:
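A sketch of that module block, assuming the same variable names as above:

module "cluster" {
  source = "./modules/cluster"

  project = var.project
  region  = var.region
  zones   = var.zones
}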

As before, from the gcp folder we execute:

$ terraform init

And then create the cluster with:

$ terraform apply

We get credentials for our cluster by executing the following, supplying the same GCP region that you supplied earlier:

$ gcloud container clusters get-credentials my-cluster --region [REPLACE]

We can indeed confirm that we have access to the cluster by executing:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-cluster-my-node-pool-29a2b570-dh6x Ready <none> 10m v1.16.13-gke.401
gke-my-cluster-my-node-pool-55a4e0e9-lfzh Ready <none> 10m v1.16.13-gke.401
gke-my-cluster-my-node-pool-74048b9c-dd7f Ready <none> 10m v1.16.13-gke.401

Generate Kubernetes Cluster Credentials

While we have Kubernetes cluster credentials associated with ourselves, users with the Project/Owner role, here we want to generate a separate set of credentials. These credentials will be used later to configure Spinnaker to be able to manage resources in this cluster.

We create the following files/folders:

k8s/main.tf

k8s/modules/spinnaker/main.tf

Please note: Here we took the easy path and simply granted the service account full access to the cluster. Spinnaker provides instructions if you wish to limit the service account’s access to the absolute minimum.
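A minimal sketch of k8s/main.tf and k8s/modules/spinnaker/main.tf, assuming the Terraform kubernetes provider reads the kubeconfig we just obtained; the namespace and resource names match those used in the commands below, but the rest is an assumption:

# k8s/main.tf (sketch)
provider "kubernetes" {
  config_path = "~/.kube/config"
}

module "spinnaker" {
  source = "./modules/spinnaker"
}

# k8s/modules/spinnaker/main.tf (sketch)
resource "kubernetes_namespace" "spinnaker" {
  metadata {
    name = "spinnaker"
  }
}

resource "kubernetes_service_account" "spinnaker" {
  metadata {
    name      = "spinnaker-service-account"
    namespace = kubernetes_namespace.spinnaker.metadata[0].name
  }
}

# the "easy path": bind the service account to the built-in cluster-admin role
resource "kubernetes_cluster_role_binding" "spinnaker" {
  metadata {
    name = "spinnaker-service-account"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.spinnaker.metadata[0].name
    namespace = kubernetes_namespace.spinnaker.metadata[0].name
  }
}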

In the k8s folder, we execute:

$ terraform init

and execute the following to create the Kubernetes service account:

$ terraform apply

We extract the authentication token for this service account by executing:

$ kubectl get secret \
$(kubectl get serviceaccount spinnaker-service-account \
-n spinnaker \
-o jsonpath='{.secrets[0].name}') \
-n spinnaker \
-o jsonpath='{.data.token}' | base64 --decode

We now extract the cluster configuration, i.e., the certificate-authority-data and server values, from the kubectl configuration file located at .kube/config in your user account folder.

With the authentication token, certificate-authority-data, and server values in hand, we log in to the halyard GCE instance:

$ gcloud compute ssh halyard

We convert the certificate-authority-data value into a certificate file by executing the following, replacing [REPLACE] as appropriate:

$ echo [REPLACE] | base64 -d > ca.crt

We create our cluster configuration by executing:

$ kubectl config set-cluster my-cluster \
--server=[REPLACE] \
--certificate-authority=ca.crt \
--embed-certs=true

Because we embedded the certificate, we can now delete the ca.crt file.

We create our user configuration by executing the following, replacing [REPLACE] with the authentication token we extracted earlier:

$ kubectl config set-credentials spinnaker-service-account \
--token=[REPLACE]

We create our context by executing:

$ kubectl config set-context spinnaker-service-account@my-cluster \
--user=spinnaker-service-account \
--cluster=my-cluster

We use our context by executing:

$ kubectl config use-context spinnaker-service-account@my-cluster

Finally, we can confirm that we got everything correct by executing the following, which should show a list of the cluster’s nodes:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-cluster-my-node-pool-29a2b570-dh6x Ready <none> 69m v1.16.13-gke.401
gke-my-cluster-my-node-pool-55a4e0e9-lfzh Ready <none> 69m v1.16.13-gke.401
gke-my-cluster-my-node-pool-74048b9c-dd7f Ready <none> 69m v1.16.13-gke.401

Choose Cloud Providers

Now that we have installed Halyard, created a GKE cluster, and generated Kubernetes credentials, we can enable Spinnaker to deploy applications to the GKE cluster. Technically, we are configuring Halyard at this point for when we later use it to install Spinnaker; definitely confusing.

In Spinnaker, Providers are integrations to the Cloud platforms you deploy your applications to.

In this section, you’ll register credentials for your Cloud platforms. Those credentials are known as Accounts in Spinnaker, and Spinnaker deploys your applications via those accounts.

— Spinnaker — Choose Cloud Provider

Continuing from the halyard GCE instance, we enable Halyard with the kubernetes provider:

$ hal config provider kubernetes enable

We finally enable Spinnaker (technically Halyard at this point) to deploy applications to the GKE cluster:

$ hal config provider kubernetes account add spinnaker-service-account@my-cluster \
--context spinnaker-service-account@my-cluster

Choose Your Environment

As indicated earlier, we have not installed Spinnaker yet; we have been configuring Halyard. Here we continue to configure Halyard; this time we specify where we are going to install Spinnaker.

The preferred type of installation is distributed, i.e., onto a Kubernetes cluster. The good news is that we have already configured Halyard with the GKE cluster, so we can use it as follows:

$ hal config deploy edit \
--type distributed \
--account-name spinnaker-service-account@my-cluster

It is important to note that one could have created a separate Kubernetes cluster on which to install Spinnaker. Here, however, we are going to install Spinnaker onto the same cluster to which we want to deploy our applications.

Choose a Storage Service

There is one more thing we need to configure Halyard with before we install Spinnaker: the location Spinnaker will use to store persistent data. As we are using a GCP project, the natural choice is Google Cloud Storage (GCS). The simple approach to configuring Halyard with GCS is to have it automatically create the GCS bucket.

To this end, we first need to create a GCP service account that can create GCS buckets. Back in our Terraform configuration on our workstation, we create the following files/folders:

Addendum 11/7/20: Changed the account_id from spinnaker-service-account to spinnaker; just bothered me that I used the type of object as part of its name.

gcp/modules/storage/main.tf
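A minimal sketch of gcp/modules/storage/main.tf; the roles/storage.admin grant is an assumption about how bucket-creation rights are conferred, and the downloadable configuration remains authoritative:

# gcp/modules/storage/main.tf (sketch)
variable "project" {}

resource "google_service_account" "spinnaker" {
  account_id   = "spinnaker"
  display_name = "Spinnaker"
}

# allow the service account to create and manage GCS buckets
resource "google_project_iam_member" "spinnaker_storage_admin" {
  project = var.project
  role    = "roles/storage.admin"
  member  = "serviceAccount:${google_service_account.spinnaker.email}"
}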

And we update gcp/main.tf, adding the storage module block:
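A sketch of that module block, again assuming the variable names used above:

module "storage" {
  source  = "./modules/storage"
  project = var.project
}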

From the gcp folder, we execute:

$ terraform init

And then create the service account:

$ terraform apply

We obtain the service account’s email address by listing the service accounts:

$ gcloud iam service-accounts list

We create an authentication key for the service account by executing the following, using the service account’s email address:

$ gcloud iam service-accounts keys create account.json \
--iam-account [REPLACE]

We copy the contents of the account.json file and log in to the halyard GCE instance:

$ gcloud compute ssh halyard

We then create a file, account.json, with the copied content.

Let us verify that we are properly configured by authenticating the gcloud CLI with the service account, providing the service account’s email address:

$ gcloud auth activate-service-account [REPLACE] \
--key-file=account.json

We then run the following command, which lists the project’s GCS buckets:

$ gsutil ls

If you, like me, don’t have any buckets, the response will be empty. The important thing, however, is that this command should return without error.

Now we configure Halyard to use a GCS bucket, replacing [REPLACE] with the bucket location, e.g., us:

$ hal config storage gcs edit \
--project $(gcloud config get-value project) \
--bucket-location [REPLACE] \
--json-path account.json

Addendum 11/7/20: At this point, I was tempted to delete the account.json file, but later found out that the previous command configures Halyard to use this file, not to copy its contents as I first imagined.

And we set GCS as the storage type by executing:

$ hal config storage edit --type gcs

Deploy and Connect

Finally, we are ready to install Spinnaker, first identifying a version (I chose 1.23.1):

$ hal version list

We then set the version using:

$ hal config version edit \
--version [REPLACE]

We install Spinnaker:

$ hal deploy apply

After a bit of time, you can confirm that Spinnaker is installed and ready (make sure all the pods' containers are in a ready state):

$ kubectl get all -n spinnaker
NAME READY STATUS RESTARTS AGE
pod/spin-clouddriver-5496c8d7ff-vb7wz 1/1 Running 0 23m
pod/spin-deck-57f9798c48-2cf8j 1/1 Running 0 23m
pod/spin-echo-b5b54f68-wf8zq 1/1 Running 0 23m
pod/spin-front50-6594c9484b-zm4mz 1/1 Running 0 23m
pod/spin-gate-5bfffb9b59-xqknd 1/1 Running 0 23m
pod/spin-orca-6cffb9cb68-g6wmj 1/1 Running 0 23m
pod/spin-redis-6b58966c8b-wzwzd 1/1 Running 0 23m
pod/spin-rosco-567cfb55d-kqbnl 1/1 Running 0 23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/spin-clouddriver ClusterIP 192.168.254.118 <none> 7002/TCP 23m
service/spin-deck ClusterIP 192.168.27.224 <none> 9000/TCP 23m
service/spin-echo ClusterIP 192.168.117.44 <none> 8089/TCP 23m
service/spin-front50 ClusterIP 192.168.84.226 <none> 8080/TCP 23m
service/spin-gate ClusterIP 192.168.22.122 <none> 8084/TCP 23m
service/spin-orca ClusterIP 192.168.45.184 <none> 8083/TCP 23m
service/spin-redis ClusterIP 192.168.32.178 <none> 6379/TCP 23m
service/spin-rosco ClusterIP 192.168.149.113 <none> 8087/TCP 23m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/spin-clouddriver 1/1 1 1 23m
deployment.apps/spin-deck 1/1 1 1 23m
deployment.apps/spin-echo 1/1 1 1 23m
deployment.apps/spin-front50 1/1 1 1 23m
deployment.apps/spin-gate 1/1 1 1 23m
deployment.apps/spin-orca 1/1 1 1 23m
deployment.apps/spin-redis 1/1 1 1 23m
deployment.apps/spin-rosco 1/1 1 1 23m
NAME DESIRED CURRENT READY AGE
replicaset.apps/spin-clouddriver-5496c8d7ff 1 1 1 23m
replicaset.apps/spin-deck-57f9798c48 1 1 1 23m
replicaset.apps/spin-echo-b5b54f68 1 1 1 23m
replicaset.apps/spin-front50-6594c9484b 1 1 1 23m
replicaset.apps/spin-gate-5bfffb9b59 1 1 1 23m
replicaset.apps/spin-orca-6cffb9cb68 1 1 1 23m
replicaset.apps/spin-redis-6b58966c8b 1 1 1 23m
replicaset.apps/spin-rosco-567cfb55d 1 1 1 23m

The last step is to load the Spinnaker UI; we re-login to the halyard GCE instance (mapping ports this time):

$ gcloud compute ssh halyard \
-- -L 9000:localhost:9000 -L 8084:localhost:8084

From the halyard GCE instance, we execute:

$ hal deploy connect

Finally, from our workstation, we navigate to http://localhost:9000 to show the Spinnaker UI.

Next Steps

In the next article, Spinnaker by Example: Part 2, we will configure Spinnaker.
