A simple example of using Kubernetes watches.
Recently I ran into a situation where I needed to maintain a list of endpoints — specifically the IP addresses of a Kubernetes service. Because the service was backed by a deployment with a horizontal pod autoscaler, the list of endpoints (pods) was updated regularly and unpredictably.
One approach to maintaining this list of endpoints is to regularly — say every minute — query the endpoints URL. To illustrate this, we start a kubectl proxy on our workstation with:
$ kubectl proxy --port 8080
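The polling approach then boils down to GETting the Endpoints object through the proxy and extracting the pod IP addresses. A minimal Python sketch of that extraction; the service name my-service and the default namespace are hypothetical, and the parsing is exercised against a canned response so it runs without a cluster:

```python
import json
from urllib.request import urlopen

# Endpoints for a (hypothetical) Service "my-service" in namespace
# "default", reachable through the local kubectl proxy.
ENDPOINTS_URL = "http://localhost:8080/api/v1/namespaces/default/endpoints/my-service"


def pod_ips(endpoints: dict) -> list[str]:
    """Extract pod IP addresses from a v1 Endpoints object."""
    return [
        address["ip"]
        for subset in endpoints.get("subsets", [])
        for address in subset.get("addresses", [])
    ]


def poll_once() -> list[str]:
    """One polling iteration: fetch and parse the Endpoints object."""
    with urlopen(ENDPOINTS_URL) as resp:
        return pod_ips(json.load(resp))


if __name__ == "__main__":
    # Canned response in the shape the API server returns, so the
    # parsing can be demonstrated without a running cluster.
    sample = {"subsets": [{"addresses": [{"ip": "10.0.0.5"}, {"ip": "10.0.0.6"}]}]}
    print(pod_ips(sample))  # ['10.0.0.5', '10.0.0.6']
```

Calling `poll_once` every minute (e.g., in a loop with `time.sleep(60)`) gives the naive solution; the watch-based approach replaces this polling entirely.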
More on Hiera and a quick bit on files/templates.
In the previous article, we parameterized a class, my_parameters::my_class, in a module. The obvious problem with this approach is that this class is an implementation detail of the module and as such should not be exposed outside of the module, i.e., we had to supply the parameter using Hiera as
my_parameters::my_class::greeting: Hola Mundo.
Here we will rather parameterize the module and use that parameter in the class. Let us first create a parameterized module, my_parameters_refactor, with the classes my_class and the usual main (my_parameters_refactor) class. …
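With the module itself parameterized, the Hiera data targets the module's main class rather than the internal one. A sketch of what the entry might look like, assuming the class names above (file location depends on your Hiera hierarchy):

```yaml
# data/common.yaml — the parameter is supplied on the module's main
# class, not on the internal my_class implementation detail.
my_parameters_refactor::greeting: 'Hola Mundo'
```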
Exploring variables, facts, and parameters.
So far we have only been using hard-coded values, e.g., the string Hello World. Here we introduce Puppet variables (which behave more like constants) to help eliminate duplicated hard-coded values.
Variables store values so that those values can be accessed in code later.
After you’ve assigned a variable a value, you cannot reassign it. …
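Both rules can be sketched in a few lines of Puppet (names here are illustrative): a variable is assigned once and referenced later; a second assignment is a compilation error.

```puppet
# $greeting is assigned once and then reused, eliminating the
# duplicated hard-coded string.
$greeting = 'Hello World'

notify { 'motd':
  message => $greeting,
}

# $greeting = 'Goodbye'   # compilation error: cannot reassign $greeting
```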
Learning Puppet Code through example.
Yes… Puppet, for many, is a fading technology as we have collectively moved towards immutable infrastructure. At the same time, there are plenty of legacy systems that still use it and I recently happened upon such a system.
While Puppet has been around for some time, I could not find a tutorial that resonated with me. The closest I could find was the paid course Puppet Quick Start by A Cloud Guru, and even that was a little rough. Thus, I was motivated to write this series of articles.
Another important observation is that — because Puppet has been around for some time — most of the tutorials I found used dated Puppet Code patterns. This series of articles is based on the most recent Puppet release (6.19) and uses up-to-date recommended patterns. …
Wrapping up our step-by-step walk-through by deploying an application to a GKE cluster.
This is part of a series (starting with Spinnaker by Example: Part 1) providing a step-by-step walk-through for installing and using Spinnaker to deploy applications to a Google Kubernetes Engine (GKE) cluster. The final set of configuration files provided throughout this series of articles is available for download. So far our focus has been on the installation and configuration aspects of Spinnaker. In this article, we wrap up this series with a simple example of deploying an application to a GKE cluster.
The bad news is that using Spinnaker is surprisingly confusing; I believe it is related to how flexible and powerful it is. The good news, however, is that Spinnaker provides a number of Codelabs that walk one through particular scenarios. The better news is that there is a Codelab, Kubernetes Source To Prod, that walks through our particular scenario of deploying an application to a Kubernetes cluster. This article is closely aligned with this Codelab; it just provides a bit more detail and is updated for the latest version of Spinnaker (1.23.1). …
Continuing our step-by-step walk-through with a couple of post-installation configuration items.
This is part of a series starting with my article Spinnaker by Example: Part 1, which provides a step-by-step walk-through of installing and using Spinnaker to deploy applications to a Google Kubernetes Engine (GKE) cluster. The final set of configuration files provided throughout this series of articles is available for download.
You’ve installed and configured Spinnaker, but there are still a few other things to set up:

- Configure your image bakery
- Enable security (authn/authz) for your Spinnaker installation
- Set up continuous integration

As you can see, there are a number of things we are instructed to set up.
A step-by-step walk-through of installing and using Spinnaker to deploy applications to a Google Kubernetes Engine (GKE) cluster.
First, what is Spinnaker?
Spinnaker is an open-source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence.
— Spinnaker — Concepts
Why should you care?
This step-by-step walk-through is closely aligned with the official Install and Configure Spinnaker documentation. The difference is that this step-by-step walk-through provides a lot more detail for a particular scenario, that of deploying applications to a GKE cluster.
The final set of configuration files provided throughout this series of articles is available for download. …
Exploring the open-source Kubernetes backup and restore solution through a concrete example on Google Kubernetes Engine (GKE).
Having mostly worked on smallish Kubernetes installations, I never understood the need for a Kubernetes backup and restore solution. Here’s why:
But what happens if one or both of these are not true? …
Exploring Google’s relatively new Config Sync Kubernetes multi-cluster management tool.
Config Sync allows cluster operators to manage single clusters, multi-tenant clusters, and multi-cluster Kubernetes deployments using files, called configs, stored in a Git repository.
Some configs are Kubernetes object manifests. Other configs are not object manifests, but instead provide information needed by Config Sync itself. You can write configs in YAML or JSON. Config Sync watches for updates to these files and applies changes to all relevant clusters automatically.
— Google — Config Sync Overview
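As a concrete example, a config can be as simple as an ordinary Namespace manifest committed to the repository. A sketch, assuming Config Sync's hierarchical repository layout (the path and namespace name are illustrative):

```yaml
# namespaces/team-a/namespace.yaml — an ordinary object manifest;
# Config Sync applies it to every cluster syncing from this repo.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```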
Please note: Prior to learning about Config Sync, my go-to approach to managing Kubernetes resources across one or more clusters was Terraform configurations stored in a Git repository and evaluated by a CI/CD pipeline. …
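For contrast, that Terraform approach looks something like this — a minimal, hypothetical resource using the Terraform Kubernetes provider, with provider configuration omitted:

```hcl
# A Namespace managed declaratively; the CI/CD pipeline runs
# `terraform apply` against this configuration on each commit.
resource "kubernetes_namespace" "team_a" {
  metadata {
    name = "team-a"
  }
}
```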
Exploring different ways of running Kubernetes on Google Cloud Platform (GCP).
Upon reading the Kubernetes documentation, you will observe that there are three fundamentally different ways of running Kubernetes on GCP.
Please note: There are other solutions that are outside of the scope of this article; RedHat OpenShift comes to mind.
Broadly speaking, the trade-off when choosing among them is between simplicity and flexibility. Interestingly, cost is not a significant factor, as running your own high-availability control plane on GCE is comparable in cost to GKE, which provides one for you. …