Argo CD for Cluster Administration by Example (Part 1)
Exploring Argo CD with a focus on Kubernetes cluster administration, i.e., deploying infrastructure workloads, for a fleet of clusters.
First what is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
Prerequisites
If you wish to follow along in this first part, you will need access to a single Kubernetes cluster (v1.25+); where we will learn Argo CD concepts in a simplified environment. In later parts, we will need additional clusters.
This series of articles was written using identical kind clusters (Kubernetes v1.30), each with a control plane and two worker nodes.
You will also need to have installed the kubectl CLI.
While we will exclusively interact with Argo CD via its CRDs throughout this series, we can still use the argocd CLI to visualize our work.
Single Cluster Installation
We consider the two types of Argo CD installations, multi-tenant and core, choosing the core installation as it aligns with our cluster administration scenario.
The multi-tenant installation is the most common way to install Argo CD. This type of installation is typically used to service multiple application developer teams in the organization and maintained by a platform team.
The Argo CD Core installation is primarily used to deploy Argo CD in headless mode. This type of installation is most suitable for cluster administrators who independently use Argo CD and don’t need multi-tenancy features. This installation includes fewer components and is easier to set up. The bundle does not include the API server or UI, and installs the lightweight (non-HA) version of each component.
For Kubernetes 1.30, the tested version of Argo CD is 2.13; of which the latest patch version is 2.13.3.
note: Briefly looked at the Helm chart installation method and concluded that it 1) was maintained separately from Argo CD itself and 2) is fairly complicated.
We follow the official instructions on installing Argo CD Core to our initial single cluster.
$ kubectl create namespace argocd
$ kubectl apply \
--namespace=argocd \
-f https://raw.githubusercontent.com/argoproj/argo-cd/v2.13.3/manifests/core-install.yaml
The installation creates various cluster-level resources, e.g., RBAC resources and several CRDs: application, applicationset, and appproject.
In the argocd namespace, it installs RBAC resources, a number of workload resources, and network policies (essentially controlling access to the Argo CD workloads).
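Assuming the apply succeeded, we can sanity-check the installation by listing the workloads in the argocd namespace; the exact set of components can vary by version, but for a core installation it typically includes the repo server, Redis, the applicationset controller, and the application controller (a StatefulSet).

```shell
# list the Argo CD workloads installed by the core bundle
$ kubectl get deployments,statefulsets --namespace=argocd
```

All of the listed workloads should reach a ready state before we proceed.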
Here we use the argocd CLI to bring up the web UI, which is served from the CLI itself.
$ kubectl config set-context --current --namespace=argocd
$ argocd login --core
$ argocd admin dashboard -n argocd
Browsing to http://localhost:8080 we indeed see the web UI.
AppProject
In the cluster administration scenario, we do not need to secure Argo CD from ourselves. Nevertheless, we still need at least one appproject (aka. project).
Projects provide a logical grouping of applications, which is useful when Argo CD is used by multiple teams. Projects provide the following features:
* restrict what may be deployed (trusted Git source repositories)
* restrict where apps may be deployed to (destination clusters and namespaces)
* restrict what kinds of objects may or may not be deployed (e.g. RBAC, CRDs, DaemonSets, NetworkPolicy etc…)
* defining project roles to provide application RBAC (bound to OIDC groups and/or JWT tokens)
It turns out that the core installation does not automatically create the default appproject.
Every application belongs to a single project. If unspecified, an application belongs to the default project, which is created automatically and by default, permits deployments from any source repo, to any cluster, and all resource Kinds.
Still needing an appproject, we create it ourselves by applying the following manifest.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: default
namespace: argocd
spec:
sourceRepos:
- '*'
destinations:
- namespace: '*'
server: '*'
clusterResourceWhitelist:
- group: '*'
kind: '*'
note: One non-intuitive feature is that all of the Argo CD custom resources are namespaced and that they must be created in the same namespace as the Argo CD workloads, here argocd.
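After applying the manifest, we can confirm that the project exists:

```shell
# list the appprojects in the argocd namespace; we expect to see "default"
$ kubectl get appproject --namespace=argocd
```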
Application
We start simple by deploying a “workload” consisting of a namespace and a configmap therein from a folder in a Git repository.
note: In this series, we will be using a public repository, though Argo CD also supports private repositories.
To accomplish this we will create a directory-type application.
A group of Kubernetes resources as defined by a manifest.
and more specifically.
A directory-type application loads plain manifest files from .yml, .yaml, and .json files.
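While the contents of the example repository's simple folder are not reproduced here, a minimal sketch of such a folder, consistent with the namespace-plus-configmap workload described above, would be two plain manifest files along these lines (the file names and data values are illustrative):

```yaml
# simple/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: simple
---
# simple/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple
  namespace: simple
data:
  key-a: value-a
  key-b: value-b
```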
We apply the following manifest.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: simple
namespace: argocd
spec:
destination:
namespace: default
server: https://kubernetes.default.svc
project: default
source:
path: simple
repoURL: https://github.com/larkintuckerllc/argocd-examples
targetRevision: HEAD
note: One non-intuitive feature is that we have to specify the Kubernetes API endpoint (server) such that it can be used by the Argo CD workloads; here it is the built-in kubernetes service in the default namespace.
note: Another non-intuitive feature is the meaning of spec.destination.namespace. Buried in a reference document is the following.
The namespace will only be set for namespace-scoped resources that have not set a value for .metadata.namespace
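To illustrate, with spec.destination.namespace set to default, a manifest like the following (illustrative) ConfigMap, which omits metadata.namespace, would be created in the default namespace; a manifest that sets metadata.namespace explicitly, like the ones in our simple folder, is left untouched.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example  # no metadata.namespace, so Argo CD fills in
                 # the application's destination namespace (default)
data:
  key: value
```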
We inspect the status of the application.
$ kubectl get application --namespace=argocd
NAME SYNC STATUS HEALTH STATUS
simple OutOfSync Healthy
and through the web UI.
note: Continuing with the non-intuitive features, we see that the application is both Healthy and OutOfSync.
Apparently, we have to sync the application.
Sync The process of making an application move to its target state. E.g. by applying changes to a Kubernetes cluster.
This can be done by patching the application. We start by creating a file, patch.yaml (change out the username as you see fit).
operation:
initiatedBy:
username: bugsbunny
sync:
syncStrategy:
hook: {}
and then we patch the application.
$ kubectl patch app simple \
--namespace=argocd \
--patch-file=patch.yaml \
--type merge
Now we can see that the application is synced.
$ kubectl get application --namespace=argocd
NAME SYNC STATUS HEALTH STATUS
simple Synced Healthy
From the web UI we also see the application is synced.
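As an aside, since we logged in with argocd login --core earlier, the same sync can also be triggered with the argocd CLI, which saves us from writing the patch by hand:

```shell
# trigger a sync of the "simple" application via the CLI
$ argocd app sync simple
```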
We can also observe that Argo CD deployed the manifests into the cluster.
$ kubectl get configmap --namespace=simple
NAME DATA AGE
kube-root-ca.crt 1 2m41s
simple 2 2m41s
note: Kubernetes automatically creates the kube-root-ca.crt configmap in every namespace.
Automated Sync Policy
Having to manually sync the application seems like an unnecessary step; let us see if we can avoid this.
Argo CD has the ability to automatically sync an application when it detects differences between the desired manifests in Git, and the live state in the cluster. A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the Argo CD API server to perform the deployment. Instead, the pipeline makes a commit and push to the Git repository with the changes to the manifests in the tracking Git repo.
To implement this we simply update the simple application by applying the following (the bold lines are new).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: simple
namespace: argocd
spec:
destination:
namespace: default
server: https://kubernetes.default.svc
project: default
source:
path: simple
repoURL: https://github.com/larkintuckerllc/argocd-examples
targetRevision: HEAD
syncPolicy:
automated: {}
We will leave it to the reader (with their own repository) to validate that this does indeed work (it does). This does leave us wondering about the synchronization interval.
The automatic sync interval is determined by the timeout.reconciliation value in the argocd-cm ConfigMap, which defaults to 180s (3 minutes).
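For example, to change the interval to five minutes, we can set this value in the argocd-cm ConfigMap; note that argocd-cm may not exist yet in a core installation, in which case applying this manifest creates it (the app.kubernetes.io/part-of: argocd label is required for Argo CD to recognize it).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: 300s  # default is 180s
```

For the new value to take effect, the application controller needs to be restarted, e.g., with kubectl rollout restart statefulset argocd-application-controller --namespace=argocd.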
Also, the automated sync policy has a number of safety mechanisms, each of which can be explicitly bypassed:
- Pruning: by default, automated sync will not delete resources when Argo CD detects the resource is no longer defined in Git
- Allow-empty: by default, automated sync (with pruning) will not delete all of an application's resources when Argo CD detects there are no target resources
- Self-healing: by default, automated sync will not revert resources that are manually updated on the cluster
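These safety mechanisms are bypassed by the corresponding fields under spec.syncPolicy.automated; for our simple application that would look like the following fragment.

```yaml
syncPolicy:
  automated:
    prune: true       # allow deleting resources removed from Git
    allowEmpty: true  # allow pruning even when no target resources remain
    selfHeal: true    # revert manual changes made on the cluster
```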
ApplicationSet
Having observed how applications are used to synchronize manifests with a cluster, we will touch upon a powerful (and complex) feature; applicationset.
As the name suggests, an applicationset is a higher-level resource that manages applications, unlocking a wide range of features.
The ApplicationSet controller supplements Argo CD by adding additional features in support of cluster-administrator-focused scenarios. The ApplicationSet controller provides:
* The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters with Argo CD
* The ability to use a single Kubernetes manifest to deploy multiple applications from one or multiple Git repositories with Argo CD
* Improved support for monorepos: in the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository
* Within multitenant clusters, improves the ability of individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces)
As we only have a single cluster at this point, we will explore the feature where an applicationset deploys multiple applications.
Here we have two folders, simple-a, and simple-b, in the same repository where we want to create an application for each.
We apply the following manifest; we will explain it below.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: simple
namespace: argocd
spec:
goTemplate: true
goTemplateOptions: ["missingkey=error"]
generators:
- git:
repoURL: https://github.com/larkintuckerllc/argocd-examples
revision: HEAD
directories:
- path: simple-applicationset/*
template:
metadata:
name: "{{.path.basename}}"
spec:
project: default
source:
path: "{{.path.path}}"
repoURL: https://github.com/larkintuckerllc/argocd-examples
targetRevision: HEAD
destination:
server: https://kubernetes.default.svc
namespace: default
syncPolicy:
automated: {}
Some observations:
- Generators are responsible for generating parameters, which are then rendered into the spec.template block of the application resource
- The Git directory generator generates parameters using the directory structure of a specified Git repository.
- This example uses the Go templating syntax
- Because the template parameters use the curly brace special character in the YAML, we are required to use quotes
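To make the template concrete: for the simple-applicationset/simple-a directory, the Git directory generator produces the parameters path.path = simple-applicationset/simple-a and path.basename = simple-a, so the template above renders to an application equivalent to the following.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-a
  namespace: argocd  # generated in the ApplicationSet's namespace
spec:
  project: default
  source:
    path: simple-applicationset/simple-a
    repoURL: https://github.com/larkintuckerllc/argocd-examples
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
```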
Here we indeed see that the simple applicationset created the simple-a and simple-b applications.
$ kubectl get applications --namespace=argocd
NAME SYNC STATUS HEALTH STATUS
simple Synced Healthy
simple-a Synced Healthy
simple-b Synced Healthy
We can see the same applications in the web UI.
Next Steps
In Part 2 we will switch to a multi-cluster environment and explore more interesting (and complicated) uses of applicationsets.