Developing in a Multi-Cluster kind Environment
Motivation
When developing in a multi-cluster kind environment, one quickly runs into a problem.
kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Specifically, it is not immediately clear what server address pods (containers) in one cluster are to use when accessing the Kubernetes API (control plane) of another cluster.
TL;DR
It happens that the kind CLI has a command and option that provides us with a suitable address that can be used from pods running on any of the kind clusters. Here is the address for the cluster named a.
$ kind get kubeconfig --name a --internal | grep server
server: https://a-control-plane:6443
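If you want to feed that address into scripts or manifests, one way to capture it is sketched below; the variable name A_SERVER is our own choice.
$ A_SERVER=$(kind get kubeconfig --name a --internal \
  | grep server \
  | awk '{print $2}')
$ echo "${A_SERVER}"
https://a-control-plane:6443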
Walk Through
If you are looking to follow along, you will need to have the following installed on your workstation: Docker, kind, kubectl, and jq.
We first create two clusters named a and b.
$ kind create cluster --name a
$ kind create cluster --name b
Here we see that kind automatically populated our workstation’s kubeconfig file; in particular, we observe each cluster’s server address.
$ kubectl config view --output json | jq -r '.clusters[] | "\(.name), \(.cluster.server)"'
kind-a, https://127.0.0.1:60772
kind-b, https://127.0.0.1:60804
We can also use a kubectl command to confirm what the cluster’s server address is.
$ kubectl cluster-info --context=kind-a
Kubernetes control plane is running at https://127.0.0.1:60772
CoreDNS is running at https://127.0.0.1:60772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Let us see what the cluster’s server address is when using kubectl from a pod (container) running on the cluster.
We first, however, need to grant the default / default service account sufficient access to the Kubernetes API; here by applying the following resource to the a cluster (which actually grants far more access than we need):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-default-cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
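If you are following along, one way to apply it is shown below; the filename default-default-cluster-admin.yaml is our own choice for where the manifest above is saved.
$ kubectl apply \
  --context=kind-a \
  --filename=default-default-cluster-admin.yaml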
And then we create the default / debug pod (which uses the default / default service account); here by applying the following resource to the a cluster:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: debug
  name: debug
  namespace: default
spec:
  containers:
    - args: ["while true; do sleep 600; done;"]
      command: ["/bin/bash", "-c", "--"]
      image: bitnami/kubectl:latest
      name: debug
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - emptyDir: {}
      name: scratch
note: We create a scratch folder as the container has a read-only filesystem; we will need to write a file to it later.
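Similarly, assuming the pod manifest is saved as debug-pod.yaml (again, our own filename), it can be applied and waited on with:
$ kubectl apply \
  --context=kind-a \
  --namespace=default \
  --filename=debug-pod.yaml
$ kubectl wait \
  --context=kind-a \
  --namespace=default \
  --for=condition=Ready \
  pod/debug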
and then get a shell into the pod (container) using:
$ kubectl exec \
--context=kind-a \
--namespace=default \
--stdin \
--tty \
debug \
-- /bin/bash
From the pod (container), we again use the kubectl command to confirm what the cluster’s server address is.
$ kubectl cluster-info
Kubernetes control plane is running at https://10.96.0.1:443
CoreDNS is running at https://10.96.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
Here we get a different address, which is the default / kubernetes service’s cluster IP address.
$ kubectl get services --namespace=default
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h32m
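As an aside, kubectl does not guess this in-cluster address; Kubernetes injects it into every pod as environment variables (alongside the mounted service account token and CA certificate). From the same shell inside the debug pod, we can see the same values as above:
$ printenv KUBERNETES_SERVICE_HOST KUBERNETES_SERVICE_PORT
10.96.0.1
443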
The challenge we have here is that the address https://127.0.0.1:60772 is only usable from the workstation, and the address https://10.96.0.1:443 is only usable from pods running on the a cluster. Neither is a suitable address for pods running on the b cluster.
It happens that the kind CLI has a command and option that provides us with a third (and suitable) address that can be used from pods running on any of the kind clusters.
$ kind get kubeconfig --name a --internal | grep server
server: https://a-control-plane:6443
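This works because kind runs each cluster node as a Docker container and, by default, attaches them all to the same Docker network (named kind), so one cluster’s control plane is reachable from the other cluster’s nodes and pods by container name. A quick way to see this from the workstation (expect output along these lines):
$ docker network inspect \
  --format '{{range .Containers}}{{.Name}} {{end}}' \
  kind
a-control-plane b-control-plane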
Let us confirm that this is the case.
We first need to generate a token for the default / default service account; here by applying the following resource to the a cluster:
apiVersion: v1
kind: Secret
metadata:
name: default-default-token
namespace: default
annotations:
kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
and then getting (and saving for later) the token using:
$ kubectl get secret \
--context=kind-a \
--namespace=default \
default-default-token \
--output=json \
| jq -r '.data.token' \
| base64 -d
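To actually save it for later, you might wrap the same command in a shell variable; the name TOKEN is our own choice, matching the [TOKEN] placeholder used further down.
$ TOKEN=$(kubectl get secret \
  --context=kind-a \
  --namespace=default \
  default-default-token \
  --output=json \
  | jq -r '.data.token' \
  | base64 -d)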
We can also obtain (and save for later) cluster a’s CA certificate using:
$ kubectl get secret \
--context=kind-a \
--namespace=default \
default-default-token \
--output=json \
| jq -r '.data."ca.crt"' \
| base64 -d
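Again, one way to save it is to redirect the output to a local file:
$ kubectl get secret \
  --context=kind-a \
  --namespace=default \
  default-default-token \
  --output=json \
  | jq -r '.data."ca.crt"' \
  | base64 -d \
  > ca.crt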
Now we can create the default / debug pod on cluster b; here by applying the earlier pod resource to the b cluster. Similar to before, we get a shell into the pod (container) using:
$ kubectl exec \
--context=kind-b \
--namespace=default \
--stdin \
--tty \
debug \
-- /bin/bash
Here we write cluster a’s saved CA certificate to a file in the scratch folder, replacing [CA.CRT] with its contents.
$ cat << EOF > /scratch/ca.crt
[CA.CRT]
EOF
We now use the kubectl command to confirm that we can reach cluster a; here supplying cluster a’s server address, the CA certificate file, and the saved token (replacing [TOKEN]).
$ kubectl cluster-info \
--certificate-authority=/scratch/ca.crt \
--server=https://a-control-plane:6443 \
--token=[TOKEN]
Kubernetes control plane is running at https://a-control-plane:6443
CoreDNS is running at https://a-control-plane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
This illustrates that we have indeed accessed cluster a’s Kubernetes API from a pod (container) on cluster b.
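If you plan to make more than a one-off call from cluster b to cluster a, a natural next step is to persist these settings as a kubeconfig context inside the pod rather than passing flags every time. A minimal sketch, still from the debug pod on cluster b; the kubeconfig path and the context / cluster / user names are our own choices (we write to /scratch as the rest of the filesystem is read-only):
$ export KUBECONFIG=/scratch/kubeconfig
$ kubectl config set-cluster kind-a \
  --server=https://a-control-plane:6443 \
  --certificate-authority=/scratch/ca.crt \
  --embed-certs=true
$ kubectl config set-credentials kind-a-default \
  --token=[TOKEN]
$ kubectl config set-context kind-a \
  --cluster=kind-a \
  --user=kind-a-default
$ kubectl config use-context kind-a
$ kubectl cluster-info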