Traffic Director by Example: Part 3

John Tucker
Published in codeburst
Mar 14, 2021


We wrap up this series by exploring one of Traffic Director’s advanced traffic management features.

This article is part of the series that starts with Traffic Director by Example: Part 1.

So far we have used Traffic Director simply for service discovery; here we examine one of the advanced traffic management features documented in Configuring advanced traffic management.

Traffic Splitting

To illustrate Traffic Director’s traffic splitting feature, we will use it to deploy a canary release.

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody

— martinFowler.com — CanaryRelease

Please note: For this article, I am borrowing pieces from another article that I wrote, Istio by Example.

In preparation for this example, we create another A record in our private DNS zone:

  • app-1 pointing to 10.0.0.3
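
If you prefer the CLI to the console, the record can be added with gcloud; a quick sketch, assuming the private zone created earlier in the series is named example-private (adjust the zone name to match yours; older gcloud releases may need the transaction-based record-sets commands instead):

$ gcloud dns record-sets create app-1.example.private. \
    --zone=example-private \
    --type=A \
    --ttl=300 \
    --rrdatas=10.0.0.3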

Next, we create (copy to the workstation and apply using the kubectl CLI) a namespace in our GKE cluster.
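
The manifest here amounts to a plain Namespace object; a minimal sketch (the namespace is named namespace-1, as it appears in the kubectl output below):

apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1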

We then similarly deploy the app-1-v1 resources; this serves as our production release.
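
A minimal sketch of what the app-1-v1 manifest amounts to, assuming a single-replica Deployment labeled with an app/version pair and a container serving the "Hello World v1!" page on port 80 (the image name below is a placeholder, not the image actually used):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1-v1
  namespace: namespace-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
      version: v1
  template:
    metadata:
      labels:
        app: app-1
        version: v1
    spec:
      containers:
        - name: app-1
          # placeholder image; any container returning "Hello World v1!" on port 80 works
          image: example/hello-world:v1
          ports:
            - containerPort: 80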

We then deploy the app-1-v2 resources; this serves as our canary release.
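
Under the same assumptions, the app-1-v2 manifest differs only in the names, labels, and image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1-v2
  namespace: namespace-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
      version: v2
  template:
    metadata:
      labels:
        app: app-1
        version: v2
    spec:
      containers:
        - name: app-1
          # placeholder image; returns "Hello World v2!" on port 80
          image: example/hello-world:v2
          ports:
            - containerPort: 80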

Next, we deploy two Kubernetes services: one for the production release and one for the canary release.
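
A minimal sketch of the app-1-v1 Service, assuming the label scheme above and using the cloud.google.com/neg annotation so that GKE creates a standalone NEG (named app-1-v1 here to match what we will select in the console); app-1-v2 is identical apart from the names and the version label:

apiVersion: v1
kind: Service
metadata:
  name: app-1-v1
  namespace: namespace-1
  annotations:
    # ask GKE to create a standalone NEG for port 80
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "app-1-v1"}}}'
spec:
  type: ClusterIP
  selector:
    app: app-1
    version: v1
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP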

Things to observe:

  • These services create two Network Endpoint Groups (NEGs)
  • Coming from a Kubernetes-native service mesh implementation, e.g., Istio, it feels a little odd to create two services. At the same time, it makes sense here because we are not using them for service discovery, only to create the NEGs

We can see the key resources we created in the GKE cluster.

$ kubectl get all -n namespace-1
NAME                            READY   STATUS    RESTARTS   AGE
pod/app-1-v1-69749dd8c8-r2d7c   1/1     Running   0          7m39s
pod/app-1-v2-58f99f44bb-twqgl   1/1     Running   0          7m32s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/app-1-v1   ClusterIP   10.8.14.20   <none>        80/TCP    4m21s
service/app-1-v2   ClusterIP   10.8.8.163   <none>        80/TCP    4m14s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app-1-v1   1/1     1            1           7m40s
deployment.apps/app-1-v2   1/1     1            1           7m33s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/app-1-v1-69749dd8c8   1         1         1       7m40s
replicaset.apps/app-1-v2-58f99f44bb   1         1         1       7m33s

From Compute Engine > Network endpoint groups, we can see the two new NEGs, each with a single pod from its respective deployment.

Next, we create a Traffic Director service from the Network services > Traffic Director menu by pressing the Create Service button. We supply the values:

  • Name: app-1-v1
  • Backend Type: Network endpoint groups
  • Network endpoint group: app-1-v1 (us-central1-a)
  • Maximum RPS: 5
  • Health Check: Create health check

For the Health Check we supply the values:

  • Name: app-1
  • Protocol: HTTP

We then press the Save and Continue button to save the Health Check, and then the Continue and Done buttons to create the Traffic Director service.
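
The same service can also be created with gcloud; a rough sketch of the equivalent commands (flag spellings may differ slightly across gcloud releases):

$ gcloud compute health-checks create http app-1

$ gcloud compute backend-services create app-1-v1 \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --protocol=HTTP \
    --health-checks=app-1

$ gcloud compute backend-services add-backend app-1-v1 \
    --global \
    --network-endpoint-group=app-1-v1 \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=5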

We repeat the previous steps to create a second Traffic Director service with the values:

  • Name: app-1-v2
  • Backend Type: Network endpoint groups
  • Network endpoint group: app-1-v2 (us-central1-a)
  • Maximum RPS: 5
  • Health Check: app-1
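
Again, a rough gcloud equivalent, reusing the existing app-1 health check:

$ gcloud compute backend-services create app-1-v2 \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --protocol=HTTP \
    --health-checks=app-1

$ gcloud compute backend-services add-backend app-1-v2 \
    --global \
    --network-endpoint-group=app-1-v2 \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=5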

While we have created new Traffic Director services, we have not specified the logic, a routing rule map, that directs traffic to them. In this case, we want to capture any traffic to the 10.0.0.3 VIP on port 80 and direct 70% of it to the app-1-v1 service and 30% to the app-1-v2 service.

From the Traffic Director page, we select the Routing rule maps tab and press the Create Routing Rule Map button.

We name the routing rule map app-1.

We then add a forwarding rule using the Add forwarding rule button, providing Name: app-1, Custom IP: 10.0.0.3, and Port: 80 (the default).

We then select:

  • Mode: Advanced host, path and route rule

For the default matcher, (Default) action for any unmatched host, we select the app-1-v1 service. We then press the Add hosts and path matcher button and supply the values:

  • Hosts: *
  • Path matcher (matches, actions, and services)
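
For the path matcher, we supply YAML along the following lines; a sketch of the 70/30 weighted split, with PROJECT_ID standing in for your own project:

defaultService: projects/PROJECT_ID/global/backendServices/app-1-v1
name: matcher1
routeRules:
  - priority: 0
    matchRules:
      - prefixMatch: /
    routeAction:
      weightedBackendServices:
        - backendService: projects/PROJECT_ID/global/backendServices/app-1-v1
          weight: 70
        - backendService: projects/PROJECT_ID/global/backendServices/app-1-v2
          weight: 30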

Things to observe:

  • The syntax for this YAML was obtained from the Code guidance link provided in the form

We press the Save button to create the routing rule map.

Things to observe:

  • Thinking more about all of this, it is the routing rule map (combined with the DNS A record) that is comparable to a Kubernetes service; i.e., they both provide a VIP and a DNS name

At this point our new Traffic Director services are fully configured.

Verifying the Configuration

As we did previously, we can verify the configuration by logging into both the td-demo-vm-client GCE VM instance and the td-demo-gke-client GKE pod.

In both cases, we confirm that the client’s traffic to the VIP 10.0.0.3 (or the friendlier app-1.example.private name) on port 80 is intercepted and sent to one of the GKE pods backing the two Traffic Director services, with responses from pods backing the app-1-v1 service occurring more frequently (roughly 70% of the time).

$ curl http://app-1.example.private
<html>
<body>
Hello World v1!
</body>
</html>
$ curl http://app-1.example.private
<html>
<body>
Hello World v2!
</body>
</html>
$ curl http://app-1.example.private
<html>
<body>
Hello World v1!
</body>
</html>
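
To sanity-check the split rather than eyeballing individual responses, we can count versions over a batch of requests; roughly 70 of every 100 responses should contain v1:

$ for i in $(seq 1 100); do curl -s http://app-1.example.private; done | grep -c "Hello World v1"
$ for i in $(seq 1 100); do curl -s http://app-1.example.private; done | grep -c "Hello World v2"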

Wrap up

While there is a lot more to explore, these articles gave us enough of a framework to continue on our own. I hope you found them useful.
