A Kustomize Journey

Introduction

This is a post about my learning journey with kustomize. As part of moving to a git-ops approach, I played around with argo-cd as the ‘D’ in the CI/CD pipeline. Overall, I like argo-cd’s gitops approach very much. One issue with argo-cd is that it does not support helm charts very well: I needed to provide argo-cd with instructions to fetch my chart from S3, and then run helm template to merge the values.yaml and the chart and generate the proper manifests.
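
For reference, the pipeline step looked roughly like the following. The bucket, chart and file names here are made up for illustration:

# Illustrative only: fetch the packaged chart from S3, then render it so
# argo-cd has plain manifests to work with.
aws s3 cp s3://my-chart-bucket/myservice-1.2.3.tgz .
tar -xzf myservice-1.2.3.tgz
helm template myservice ./myservice -f values-dev.yaml > manifests.yaml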

We have some technical debt in our system. Our helm charts mount all of our application secrets (about 30 secret values in one Kubernetes secret) into every one of our microservices. This is bad from a security perspective and makes it hard to see which microservice is using which secret. We have the same issue with environment variables: we pass a common set of environment variables to all of our microservices.

I considered a helm replacement tool because I am starting to dislike helm. Helm templates are difficult to author unless you do this for a living. They are hard to debug. Values files are often more than 1000 lines long. Overriding specific yaml values with the --set command line option is difficult, messy and error-prone. Helm also kind of hides the generated manifest. Sure, you can make it appear with the --debug flag, or simply run ‘helm template’, but it is not as visible as I would like.
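
To illustrate with a hypothetical chart: dots inside annotation keys must be escaped for --set, and the rendered manifest only becomes visible through an extra command.

# Hypothetical chart: overriding a nested annotation means escaping every
# dot in the key.
helm upgrade myapp ./mychart \
  --set-string 'ingress.annotations.nginx\.ingress\.kubernetes\.io/rewrite-target=/$1'

# The generated manifest only appears if you ask for it.
helm template myapp ./mychart -f values.yaml > rendered.yaml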

Enter kustomize

Kustomize is a tool that uses a purely declarative approach to configuration customization. It is natively available in kubectl and is also available as a stand-alone binary. The challenge for me was how to convert from helm to kustomize. I first ran ‘helm template’ in the same way as in our CI/CD system, to generate the manifests. Our microservices are simple. They consist of a deployment, a service, an ingress and sometimes a configmap. The deployment always follows the same scheme, with the following variables:

  • Application Name
  • Number of replicas
  • Application namespace
  • Node Selector
  • Image tag
  • Liveness path

The environment variables and secrets mounted as environment variables are different for each microservice. This is why we have a deployment.yaml in the project/overlays/dev folder.

The service follows the same scheme, but only needs the application name, the label, and the selector. The Application Name and Application Namespace from above are sufficient to customize it.

The ingress, however, is a different story. Depending on the microservice, an application has zero, one, or several ingress blocks. This is why you will find the ingress in the overlays/dev folder. I am only showing the dev folder, but we also have qa and production folders, which hold the same files with slightly different content. Below you can see the layout of the files on the filesystem.

.
├── kustomize-base
│  ├── configuration.yaml
│  ├── deployment.yaml
│  ├── environment.properties
│  ├── kustomization.yaml
│  └── service.yaml
└── oom-api-analytics
   └── overlays
      └── dev
         ├── deployment.yaml
         ├── environment.properties
         ├── ingress.yaml
         ├── kustomization.yaml
         └── render.sh

Examining the kustomize-base files

Let us first look at the files in the kustomize-base folder. You will find this folder located ‘one folder up’ from where one might expect it, because all the projects share the same kustomize-base folder.

./kustomize-base/configuration.yaml

varReference:
- path: metadata/name
  kind: Deployment
- path: metadata/namespace
  kind: Deployment
- path: metadata/labels/app
  kind: Deployment
- path: spec/template/namespace
  kind: Deployment
- path: spec/replicas
  kind: Deployment
- path: spec/selector/matchLabels
  kind: Deployment
- path: spec/template/metadata/labels/app
  kind: Deployment
- path: spec/template/metadata/namespace
  kind: Deployment
- path: spec/template/spec/containers/image
  kind: Deployment
- path: spec/template/spec/containers/livenessProbe/httpGet/path
  kind: Deployment
- path: spec/template/spec/containers/name
  kind: Deployment
- path: spec/template/spec/nodeSelector/eks_namespace
  kind: Deployment
- path: spec/template/spec/serviceAccountName
  kind: Deployment
- path: metadata/name
  kind: Service
- path: metadata/labels/app
  kind: Service
- path: spec/selector/app
  kind: Service
- path: metadata/namespace
  kind: Service

In order to do variable replacements, you must declare exactly where kustomize may patch. The kustomize philosophy prevents a blind search-and-replace approach; think of the replacements as targeted and typed.
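
For example, if we later wanted to substitute a variable into an Ingress host field, a corresponding entry would have to be added to this list first. This one is hypothetical; it is not part of my setup:

# Hypothetical addition: allow $(...) replacement in the host field of Ingress rules.
- path: spec/rules/host
  kind: Ingress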

./kustomize-base/environment.properties

APPLICATION_NAME=name0
NUMBER_REPLICAS=1
APPLICATION_NAMESPACE=namespace0
NODE_SELECTOR=nodeSelector0
IMAGE_TAG=imageTag0
LIVENESS_PATH=/status0
IMAGE=theImage

This contains the ‘base’ version of the values. We never actually use these key-value pairs; the overlays/dev/environment.properties file replaces them, but something must be there to satisfy kustomize.

./kustomize-base/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: $(APPLICATION_NAME)
  namespace: $(APPLICATION_NAMESPACE)
  labels:
    app: $(APPLICATION_NAME)-deployment
spec:
  replicas: $(NUMBER_REPLICAS)
  selector:
    matchLabels:
      app: $(APPLICATION_NAME)-deployment
  minReadySeconds: 10
  revisionHistoryLimit: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      namespace: $(APPLICATION_NAMESPACE)
      labels:
        app: $(APPLICATION_NAME)-deployment
    spec:
      nodeSelector:
        eks_namespace: $(NODE_SELECTOR)
      serviceAccountName: $(NODE_SELECTOR)
      containers:
        - name: $(APPLICATION_NAME)-deployment
          image: $(IMAGE):$(IMAGE_TAG)
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              name: "http"
              protocol: TCP
          livenessProbe:
            httpGet:
              path: $(LIVENESS_PATH)
              port: 80
            initialDelaySeconds: 30


The above is the deployment yaml, with the variables in place, e.g. $(APPLICATION_NAME). Note that kustomize vars are plain string substitutions, which is why the rendered output further down shows replicas: "1" as a quoted string.

./kustomize-base/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: $(APPLICATION_NAME)-service
  namespace: $(APPLICATION_NAMESPACE)
  labels:
    app: $(APPLICATION_NAME)-service
spec:
  ports:
    - protocol: TCP
      port: 80
      name: "http"
      targetPort: "http"
  selector:
    app: $(APPLICATION_NAME)-deployment
  type: ClusterIP

This is a fairly standard service definition. Again, you will see variables that start with $, which kustomize replaces in the final manifest.

./kustomize-base/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml

configurations: 
- configuration.yaml

configMapGenerator:
- name: ops-properties
  envs: [environment.properties]

vars:
- name: APPLICATION_NAME
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.APPLICATION_NAME }
- name: NUMBER_REPLICAS
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.NUMBER_REPLICAS }
- name: APPLICATION_NAMESPACE
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.APPLICATION_NAMESPACE }
- name: NODE_SELECTOR
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.NODE_SELECTOR }
- name: IMAGE_TAG
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.IMAGE_TAG }
- name: IMAGE
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.IMAGE }
- name: LIVENESS_PATH
  objref: { kind: ConfigMap, name: ops-properties, apiVersion: v1 }
  fieldref: { fieldpath: data.LIVENESS_PATH }

The above is the base kustomization.yaml. It lists the two resource files, then points to the configuration.yaml file. Next is a configMapGenerator; we need the configmap it generates from the environment.properties file as an intermediate step. Finally, there is a vars section which declares each variable and points to where kustomize finds its value in the generated configmap. It turns out we don’t really need the configmap in our final manifest; it is only there to drive the kustomize variable replacement process.
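
As a sanity check, you can render the base on its own from the repository root; it should produce manifests filled with the placeholder values (name0, namespace0, and so on):

# Render the base alone; the placeholder values from
# kustomize-base/environment.properties appear in the output.
kustomize build ./kustomize-base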

Examining the kustomize-overlay files

./overlays/dev/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: $(APPLICATION_NAME)
  namespace: $(APPLICATION_NAMESPACE)
spec:
  template:
    spec:
      containers:
      - name: $(APPLICATION_NAME)-deployment
        env:
        - name: my_key
          value: my_value
        - name: ACCESS_TOKEN
          valueFrom:
            secretKeyRef:
              name: dev-secrets
              key: ACCESS_TOKEN

Here we have a file that kustomize merges with the base deployment. Note that for this to happen, the kind: Deployment, the metadata name, and the namespace must match the declarations in the base; and because kustomize merges the containers list on the name key, the container name must match as well. Here we are adding one environment variable and one secret to the deployment. These differ for each microservice, and we add only the variables and secrets the microservice actually needs.
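
To see exactly what the overlay contributes on top of the base, you can diff the two renders. This is just a sketch; expect the placeholder values (name0 and friends) to show up as differences alongside the env block:

# Run from ./overlays/dev: compare the base render with the overlay render.
diff <(kustomize build ../../../kustomize-base) <(kustomize build .)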

./overlays/dev/environment.properties

APPLICATION_NAME=app
APPLICATION_NAMESPACE=dev
NUMBER_REPLICAS=1
NODE_SELECTOR=dev
IMAGE_TAG=MYTAG
LIVENESS_PATH=/status
IMAGE=123456789.dkr.ecr.us-east-2.amazonaws.com/app

Kustomize uses the above file to patch the values into the deployment.yaml and service.yaml defined in the kustomize-base folder.

./overlays/dev/ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "api-analytics-ingress"
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Prefix "/app/analytics";
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: dev-nginx
  rules:
  - host: "myurl.com"
    http:
      paths:
      - path: /app/analytics?(.*)
        pathType: Prefix
        backend:
          service:
            name: api-analytics-service
            port:
              number: 80

The above is the helm-generated ingress. Kustomize adds this manifest verbatim to the final output.

./overlays/dev/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../../kustomize-base

resources:
- ingress.yaml

patchesStrategicMerge:
- deployment.yaml

configMapGenerator:
- name: ops-properties
  envs: [environment.properties]
  behavior: replace

We need another kustomization.yaml for our overlay environment. This one specifies both the base folder and the ingress.yaml resource. It tells kustomize to apply deployment.yaml as a strategic merge patch. Finally, we need the configMapGenerator, with behavior: replace, so that our environment-specific values replace the base placeholders and enable the variable replacement magic.

Running things…

To get the rendered manifest, go to a terminal, change to the ./overlays/dev folder, and run kustomize build . (note the trailing dot). When developing, do use the alias kb='kustomize build' to save typing! The output looks like this:

apiVersion: v1
data:
  APPLICATION_NAME: app
  APPLICATION_NAMESPACE: dev
  IMAGE: 123456789.dkr.ecr.us-east-2.amazonaws.com/app
  IMAGE_TAG: MYTAG
  LIVENESS_PATH: /status
  NODE_SELECTOR: dev
  NUMBER_REPLICAS: "1"
kind: ConfigMap
metadata:
  name: ops-properties-9d5h25hhth
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-service
  name: app-service
  namespace: dev
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: app-deployment
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-deployment
  name: app
  namespace: dev
spec:
  minReadySeconds: 10
  replicas: "1"
  revisionHistoryLimit: 30
  selector:
    matchLabels:
      app: app-deployment
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: app-deployment
      namespace: dev
    spec:
      containers:
      - env:
        - name: my_key
          value: my_value
        - name: ACCESS_TOKEN
          valueFrom:
            secretKeyRef:
              key: ACCESS_TOKEN
              name: dev-secrets
        image: 123456789.dkr.ecr.us-east-2.amazonaws.com/app:MYTAG
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /status
            port: 80
          initialDelaySeconds: 30
        name: app-deployment
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
      nodeSelector:
        eks_namespace: dev
      serviceAccountName: dev
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Prefix "/app/analytics";
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: api-analytics-ingress
  namespace: dev
spec:
  ingressClassName: dev-nginx
  rules:
  - host: myurl.com
    http:
      paths:
      - backend:
          service:
            name: api-analytics-service
            port:
              number: 80
        path: /app/analytics?(.*)
        pathType: Prefix

If you need the manifest stored as separate files, I found a neat script in a helm issue on GitHub (see acknowledgements below).

#!/usr/bin/env bash

# helm-fan-out: split a multi-document manifest stream (read from stdin)
# into one file per "# Source:" comment, under the directory given as $1.

if [ -z "$1" ]; then
    echo "Please provide an output directory"
    exit 1
fi

awk -vout="$1" -F": " '
   $0~/^# Source: / {
       file=out"/"$2;
       if (!(file in filemap)) {
           filemap[file] = 1
           print "Creating "file;
           system ("mkdir -p $(dirname "file"); echo -n "" > "file);
       }
   }
   $0!~/^#/ {
       if (file) {
           print $0 >> file;
       }
   }'

Be aware that this only works on helm-generated manifests, not generic ones: the script keys on the ‘# Source:’ comments that helm template emits, and kustomize build strips comments. Save it to fanout.sh and chmod +x fanout.sh. I found it useful for splitting the helm template output into the separate per-resource files you saw above when converting from helm to kustomize.
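
A typical invocation during such a conversion, where myapp and ./mychart stand in for your own release and chart, looks like this:

# Split the helm template output into one file per source template.
chmod +x fanout.sh
helm template myapp ./mychart | ./fanout.sh ./split-manifests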

The pesky configmap

You may have noticed that kustomize generates a real configmap, ops-properties-9d5h25hhth. While kustomize produces this configmap in order to stay fully declarative, I have no use for it, and I dislike the idea of useless configmaps accumulating in my namespace. When rendering we save all the processed manifests into a single file, but first I filter the generated configmap out with a little trick: pipe the output of kustomize build to tail -n +14. This works because the generated configmap and its --- separator occupy the first 13 lines of the output; if you add or remove keys in environment.properties, you must adjust the count.
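
A less fragile alternative, assuming mikefarah’s yq v4 is installed, is to filter the documents by kind instead of counting lines:

# Drop the ConfigMap documents from the multi-document stream by kind,
# independent of how many lines the configmap occupies.
kustomize build . | yq eval 'select(.kind != "ConfigMap")' -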

Rendering manifests with kustomize – the Image Tag replacement

The following script will help with the image tag replacement and removal of the configmap mentioned above.

./overlays/dev/render.sh

#!/bin/bash
# this will render the kustomize, remove the configmap, and save the output
# in ../../kustomize-rendered-ENV, where ENV is one of (dev, qa, prod)
# invoke with ./render.sh image_tag
image_tag=$1
if [ -z "$image_tag" ]; then
    echo "Please pass in an image tag value as the first argument."
    exit 1
fi
result=${PWD##*/}
result=${result:-/}

# the next line replaces the IMAGE_TAG value in environment.properties with
# the tag provided as the first argument
sed -i -E "s/(IMAGE_TAG=)(.*)/\\1$image_tag/g" environment.properties

# render the fully defined manifests, without the intermediate configmap
kustomize build . | tail -n +14 > ../../kustomize-rendered-$result/manifest.yaml

You can run the above render.sh either manually or from the CI/CD system. You pass in an image tag, and the rest happens automatically. The CI/CD system then only needs to add, commit and push the generated manifest.yaml. Point argo-cd at the ./kustomize-rendered-dev folder, and argo-cd will deploy the manifest from there whenever the CI/CD system pushes it to git. This approach also lets you see your manifest in straight yaml, with no weird parameters or helm {{moustaches}}.
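
In the CI/CD system, the whole flow boils down to a few commands. The paths and commit message below are illustrative:

# Illustrative CI step: render the dev manifest, then commit only manifest.yaml.
cd oom-api-analytics/overlays/dev
./render.sh "$IMAGE_TAG"
cd ../../kustomize-rendered-dev
git add manifest.yaml
git commit -m "deploy $IMAGE_TAG to dev"
git push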

Acknowledgements:

https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9 – discusses the approach shared in this article

https://github.com/helm/helm/issues/4680 – the helm fan-out shell script

https://stackoverflow.com/questions/159118/how-do-i-match-any-character-across-multiple-lines-in-a-regular-expression – multi-line regex replacement
