Coaxing Helm into Deleting a Kubernetes Namespace

Problem

In Helm, when you install a release using the namespace flag, e.g. helm install chartA --namespace=my-awesome-namespace, Helm creates my-awesome-namespace for you if it does not already exist and installs chartA into that namespace. If you do not specify a namespace with this flag, Helm installs the release into the default namespace.
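As a concrete sketch (chartA and my-awesome-namespace are the illustrative names from above; this assumes a Helm version that auto-creates the namespace on install):

```shell
# Install chartA into a namespace that does not yet exist;
# Helm creates my-awesome-namespace automatically.
helm install chartA --namespace=my-awesome-namespace

# The namespace now exists:
kubectl get namespace my-awesome-namespace
```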

The problem is that when you delete the release, Helm does not delete the namespace it created during installation (in our example it leaves my-awesome-namespace behind). Deleting a namespace you are done with is very useful in Kubernetes, as there may still be unused resources lying around, such as Secrets and PVCs; by deleting the namespace, everything inside it is deleted.

Solution

A bit of Googling shows that helm delete currently does not have the ability to delete namespaces, as can be seen here. This may change at some point, but in the meantime a colleague of mine pointed out another approach that works: using a namespace template YAML.

When you define the templates for your Helm chart, also define a namespace YAML similar to the one below:

apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace

Helm has a way of accessing the namespace passed into the install via {{ .Release.Namespace }}; in the example above this would be replaced with my-awesome-namespace. The problem with using this in your template is that Helm first creates the namespace from what you provided in the --namespace flag, and then tries to apply your namespace template for a namespace that has already been created. Helm runs charts in the context of a namespace, and since it is trying to create that same namespace from a template, Helm returns an error.
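To make the failure concrete, this is the naive template that triggers the error (a sketch; the exact error message varies by Helm version):

```yaml
# templates/namespace.yaml -- the naive approach that fails.
# Helm has already created the namespace from the --namespace
# flag, so applying this manifest for the same namespace errors out.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace }}
```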

To get around this, define a custom property for it in your values.yaml file with a default value:

my:
  custom:
    name: sandbox-namespace

Then update your namespace YAML as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.my.custom.name }}

When you helm install the chart that contains this template, it is installed in the context of whatever namespace you provided (i.e. the namespace you see when you run helm ls or helm status), but it actually creates the templated namespace using the value you provide via a --set flag or a values.yaml file.
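For instance (the chart and namespace names here are illustrative, carried over from the examples above):

```shell
# Installed in the context of my-awesome-namespace, while the
# template itself creates sandbox-namespace from the custom value:
helm install chartA \
  --namespace=my-awesome-namespace \
  --set my.custom.name=sandbox-namespace

# helm ls / helm status report my-awesome-namespace,
# but the templated namespace now also exists:
kubectl get namespace sandbox-namespace
```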

Now if you helm delete this chart, Helm will also delete the namespace. It seems that during a helm install, Helm runs something similar to kubectl apply -f yourKubeFile.yaml, and conversely, when deleting, it runs kubectl delete -f yourKubeFile.yaml.

Caveat

Pretend we have two charts, chartA and chartB. chartA is installed first and creates the namespace, since that is where the namespace YAML is defined. You then install chartB into the namespace created when chartA was installed. One thing to watch out for: if you delete chartA before deleting chartB, Kubernetes will wipe the namespace clean (which is what we want), but if you then run helm status chartB, Helm will report that all the resources in that chart are missing. You can still helm delete chartB without issues, but to keep Helm in sync, rather delete all releases in the namespace created by chartA before deleting chartA itself.
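The safe teardown order described above, sketched as commands (release names are illustrative):

```shell
# Delete the dependent release first, so helm status stays accurate...
helm delete chartB

# ...then delete the release that owns the namespace template;
# this also deletes the templated namespace and everything left in it.
helm delete chartA
```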