How to clean up, from Terraform, the k8s resources left up and running by the helm_release resource after a destroy


I am experiencing an issue using the helm_release resource in Terraform.

I basically deployed a kube-prometheus-stack that includes many k8s resources and that works smoothly.

The problem arose when I tried to destroy (or remove) this part: Helm does not delete all the resources when it uninstalls the chart (it probably has to do with some garbage-collection rule that keeps them up and running after the delete). That means I end up with:

  • chart uninstalled
  • resources still up and running
  • having to go and remove everything manually, otherwise if I re-create the release I get plenty of duplicates

I previously asked a question (which I am now closing) to understand whether this was a problem with Helm. It is not: by design Helm deletes everything it can. Maybe something could be done in the chart itself, but I am assuming that won't happen any time soon. So now I would like to ask if somebody has an idea on how I can manage this directly from Terraform.

Is there something I can use to, for instance, run a kubectl delete command on the labelled resources (or maybe the whole namespace) when the helm_release resource gets destroyed?

Note: I am not adding any code, since this has nothing to do with the code itself; it is more about finding some hook or hack to run a cleanup only after the destroy.

p.s.: I also tried to exploit Terraform Cloud post-apply hooks, but I would prefer to solve this without depending on Terraform Cloud and, anyway, I didn't manage to create a dependency on whether the helm_release had been destroyed.

Best Answer

If you need to solve this directly from Terraform, you could consider using a null_resource with a local-exec provisioner that runs when the resource gets destroyed.
The local-exec provisioner invokes a local executable, in this case kubectl, and with when = destroy it does so as part of the resource's destruction.

For instance:

resource "helm_release" "example" {
  name       = "example"
  namespace  = "default"
  chart      = "stable/kube-prometheus-stack"
  # add your values here...
}

resource "null_resource" "cleanup" {
  triggers = {
    helm_release_id = helm_release.example.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete namespace ${helm_release.example.namespace}"
  }
}

The above configuration runs the kubectl delete namespace command when the null_resource is destroyed, which happens alongside the helm_release during terraform destroy. Note that a destroy-time provisioner may only reference attributes of its own resource via self, which is why the namespace is stored in the triggers map.

Do test that carefully: deleting the entire namespace, not just the resources created by the Helm chart, is not a casual operation!
If there are other resources in the namespace that you do not want to delete, you will need to modify the kubectl command to delete only the resources you want.
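
For example, here is a sketch of a more targeted destroy-time cleanup. It assumes the chart labels its objects with the conventional app.kubernetes.io/instance=<release name> label (check which labels your chart actually applies) and that kubectl is available where Terraform runs:

resource "null_resource" "targeted_cleanup" {
  triggers = {
    # stored here so the destroy-time provisioner can read them via self
    namespace = helm_release.example.namespace
    release   = helm_release.example.name
  }

  provisioner "local-exec" {
    when    = destroy
    # delete only the objects carrying the release's instance label
    command = "kubectl delete all,pvc,configmap,secret -n ${self.triggers.namespace} -l app.kubernetes.io/instance=${self.triggers.release} --ignore-not-found"
  }
}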

And note that you would need to have kubectl configured on the machine running Terraform and it needs to have appropriate permissions to delete resources in your Kubernetes cluster.

Also, this null_resource will not get created until after the helm_release is created, due to the dependency in the triggers block. So, if the helm_release creation fails for some reason, the null_resource and its provisioners will not be triggered.


Unfortunately, I am using Terraform Cloud in a CI/CD pipeline, therefore I won't be able to exploit local-exec. But the answer is close to what I was looking for, and since I didn't mention Terraform Cloud, it is actually correct.
Do you have any other idea?

The local-exec provisioner is indeed not practical on Terraform Cloud: the remote execution environment does not come with kubectl installed or configured against your cluster, so this kind of arbitrary local command cannot be relied upon there.

Kubernetes Provider lifecycle management

An alternative in this context would be to use the Kubernetes provider in Terraform to manage the lifecycle of the resources that are left behind.

For example, let's say your Helm chart leaves behind a PersistentVolumeClaim resource. You could manage this using the Kubernetes provider in Terraform:

provider "kubernetes" {
  # configuration for your Kubernetes cluster
}

resource "helm_release" "example" {
  name       = "example"
  namespace  = "default"
  chart      = "stable/kube-prometheus-stack"
  # add your values
}

data "kubernetes_persistent_volume_claim" "pvc" {
  metadata {
    name      = "my-pvc"
    namespace = helm_release.example.namespace
  }
}

resource "kubernetes_persistent_volume_claim" "pvc" {
  depends_on = [helm_release.example]

  metadata {
    name      = data.kubernetes_persistent_volume_claim.pvc.metadata.0.name
    namespace = data.kubernetes_persistent_volume_claim.pvc.metadata.0.namespace
  }

  spec {
    access_modes = data.kubernetes_persistent_volume_claim.pvc.spec.0.access_modes
    resources {
      requests = {
        storage = data.kubernetes_persistent_volume_claim.pvc.spec.0.resources.0.requests["storage"]
      }
    }

    volume_name = data.kubernetes_persistent_volume_claim.pvc.spec.0.volume_name
  }
}

In this example, the kubernetes_persistent_volume_claim resource is tracked in Terraform state, so the PVC is deleted when the Terraform stack is destroyed. Since the chart already created the PVC, you would normally bring it under Terraform's management with terraform import rather than letting Terraform attempt to create a duplicate (which would fail because the object already exists).

You would have to do this for every type of resource that is left behind, so it can be a bit tedious, but it is an option.
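
For instance, a minimal import invocation matching the example above (the kubernetes provider expects the import ID in namespace/name form, and default/my-pvc simply mirrors the names used here):

terraform import kubernetes_persistent_volume_claim.pvc default/my-pvc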

Kubernetes Provider for Job or a script

Another approach would be using the Kubernetes provider to call a Kubernetes Job or a script that cleans up the resources left behind:

provider "kubernetes" {
  # configuration for your Kubernetes cluster goes here
}

resource "helm_release" "example" {
  name       = "example"
  namespace  = "default"
  chart      = "stable/kube-prometheus-stack"
  # add your values here...
}

resource "kubernetes_job" "cleanup" {
  metadata {
    name      = "cleanup-job"
    namespace = helm_release.example.namespace
  }

  spec {
    template {
      metadata {}
      spec {
        container {
          name    = "cleanup"
          image   = "bitnami/kubectl:latest" # any image that actually ships kubectl (or an equivalent tool)
          command = ["sh", "-c", "kubectl delete ..."] # replace ... with the actual cleanup commands
        }
        
        restart_policy = "Never"
      }
    }

    backoff_limit = 4
  }

  depends_on = [helm_release.example]
}

In this second example, the kubernetes_job resource is triggered when the helm_release resource is created, running a cleanup script. The cleanup script could delete any resources that are left behind by the Helm chart.

Remember that in both cases, the Kubernetes provider needs to be properly configured and that the Kubernetes cluster permissions must allow the actions you are trying to perform.
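
As a sketch of what those permissions could look like, assuming the Job runs under a dedicated service account (the names cleanup-sa, cleanup-role and cleanup-rolebinding are made up for illustration), you could add something like the following and set service_account_name = kubernetes_service_account.cleanup.metadata.0.name in the Job's pod spec:

resource "kubernetes_service_account" "cleanup" {
  metadata {
    name      = "cleanup-sa"
    namespace = helm_release.example.namespace
  }
}

resource "kubernetes_role" "cleanup" {
  metadata {
    name      = "cleanup-role"
    namespace = helm_release.example.namespace
  }

  # allow the cleanup Job to look up and delete the kinds it needs to touch
  rule {
    api_groups = ["", "apps"]
    resources  = ["pods", "services", "configmaps", "secrets", "persistentvolumeclaims", "deployments", "statefulsets"]
    verbs      = ["get", "list", "delete"]
  }
}

resource "kubernetes_role_binding" "cleanup" {
  metadata {
    name      = "cleanup-rolebinding"
    namespace = helm_release.example.namespace
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.cleanup.metadata.0.name
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.cleanup.metadata.0.name
    namespace = helm_release.example.namespace
  }
}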


Regarding the second example, the OP asks if it is possible for the kubernetes_job to be triggered automatically when the helm_release resource gets destroyed.

Unfortunately, Terraform's built-in resources and providers do not provide a direct way to execute something only upon the destruction of another resource. The provisioner block is a way to do this, but as we discussed, it is not suitable for Terraform Cloud and cannot be used with the Kubernetes provider directly.

As an indirect solution, you can create a Kubernetes Job that is configured to delete the leftover resources as soon as it is launched, and add a depends_on reference to the helm_release in the Job's configuration. That way, whenever the Helm release is created (or replaced), the Job is launched as well and cleans up anything stale left over from a previous deployment.

However, this approach is not perfect: the Job also runs when the resources are first created, not only when they are destroyed, and on a plain terraform destroy Terraform deletes the Job object rather than re-running it, so the cleanup effectively happens on the next create or replace cycle.

To address that, you could write your cleanup script such that it is idempotent and will not fail or cause any negative side effects if it is run when it is not necessary (i.e., upon creation of the Helm release).
For example, your script could first check if the resources it is supposed to clean up actually exist before attempting to delete them:

provider "kubernetes" {
  # configuration for your Kubernetes cluster goes here
}

resource "helm_release" "example" {
  name       = "example"
  namespace  = "default"
  chart      = "stable/kube-prometheus-stack"
  # add your values here...
}

resource "kubernetes_job" "cleanup" {
  depends_on = [helm_release.example]

  metadata {
    name      = "cleanup-job"
    namespace = helm_release.example.namespace
  }

  spec {
    template {
      metadata {}
      spec {
        container {
          name    = "cleanup"
          image   = "bitnami/kubectl:latest" # any image that actually ships kubectl (or an equivalent tool)
          command = ["sh", "-c", 
                     "if kubectl get <resource> <name>; then kubectl delete <resource> <name>; fi"]
                     # replace <resource> and <name> with the actual resource and name
        }

        restart_policy = "Never"
      }
    }

    backoff_limit = 4
  }
}

In this example, the command checks if a specific Kubernetes resource exists before attempting to delete it. That way, the job can be safely run whether the Helm release is being created or destroyed, and the cleanup will only occur if the resource exists.

Do replace <resource> and <name> with the actual resource and name of the resource you wish to check and potentially delete.
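
If you prefer, kubectl's --ignore-not-found flag gives you the same idempotency without the explicit existence check, keeping the <resource> and <name> placeholders from the example above:

command = ["sh", "-c", "kubectl delete <resource> <name> --ignore-not-found"]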