How to import a generated Kubernetes cluster's namespace in Terraform


In my Terraform config files I create a Kubernetes cluster on GKE and, once it is created, set up a Kubernetes provider to access that cluster and perform various actions, such as setting up namespaces.

The problem is that some new namespaces were created in the cluster without Terraform, and now my attempts to import these namespaces into my state fail due to an inability to connect to the cluster. I believe this is because of the following (taken from Terraform's official documentation of the import command):

The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

The command I used to import the namespaces is pretty straightforward:

terraform import kubernetes_namespace.my_new_namespace my_new_namespace

I also tried using the -provider="" and -config="" flags, but to no avail.

My Kubernetes provider configuration is this:

provider "kubernetes" {
  version = "~> 1.8"

  host  = module.gke.endpoint
  token = data.google_client_config.current.access_token

  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
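
For reference, the access token above comes from a google_client_config data source, declared roughly like this (a minimal sketch; the data source takes no arguments and exposes the credentials configured for the Google provider):

# Exposes access_token for the credentials used by the google provider
data "google_client_config" "current" {}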

An example of a namespace resource I am trying to import:

resource "kubernetes_namespace" "my_new_namespace" {
  metadata {
    name = "my_new_namespace"
  }
}

The import command results in the following:

Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused

It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster endpoint and credentials.

Is there any workaround for this use case?

Thanks in advance.

There are 2 solutions below.

(1) Create an entry in your kubeconfig file for your GKE cluster.

gcloud container clusters get-credentials cluster-name

see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry

(2) Point the Terraform Kubernetes provider to your kubeconfig:

provider "kubernetes" {
  config_path = "~/.kube/config"
}
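
With the provider temporarily pointed at the kubeconfig entry, the import command from the question should be able to reach the cluster:

terraform import kubernetes_namespace.my_new_namespace my_new_namespace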

As for the second solution: the issue lies with the provider configuration depending on a dynamic data source, which the import command doesn't have access to.

For the process of importing, you have to hardcode the provider values.

Change this:

provider "kubernetes" {
  version                = "~> 1.8"
  host                   = module.gke.endpoint
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}

to:

provider "kubernetes" {
  version                = "~> 1.8"
  host                   = "https://<ip-of-cluster>"
  token                  = "<token>"
  cluster_ca_certificate = base64decode(<cert>)
  load_config_file       = false
}
  • The token can be retrieved from gcloud auth print-access-token.
  • The IP and cert can be retrieved by inspecting the created container cluster resource using terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here (see the commands below).
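
For example (assuming the hypothetical resource address above; adjust it to match your state):

# Prints a short-lived access token to use as the provider's token
gcloud auth print-access-token

# Shows the cluster's endpoint and the master_auth cluster_ca_certificate
terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here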

For provider version 2+, you have to drop load_config_file, as that argument was removed.

Once that is in place, run the import and then revert the changes to the provider.
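
A rough sketch of the full sequence, using the resource and namespace names from the question:

# With the hardcoded provider in place, import the existing namespace
terraform import kubernetes_namespace.my_new_namespace my_new_namespace

# Then restore the original provider block (module.gke.endpoint, data source token, ...)
# and verify that the plan is clean
terraform plan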