I'm trying to add the kubectl provider to a Terraform module, following the docs for the Terraform kubectl provider. I run terraform init
and the provider is installed successfully, but when I try to add a sample config, for example ( or others from here ):
resource "kubectl_server_version" "current" {}
and run terraform plan
I get the following message:
Error: Could not load plugin
Failed to instantiate provider "registry.terraform.io/hashicorp/kubectl" to
obtain schema: unknown provider "registry.terraform.io/hashicorp/kubectl"
and when I run terraform init
( with the resource in place in the k8s module )
Error: Failed to install provider
Error while installing hashicorp/kubectl: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/kubectl
Some outputs:
$ terraform providers
├── provider[registry.terraform.io/hashicorp/kubernetes] 1.13.2
├── provider[registry.terraform.io/gavinbunney/kubectl] 1.9.1
├── module.k8s
│ ├── provider[registry.terraform.io/hashicorp/kubectl]
│ └── provider[registry.terraform.io/hashicorp/kubernetes]
$ terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/kubernetes v1.13.2
- Using previously-installed gavinbunney/kubectl v1.9.1
$ terraform -v
Terraform v0.13.4
+ provider registry.terraform.io/gavinbunney/kubectl v1.9.1
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
....
Some config files:
terraform.tf
terraform {
  required_version = "0.13.4"
  backend "gcs" {
    ...
  }
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "1.13.2"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.9.1"
    }
....
Terraform successfully initializes the gavinbunney/kubectl
provider, but when I add a resource "kubectl_manifest" ...
in the k8s module, Terraform tries to load the hashicorp/kubectl
provider.
What am I missing? :)
Update with Terraform 1.4.0
required_providers {
  kubernetes = {
    source  = "hashicorp/kubernetes"
    version = "2.18.1"
  }
}
provider "kubernetes" {
  host                   = module.k8s.cluster_endpoint
  cluster_ca_certificate = base64decode(module.k8s.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.k8s.cluster_name]
  }
}
resource "kubernetes_namespace" "velero" {
  metadata {
    name = var.velero_ns
  }
}
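For completeness: if kubectl_* resources are still needed in the root module (see the conclusion below), the gavinbunney/kubectl provider can be wired up with the same exec-based EKS authentication, plus a gavinbunney/kubectl entry back in required_providers. A minimal sketch, assuming the provider's documented host / cluster_ca_certificate / exec settings and reusing the module.k8s outputs from above:
provider "kubectl" {
  host                   = module.k8s.cluster_endpoint
  cluster_ca_certificate = base64decode(module.k8s.cluster_certificate_authority_data)
  load_config_file       = false  # don't fall back to a local kubeconfig
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.k8s.cluster_name]
  }
}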
Seems like the problem was that I had the
resource "kubectl_server_version" "current" {}
along with other resources from hashicorp/kubernetes
in the same module, and Terraform was trying to load kubectl
from hashicorp/kubectl.
When I added the gavinbunney/kubectl resources in main.tf, all works good :)
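The underlying behavior: in Terraform 0.13 and later, the required_providers block of the root module does not propagate to child modules, so a module that uses the local name kubectl without declaring it falls back to the default hashicorp/kubectl source. An alternative to moving the resources into main.tf would be to declare the source inside the module itself. A minimal sketch, assuming a hypothetical modules/k8s/versions.tf file:
# modules/k8s/versions.tf (hypothetical path)
# Maps the local name "kubectl" to gavinbunney/kubectl inside the child module,
# so kubectl_* resources defined here no longer default to hashicorp/kubectl.
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.9.1"
    }
  }
}
With that declaration in place the kubectl resources could stay in the k8s module; the provider configuration itself is still inherited from the root module.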