I am working on an open source project with Terraform that will allow me to set up ad hoc environments through GitHub Actions. Each ad hoc environment corresponds to a Terraform workspace. I'm setting the workspace by exporting TF_WORKSPACE before running terraform init, plan, and apply.

This works the first time around: I'm able to create an ad hoc environment called alpha, and in my S3 backend I can see the state file saved under the alpha folder. The issue is that when I run the same pipeline to create another ad hoc environment called beta, I get the following message:

Initializing the backend...
╷
│Error: Currently selected workspace "beta" does not exist
│
│
╵
Error: Process completed with exit code 1.
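
In simplified form, the failing steps in the workflow do roughly this (a condensed sketch of the relevant lines, not the exact file):

export TF_WORKSPACE=beta                    # name of the ad hoc environment
terraform init -input=false                 # <- fails here on the second environment
terraform plan -input=false
terraform apply -input=false -auto-approve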

Here is the section of my GitHub action that is failing: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L142

I have been over this article: https://support.hashicorp.com/hc/en-us/articles/360043550953-Selecting-a-workspace-when-running-Terraform-in-automation but I'm still not sure what I'm doing wrong in my automation pipeline.

The alpha workspace did not exist either, yet the first run was able to create it and use it as the workspace. I'm not sure why other workspaces cannot be created using the same pipeline.

There are 3 best solutions below

I got some help from @apparentlymart on the Terraform community forum. Here's the reply: https://discuss.hashicorp.com/t/help-using-terraform-workspaces-in-an-automation-pipeline-with-tf-workspace-currently-selected-workspace-x-does-not-exist/40676/2

In order to make the pipeline work, I had to run the Terraform commands in the following order:

terraform init ...
# create the workspace if it does not exist yet ("new" exits non-zero if it already does)
terraform workspace new ${WORKSPACE} || echo "Workspace ${WORKSPACE} already exists or cannot be created"
# only export TF_WORKSPACE after init and workspace creation
export TF_WORKSPACE=${WORKSPACE}
terraform apply ...
terraform output ...
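
With this ordering, init never runs while TF_WORKSPACE points at a workspace that does not exist yet, which is what caused the original error.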

This allows me to create multiple ad hoc environments without any errors. The code in my example project has also been updated with this fix: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L146

It happened to me because I had committed the files that Terraform generates when running locally to the branch. It is good practice to add the .terraform folder and the *.hcl lock file to your .gitignore.
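
For example, a .gitignore along these lines (a minimal sketch; whether to ignore the lock file is a matter of team convention):

# local Terraform working directory
.terraform/
# lock file generated by terraform init (.terraform.lock.hcl)
*.hcl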

There is a problem in Terraform where terraform init, run with the TF_WORKSPACE variable set, fails if any non-default workspace already exists. The problem does not occur when calling plan or apply*.

In CI/CD automation one usually runs init -> apply steps, and the catch is that, for this to work, TF_WORKSPACE must be unset during init and only set for apply.

The accepted answer suggests explicitly creating the workspace before calling apply, but that is not required: the workspace is created implicitly during apply or plan anyway. Also, running terraform workspace new while TF_WORKSPACE is set fails.
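
Under that constraint, a working sequence might look like this (a minimal sketch; foo stands in for the per-environment workspace name):

terraform init -input=false                 # TF_WORKSPACE must not be set here
export TF_WORKSPACE=foo                     # select the workspace only after init
terraform apply -input=false -auto-approve  # implicitly creates foo if it does not exist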

How to reproduce the problem:

$ TF_WORKSPACE=foo terraform init -input=false # <- this works and creates workspace foo

$ terraform workspace list
* default
  foo

$ TF_WORKSPACE=bar terraform init -input=false # <- this fails
Initializing the backend...
Initializing modules...
╷
│ Error: Currently selected workspace "bar" does not exist
│
│
╵
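
And, as mentioned above, explicitly creating the workspace while TF_WORKSPACE is set fails as well:

$ TF_WORKSPACE=bar terraform workspace new bar # <- this also fails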

*) This is true for state stored in the S3 backend. It cannot be reproduced with locally stored state, and I have not verified other backends' behavior.