aws_db_parameter_group is being destroyed on the second 'apply'

My Jenkins pipeline has four Terraform stages and one Ansible stage. In one of the Terraform stages I create an Aurora RDS cluster with a custom DB parameter group. On the first run there is no issue until the Ansible stage. After fixing the Ansible issue I ran the pipeline again, and now it fails in the Terraform RDS stage because Terraform is trying to destroy the aws_db_parameter_group. Here is the Terraform output:

aws_security_group.main_security_group: Refreshing state... (ID: sg-91cdd8f8)
aws_rds_cluster.main_rds_cluster: Refreshing state... (ID: rds-cluster-acceptance)
aws_rds_cluster_instance.main_cluster_instance: Refreshing state... (ID: rds-cluster-acceptance-instance0)
module.aurora-1.aws_db_parameter_group.rds_pg: Destroying... (ID: db-accep-oscar5-6)
module.aurora-1.aws_rds_cluster.main_rds_cluster: Modifying... (ID: rds-cluster-acceptance)
  final_snapshot_identifier: "rds-snapshot-acceptance-2017-09-19T05-46-02Z" => "rds-snapshot-acceptance-2017-09-19T06-13-53Z"
module.aurora-1.aws_rds_cluster.main_rds_cluster: Modifications complete (ID: rds-cluster-acceptance)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 10s elapsed)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 20s elapsed)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 30s elapsed)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 40s elapsed)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 50s elapsed)
module.aurora-1.aws_db_parameter_group.rds_pg: Still destroying... (ID: db-accep-oscar5-6, 1m0s elapsed)
1 Answer

Destruction of resources on terraform apply usually happens in one of the following cases:

  • the resource has changed in your config in a way that forces re-creation (for example, changing an attribute such as the resource name that cannot be updated in place)
  • your Terraform state file (terraform.tfstate) is not persisted between pipeline runs, so Terraform loses track of the existing state and may try to destroy or duplicate existing infrastructure
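As an illustration of both cases (a sketch only; the names, family, bucket, and region below are hypothetical, not taken from the question): a name_prefix plus a create_before_destroy lifecycle block lets Terraform build a replacement parameter group and re-attach it before deleting the old one when a change forces re-creation, and a remote backend ensures the second pipeline run sees the state written by the first run.

```hcl
# Hypothetical sketch -- identifiers and values are illustrative.

# Case 1: a change that forces re-creation (e.g. a renamed parameter group).
# name_prefix + create_before_destroy makes Terraform create the new group
# and swap it in before destroying the old one, instead of trying to delete
# a group that is still attached to the running cluster.
resource "aws_db_parameter_group" "rds_pg" {
  name_prefix = "db-accep-"
  family      = "aurora5.6"

  lifecycle {
    create_before_destroy = true
  }
}

# Case 2: persist state between Jenkins runs with a remote backend (here: S3),
# so each run refreshes the previous state instead of starting from scratch.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"        # hypothetical bucket name
    key    = "aurora/terraform.tfstate"
    region = "eu-west-1"                 # hypothetical region
  }
}
```

Running terraform plan before apply in the pipeline shows whether a resource is marked for destruction, and why, before anything is changed.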