Migrate AWS ECS cluster from IPv4 to IPv6


I'm trying to avoid the new public IPv4 charge in AWS for small projects, because it represents a large percentage of their total cost.

In my ECS cluster, I use EC2 instances as the capacity provider, so I followed this doc to make every new EC2 instance use IPv6 instead of IPv4, which worked.
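For reference, the relevant part of the instance setup looks roughly like this, as a boto3 sketch; the AMI, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch template for the ECS container instances: no public IPv4 address,
# one auto-assigned IPv6 address instead. All IDs below are placeholders.
ec2.create_launch_template(
    LaunchTemplateName="ecs-ipv6-only",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # ECS-optimized AMI (placeholder)
        "InstanceType": "t3.micro",
        "NetworkInterfaces": [
            {
                "DeviceIndex": 0,
                "AssociatePublicIpAddress": False,  # avoid the public IPv4 charge
                "Ipv6AddressCount": 1,              # auto-assign one IPv6 address
                "SubnetId": "subnet-0123456789abcdef0",
                "Groups": ["sg-0123456789abcdef0"],
            }
        ],
    },
)
```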

However, these instances are not joining the cluster as container instances for the capacity provider, so the services won't run properly. The cluster was working fine with IPv4. I've tried different configurations in the subnets (see the boto3 sketch after this list), such as:

  • Enable resource name DNS A record on launch
  • Enable resource name DNS AAAA record on launch
  • Hostname type: Resource name
  • Hostname type: IP name
  • Enable DNS64
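These subnet settings correspond to ModifySubnetAttribute calls; a minimal sketch, assuming a placeholder subnet ID (the API accepts only one attribute per call):

```python
import boto3

ec2 = boto3.client("ec2")
subnet_id = "subnet-0123456789abcdef0"  # placeholder

# ModifySubnetAttribute accepts only one attribute per call.
ec2.modify_subnet_attribute(SubnetId=subnet_id,
                            EnableResourceNameDnsARecordOnLaunch={"Value": True})
ec2.modify_subnet_attribute(SubnetId=subnet_id,
                            EnableResourceNameDnsAAAARecordOnLaunch={"Value": True})
ec2.modify_subnet_attribute(SubnetId=subnet_id,
                            PrivateDnsHostnameTypeOnLaunch="resource-name")
ec2.modify_subnet_attribute(SubnetId=subnet_id,
                            EnableDns64={"Value": True})
```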

Also, my security groups already allow inbound and outbound traffic for both protocols (IPv4 and IPv6).
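For example, the ingress rules cover both CIDR families; a sketch, with a placeholder group ID and HTTPS as the sample port:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow HTTPS from anywhere over both IPv4 and IPv6 (placeholder group ID).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
            "Ipv6Ranges": [{"CidrIpv6": "::/0"}],
        }
    ],
)
```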

Am I missing something, or does ECS not support this configuration (at least with EC2)? If so, do you have a suggestion for reducing this IPv4 cost? For instance, could I avoid it by using ECS with Fargate?

I know that Fargate costs more than EC2, but maybe EC2 + IPv4 would end up more expensive than Fargate using IPv6.
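A back-of-the-envelope comparison, using sample us-east-1 on-demand prices that are assumptions and should be verified against the current price list:

```python
HOURS_PER_MONTH = 730

# Sample us-east-1 on-demand prices (assumptions; verify before relying on them).
PUBLIC_IPV4_PER_HOUR = 0.005       # per public IPv4 address
T3_MICRO_PER_HOUR = 0.0104         # EC2 t3.micro, Linux
FARGATE_VCPU_PER_HOUR = 0.04048    # per vCPU
FARGATE_GB_PER_HOUR = 0.004445     # per GB of memory

ec2_with_ipv4 = (T3_MICRO_PER_HOUR + PUBLIC_IPV4_PER_HOUR) * HOURS_PER_MONTH
fargate_small = (0.25 * FARGATE_VCPU_PER_HOUR
                 + 0.5 * FARGATE_GB_PER_HOUR) * HOURS_PER_MONTH

print(f"EC2 t3.micro + public IPv4: ${ec2_with_ipv4:.2f}/month")
print(f"Fargate 0.25 vCPU / 0.5 GB: ${fargate_small:.2f}/month")
```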

EDIT 1: This is my current route table: [screenshot of the route table]
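In text form, the routes can be dumped with boto3 like this (a sketch; the route table ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Print each route's destination and target (placeholder route table ID).
resp = ec2.describe_route_tables(RouteTableIds=["rtb-0123456789abcdef0"])
for table in resp["RouteTables"]:
    for route in table["Routes"]:
        destination = (route.get("DestinationCidrBlock")
                       or route.get("DestinationIpv6CidrBlock"))
        target = (route.get("GatewayId")
                  or route.get("EgressOnlyInternetGatewayId")
                  or route.get("NatGatewayId"))
        print(destination, "->", target)
```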

EDIT 2: I was able to ping6 www.google.com, which means the instance has outbound IPv6 connectivity. However, checking the agent log (/var/log/ecs/ecs-agent.log), I see this error:

health check [HEAD http://localhost:51678/v1/metadata] failed with error: Head "http://localhost:51678/v1/metadata": dial tcp 127.0.0.1:51678: connect: connection refused" module=healthcheck.go
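That URL is the ECS agent's local introspection endpoint, so it can be probed directly on the instance to confirm whether the agent is listening at all (a minimal sketch):

```python
import json
import urllib.request

# Probe the ECS agent introspection endpoint that the health check uses.
# "Connection refused" here means the agent process is not listening.
try:
    with urllib.request.urlopen("http://localhost:51678/v1/metadata",
                                timeout=5) as resp:
        print(json.dumps(json.load(resp), indent=2))
except OSError as exc:
    print("agent not reachable:", exc)
```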