Using an AWS S3 File Gateway in EC2, why does it only work in a public subnet and not a private subnet?


Whether I try to create an AWS S3 File Gateway (EC2) in the Management Console or with Terraform, I hit the same problem below...

If I launch the EC2 instance in a public subnet, the gateway is created. If I try to launch the gateway in a private subnet (with a NAT gateway and all ports open inbound and outbound), it won't work. I get...

HTTP ERROR 500

I am connected over a VPN and can ping the instance's private IP when I use the Management Console. I get the same error code from Terraform running on a Cloud9 instance, which can also ping the instance.
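For concreteness, the gateway part of my Terraform looks roughly like this (resource names are simplified, and the subnet/security group references are placeholders for my actual config; the SSM parameter is the documented lookup for the File Gateway AMI):

```hcl
# Documented SSM public parameter for the latest File Gateway AMI.
data "aws_ssm_parameter" "gateway_ami" {
  name = "/aws/service/storagegateway/ami/FILE_S3/latest"
}

resource "aws_instance" "file_gateway" {
  ami                    = data.aws_ssm_parameter.gateway_ami.value
  instance_type          = "m5.xlarge"           # smallest size AWS recommends for File Gateway
  subnet_id              = aws_subnet.private.id # works if swapped for a public subnet
  vpc_security_group_ids = [aws_security_group.gateway.id]
}

# Activation is where it fails: the machine running Terraform has to make an
# HTTP (port 80) request to gateway_ip_address to retrieve the activation key,
# and that request is where my HTTP 500 appears.
resource "aws_storagegateway_gateway" "this" {
  gateway_ip_address = aws_instance.file_gateway.private_ip
  gateway_name       = "s3-file-gateway"
  gateway_timezone   = "GMT"
  gateway_type       = "FILE_S3"
}
```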

Since I intend to share the S3 bucket over NFS, it's important that the instance reside in a private subnet. I'm new to the AWS S3 File Gateway; I have read over the documentation, but nothing clearly states how to do this or why a private subnet would behave differently. If you have any pointers I could look into, I'd love to know!
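Once activation succeeds, the NFS share itself would be along these lines (the bucket, IAM role, and client CIDR here are placeholders):

```hcl
resource "aws_storagegateway_nfs_file_share" "share" {
  gateway_arn  = aws_storagegateway_gateway.this.arn
  location_arn = aws_s3_bucket.shared.arn           # bucket backing the share
  role_arn     = aws_iam_role.gateway_s3_access.arn # role the gateway assumes for S3 access
  client_list  = ["10.0.0.0/16"]                    # restrict mounts to the VPC CIDR
}
```

Clients inside the VPC would then mount it with something like `sudo mount -t nfs -o nolock,hard <gateway-private-ip>:/<bucket-name> /mnt/share`, which is exactly why I don't want the gateway sitting on a public IP.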

For further reference (not really needed), my Terraform testing is mostly based on this GitHub repository: https://github.com/davebuildscloud/terraform_file_gateway/tree/master/terraform

1 Answer


I was unable to get the AWS Console test to work, but I realised my Terraform test was poorly done: I was mistakenly skipping over a dependency that established the VPC peering connection to the Cloud9 instance. Once I fixed that, it worked. Still, I would love to know what would be required to get this to work through the Management Console too...
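For anyone hitting the same thing, the missing piece was roughly this shape (the VPC and route table names are placeholders for my actual config):

```hcl
# Peering so the Cloud9 VPC can reach the gateway's private IP.
resource "aws_vpc_peering_connection" "cloud9_to_gateway" {
  vpc_id      = aws_vpc.cloud9.id
  peer_vpc_id = aws_vpc.gateway.id
  auto_accept = true # same account and region
}

# Route from the Cloud9 side into the gateway VPC over the peering connection.
resource "aws_route" "to_gateway_vpc" {
  route_table_id            = aws_route_table.cloud9.id
  destination_cidr_block    = aws_vpc.gateway.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.cloud9_to_gateway.id
}

# Activation has to wait for that route to exist, otherwise the port-80
# request to the gateway's private IP never gets there.
resource "aws_storagegateway_gateway" "this" {
  # ... gateway arguments as in the question ...
  depends_on = [aws_route.to_gateway_vpc]
}
```

My guess for the console case: the browser performing the activation also needs HTTP (port 80) reachability to the gateway's private IP, so the VPN would have to allow that, not just ICMP for ping.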