Can Cloud SQL Proxy connect a VM and Instance on different VPCs?


I have a Cloud SQL instance with a private IP located on a VPC.

I have a Compute Engine (VM) located on a different VPC.

When I log in to the VM and try to establish a Cloud SQL Proxy connection, it appears to be successful. It displays:

[root@mr-vm me]# /cloud_sql/cloud_sql_proxy -instances=mr-project:mr-region:mr-sql-instance=tcp:3311 -credential_file=/cloud_sql/mr-service-account-key.json
2022/05/09 10:34:32 Rlimits for file descriptors set to {&{8500 8500}}
2022/05/09 10:34:32 using credential file for authentication; [email protected]
2022/05/09 10:34:32 Listening on 127.0.0.1:3311 for mr-project:mr-region:mr-sql-instance
2022/05/09 10:34:32 Ready for new connections

When I attempt to connect (using iSQL or SQL Workbench) it appears to be successful because it outputs the following message to the terminal:

2022/05/09 10:43:12 New connection for "mr-project:mr-region:mr-sql-instance"

...and in Stackdriver the Cloud SQL instance has a log entry indicating the connection has been established, with no errors or warnings.

But the connection just waits and always times out with the following message:

[root@mr-vm me]# sudo isql -v mysqldns
[08S01][unixODBC][MySQL][ODBC 5.3(w) Driver]Lost connection to MySQL server at 'reading initial communication packet', system error: 0
[ISQL]ERROR: Could not SQLConnect

Further details (pre-emptive answers):

  • The Cloud SQL Admin API has been enabled.
  • The Cloud SQL Proxy is using a service account with "roles/cloudsql.client"
  • The SQL user account has been tested to confirm the username and password are valid.
  • The API dashboard indicates all communications are successful, code 200, no errors.

My current suspicion is the different VPCs are causing an issue but not raising an error anywhere.
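One quick way to confirm where the connection stalls is a raw TCP probe against the local proxy port: MySQL sends an initial handshake packet immediately after accepting a connection, so if the proxy accepts the TCP connection but no bytes ever arrive, the proxy itself cannot reach the database backend. A small sketch (the host and port are the local proxy address from the session above; this is a diagnostic aid, not part of any official tooling):

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> bytes:
    """Connect and wait briefly for the server's first bytes.

    A MySQL server sends its initial handshake packet right after
    accept; an empty result within the timeout suggests the local
    proxy accepted the TCP connection but cannot reach the backend.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024)  # first bytes of the handshake, or b""
        except socket.timeout:
            return b""

# Example (assumes the Cloud SQL Proxy is listening locally as above):
# data = probe("127.0.0.1", 3311)
# print("handshake received" if data else "no initial packet (stalled)")
```

If this returns no data, the symptom matches the "reading initial communication packet" error above: the TCP connection is accepted locally but nothing comes back from the database.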

Has anyone experienced this issue before?

Answer:
Cloud SQL deploys instances with private IP into a Cloud SQL managed VPC (VPC A) and then peers that VPC with whatever VPC you've attached it to (VPC B). If you're then trying to connect from yet another VPC (VPC C), you'll need to peer your two VPCs (VPC B and C).

So the setup will look like this:

VPC C <--> VPC B <--> VPC A
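Peering the two VPCs you control (B and C) can be sketched with gcloud; the project ID and the network names `vpc-b` and `vpc-c` below are placeholders for your own. Note that a peering must be created from both sides before it becomes active:

```shell
# Hypothetical names: replace my-project, vpc-b, vpc-c with your own.
gcloud compute networks peerings create b-to-c \
    --project=my-project --network=vpc-b --peer-network=vpc-c

gcloud compute networks peerings create c-to-b \
    --project=my-project --network=vpc-c --peer-network=vpc-b
```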

However, there is a limitation: VPC peering is not transitive, so only directly peered VPCs can communicate with one another (see the docs). In order to connect from VPC C to your Cloud SQL instance in VPC A, you'll need an intermediate proxy running in VPC B.

One option that works well with the Cloud SQL Auth proxy is Dante, a SOCKS5 proxy server. Deploy it on a GCE instance in VPC B and then launch the Cloud SQL Auth Proxy like so:

ALL_PROXY=socks5://$DANTE_VM_IP_ADDR:8000 \
    cloud_sql_proxy -instances=$INSTANCE_CONNECTION_NAME=tcp:5432
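A minimal Dante server configuration on the VPC B instance might look like the following sketch. The interface name, port, and client network range are assumptions; adjust them to your subnets, and add authentication before using this outside a test setup:

```
# /etc/danted.conf -- minimal sketch, not a hardened config
logoutput: syslog
internal: ens4 port = 8000   # interface facing VPC C (assumed name)
external: ens4               # interface used for outbound connections

clientmethod: none           # no client auth (lock down in production)
socksmethod: none

client pass {
    from: 10.0.0.0/8 to: 0.0.0.0/0   # assumed client range in VPC C
}
socks pass {
    from: 10.0.0.0/8 to: 0.0.0.0/0
}
```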

See the Cloud SQL Auth proxy README for details on the ALL_PROXY environment variable. There are also good guides available on running Dante.