PgBouncer unable to connect to Postgres Docker Compose Service

I'm suddenly seeing a strange, show-stopping error in my application, which has been happily using Docker Compose and, specifically, its networking stack for months.

Seemingly out of nowhere, my PgBouncer container is unable to connect to my Postgres container. The resulting error message is:

pgbouncer_1 | 2023-11-02 22:54:28.949 UTC [1] LOG S-0x56100966f7a0: user/pw@(bad-af):0 closing because: server DNS lookup failed (age=0s)

I have removed and reinstalled Docker multiple times, tried rolling back all of my images to older versions, and so on, to no avail. I can ping both services from my application's container, and by extending the bitnami/pgbouncer image and installing ping, I can confirm that I'm able to reach those sibling containers from an interactive shell as well.
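For reference, the checks I ran from inside the containers looked roughly like this (the service names postgres, pgbouncer, and api match the Compose project shown below; getent is assumed to be present since both images are Debian-based, and ping only exists in my extended pgbouncer image):

    # Resolve the Postgres service name from inside the PgBouncer container
    docker compose exec pgbouncer getent hosts postgres

    # Reachability check (ping installed via the extended bitnami/pgbouncer image)
    docker compose exec pgbouncer ping -c 3 postgres

    # Same resolution check from the application container
    docker compose exec api getent hosts postgres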

Does anyone know what might be going on here and how I can further diagnose or potentially fix the issue? I found this GitHub issue, which describes the same behavior, but from what I can tell, my version of Docker should already contain the fix.

Environment:

  • ankane/pgvector:v0.5.1
  • bitnami/pgbouncer:1.21.0-debian-11-r4
  • Docker version 24.0.7, build afdd53b
  • Docker Compose version v2.23.0
  • Debian 11

Output from docker network inspect:

[
    {
        "Name": "backend-api_default",
        "Id": "3fdd76ff82ae2f3d98c184ed33ccc56244161125edc283fcc4f5d72ea33dbdf1",
        "Created": "2023-11-02T19:05:54.381213131-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "54f6f251fa0c9c71637aa8c2a4a559309b62ac45b3a93d6476f897442d8cef67": {
                "Name": "backend-api-postgres-exporter-1",
                "EndpointID": "6ba129ed346276cc0e7e354d30083945cd04eb6b5d3d233b0021662a46171bd9",
                "MacAddress": "02:42:ac:15:00:05",
                "IPv4Address": "172.21.0.5/16",
                "IPv6Address": ""
            },
            "64c8a17b4b554fb4fa8b0a5e625b621db3c56f350105b41fd23ec925be0e5fe2": {
                "Name": "backend-api-pgbouncer-1",
                "EndpointID": "685b17404212c480253fb5fa1750e2084c7d9cd2e708a54761c05b3b8c2b9eb4",
                "MacAddress": "02:42:ac:15:00:06",
                "IPv4Address": "172.21.0.6/16",
                "IPv6Address": ""
            },
            "8e2bee065f29459b16051eaa416396f1bb73558d9eb153c8635ba08c4de8ca5b": {
                "Name": "backend-api-api-1",
                "EndpointID": "e3672ca8154acc093ed91997067a16e7cfad8fd352b19fd455e0ead6ee5ae907",
                "MacAddress": "02:42:ac:15:00:07",
                "IPv4Address": "172.21.0.7/16",
                "IPv6Address": ""
            },
            "bdb093e7d4373e9abeaf9f35a0618dfc0abc828fa3a493367a163c5b3051f7bc": {
                "Name": "backend-api-grafana-1",
                "EndpointID": "fd4266e9dfab487ab04a19d33f6477e9051f5ec64a2974def1a24f58ccfbd7ad",
                "MacAddress": "02:42:ac:15:00:04",
                "IPv4Address": "172.21.0.4/16",
                "IPv6Address": ""
            },
            "e0f8d0ae29e8377cfc811c5bd0497f527a8511f14f6e03ef0e18788e9137e796": {
                "Name": "backend-api-prometheus-1",
                "EndpointID": "7785eb85093d7466d60927564c76b4f5f0eb5ccbc5627832980db76cab9ac9fe",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "f6743ebafd14641978be56c60a7f2b5f8bf849d282b221c52cbc4a8b09692cce": {
                "Name": "backend-api-postgres-1",
                "EndpointID": "30c10329252a76eb2965c4f2b1366b04d32644f4d752354607450f1946358832",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "backend-api",
            "com.docker.compose.version": "2.23.0"
        }
    }
]
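For completeness, the services involved are wired together purely by Compose service name, roughly like this (a trimmed sketch, not my actual file; the POSTGRESQL_HOST variable name is taken from the bitnami image's documentation as I recall it and should be double-checked):

    services:
      postgres:
        image: ankane/pgvector:v0.5.1
      pgbouncer:
        image: bitnami/pgbouncer:1.21.0-debian-11-r4
        environment:
          # PgBouncer reaches Postgres via its Compose service name, so it
          # relies entirely on Docker's embedded DNS for name resolution
          - POSTGRESQL_HOST=postgres
        depends_on:
          - postgres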

UPDATE: I have been able to work around the issue by specifying a DNS resolver for my pgbouncer service:

    dns:
      - 8.8.8.8
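In context, that key sits directly under the pgbouncer service definition, e.g.:

    pgbouncer:
      image: bitnami/pgbouncer:1.21.0-debian-11-r4
      dns:
        - 8.8.8.8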

I recently modified the system this application is running on to use systemd-resolved, and it seems that, despite my adding a Docker-specific DNSStubListenerExtra=172.17.0.1 to my systemd-resolved config, something is going awry at that level.
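For reference, that resolved change looks roughly like this (the drop-in file name is arbitrary; 172.17.0.1 is the default docker0 bridge address) and is applied by restarting systemd-resolved:

    # /etc/systemd/resolved.conf.d/docker.conf
    [Resolve]
    DNSStubListenerExtra=172.17.0.1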

BEST ANSWER

I'm all but convinced this issue arose because I'd enabled systemd-resolved for DNS resolution on the machine where this application was running. I think the problem only appeared either after I'd restarted the docker service or after rebuilding the docker-compose-managed network.

Regardless, the only way I was able to get intra-container DNS working as it had been was either to specify a DNS server for the PgBouncer container or, better, to specify a fallback DNS server (in addition to the local systemd-resolved endpoint) in /etc/docker/daemon.json:

{
  "dns": ["127.0.0.53", "8.8.8.8"]
}
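One caveat: changes to /etc/docker/daemon.json only take effect after the Docker daemon is restarted, and recreating the Compose containers is the safest way to make sure they pick up the new resolver settings, e.g.:

    sudo systemctl restart docker
    docker compose up -d --force-recreate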